Paper 2025/1646
Scalable zkSNARKs for Matrix Computations: A Generic Framework for Verifiable Deep Learning
Abstract
Sublinear proof sizes have recently become feasible in verifiable machine learning (VML), yet no approach achieves the trio of strictly linear prover time, logarithmic proof size and verification time, and architecture privacy. Hurdles persist because we lack a succinct commitment to the full neural network and a framework for heterogeneous models, leaving verification dependent on architecture knowledge. Existing limits motivate our new approach: a unified proof-composition framework that casts VML as the design of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for matrix computations. Representing neural networks with linear and non-linear layers as a directed acyclic graph of atomic matrix operations enables topology-aware composition without revealing the graph. Modeled this way, we split proving into a reduction layer and a compression layer that attests to the reduction with a proof of proof. At the reduction layer, inspired by reduction of knowledge (Crypto '23), root-node proofs are reduced to leaf-node proofs under an interface standardized for heterogeneous linear and non-linear operations. Next, a recursive zkSNARK compresses the transcript into a single proof while preserving architecture privacy. Complexity-wise, for a matrix expression with $M$ atomic operations on $n \times n$ matrices, the prover runs in $O(M n^2)$ time while proof size and verification time are $O(\log(M n))$, outperforming known VML systems. Honed for this framework, we formalize relations directly in matrices or vectors---a more intuitive form for VML than traditional polynomials. Our LiteBullet proof, an inner-product proof built on folding and its connection to sumcheck (Crypto '21), yields a polynomial-free alternative. With these ingredients, we reconcile heterogeneity, zero knowledge, succinctness, and architecture privacy in a single VML system.
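The abstract's LiteBullet proof builds on the folding approach to inner-product arguments. As a rough illustration of that underlying idea only (not the paper's actual LiteBullet protocol), the following sketch shows Bulletproofs-style folding: a claim that two length-$n$ vectors have inner product $c$ is halved each round under a verifier challenge, until a length-1 claim remains. The field modulus, challenge values, and absence of commitments are simplifying assumptions for illustration.

```python
# Hedged sketch of inner-product folding (Bulletproofs-style), NOT the
# paper's LiteBullet protocol: commitments and Fiat-Shamir challenges
# are omitted, and the field below is an arbitrary illustrative choice.
P = (1 << 61) - 1  # a Mersenne prime, used as an illustrative field

def inner(a, b):
    """Inner product over the field Z_P."""
    return sum(x * y for x, y in zip(a, b)) % P

def fold(a, b, c, challenges):
    """Reduce the claim <a, b> = c to a length-1 claim, halving per round."""
    transcript = []
    for x in challenges:
        h = len(a) // 2
        a_lo, a_hi = a[:h], a[h:]
        b_lo, b_hi = b[:h], b[h:]
        L = inner(a_hi, b_lo)  # cross terms the prover would send
        R = inner(a_lo, b_hi)
        xinv = pow(x, -1, P)   # modular inverse of the challenge
        # Fold both vectors to half length under the challenge x.
        a = [(lo + x * hi) % P for lo, hi in zip(a_lo, a_hi)]
        b = [(lo + xinv * hi) % P for lo, hi in zip(b_lo, b_hi)]
        # The claimed value updates consistently: <a', b'> = c + x*L + x^-1*R.
        c = (c + x * L + xinv * R) % P
        transcript.append((L, R))
    return a, b, c, transcript

a = [3, 1, 4, 1]
b = [5, 9, 2, 6]
c = inner(a, b)
# log2(4) = 2 rounds with (arbitrary) challenges reduce to a length-1 claim.
a1, b1, c1, _ = fold(a, b, c, challenges=[7, 11])
assert len(a1) == 1 and inner(a1, b1) == c1  # folded claim still holds
```

Each round sends two field elements (L, R) and halves the vectors, which is the source of the logarithmic proof-size behavior the abstract cites for this style of argument.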
Metadata
- Available format(s)
- PDF
- Category
- Cryptographic protocols
- Publication info
- A major revision of an IACR publication in ASIACRYPT 2025
- Keywords
- zkSNARK, Matrix, Zero-Knowledge Machine Learning
- Contact author(s)
-
mcong @ connect hku hk
smchow @ ie cuhk edu hk
smyiu @ cs hku hk
john tszhonyuen @ monash edu
- History
- 2025-09-16: revised
- 2025-09-11: received
- Short URL
- https://ia.cr/2025/1646
- License
- CC BY
BibTeX
@misc{cryptoeprint:2025/1646,
      author = {Mingshu Cong and Sherman S. M. Chow and Siu Ming Yiu and Tsz Hon Yuen},
      title = {Scalable {zkSNARKs} for Matrix Computations: A Generic Framework for Verifiable Deep Learning},
      howpublished = {Cryptology {ePrint} Archive, Paper 2025/1646},
      year = {2025},
      url = {https://eprint.iacr.org/2025/1646}
}