Feb 1, 2026
ZK-GovProof: Composable Zero-Knowledge Proofs for AI Governance
Srishti Dutta
ZK-GovProof addresses a critical verification paradox in international AI governance: regulators require data to verify compliance, yet AI laboratories cannot disclose sensitive competitive and security information. This system leverages zero-knowledge cryptography to enable cryptographically verifiable compliance demonstrations without revealing underlying data.
The framework implements three core zkSNARK circuits that verify compute thresholds, safety evaluation completion, and policy adherence while preserving confidentiality of exact metrics, evaluation scores, and internal processes. The project's primary innovation lies in its composable proof architecture, which aggregates multiple compliance requirements into a single efficient proof, reducing verification overhead while ensuring atomic compliance guarantees.
ZK-GovProof directly supports emerging regulatory frameworks including the EU AI Act's compute reporting requirements, Responsible Scaling Policy monitoring, and international treaty verification. The modular architecture enables seamless expansion to additional regulatory frameworks as they emerge, with each new requirement added as a further circuit component. The system employs Groth16 SNARKs implemented via Circom and SnarkJS, generating succinct proofs (approximately 200 bytes) with sub-second verification times.
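To make the circuit-level idea concrete, here is a minimal sketch of what a compute-threshold circuit could look like in Circom, assuming circomlib's `LessThan` comparator. The template and signal names (`ComputeThreshold`, `trainingFlops`, `threshold`) are illustrative, not the project's actual code:

```circom
pragma circom 2.0.0;

include "circomlib/circuits/comparators.circom";

// Illustrative sketch: prove that a private FLOP count is below a
// public regulatory threshold, without revealing the exact count.
template ComputeThreshold() {
    signal input trainingFlops;  // private: exact compute used
    signal input threshold;      // public: regulatory limit

    // 128-bit comparison, since frontier training runs (~1e25 FLOPs)
    // exceed the 64-bit range.
    component lt = LessThan(128);
    lt.in[0] <== trainingFlops;
    lt.in[1] <== threshold;

    // Constrain compliance: a valid proof can only exist if
    // trainingFlops < threshold.
    lt.out === 1;
}

component main {public [threshold]} = ComputeThreshold();
```

In the composable architecture the abstract describes, circuits like this one would be combined with the safety-evaluation and policy-adherence circuits into a single aggregated proof, so a verifier checks all compliance claims atomically in one sub-second Groth16 verification.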
Additionally, the project contributes a comprehensive verifiability gap analysis that rigorously delineates which governance claims can be cryptographically proven versus those requiring complementary verification mechanisms. This work establishes foundations for privacy-preserving regulatory infrastructure essential for scalable international AI governance coordination.
Great threat analysis. Building on that, I would like to see more discussion of the counterfactual impact of this work. Given T1 and T2 from the threat analysis section, it's not clear to me that implementing this would be much more reliable or confidential than a traditional report from the lab making equivalent claims (e.g. that a model is below a given compute threshold, or that N evals have been conducted).
Impact & Innovation: 2.5
The problem statement is interesting, though I don't think privacy is the primary blocker (compared to other issues, e.g. the absence of stronger mandated commitments). This solution could be more exciting for e.g. US <> China commitments, but there I would worry about input integrity.
Execution quality: 3.5
Sensible methodology. I appreciated the research on speed and scalability, and the threat model seems mature. It would have been interesting to engage more with the question of how this could be implemented.
Presentation & Clarity: 4
Well-structured and readable. I particularly like the taxonomy of what ZK can and cannot verify. Text could be slightly tighter.
Cite this work
@misc{dutta2026zkgovproof,
  title={(HckPrj) ZK-GovProof: Composable Zero-Knowledge Proofs for AI Governance},
  author={Srishti Dutta},
  year={2026},
  month={February},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


