Feb 2, 2026

VeriTrain - Formal Verification for AI Governance Compliance

Tasfia Chowdhury

VeriTrain is a system that lets AI developers generate machine-verifiable proofs that their training and deployment processes followed safety and regulatory rules without revealing code, data, or model details.

It uses a theorem prover, Isabelle/HOL, to check that properties like compute limits, mandatory safety evaluations, and deployment approvals were enforced. An LLM helps create the proofs, but the guarantee comes entirely from the formal verifier.
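To make this concrete, here is a minimal Python sketch of the kind of process properties such proofs would cover. The trace fields, threshold value, and evaluation names are illustrative assumptions, not VeriTrain's actual trace schema, and in VeriTrain the analogous properties are discharged as Isabelle/HOL theorems rather than checked by ad-hoc code like this.

```python
# Illustrative only: the kind of process properties the paper describes
# (compute limit, mandatory safety evaluations, deployment approval), checked
# against an aggregate training trace. Field names, the threshold, and the
# evaluation names are hypothetical.

COMPUTE_THRESHOLD_FLOP = 1e26                              # assumed regulatory limit
REQUIRED_EVALS = {"dangerous-capabilities", "autonomy"}    # assumed mandatory evals

def check_compliance(trace: dict) -> list[str]:
    """Return the list of violated properties; an empty list means compliant."""
    violations = []

    # Property 1: total training compute stays under the declared threshold.
    if trace["total_flop"] >= COMPUTE_THRESHOLD_FLOP:
        violations.append("compute limit exceeded")

    # Property 2: every mandatory safety evaluation was completed.
    missing = REQUIRED_EVALS - set(trace["completed_evals"])
    if missing:
        violations.append(f"missing safety evals: {sorted(missing)}")

    # Property 3: deployment happened only after an explicit approval event.
    if trace["deployed"] and not trace["deployment_approved"]:
        violations.append("deployment without approval")

    return violations

example_trace = {
    "total_flop": 3.2e25,
    "completed_evals": ["dangerous-capabilities", "autonomy"],
    "deployed": True,
    "deployment_approved": True,
}
print(check_compliance(example_trace) or "compliant")
```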

VeriTrain focuses on process verification rather than model behavior. The proofs can be shared with regulators or international partners to show compliance, making it a practical tool for trustworthy AI governance and international cooperation.

Reviewer's Comments


Good topic: as the paper articulates well, verification without disclosure is a barrier to all international governance agreements, AI included. The formal verification architecture is sound, and the trace format design is thoughtful, logging only aggregate statistics so that proprietary information is not put at risk. The main gap is that the system verifies compliance given the trace data, but cannot verify that the trace data accurately represents the actual training run. The paper's security evaluation acknowledges this: instrumentation bypass attacks have a 66.7% success rate. The proposed mitigation is TEE attestation, but this would require hardware infrastructure the paper notes is not yet deployed at scale, and the paper expresses some wariness toward hardware approaches early on. The architecture would therefore work well as one layer in a verification stack, but it needs to be combined with trusted trace generation (via TEEs or similar) before the "cryptographically unforgeable" framing fully applies. This is a valuable component of a formal verification system, but it needs many other components to be functional.

The project correctly identifies compute verification as an important unsolved problem for AI governance, and provides a nice overview of some example applications. The proposed formal verification, however, is applied at the wrong layer. Proving "N FLOPs < threshold" is _not_ the hard problem. The hard problem is ensuring the FLOP measurement itself is trustworthy. Since the proposed instrumentation accepts arbitrary inputs, there can be no trust in the outputs either.
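To illustrate the layer mismatch, here is a hedged continuation of the hypothetical checker sketched earlier: the verifier only ever sees the trace, so a fabricated trace satisfies every property just as easily as an honest one.

```python
# Continuation of the hypothetical check_compliance sketch above. The verifier
# sees only the trace, not the training run, so a fabricated trace that simply
# claims low compute and completed evaluations passes every property check.
forged_trace = {
    "total_flop": 1.0,                       # claimed, never measured
    "completed_evals": ["dangerous-capabilities", "autonomy"],
    "deployed": True,
    "deployment_approved": True,
}
assert check_compliance(forged_trace) == []  # "compliant", trivially

# Closing the gap means making the trace itself trustworthy, e.g. raw
# measurements signed by a key rooted in attested hardware (a TEE), so the
# verifier checks a signature over measurements that the developer's own
# instrumentation could not silently rewrite.
```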

My suggestion would be to aggressively cut scope and go deeper. Instead of trying to build a full pipeline end-to-end, it would be more valuable to deeply investigate one specific link in the verification chain. A focused analysis of the trust model would be more impactful than a codebase built around a flawed assumption. I would also suggest being more precise about what was implemented versus what was planned, and making the writing much more focused. Zoom in on one crucial problem and go deep before attempting to scaffold a full system.

Cite this work

@misc{chowdhury2026veritrain,
  title        = {(HckPrj) VeriTrain - Formal Verification for AI Governance Compliance},
  author       = {Tasfia Chowdhury},
  date         = {2026-02-02},
  organization = {Apart Research},
  note         = {Research submission to the research sprint hosted by Apart.},
  howpublished = {https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
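As a rough software model of the flow described above (the actual project implements this as a HardCaml hardware block; the key handling and validity window here are placeholder assumptions):

```python
# Software model of the described authorization flow: the chip issues a nonce,
# an external authority signs it, and the chip verifies the signature before
# granting time-limited operation. Key management and the validity window are
# illustrative placeholders; the real design is a hardware security block.
import os
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

AUTHORIZATION_WINDOW_S = 60  # assumed validity period for one authorization

# External authority's keypair; the chip would hold only the public key.
authority_key = Ed25519PrivateKey.generate()
authority_pub = authority_key.public_key()

class AcceleratorSecurityBlock:
    def __init__(self, authority_public_key):
        self._pub = authority_public_key
        self._authorized_until = 0.0
        self._pending_nonce = None

    def issue_nonce(self) -> bytes:
        """Chip generates a fresh challenge for the external authority."""
        self._pending_nonce = os.urandom(32)
        return self._pending_nonce

    def submit_authorization(self, signature: bytes) -> bool:
        """Verify the authority's signature over the nonce; grant a time-limited window."""
        if self._pending_nonce is None:
            return False
        try:
            self._pub.verify(signature, self._pending_nonce)
        except InvalidSignature:
            return False
        self._authorized_until = time.monotonic() + AUTHORIZATION_WINDOW_S
        self._pending_nonce = None
        return True

    def compute(self, x: float) -> float:
        """Outputs are gated to zero unless a valid authorization is active."""
        if time.monotonic() > self._authorized_until:
            return 0.0
        return x * 2  # stand-in for real accelerator work

chip = AcceleratorSecurityBlock(authority_pub)
nonce = chip.issue_nonce()
signature = authority_key.sign(nonce)          # done by the external authority
assert chip.submit_authorization(signature)
print(chip.compute(21.0))                      # 42.0 while authorized
```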


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The device removes the need for any mutually trusted logic while still meeting the security needs of both the prover and the verifier.


Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.
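A toy numerical sketch of the incentive dynamic that abstract describes, with made-up functional forms and parameters (it is not the paper's actual model): with weak monitoring, free-riding beats delivering the pledge; with intermediate monitoring, noisy signals also sanction honest contributors; only with strong monitoring does delivering clearly pay.

```python
# Toy illustration (not the paper's model): a state pledges compute but chooses
# how much to actually deliver. The verifier sees a noisy signal whose precision
# grows with monitoring intensity, and sanctions apparent shortfalls. Functional
# forms, parameters, and the sanction rule are all made up for illustration.
import random

random.seed(0)

PLEDGE = 1.0      # publicly pledged compute contribution
BENEFIT = 1.5     # value of the shared model to each participant
SANCTION = 2.0    # penalty if a shortfall is detected
TOLERANCE = 0.2   # allowed gap between pledge and observed signal

def expected_payoff(delivered: float, monitoring: float, trials: int = 20000) -> float:
    """Average payoff of delivering `delivered` under monitoring intensity in [0, 1]."""
    noise_sd = 1.0 * (1.0 - monitoring) + 0.05   # more monitoring -> sharper signal
    total = 0.0
    for _ in range(trials):
        signal = delivered + random.gauss(0.0, noise_sd)
        caught = signal < PLEDGE - TOLERANCE
        total += BENEFIT - delivered - (SANCTION if caught else 0.0)
    return total / trials

for m in (0.0, 0.5, 1.0):
    honest = expected_payoff(delivered=PLEDGE, monitoring=m)
    free_ride = expected_payoff(delivered=0.0, monitoring=m)
    print(f"monitoring={m:.1f}  deliver pledge: {honest:+.2f}  free-ride: {free_ride:+.2f}")
```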


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.