Feb 2, 2026

Verification Mechanism Feasibility Scorer (VMFS)

Alexandra Moraru, Erik Leklem, Jayani Srinivasan, Valeriia Povergo, Yatharth Maheshwari, Moneera Yassien

A decision-support framework and dashboard that scores AI verification mechanisms across feasibility dimensions to help policymakers, diplomats, technical AI governance researchers, and related stakeholders design pragmatic, layered treaties for global AI risks.
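As a rough illustration of the kind of scoring and portfolio aggregation the dashboard performs, here is a minimal sketch in Python. The dimension names, weights, and mechanisms below are hypothetical stand-ins, not the project's actual rubric or data:

```python
from dataclasses import dataclass

# Hypothetical feasibility dimensions; the real VMFS rubric may differ.
DIMENSIONS = ("technical_maturity", "political_acceptability", "cost", "evasion_resistance")

@dataclass
class Mechanism:
    name: str
    scores: dict  # dimension -> score in [0, 10]

def feasibility(mech: Mechanism, weights: dict) -> float:
    """Weighted average of a mechanism's scores across feasibility dimensions."""
    total_w = sum(weights.values())
    return sum(weights[d] * mech.scores[d] for d in DIMENSIONS) / total_w

def portfolio_score(mechs: list, weights: dict) -> float:
    """Naive portfolio aggregate: mean feasibility of the layered mechanisms."""
    return sum(feasibility(m, weights) for m in mechs) / len(mechs)

chip_audit = Mechanism("on-chip telemetry audits",
                       dict(technical_maturity=6, political_acceptability=4, cost=5, evasion_resistance=7))
declarations = Mechanism("compute declarations",
                         dict(technical_maturity=9, political_acceptability=8, cost=9, evasion_resistance=3))
weights = dict(technical_maturity=1.0, political_acceptability=1.0, cost=0.5, evasion_resistance=1.5)
print(portfolio_score([chip_audit, declarations], weights))
```

The point of the sketch is only the shape of the calculation: per-mechanism scores on fixed dimensions, user-chosen weights, and a simple aggregate over a layered portfolio.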

Reviewer's Comments


I'm aligned with the strategic outlook of this, but I'm not persuaded by the submitted work that the _content of the evaluations_ is particularly principled or careful, so I wonder whether the project would have made more sense framed as "this is just the platform/dashboard prototype with toy data / lorem ipsum for evaluations; the marketplace would have to supply those judgments/estimates later." Shaping the elicitation of those judgments is valuable, which is what I like, but I'm mildly docking points because I think it would have been harder and more interesting to zero in on the criteria of even just ONE verification technique and ONE of the VMFS dimensions in more detail and to create a very principled way of assigning a score. Still a quite positive impression of this project overall! And the writeup was very clear and honest about limitations, which I appreciated.

Overall, I think the project has some value for exposing policymakers or researchers to a variety of verification mechanisms. The interaction in the live app is playful, with a few annoyances, like not being able to build a portfolio with more than three elements.

However, policymakers and those who advise them will need to build policy portfolios that are suited to addressing particular risks, with participation by particular actors, and this may introduce more details and important context than the provided scores can cover. In other words, building a solution will require careful consideration of the particulars, which I'm not sure naturally emerges from the scores or the portfolio calculus. I would have appreciated seeing more of the thought that went into the scores, and was surprised that this wasn't a rich table in the appendix, or that I couldn't find it within the app. This content could also serve policymakers and strategists, and the best result would be to use the app's playful interaction to let these users dig deep into the details. A final opportunity would be to link out to the sources and further reading that drive the numerical scores.

Cite this work

@misc{
  title={(HckPrj) Verification Mechanism Feasibility Scorer (VMFS)},
  author={Alexandra Moraru, Erik Leklem, Jayani Srinivasan, Valeriia Povergo, Yatharth Maheshwari, Moneera Yassien},
  date={2026-02-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
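The authorization loop described above can be sketched in software. Below is a hypothetical Python analogue of the challenge-response handshake (the actual project implements this in HardCaml hardware; details like the nonce width, signature scheme, and lease length here are assumptions):

```python
import os, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

authority_key = Ed25519PrivateKey.generate()  # held by the external authority
chip_pubkey = authority_key.public_key()      # provisioned into the chip

class SecurityBlock:
    LEASE_SECONDS = 60  # assumed length of the time-limited permission

    def __init__(self, pubkey):
        self.pubkey = pubkey
        self.lease_expiry = 0.0
        self.pending_nonce = None

    def challenge(self) -> bytes:
        # Chip generates a fresh nonce for the authority to sign.
        self.pending_nonce = os.urandom(32)
        return self.pending_nonce

    def authorize(self, signature: bytes) -> bool:
        # Chip verifies the signature before granting time-limited permission.
        try:
            self.pubkey.verify(signature, self.pending_nonce)
        except InvalidSignature:
            return False
        self.lease_expiry = time.time() + self.LEASE_SECONDS
        return True

    def gate(self, outputs):
        # Without valid authorization, outputs are gated to zero.
        return outputs if time.time() < self.lease_expiry else [0] * len(outputs)

block = SecurityBlock(chip_pubkey)
nonce = block.challenge()
block.authorize(authority_key.sign(nonce))  # external authority signs the nonce
print(block.gate([3, 1, 4]))                # passes through while the lease is valid
```

Once the lease lapses without a new signed nonce, `gate` returns zeros, mirroring the hardware behavior the abstract describes.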


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.
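One plausible software analogue of that evidence generation is a hash chain over traffic chunks, so a verifier can later check a claimed log against a compact commitment. This is a toy sketch under assumed design choices (SHA-256 chaining, in/out direction tags), not the paper's actual device:

```python
import hashlib

class BorderPatrol:
    """Toy fingerprinter: chains digests of every chunk crossing the cluster boundary."""

    def __init__(self):
        self.head = b"\x00" * 32  # genesis value of the hash chain

    def observe(self, direction: str, chunk: bytes) -> bytes:
        # Each record commits to the previous head, the direction, and the payload,
        # so the final head fingerprints the entire ordered I/O history.
        self.head = hashlib.sha256(self.head + direction.encode() + chunk).digest()
        return self.head

patrol = BorderPatrol()
patrol.observe("in", b"training shard 0017")
evidence = patrol.observe("out", b"model checkpoint 0003")
print(evidence.hex())  # commitment a verifier can check by replaying the logged traffic
```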


Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.
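A minimal numerical sketch of that free-riding logic: each state pledges compute, privately chooses its delivery, and monitoring intensity m shrinks the noise on the observed contribution, changing the expected fine for shirking. The functional forms and parameters below are illustrative assumptions, not the paper's model or calibration:

```python
import math

def phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def payoff(delivery, pledge, m, share=0.5, cost=1.0, fine=1.0):
    """Expected payoff for one state, others' contributions held fixed.
    share * delivery: own slice of the joint project's value; the verifier
    fines the state when its noisy signal falls short of the pledge."""
    sigma = 1.05 - m  # monitoring intensity m in (0, 1] sharpens the signal
    p_flag = phi((pledge - 0.25 - delivery) / sigma)
    return share * delivery - cost * delivery - fine * p_flag

pledge = 1.0
for m in (0.1, 0.5, 0.9):
    print(f"m={m}: deliver={payoff(1.0, pledge, m):+.2f}  shirk={payoff(0.2, pledge, m):+.2f}")
# In this toy parameterization, shirking pays at low m; mid-range m deters
# shirking only by frequently fining honest deliverers, and only high m makes
# full delivery clearly dominant.
```

The qualitative pattern loosely echoes the abstract's commitment problem: weak or intermediate monitoring either invites free-riding or punishes honest actors, so effective regimes need substantial investment in monitoring precision.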


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.