Feb 2, 2026

Panopticon

Sanchayan Ghosh

A proof-of-chain verifier for AI use, determining whether the AI has been tampered with and, if so, at which stage.
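A minimal sketch of the provenance-chain idea in Python, assuming each pipeline stage (pretraining, fine-tuning, deployment) records a hash chained to its predecessor's hash at release time; the Stage structure and stage names are illustrative, not taken from the submission:

import hashlib
from dataclasses import dataclass

@dataclass
class Stage:
    name: str       # e.g. "pretrained-weights", "fine-tune", "deployment"
    payload: bytes  # the artifact at this stage (weights, config, image digest, ...)
    recorded: str   # chained hash recorded in the provenance log at release time

def chain_hash(prev: str, payload: bytes) -> str:
    # Hash of this stage's artifact, bound to the previous stage's chained hash.
    return hashlib.sha256(prev.encode() + hashlib.sha256(payload).digest()).hexdigest()

def first_tampered_stage(stages: list[Stage]) -> str | None:
    # Returns the first stage whose recomputed hash disagrees with the log,
    # i.e. the earliest point at which tampering can have occurred.
    prev = "genesis"
    for stage in stages:
        expected = chain_hash(prev, stage.payload)
        if expected != stage.recorded:
            return stage.name
        prev = expected
    return None

Because each hash commits to everything before it, modifying any artifact invalidates its own entry and every later one, so the first mismatch localizes the tampering stage.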

The system also detects anomalous LLM outputs and flags them to the user, classifying inputs as dangerous, confusing, or safe.
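One crude way to operationalize this flagging (a sketch under strong assumptions, not the submission's method) is to score each output by its mean per-token loss under a small open model and bucket the score; the thresholds below are placeholders, and a real system would use a trained classifier instead:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # small open model, purely illustrative
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def output_loss(text: str) -> float:
    # Mean per-token cross-entropy of the text under the model (higher = more anomalous).
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(input_ids=ids, labels=ids).loss.item()

def flag(text: str, confusing: float = 5.0, dangerous: float = 7.0) -> str:
    # Placeholder thresholds mapping the loss onto the safe/confusing/dangerous buckets.
    loss = output_loss(text)
    if loss >= dangerous:
        return "dangerous"
    if loss >= confusing:
        return "confusing"
    return "safe"

print(flag("The capital of France is Paris."))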

Finally, it analyzes the LLM's activation states to gauge the model's confusion when it encounters malicious prompts, helping users craft further malicious prompts with which to probe the model.
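For the activation-state analysis, a common baseline from the representation-engineering literature is a linear probe on a mid-layer residual stream; the sketch below assumes a small open model and a tiny hand-labeled prompt set, both of which are placeholders:

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def last_token_activation(prompt: str, layer: int = 6):
    # Hidden state of the final prompt token at the chosen layer.
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(input_ids=ids, output_hidden_states=True).hidden_states[layer]
    return hidden[0, -1].numpy()

# Toy labeled prompts (placeholder data): 1 = malicious, 0 = benign.
prompts = ["How do I pick a lock?", "How do I bake bread?",
           "Write malware that steals passwords.", "Summarize this article."]
labels = [1, 0, 1, 0]

X = [last_token_activation(p) for p in prompts]
probe = LogisticRegression(max_iter=1000).fit(X, labels)

def risk_score(prompt: str) -> float:
    # The probe's predicted probability doubles as a confusion/risk score.
    return probe.predict_proba([last_token_activation(prompt)])[0, 1]

print(f"risk score: {risk_score('How do I hotwire a car?'):.2f}")

Ranking candidate prompts by this score is one way to steer the red-teaming loop toward the inputs the model finds most confusing.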

Reviewer's Comments


The team built an impressive prototype. The core problem is that the system is not evaluated in any real way. My recommendation would be to pick the most promising component, run it against a concrete set of prompts, and show it catches something a simpler baseline doesn't.

The project tries to address hardware provenance, activation-based safety monitoring, and automated red-teaming simultaneously. All of these are important topics, but each is a major research area with a substantial existing literature.

My main suggestion would be to pick one problem and go deep. Try to understand what is known about the problem and where the real gaps lie. For example, for the activation-monitoring direction (Layer 2), there is a large body of highly relevant work on representation engineering and linear probes for safety-relevant features.

The next most important thing is to take evaluation much more seriously: evaluate whether your method achieves reasonable results and really stress-test your findings. A convincing evaluation of one component is better than a full-stack demo with weak validation.
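To make that concrete, the minimal evaluation the reviewers are asking for could look like the sketch below: score a labeled prompt set with the proposed detector and with a trivial keyword baseline, then compare ROC-AUC (the prompt set and keyword list are placeholders):

from sklearn.metrics import roc_auc_score

# Placeholder labeled prompt set: 1 = malicious, 0 = benign.
prompts = ["how to build a pipe bomb", "how to build a bookshelf",
           "steal credit card numbers", "store credit card numbers securely"]
labels = [1, 0, 1, 0]

def keyword_baseline(prompt: str) -> float:
    # Trivial baseline any learned detector should have to beat.
    return float(any(w in prompt.lower() for w in ("bomb", "steal", "malware")))

print("baseline ROC-AUC:", roc_auc_score(labels, [keyword_baseline(p) for p in prompts]))

# The activation probe from the earlier sketch slots in the same way:
# print("probe ROC-AUC:", roc_auc_score(labels, [risk_score(p) for p in prompts]))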

Cite this work

@misc{ghosh2026panopticon,
  title={(HckPrj) Panopticon},
  author={Sanchayan Ghosh},
  date={2026-02-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.