Feb 2, 2026

Attested Multi-Agent Conversation Logs: A Tamper-Evident Black Box for AI Governance

Anantha Shakthi Ganeshan Thevar, Publius Dirac

As multi-agent AI systems assume high-stakes responsibilities, governance frameworks such as the EU AI Act demand traceable, auditable records of agent interactions. We introduce Attested Logs, an open-source Python library that serves as a tamper-evident "black box" for AI conversations. Each message is cryptographically signed (Ed25519), hash-chained (SHA-256), and anchored to trusted public keys, enabling fully offline verification of integrity, authenticity, and ordering. Inspired by aviation flight recorders and C2PA content provenance, the library supports incident forensics, regulatory compliance, and cross-organizational trust. Integrations for AutoGen (manual logging) and LangGraph (callback-based auditing) are provided, together with runnable demos of end-to-end signing, verification, and tamper detection in real LLM conversations, and more than 20 passing tests covering the core, crypto, chain, and verification layers.
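
To make the mechanism concrete, below is a minimal sketch of how signed, hash-chained log entries with offline verification against trusted public keys can fit together, written with the `cryptography` package. The names LogEntry, append, verify_chain, and the GENESIS sentinel are illustrative assumptions, not the Attested Logs API.

# Minimal sketch, assuming the `cryptography` package; names are illustrative,
# not the Attested Logs API.
import hashlib
import json
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

GENESIS = "0" * 64  # placeholder hash for the first entry in a chain


@dataclass
class LogEntry:
    sender: str
    content: str
    prev_hash: str          # SHA-256 hex digest of the previous entry
    signature: bytes = b""  # Ed25519 signature over this entry's digest

    def digest(self) -> str:
        # Canonical JSON keeps hashing deterministic across runs.
        payload = json.dumps(
            {"sender": self.sender, "content": self.content, "prev_hash": self.prev_hash},
            sort_keys=True,
        ).encode()
        return hashlib.sha256(payload).hexdigest()


def append(chain: list, sender: str, content: str, key: Ed25519PrivateKey) -> LogEntry:
    # Link the new entry to the digest of the previous one, then sign it.
    prev = chain[-1].digest() if chain else GENESIS
    entry = LogEntry(sender=sender, content=content, prev_hash=prev)
    entry.signature = key.sign(entry.digest().encode())
    chain.append(entry)
    return entry


def verify_chain(chain: list, trusted: dict) -> bool:
    # Offline check: ordering via the hash chain, authenticity via trusted keys.
    prev = GENESIS
    for entry in chain:
        if entry.prev_hash != prev:   # reordering or deletion breaks the chain
            return False
        try:
            trusted[entry.sender].verify(entry.signature, entry.digest().encode())
        except (KeyError, InvalidSignature):
            return False
        prev = entry.digest()
    return True


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    trusted = {"planner": key.public_key()}
    chain = []
    append(chain, "planner", "Book the flight.", key)
    append(chain, "planner", "Confirm the hotel.", key)
    assert verify_chain(chain, trusted)
    chain[0].content = "Cancel the flight."   # tampering invalidates signature and chain
    assert not verify_chain(chain, trusted)

Verification here needs only the log and the trusted public keys, which is what makes the check fully offline.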

Reviewer's Comments

This is the most shippable project here. It actually delivers a real library that people could use tomorrow: signed messages, hash chaining, offline verification, integrations, tests. For a hackathon, that’s exceptional execution.

Novelty is moderate because the crypto pieces are standard. The value is that you packaged it cleanly and made it usable in real agent frameworks. This is exactly the kind of infrastructure that improves auditability in deployed systems.

To go further: address key management (rotation and revocation), what happens when an agent is compromised, and scaling and storage patterns. But as a sprint project, this is very strong.

Tamper-proof evidence of multi-agent interactions is important for future digital forensics and AI governance and regulation. So the project definitely explores an important problem. The write-up is clear on the purely technical side as well as on some of the related work. And the code artifact seems solid.

To make this a much stronger contribution: explore one clearly defined, realistic threat model in depth. Red-team your approach and figure out what the hard part actually is. A more in-depth exploration would probably reveal that the trust chain bottoming out at the model provider is a fundamental limitation of the approach you're describing. Put yourself in the shoes of an adversary (e.g., a model provider who wants to alter the logs): what attacks can they run, and how does your approach fare?

Cite this work

@misc{thevar2026attestedlogs,
  title={(HckPrj) Attested Multi-Agent Conversation Logs: A Tamper-Evident Black Box for AI Governance},
  author={Anantha Shakthi Ganeshan Thevar and Publius Dirac},
  date={2026-02-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.