Jan 9, 2026

Terrain Gossip - Peer-to-peer gossip protocol enabling decentralized, continuous behaviour benchmarking of large language models.

Aaron Goulden

https://github.com/rng-ops/gossip

Terrain Gossip is a peer-to-peer gossip protocol enabling decentralized, continuous behavioral benchmarking of large language model (LLM) providers without central coordination. Nodes maintain local belief fields over provider behavior, exchange cryptographically signed attestations via delta synchronization, and apply robust aggregation to tolerate lying or adversarial sensors. The protocol is designed such that:

• Providers cannot reliably detect when they are being evaluated.

• No single authority controls the “truth” about model behavior.

• The monitoring infrastructure itself is resistant to manipulation, poisoning, and Sybil attacks.

Terrain Gossip therefore functions as foundational infrastructure for AI manipulation defense: a distributed substrate for collecting, propagating, and analyzing behavioral evidence under adversarial conditions.
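As a concrete illustration of the attestation step, the sketch below signs a single behavioral observation and verifies it on receipt. This is a minimal sketch, not the project's implementation: HMAC-SHA256 stands in for the asymmetric signatures the protocol would actually use, and the field names (`provider`, `probe_id`, `verdict`) are hypothetical.

```python
import hashlib
import hmac
import json


def sign_attestation(secret_key: bytes, provider: str,
                     probe_id: str, verdict: float) -> dict:
    # Canonicalize the attestation body (sorted JSON keys) so signer
    # and verifier hash identical bytes, then attach the tag.
    body = {"provider": provider, "probe_id": probe_id, "verdict": verdict}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return body


def verify_attestation(secret_key: bytes, att: dict) -> bool:
    # Recompute the tag over everything except the signature field.
    body = {k: v for k, v in att.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])
```

Any tampering with a signed field (say, inflating `verdict`) makes verification fail, which is what lets receiving nodes discard forged evidence before it enters their belief fields.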

Reviewer's Comments


I wish you had elaborated more on the security model of your system. It is not clear to me from the paper exactly what evaluation data the nodes in the network share with each other (are they evaluating the model on a fixed benchmark, or sharing what end-users think of model outputs as they use it?), or how nodes are supposed to weigh other nodes' data. Your paper mentions Sybil resistance, but I could not find anywhere in the source code that implements what your paper describes for that.

My understanding:

TerrainGossip is a peer‑to‑peer gossip protocol where many independent nodes continuously probe LLM providers, create cryptographically signed events about observed behavior, and share them to form local “belief fields” about each provider’s safety and reliability. Evaluations are meant to be continuous, decentralized, and manipulation‑resistant: providers should not be able to reliably tell when they are being evaluated, nor easily corrupt the monitoring infrastructure itself. Over time, overlapping evidence from many nodes should yield a more trustworthy picture of provider behavior than a single centralized leaderboard or evaluation service.
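One way to picture a local "belief field" is a per-provider posterior over probe pass rates, updated as verified attestations arrive and merged with peers' evidence. This is my own simplified sketch under assumed semantics (each attestation reduced to pass/fail, a Beta posterior per provider), not the paper's data structure:

```python
from collections import defaultdict


class BeliefField:
    """Per-provider Beta(alpha, beta) belief over probe pass rates."""

    def __init__(self):
        # Beta(1, 1) is a uniform prior: no evidence yet.
        self.params = defaultdict(lambda: [1.0, 1.0])

    def observe(self, provider: str, passed: bool) -> None:
        # Each verified attestation nudges the posterior.
        self.params[provider][0 if passed else 1] += 1.0

    def mean(self, provider: str) -> float:
        # Posterior mean = alpha / (alpha + beta).
        a, b = self.params[provider]
        return a / (a + b)

    def merge(self, other: "BeliefField") -> None:
        # Naive merge of a peer's evidence; a real protocol would
        # deduplicate attestations and down-weight distrusted peers.
        for provider, (a, b) in other.params.items():
            self.params[provider][0] += a - 1.0  # strip the peer's prior
            self.params[provider][1] += b - 1.0
```

The `merge` step is where the hard questions live: without deduplication and peer weighting, a Sybil operator can replay the same fabricated evidence through many identities.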

Comment and challenges:

At its core, the idea is to exploit the robustness properties of distributed systems to overcome AI manipulation and evaluation gaming. I see the potential, but conceptually the work underspecifies what has to be true about the world for the protocol to actually yield trustworthy assessments. Before diving into protocol and data-structure details, I think some higher-level challenges need to be confronted:

Assumptions about honest majority and adversarial power: most gossip/CRDT-style systems implicitly rely on a nontrivial fraction (I think more than 50%) of honest, independent nodes. In a world where well-resourced big tech companies have strong incentives to look good on evaluations, is it realistic to assume that an honest "half" of nodes persists? How are authorization and identity handled so that a proliferation of Byzantine nodes is not the default outcome? There are relevant ideas in blockchain research you could cite or build on.

Incentives for early adopters: the design assumes that a critical mass of honest, well-resourced nodes will exist and participate, but it does not explain why they would join early, contribute compute and bandwidth, or bear legal and operational risk while the network is small and its outputs are easy for providers to ignore. This is hard, of course, but some thinking on it may be rewarding.
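To make the honest-majority concern concrete, here is a toy demonstration (my own, with made-up numbers) of why robust aggregators such as the median, which the paper's "robust aggregation" presumably resembles, tolerate a colluding minority but are fully captured once liars form a majority:

```python
import statistics


def aggregate(reports):
    # Median: unaffected by any strict minority of arbitrarily
    # corrupted reports, but captured by a colluding majority.
    return statistics.median(reports)


honest = [0.30, 0.32, 0.28, 0.31, 0.29, 0.30]  # true failure rate near 0.30

# 2 colluders among 8 reporters: the estimate stays near the truth.
minority_est = aggregate(honest + [0.0, 0.0])

# 7 colluders among 13 reporters: the estimate is whatever they choose.
majority_est = aggregate(honest + [0.0] * 7)
```

The breakdown point is exactly 50%, which is why identity and Sybil resistance are not optional details: an adversary who can mint nodes cheaply converts the minority case into the majority case.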

Cite this work

@misc{
  title={(HckPrj) Terrain Gossip - Peer to peer gossip protocol enabling decentralized continuous behaviour benchmarking of large language models.},
  author={Aaron Goulden},
  date={1/9/26},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.