Jan 9, 2026
Terrain Gossip - Peer-to-peer gossip protocol enabling decentralized, continuous behaviour benchmarking of large language models.
Aaron Goulden
https://github.com/rng-ops/gossip
Terrain Gossip is a peer-to-peer gossip protocol enabling decentralized, continuous behavioral
benchmarking of large language model (LLM) providers without central coordination.
Nodes maintain local belief fields over provider behavior, exchange cryptographically signed attestations via delta synchronization, and apply robust aggregation to tolerate lying or adversarial sensors. The protocol is designed such that:
• Providers cannot reliably detect when they are being evaluated.
• No single authority controls the “truth” about model behavior.
• The monitoring infrastructure itself is resistant to manipulation, poisoning, and Sybil attacks.
Terrain Gossip therefore functions as foundational infrastructure for AI manipulation defense: a distributed substrate for collecting, propagating, and analyzing behavioral evidence under adversarial conditions.
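The abstract names three mechanisms: signed attestations, delta synchronization, and robust aggregation. As a minimal sketch of the first and third, the following uses an HMAC tag as a stand-in for a real public-key signature and a median as the robust aggregator; both choices are my assumptions for illustration, not the paper's actual implementation:

```python
import hashlib
import hmac
import json
import statistics


def sign_attestation(secret: bytes, attestation: dict) -> dict:
    """Attach an HMAC tag over a canonical JSON encoding.

    A real deployment would use an asymmetric scheme (e.g. Ed25519)
    so verifiers do not need the signer's secret; HMAC keeps this
    sketch dependency-free.
    """
    payload = json.dumps(attestation, sort_keys=True).encode()
    tag = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"body": attestation, "sig": tag}


def verify_attestation(secret: bytes, signed: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(signed["body"], sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])


def robust_aggregate(scores: list) -> float:
    """Median aggregation: unaffected by any minority of lying sensors."""
    return statistics.median(scores)
```

With this shape, a tampered attestation body fails verification, and a few outlier scores (e.g. a provider-controlled sensor reporting 0.95 when honest sensors report ~0.11) do not move the aggregate.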
I wish you had elaborated more on the security model of your system. It is not clear from the paper exactly what evaluation data nodes in the network share with each other (do they evaluate the model on a fixed benchmark, or share what end users think of model outputs as they use it?), nor how nodes are supposed to weigh other nodes' data. Your paper mentions Sybil resistance, but I could not find anywhere in the source code that implements what the paper describes.
My understanding:
Terrain Gossip is a peer-to-peer gossip protocol where many independent nodes continuously probe LLM providers, create cryptographically signed events about observed behavior, and share them to form local "belief fields" about each provider's safety and reliability. Evaluations are meant to be continuous, decentralized, and manipulation-resistant: providers should not be able to reliably tell when they are being evaluated, nor easily corrupt the monitoring infrastructure itself. Over time, overlapping evidence from many nodes should yield a more trustworthy picture of provider behavior than a single centralized leaderboard or evaluation service.
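The paper leaves "belief field" underspecified; one simple reading is a per-provider Bayesian posterior updated by incoming signed events. The Beta-distribution model below is my own illustration of that reading, not the paper's design:

```python
from dataclasses import dataclass


@dataclass
class BeliefField:
    """Beta(alpha, beta) belief that a provider behaves safely.

    Hypothetical model: each verified event counts as one Bernoulli
    observation (safe / unsafe); alpha=beta=1 is a uniform prior.
    """
    alpha: float = 1.0
    beta: float = 1.0

    def observe(self, safe: bool) -> None:
        """Fold one verified attestation into the belief."""
        if safe:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        """Posterior mean probability that the provider behaves safely."""
        return self.alpha / (self.alpha + self.beta)
```

Under this reading, "exchanging belief fields" via delta synchronization would amount to merging observation counts, which commutes and converges in CRDT fashion.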
Comment and challenges:
At its core, the idea is to exploit the robustness properties of distributed systems to overcome AI manipulation and evaluation gaming. I see the potential, but conceptually the work underspecifies what has to be true about the world for the protocol to actually yield trustworthy assessments. Before getting deep into protocol and structure details, I think some higher‑level challenges need to be confronted:
Assumptions about honest majority and adversarial power: most gossip/CRDT-style systems implicitly rely on a nontrivial fraction (typically more than half) of honest, independent nodes. In a world where well-resourced providers have strong incentives to look good on evaluations, is it realistic to assume that an honest "half" of nodes persists? How is authorization/identity handled so that a proliferation of Byzantine nodes is not the default outcome? There are relevant ideas in blockchain research you could cite or build on.
Incentives for early adopters: the design assumes that a critical mass of honest, well-resourced nodes will exist and participate, but it does not explain why they would join early, contribute compute and bandwidth, or bear legal and operational risk while the network is small and its outputs are easy for providers to ignore. This is hard, of course, but some explicit thinking on it would be rewarding.
Cite this work
@misc{
  title={(HckPrj) Terrain Gossip - Peer to peer gossip protocol enabling decentralized continuous behaviour benchmarking of large language models.},
  author={Aaron Goulden},
  date={January 9, 2026},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


