Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

Chengheng Li Chen, Kyuhee Kim

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100% detection, 0% FPR, and perplexity 4.20. Robustness testing confirms detection above 96% even with 30% word replacement. The framework enables O(n) model-free detection, addressing EU AI Act Article 50 requirements. Code is available at https://github.com/ChenghengLi/MCLW.
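To make the detection side concrete, below is a minimal Python sketch of an O(n), model-free check in the spirit of the abstract: tokens are hashed into k states with a keyed SHA-256, and the score counts transitions that follow the secret chain. The state mapping, the allowed-successor sets, and the threshold rule are illustrative assumptions, not the paper's exact construction.

import hashlib

def token_state(token_id: int, key: bytes, k: int) -> int:
    # Map a token to one of k states via a keyed SHA-256 hash.
    digest = hashlib.sha256(key + token_id.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % k

def detect(token_ids: list[int], key: bytes,
           allowed: dict[int, set[int]], k: int, tau: float) -> bool:
    # O(n) model-free detection: score the fraction of transitions that
    # land in the secret chain's allowed successor sets.
    states = [token_state(t, key, k) for t in token_ids]
    n = len(states) - 1
    hits = sum(states[i + 1] in allowed[states[i]] for i in range(n))
    # Under H0 each transition hits with probability p0 ~ |allowed[s]| / k,
    # so a Hoeffding bound gives FPR <= exp(-2 * n * (tau - p0)^2) for tau > p0.
    return hits / n >= tau

The paper's "soft cycle" weights transitions rather than hard-partitioning them; the hard allowed-set score above is a simplification that keeps the Hoeffding argument visible.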

Reviewer's Comments

There's a problem with the headline number that the paper doesn't address. The "phase transition at 50% compliance" sounds dramatic, but 50% is also what you get mechanically when you hand out 10 permits to 20 agents. Half the firms have permits and are compliant by default. So how much of that transition is deterrence and how much is just... math? Running the same sweep with different agent-to-permit ratios (say, 20 agents / 5 permits, or 20 agents / 15 permits) would answer this pretty quickly, and the fact that it wasn't done is a miss.
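To spell out that arithmetic: with no behavioral response at all, the compliance floor is just permits divided by agents, as this illustrative check shows.

# Mechanical compliance floor under zero deterrence (illustrative):
agents = 20
for permits in (5, 10, 15):
    print(f"{permits} permits / {agents} agents -> {permits / agents:.0%} floor")
# 5 permits / 20 agents -> 25% floor
# 10 permits / 20 agents -> 50% floor
# 15 permits / 20 agents -> 75% floor

If the reported transition tracks these floors across ratios, it's bookkeeping; if it sits well above them, it's deterrence.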

The model is also really small: 20 agents, 10 time steps. That's barely enough to see anything interesting happen. You can't get reputation effects, coalition formation, or escalation dynamics in 10 rounds. The authors clearly know this because their future work section reads like a better version of the paper (dollar-denominated fines, dynamic reputation, compute thresholds). That's a bit of a red flag when the to-do list is more compelling than the results.

Credit where it's due: the heatmap is clear, the code is open-source, and the security considerations section honestly grapples with the dual-use problem. The bones of a good project are here; it just needed more time in the oven.

Really nice idea, and impressive progress towards developing it in the timespan of a hackathon.

Cite this work

@misc{li2026mclw,
  title={(HckPrj) Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs},
  author={Chengheng Li Chen and Kyuhee Kim},
  date={2026-02-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
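A rough software analogue of that challenge-response flow, assuming an Ed25519-signing authority and a 60-second lease (the actual design is a HardCaml hardware block, so both choices are illustrative):

import os
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

authority_key = Ed25519PrivateKey.generate()      # held by the external authority
chip_pubkey = authority_key.public_key()          # provisioned into the chip

nonce = os.urandom(32)                            # 1. chip generates a fresh nonce
signature = authority_key.sign(nonce)             # 2. authority signs it off-chip

try:
    chip_pubkey.verify(signature, nonce)          # 3. chip verifies the signature
    lease_expires_at = time.time() + 60.0         # 4. time-limited permission
except InvalidSignature:
    lease_expires_at = 0.0                        # invalid: outputs stay gated to zero

def outputs_enabled() -> bool:
    return time.time() < lease_expires_at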


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The device removes the need for any mutually trusted logic while still meeting the security needs of the prover and verifier.


Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.
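One minimal formalization consistent with that description (notation ours, not necessarily the paper's): each state pledges a contribution c-hat_i, privately delivers c_i <= c-hat_i, and the regime observes a noisy signal s_i = c_i + e_i whose noise variance falls as monitoring intensity m rises (say, sigma^2 / m), so greater m makes shortfalls easier to detect and punish.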


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.