Nov 23, 2025

Cognitive Canary: Active Defense Against Neural Inference

Tuesday

Cognitive Canary is an active defense system that protects your mind from algorithmic profiling. It uses adversarial machine learning to inject mathematical "camouflage" into your digital footprint, preventing AI models from inferring your cognitive state (stress, focus, intent) from your metadata. In tests against real biometric surveillance models, our system achieved a 96.5% bypass rate, demonstrating that we can use Gradient Starvation to make surveillance economically non-viable.
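The abstract does not say which perturbation method the camouflage uses, but the core idea can be illustrated with a minimal FGSM-style sketch against a hypothetical profiler. Everything below (the logistic-regression "profiler", its weights, the feature values, and the epsilon bound) is invented for illustration, not taken from the project:

```python
import numpy as np

# Hypothetical logistic-regression "profiler" that infers stressed (1) vs.
# calm (0) from three behavioral features: typing cadence, cursor speed,
# and pause length. Weights are invented for this sketch.
w = np.array([1.5, -2.0, 0.8])
b = -0.3

def profiler(x):
    """Return P(stressed) under the assumed logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([1.2, 0.4, 0.9])   # a user's raw behavioral feature vector
p = profiler(x)                 # profiler is confident: P(stressed) > 0.5

# FGSM-style camouflage: step each feature against the gradient of the
# profiler's output, bounded by eps so the behavior stays plausible.
eps = 0.5
grad = p * (1 - p) * w          # d(sigmoid)/dx for the logistic model
direction = 1.0 if p > 0.5 else -1.0
x_adv = x - eps * np.sign(grad) * direction

print(f"before camouflage: P(stressed)={p:.2f}")
print(f"after camouflage:  P(stressed)={profiler(x_adv):.2f}")
```

The point of the sketch is the economics argument from the abstract: if a bounded, imperceptible perturbation reliably flips the profiler's prediction, the inference pipeline's accuracy (and thus its value) collapses.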

Privacy shouldn't rely on trust; it should rely on adversarial engineering.

Reviewer's Comments


At a time when people are paying for tools to protect their personal data (data removal services, VPNs, Apple’s anti-tracking controls), I can definitely see a market for something like this.

My main question is whether this is a large enough problem today for users to pay to create an inference gap. The report does not include quantitative data on how common or harmful behavioral inference is in the wild.

This defense assumes adversaries rely primarily on behavioral metadata. If, in practice, attackers combine multiple signals (video, browser fingerprinting, network traffic), would this tool make a huge difference in protecting user data?

Adding artificial cursor motion or jitter could hurt user experience or accessibility. The report notes this risk but does not provide usability testing. Is there any way to create the inference gap without affecting what the user sees and feels?
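One possible answer, offered purely as a sketch and not drawn from the report: perturb only the event stream that page scripts and trackers record, while leaving the rendered cursor and the click target untouched. The `camouflage` function and the trace below are hypothetical:

```python
import random

def camouflage(events, pos_jitter=3, time_jitter_ms=4, seed=None):
    """Jitter a recorded cursor trace without changing what the user sees.

    events: list of (t_ms, x, y) tuples as a tracker would record them.
    Interior samples get bounded position/timestamp noise; the first and
    last events (where the interaction starts and the click lands) are
    kept exact, so the user-visible behavior is unaffected.
    """
    rng = random.Random(seed)
    out = [events[0]]                       # keep the start exact
    for t, x, y in events[1:-1]:
        out.append((
            t + rng.randint(-time_jitter_ms, time_jitter_ms),
            x + rng.randint(-pos_jitter, pos_jitter),
            y + rng.randint(-pos_jitter, pos_jitter),
        ))
    out.append(events[-1])                  # keep the click position exact
    return out

trace = [(0, 10, 10), (16, 40, 30), (33, 90, 55), (50, 120, 80)]
print(camouflage(trace, seed=42))
```

Whether bounded noise of this kind actually degrades a real profiler, and whether it survives an adversary who retrains on jittered data, are exactly the open questions the usability and robustness testing would need to answer.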

Finally, I am curious if you see a government, enterprise, or institutional use case here, beyond individual consumers anxious to protect their data.

This project reminds me of tools like Noisy (fake telemetry), AdNauseam (fake ad clicks), and TrackMeNot (fake search queries). For mouse dynamics specifically, there is closely related prior work: "My Mouse, My Rules" also proposes adversarial perturbation of cursor movements to defeat profiling, and its authors built the MouseFaker browser extension that implements it.

I think this would work better as a default or system-level protection than as something users must install themselves; opt-in tools tend to see very limited adoption, and that is the main bottleneck for solutions like this. For example, standards bodies, OS vendors, or browsers could ship built-in randomization or "anti-profiling" modes, or (just as an example) defenders could poison datasets and models at scale by seeding adversarially crafted inputs (decoy users and sessions) designed to degrade mass-profiling accuracy. Either way, the defense would operate at the ecosystem level by default rather than requiring each person to adopt it individually. Also, modern tracking pulls from many behavioral and side-channel signals, so training a separate "jammer" for each one seems costly and hard to scale, given how many channels, metadata sources, and side-channels are in play.

That said, the project does a good job showing both how these attacks work and why we need defenses, and it gives us a working proof-of-concept that could be adapted for future default-level protections. The presentation is great.

Strengths: The threat model is technically coherent. Behavioral metadata can reveal cognitive states, and that metadata isn't legally protected. The adversarial approach is sound: a 96.5% bypass rate is solid adversarial ML work. Execution is strong for hackathon scope.

Suggestions: The "why now" isn't clear. Who is deploying behavioral biometric inference at scale today? We're still fighting for basic private inference and agent identity and access management (IAM). Cognitive state tracking seems several layers beyond where most threat actors operate. The project should be explicit about whether this is anticipatory defense (getting ahead of a future threat) or reactive defense (countering something deployed now). The ZKP module has the same grounding problem: who needs cryptographic verification without revealing raw data, and in what workflow?

POV from a Halcyon Ventures investor: Ground the threat model in concrete adversaries. If this is about state surveillance (China emotion detection), say so and scope accordingly. If it's about future AI agent inference, explain what deployment timeline you're defending against. The technical work is good; it just needs a clearer theory of who's attacking and when. Really impressed by the theoretical grounding, though, and kudos on such hardcore technical chops in a tight turnaround!

Cite this work

@misc{
  title={(HckPrj) Cognitive Canary: Active Defense Against Neural Inference},
  author={Tuesday},
  date={11/23/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.