Nov 23, 2025

Robust LLM Neural Activation-Mediated Alignment

Anmol Dubey , Blake Brown, Anika Kulkarni, Denise Walsh

Large language models (LLMs) can inadvertently generate harmful biological, chemical, or cyber-physical guidance, yet current safety systems rely almost entirely on surface-level text defenses, such as keyword filters, refusal heuristics, or post-generation classifiers, that remain brittle under paraphrasing or adversarial prompt design. These mechanisms evaluate only the model’s outputs, not the internal representations that encode user intent. We introduce Sentinel, an activation-level defense framework that detects malicious queries by monitoring intermediate neural activations inside the target LLM during inference. Using Phi-2 (2.7B parameters) instrumented with hidden-state access, we train a lightweight linear probe on last-layer activations to distinguish malicious from benign prompts. Because activation patterns capture deeper semantic intent, Sentinel remains robust even when explicit harmful keywords are absent.

Despite strong safety fine-tuning, three widely deployed frontier models, OpenAI GPT-5.1, Google Gemini 2.5 Pro, and Anthropic Claude 3 Haiku, exhibited substantial vulnerabilities on a 40-benign / 40-malicious red-team evaluation. GPT-5.1 complied with 28/40 malicious prompts, Gemini 2.5 Pro with 11/40, and Claude 3 Haiku with 6/40, while also producing 0–3 false refusals on benign queries. In contrast, our activation-level Sentinel classifier, trained as a lightweight probe over Phi-2’s final-layer hidden states, achieved 96.25% accuracy, correctly flagging 38/40 malicious queries and 39/40 benign ones. These results demonstrate that activation-space safety signals provide more stable and semantically grounded indicators of harmful intent than surface-level refusal heuristics, enabling significantly stronger and more reliable pre-generation safety checks across models.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow

Team created a strong prototype demonstration for Sentinel the shows strong potential for practical defensive deployment of activation-level detection. Results from hackathon and the benefits of this approach are clearly communicated.

The evaluation would benefit from a larger and more diverse dataset, with clearer justification for prompt selection. Demonstrating that accuracy holds across varied jailbreak strategies and confirming that the malicious scenarios reflect real misuse would make the findings more convincing.

Does Sentinel detect harmful responses to benign prompts, or only malicious inputs? Assume this would be answered as team pursues more interpretability.

Connecting more explicitly to real-world threat domains (bio, cyber) might allow for more targeted testing and allow team to build robustness in a narrow lane. It would also be useful to compare performance against existing safety classifiers in those areas (vs. generic model or application) to show added value.

Would like to see a high level deployment plan. What more development does the team anticipate being required to make this tool useable in a few target industries? Would it be more beneficial to pursue Sentinel as described or use it to dig into understanding how probes detect harmful content.

Some references appear incorrect or mismatched to the provided links. Adding a brief review of closely related work and clarifying how Sentinel differs from or builds on those approaches would add credibility.

Cite this work

@misc {

title={

(HckPrj) Robust LLM Neural Activation-Mediated Alignment

},

author={

Anmol Dubey , Blake Brown, Anika Kulkarni, Denise Walsh

},

date={

11/23/25

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

View All

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.