Mar 10, 2025

Red-teaming with Mech-Interpretability

Devina Jain

🏆 Mechanistic Interpretability Prize

Red teaming large language models (LLMs) is crucial for identifying vulnerabilities before deployment, yet systematically creating effective adversarial prompts remains challenging. This project introduces a novel approach that leverages mechanistic interpretability to enhance red teaming efficiency.

We developed a system that analyzes prompt effectiveness using neural activation patterns from the Goodfire API. By scraping 1,034 successful jailbreak attempts from JailbreakBench and combining them with 2,000 benign interactions from UltraChat, we created a balanced dataset of harmful and helpful prompts. This allowed us to train a 3-layer MLP classifier that identifies "high entropy" prompts—those most likely to elicit unsafe model behaviors.
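As a rough illustration of this pipeline, the sketch below trains a 3-layer MLP of the kind described above. It assumes activation features have already been extracted per prompt (e.g., via the Goodfire API, whose call is left abstract here); FEATURE_DIM, the layer widths, and the training loop are illustrative placeholders, not the project's actual configuration.

    import torch
    import torch.nn as nn

    FEATURE_DIM = 512  # assumed width of the per-prompt activation-feature vector

    class JailbreakClassifier(nn.Module):
        """3-layer MLP mapping activation features to a jailbreak logit."""
        def __init__(self, dim: int = FEATURE_DIM):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, 256), nn.ReLU(),
                nn.Linear(256, 64), nn.ReLU(),
                nn.Linear(64, 1),  # raw logit; apply sigmoid for a probability
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x).squeeze(-1)

    def train(model, features, labels, epochs=20, lr=1e-3):
        """features: (n, FEATURE_DIM) tensor; labels: 1.0 = jailbreak, 0.0 = benign."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.BCEWithLogitsLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            opt.step()
        return model

With the roughly 3,000 labeled prompts described above, "features" would be the stacked activation vectors and "labels" the jailbreak/benign split.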

Our dashboard provides red teamers with real-time feedback on prompt effectiveness, highlighting specific neural activations that correlate with successful attacks. This interpretability-driven approach offers two key advantages: (1) it enables targeted refinement of prompts based on activation patterns rather than trial-and-error, and (2) it provides quantitative metrics for evaluating potential vulnerabilities.
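One way the dashboard's per-prompt feedback could be computed is sketched below: the trained classifier scores a candidate prompt's activation features, and a simple gradient-times-input attribution ranks which features push the score toward "jailbreak". The attribution method and the top_k interface are assumptions made for illustration, not necessarily what the project implements.

    import torch

    def score_prompt(model, features: torch.Tensor, top_k: int = 5):
        """Return P(jailbreak) and the indices of the top-k contributing features.

        `model` is a trained JailbreakClassifier (see the sketch above);
        `features` is a (FEATURE_DIM,) activation vector for one prompt.
        """
        features = features.clone().requires_grad_(True)
        logit = model(features)
        logit.backward()
        # gradient * input: a crude saliency over activation features
        attribution = (features.grad * features).detach()
        top = torch.topk(attribution, top_k)
        return torch.sigmoid(logit).item(), top.indices.tolist()

A red teamer could then iterate on a prompt, watching both the probability and which activation features drive it, rather than relying on trial-and-error alone.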

Reviewer's Comments


This project makes a valuable contribution to AI safety by providing a systematic and interpretable approach to red teaming. Combining mechanistic interpretability with red teaming is a great step forward in understanding and mitigating vulnerabilities in LLMs, and the use of neural activation patterns to understand and predict jailbreak attempts is innovative while also offering an accessible view of potential failure modes. The literature review is comprehensive, referencing key works in both red teaming and mechanistic interpretability, and the interactive dashboard is a big plus. The methodology is well documented, with a clear explanation of the dataset construction, feature extraction, and classifier development, although a more thorough error analysis and discussion of edge cases would strengthen it. Excellent work! It would also be good to provide a more explicit threat model, particularly in terms of how the identified vulnerabilities could be exploited in real-world scenarios; delving a bit more into the adversarial robustness literature could help here.

This is an impressive amount of work carried out over a weekend - well done! I think bridging red-teaming and mech interp is an innovative application of the two technologies, especially when presented in an easy-to-use dashboard. Moving forward, I would recommend carrying out a thorough literature review of mech interp and red-teaming research to better understand existing work in the field. This project is undoubtedly useful in the AI safety space, but I'd suggest considering safeguards against potential misuse - any vulnerability detection tooling is dual use and can be exploited by malicious actors - how would this be addressed?

Cite this work

@misc{jain2025redteaming,
  title={Red-teaming with Mech-Interpretability},
  author={Devina Jain},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.