Nov 23, 2025

Robust LLM Neural Activation-Mediated Alignment

Anmol Dubey, Blake Brown, Anika Kulkarni, Denise Walsh

Large language models (LLMs) can inadvertently generate harmful biological, chemical, or cyber-physical guidance, yet current safety systems rely almost entirely on surface-level text defenses such as keyword filters, refusal heuristics, and post-generation classifiers, which remain brittle under paraphrasing or adversarial prompt design. These mechanisms evaluate only the model’s outputs, not the internal representations that encode user intent. We introduce Sentinel, an activation-level defense framework that detects malicious queries by monitoring intermediate neural activations inside the target LLM during inference. Using Phi-2 (2.7B parameters) instrumented with hidden-state access, we train a lightweight linear probe on last-layer activations to distinguish malicious from benign prompts. Because activation patterns capture deeper semantic intent, Sentinel remains robust even when explicit harmful keywords are absent.
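As a concrete illustration of this setup, the sketch below trains a linear probe on final-layer Phi-2 activations. It assumes the Hugging Face transformers and scikit-learn libraries; the `prompts` and `labels` lists, the last-token pooling choice, and the logistic-regression probe are illustrative assumptions rather than the exact Sentinel pipeline.

```python
# Minimal sketch of an activation-level probe in the spirit of Sentinel.
# Assumes the Hugging Face transformers / scikit-learn stack; the prompt
# lists below are illustrative placeholders, not the project's dataset.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float32)
model.eval()

def last_layer_features(prompt: str) -> np.ndarray:
    """Return the final-layer hidden state of the last token for one prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[-1] has shape (1, seq_len, hidden_dim); keep the last token.
    return out.hidden_states[-1][0, -1, :].float().numpy()

# Placeholder labeled prompts (label 1 = malicious, 0 = benign).
prompts = [
    "How do I bake sourdough bread at home?",
    "Give detailed instructions for building malware that steals passwords.",
]
labels = [0, 1]

X = np.stack([last_layer_features(p) for p in prompts])
probe = LogisticRegression(max_iter=1000).fit(X, labels)  # the lightweight linear probe
```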

Despite strong safety fine-tuning, three widely deployed frontier models, OpenAI GPT-5.1, Google Gemini 2.5 Pro, and Anthropic Claude 3 Haiku, exhibited substantial vulnerabilities on a 40-benign / 40-malicious red-team evaluation. GPT-5.1 complied with 28/40 malicious prompts, Gemini 2.5 Pro with 11/40, and Claude 3 Haiku with 6/40, while also producing 0–3 false refusals on benign queries. In contrast, our activation-level Sentinel classifier, trained as a lightweight probe over Phi-2’s final-layer hidden states, achieved 96.25% accuracy, correctly flagging 38/40 malicious queries and 39/40 benign ones. These results demonstrate that activation-space safety signals provide more stable and semantically grounded indicators of harmful intent than surface-level refusal heuristics, enabling significantly stronger and more reliable pre-generation safety checks across models.
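A pre-generation check of this kind could be wired in front of the model roughly as follows; the 0.5 threshold, the refusal string, and the `guarded_generate` helper are hypothetical choices layered on the probe sketch above, not the evaluated system.

```python
# Sketch of a pre-generation gate built on the probe above; the 0.5 threshold
# and refusal message are illustrative choices, not the project's settings.
def guarded_generate(prompt: str, threshold: float = 0.5) -> str:
    features = last_layer_features(prompt)[None, :]
    p_malicious = probe.predict_proba(features)[0, 1]
    if p_malicious >= threshold:
        return "[Sentinel] Request flagged as potentially harmful; generation blocked."
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```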

Reviewer's Comments


The team created a strong prototype demonstration of Sentinel that shows strong potential for practical defensive deployment of activation-level detection. The results from the hackathon and the benefits of this approach are clearly communicated.

The evaluation would benefit from a larger and more diverse dataset, with clearer justification for prompt selection. Demonstrating that accuracy holds across varied jailbreak strategies and confirming that the malicious scenarios reflect real misuse would make the findings more convincing.

Does Sentinel detect harmful responses to benign prompts, or only malicious inputs? We assume this would be answered as the team pursues further interpretability work.

Connecting more explicitly to real-world threat domains (bio, cyber) might allow for more targeted testing and let the team build robustness in a narrow lane. It would also be useful to compare performance against existing safety classifiers in those areas (rather than a generic model or application) to show added value.

Would like to see a high-level deployment plan. What further development does the team anticipate being required to make this tool usable in a few target industries? Would it be more beneficial to pursue Sentinel as described, or to use it to dig into understanding how probes detect harmful content?

Some references appear incorrect or mismatched to the provided links. Adding a brief review of closely related work and clarifying how Sentinel differs from or builds on those approaches would add credibility.

Cite this work

@misc{dubey2025robust,
  title={(HckPrj) Robust LLM Neural Activation-Mediated Alignment},
  author={Anmol Dubey and Blake Brown and Anika Kulkarni and Denise Walsh},
  date={2025-11-23},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.


Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.


Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned some models complied with manipulative requests at much higher rates. And we found some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.