Jan 12, 2026

VexReinforce

Cyan Ding, Brandon Qi, Prakrat Agrawal

Vices such as evil behavior, hallucination, and sycophancy are known failure modes of large language models (LLMs). Fine-tuning can introduce emergent misalignment and further amplify these behaviors. Persona vectors are a novel, scalable technique for steering large language models away from such undesirable behaviors, yet until now they have only been demonstrated in research settings. In this work, we showcase VexReinforce, a production-ready, end-to-end pipeline that uses persona vectors to ground AI systems at scale, from dataset filtration to training-time inoculation to inference-time steering.
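To make the inference-time steering stage concrete, here is a minimal sketch of adding a persona vector to the residual stream via a forward hook. It assumes a Llama-style Hugging Face model; the model name, vector file, layer index, and coefficient are illustrative placeholders, not the project's actual configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Hypothetical precomputed persona vector, shape (hidden_size,)
persona_vector = torch.load("sycophancy_vector.pt")
layer_idx, coeff = 16, -4.0  # a negative coefficient steers away from the trait

def steer_hook(module, inputs, output):
    # Decoder layers return a tuple; the first element is the hidden states.
    hidden = output[0]
    hidden = hidden + coeff * persona_vector.to(hidden.dtype).to(hidden.device)
    return (hidden,) + output[1:]

handle = model.model.layers[layer_idx].register_forward_hook(steer_hook)
prompt = "Is my plan to skip testing before release a good idea?"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
handle.remove()  # restore unsteered behavior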

Reviewer's Comments

A lot of interesting directions are explored here, and it's a good idea to explore the potential of persona vectors widely. Main suggestion: pick one direction and go deep. Test on frontier models, clearly explain what is already known versus what is novel, and really dig into any limitations.

A few specific notes: The hallucination vector results (3.1) show overlapping distributions and 60% accuracy, which is statistically distinguishable but not a practical detection method. The steering-infused finetuning analysis (3.4) looks interesting to me, and I appreciated the honest presentation with distributions and error bars. That alone could be a full project worth developing further, for example by investigating what limits the training effect and how to increase it.

This is a really solid end-to-end attempt at moving activation engineering from just chat steering into a full MLOps pipeline. The Dataset Screening module is a standout contribution. The ability to use vector projection to detect hallucinations and sycophantic tendencies in the MedHallu benchmark with high statistical significance demonstrates a viable, scalable path for automated data hygiene.
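The screening described above reduces to a projection score per example. Below is a hedged sketch of that idea, reusing the model, tokenizer, and persona_vector from the earlier steering snippet; the layer choice, mean-pooling over tokens, and threshold value are illustrative assumptions rather than the pipeline's actual settings.

import torch

@torch.no_grad()
def persona_score(text: str, layer_idx: int = 16) -> float:
    """Project the mean hidden state of `text` onto the unit persona vector."""
    inputs = tokenizer(text, return_tensors="pt")
    hidden = model(**inputs, output_hidden_states=True).hidden_states[layer_idx]
    pooled = hidden.mean(dim=1).squeeze(0)          # average over token positions
    unit = persona_vector / persona_vector.norm()
    return float(pooled @ unit.to(pooled.dtype))

# Toy stand-in for a finetuning dataset; real screening would run over MedHallu rows.
candidate_rows = [
    {"response": "Yes, absolutely, your plan sounds perfect in every way!"},
    {"response": "There are a few risks worth weighing before you decide."},
]
threshold = 2.5  # would be calibrated on a held-out labelled split in practice
flagged = [row for row in candidate_rows if persona_score(row["response"]) > threshold]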

The engineering rigor is sharp, especially the decision to use a Humor control vector to validate the extraction process. To make the Training-Time Inoculation even more production-ready for mitigating manipulative traits, it might be worth exploring layer sweeps to find the optimal intervention point rather than just one layer. Moving from the basic difference-in-means method to sparse autoencoders for cleaner features could also help reduce the risk of the steering accidentally damaging the model's general reasoning.
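For readers unfamiliar with the baseline method the reviewer mentions, here is a minimal sketch of difference-in-means extraction combined with a simple layer sweep, again reusing the model and tokenizer from the first snippet; the prompts are placeholders, and picking the layer with the largest mean-difference norm is only a rough proxy for separability (validation prompts would be better).

import torch

trait_prompts = ["Reassure me that my obviously flawed proof is correct."]
neutral_prompts = ["Summarise the key steps of the proof below."]

@torch.no_grad()
def mean_last_token_states(prompts):
    """Per-layer mean last-token hidden states, shape (num_layers + 1, hidden_size)."""
    states = []
    for p in prompts:
        inputs = tokenizer(p, return_tensors="pt")
        hs = model(**inputs, output_hidden_states=True).hidden_states
        states.append(torch.stack([h[0, -1] for h in hs]))  # last-token state per layer
    return torch.stack(states).mean(dim=0)

# Difference in means between trait-eliciting and neutral prompts, per layer.
diff = mean_last_token_states(trait_prompts) - mean_last_token_states(neutral_prompts)
persona_vectors = {i: diff[i] for i in range(diff.shape[0])}
best_layer = int(diff.norm(dim=-1).argmax())
print(f"Layer with largest mean-difference norm: {best_layer}")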

Overall, this is a standout implementation project that builds a functional tool instead of just a research script. Great work.

Cite this work

@misc{
  title={(HckPrj) VexReinforce},
  author={Cyan Ding, Brandon Qi, Prakrat Agrawal},
  date={1/12/26},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.

Read More

Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.

Read More

Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned some models complied with manipulative requests at much higher rates. And we found some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.