Jan 12, 2026

FV-Deception

Lee Wall

I ran a set of experiments to see if frontier LLM's could intentionally generate formal verification specs that slightly diverge from the natural-language semantic intent, and whether other models could reliably catch this divergence.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow
Arrow
Arrow

Targeting deception in formal specifications is a valuable and neglected direction. The finding that Opus catches fewer of its own bugs than Sonnet is interesting, and I'd like to see whether this replicates with a larger attacker-defender capability matrix. Including concrete examples of deceptive specifications would also help readers interpret what's actually happening.

I am not familiar with the literate on FV, but from what I (+Claude) can tell after digging around there is very little existing work done on this specific problem, and it seems like very important work for the FV field to me. For me this was in shot for a top score on impact if the conclusions were more explicit - what is the so what? What does this really mean and what do we now need to do? It seems like you are very thoughtful about this space, and I was eager to see more of your thinking and takes!

The execution had room for improvement. The sample size is quite small. The methodology mentions 9 combinations of model attacker/defenders, but only the data and conclusions on using Opus as an attacker was discussed. No analysis of successful / unsuccessful attempts at attacks. I found it hard to really have faith in the (important!) conclusions with these limitations.

The things that the report communicates are all clean. You know how to write clearly. Well written and argued. I would have liked to see more detail and coverage of the data and experiments that were done.

I hope you continue with this area of work.

Cite this work

@misc {

title={

(HckPrj) FV-Deception

},

author={

Lee Wall

},

date={

1/12/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.

Read More

Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.

Read More

Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned some models complied with manipulative requests at much higher rates. And we found some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.