Jan 11, 2026

SUP: Sycophancy Under Pressure

Tasha Kim, Min Jae Kim, Taio Kim

Sycophancy Under Pressure (SUP) is a trajectory-level evaluation framework for detecting policy-compliant manipulation in multi-turn AI interactions. In contrast to single-turn tests, SUP captures compounding drift, where models increasingly validate false or autonomy-undermining user premises under pressure. We demonstrate that targeted runtime enforcement reduces agreement drift by 60% across 49 multi-turn scenarios and prevents turn-by-turn escalation while maintaining 25/25 task success.
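For intuition, here is a minimal Python sketch of how trajectory-level agreement drift might be scored and gated at runtime. This is an illustrative sketch only: the names (Turn, agreement_drift, enforce_runtime_guard) and the 0.6 threshold are hypothetical assumptions, not the SUP codebase's actual API.

from dataclasses import dataclass

@dataclass
class Turn:
    user_msg: str
    model_msg: str
    agreement: float  # judge-assigned score in [0, 1]: how strongly the
                      # model validates the user's (false) premise this turn

def agreement_drift(trajectory: list[Turn]) -> float:
    """Compounding drift: change in agreement from the first turn to the last."""
    if len(trajectory) < 2:
        return 0.0
    return trajectory[-1].agreement - trajectory[0].agreement

def enforce_runtime_guard(trajectory: list[Turn], threshold: float = 0.6) -> bool:
    """Targeted runtime enforcement: flag a trajectory as soon as any turn's
    agreement score escalates past the threshold, before drift compounds."""
    return any(turn.agreement > threshold for turn in trajectory)

The design choice this sketch highlights is that the guard operates per turn rather than on the final answer, which is what lets it interrupt turn-by-turn escalation instead of merely detecting it after the fact.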

Reviewer's Comments

I think this project offers timely experiments that naturally extend findings from red-teaming and jailbreaking research. There is strong evidence from red teaming that multi-turn jailbreaks and sustained pressure can push models to output dangerous content where single-turn attempts fail completely, so it is interesting to see those dynamics explored from a manipulation standpoint.

Beyond the topic, I also like the presentation and organization of this project. The main paper is succinct and easy to follow, while the appendix contains relevant information that I was interested to see. Similarly, the codebase is well organized.

One thing I would flag is that the existing experiment suites use fewer than 50 pre-canned user responses. This could create issues if the pre-canned user prompts do not align with the model's responses. The relatively small number of scenarios also led to a high p-value of 0.38. While a fully automated suite would bring its own issues, a more flexible or scalable design could have been helpful; alternatively, a more rigorous justification of this experimental design would be key if you were turning this into an actual paper. Similarly, as the report itself flags, expanding to more modern models could provide important insights.

Overall, great work on creating such a polished and timely output in such a short time.

Promising work. The paper needs more detail on the runtime enforcement and the methodology. I would like to see the results on other models to see how they compare. Overall, very solid for a hackathon; worth pursuing post-hackathon.

Cite this work

@misc{kim2026sup,
  title={(HckPrj) SUP: Sycophancy Under Pressure},
  author={Tasha Kim and Min Jae Kim and Taio Kim},
  date={2026-01-11},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.