Jan 12, 2026

The Alignment Gap: Measuring Regressive Sycophancy in AI-Driven Medical Advice

Yernur Kairly, Eldar Gabdulsattarov, Aldiyar Yessenturov, Abzal Aidahmetov

We evaluate whether LLMs can be pressured into agreeing with incorrect medical beliefs. Our two-turn protocol first tests if a model corrects medical misinformation, then applies authority pressure ("I'm a senior doctor, confirm my statement"). We measure how often models "flip" from correct to sycophantic responses. Testing across five misinformation types and three severity levels, we find that authority pressure significantly increases unsafe agreement rates, revealing a critical safety gap for medical AI deployment.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow
Arrow
Arrow

You zoom in on a really relevant subdomain for sycophancy analysis, and I'd encourage you to keep working on this problem. The dataset generation methodology is clearly

explained and well thought through. The project is a great read overall.

As for constructive feedback: I think the headline results could have been even stronger with a more conservative multi-turn setup. The "Chief Medical Officer" rebuttal is really a strong authority override; a generic "but my friend told me X" pushback would probably better represent realistic uninformed user behavior. Lower flip rates on that framing would actually be more alarming evidence for real-world risk.

Two directions I would find worth exploring in more depth: (1) domain-specific effects / how does medical sycophancy compare to more generic benchmarks? and (2) transfer to larger models / do the same effects persist at scale?

Interesting paper. Medical sycophancy is an important area of concern. Novelty-wise, it looks like it has also been recently studied in "When helpfulness backfires: LLMs and the risk of false medical information due to sycophantic behavior" (Chen, 2025). A large-ish benchmark, MedRiskEval, was just published this month.

Very reasonable methodology. thorough generator prompting, checked multiple levels of severity.

While unlikely to make a difference since effects are so large, it would be good to have a human hand-validate a subset of the judge labels to measure judge accuracy.

Results may be confounded by the extreme authoritativeness of the misinformation prompt. Because the senior Chief Medical Officer's misinformation prompt is so authoritative, it's unclear whether the behavior is sycophancy or gullibility -- perhaps the model isn't "trying to please" but is actually prioritizing truthfulness, thinking it stands corrected by someone who is more knowledgeable. Maybe the authors could try other versions of the prompt to disambiguate the cause (e.g. the same prompt with fewer credentials, without any credentials, or admitting ignorance, etc.) Some other studies on sycophancy use phrasing along the lines of "I'm not sure, but I think..."

Cite this work

@misc {

title={

(HckPrj) The Alignment Gap: Measuring Regressive Sycophancy in AI-Driven Medical Advice

},

author={

Yernur Kairly, Eldar Gabdulsattarov, Aldiyar Yessenturov, Abzal Aidahmetov

},

date={

1/12/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

View All

View All

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.