Jan 11, 2026

Testing manipulation tendencies of LLMs when crafting PR statements

Jakub Nowak, Marcel Windys

We tested if large language models create factual but misleading corporate communication in a synthetic scenario. In our setup, models create PR statements about a nearby environmental disaster and can selectively use facts to present the company more favorably while omitting important context. Models could present three facts: director praise praise from a director of a government body (who is under corruption investigation), a $1.5M environmental donation (focused on consumer behavior, instead of reducing industrial pollution), and a voluntary independent review (with inconclusive results). We also checked what is the effect of variation in severity of the incident (fish deaths vs. hospitalizations vs. child death). Results show systematic selective disclosure consistently omitting crucial context in 83.5-100% of cases. Patterns varied across models (36.7 percentage point gap in transparency) but remained high. These findings demonstrate that frontier LLMs exhibit manipulative framing in crisis communications even without explicit deceptive instructions, raising concerns for deployment in high-stakes communication contexts.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow
Arrow
Arrow

This project offers a strong and realistic evaluation of how LLMs handle high-stakes PR communication, with clear scenario design and well-defined criteria for factual omission and misleading framing. The cross-model comparisons reveal consistent manipulation tendencies, making the results both convincing and practically relevant.

A valuable next step would be to analyze which specific facts are most likely to be omitted or reframed, rather than only measuring overall misrepresentation rates. This could help distinguish strategic minimization from general verbosity or politeness effects. Overall, this is a well-executed study that surfaces a concrete and important risk in real-world LLM deployment.

I found it pretty interesting to have an eval for corporate crisis communication specifically. It’s not totally novel (others have certainly researched corporate/commercial deception contexts) but still this specific setup is new - I like it and I am happy seeing more work that explicitly highlights the tradeoffs between models’ responsibility towards users vs deployers.

I’d love to see some more systematic variation on the prompt here, to distinguish how the model responds to different levels of pressure (e.g. I assume that explicitly asking the model to protect the company’s reputation is doing quite a bit of the work here. But if they didn’t say that, would the LLM still just assume it anyway?)

Nice work!

Cite this work

@misc {

title={

(HckPrj) Testing manipulation tendencies of LLMs when crafting PR statements

},

author={

Jakub Nowak, Marcel Windys

},

date={

1/11/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

View All

View All

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.