Jul 28, 2025

Idempotent GPTs actually may provide robustness by design

Jamilya Erkenova, Sergei Kudriashov

Idempotence is one of the central concepts in quantum physics, corresponding to an operator that, applied twice, yields the same result as applying it once. Enforcing idempotence in generative deep learning may be interpreted as constraining the model to be a projector onto the manifold corresponding to the train-time target distribution, which was explored for image generation models by Shocher et al. (2023). Idempotent test-time training has been predicted to be a valuable approach for uncertainty quantification and adaptation to distribution shifts (Durasov et al., 2025). We find that although language models are iterative refiners of token predictions, they struggle to preserve idempotence. We therefore train a small-scale idempotent GPT model with the expected qualities by design and provide proof-of-concept code for evaluations. Further development of the project may yield more robust and adaptable models and lower the probability of catastrophic risks.
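To make the idempotence constraint concrete, here is a minimal sketch of how such a penalty could be attached to a GPT-style language model. This is an illustration under assumed HuggingFace-style interfaces (`inputs_embeds`, `get_input_embeddings`, `.logits`), not necessarily the objective used by the authors; re-embedding the output distributions as soft token mixtures is just one possible way to define f(f(x)) over discrete tokens.

```python
import torch.nn.functional as F

def soft_embed(probs, embedding):
    # Probability-weighted mixture of token embeddings: (B, T, V) @ (V, D) -> (B, T, D)
    return probs @ embedding.weight

def apply_f(model, inputs_embeds):
    # One application of f: embeddings in, per-position next-token distributions out.
    logits = model(inputs_embeds=inputs_embeds).logits
    return F.softmax(logits, dim=-1)

def idempotence_loss(model, input_ids):
    emb = model.get_input_embeddings()
    x = emb(input_ids)                          # embed the clean input once
    p1 = apply_f(model, x)                      # f(x)
    p2 = apply_f(model, soft_embed(p1, emb))    # f(f(x)): re-embed f's output softly
    # Penalise drift between the first and second application of f, i.e. KL(f(x) || f(f(x))).
    return F.kl_div(p2.clamp_min(1e-9).log(), p1, reduction="batchmean")

# During training this term would be added to the usual LM objective, e.g.
# total_loss = lm_loss + lambda_idem * idempotence_loss(model, input_ids)
```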

Reviewer's Comments

This project explores idempotent training methods for neural networks, which is an interesting approach that, like energy-based models, may be amenable to physics-inspired analyses. I would have liked to see more specific connections drawn between the methods employed, techniques from physics, and AI safety challenges.

A solid exploration that effectively bridges quantum physics concepts (idempotence in measurements and channels) with AI safety problems such as robustness under distribution shifts. The proof-of-concept code and empirical checks on popular LLMs provide concrete evidence, and the focus on low-probability estimation aligns well with emerging safety concerns. I think the idea is interesting and connects to the manifold hypothesis. However, there are methodological issues: the experimental setups are basic and lack depth; e.g., the first setup uses only 100 TriviaQA samples without statistical significance testing or controls for model size/architecture variations. The trained model's evaluation is incomplete (the authors note insufficient time for full testing), and feasibility of scaling to larger models is mentioned but not analyzed (e.g., no discussion of computational costs or convergence guarantees beyond ideal assumptions). Overall, it is feasible as a proof-of-concept but lacks important technical details and statistical significance, which makes it hard to convince the reader that the results would still hold for larger datasets and models.

This submission was obviously rushed, so it's unclear what the original intentions were. Perhaps the goal of the project could be summarized as 'we enforce f(f(x)) = f(x) so the network projects any input back onto the data manifold'. Extending past results in this direction from images to text is a reasonable project for a hackathon, but there are major gaps in the work as presented. First, the function f doesn't seem to be specified; how is idempotence being measured? It seems the model should only be idempotent if it already 'knows' the answer, but this also isn't clear. This affects the safety implications, which are poorly motivated. Some buzzwords are thrown in ('uncertainty estimation', 'adversarial robustness'), but these are never really explained. The experiments are also presented without context or explanation, making it hard to judge the technical soundness of the project.
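For readers wondering what a measurement of idempotence could look like in practice, here is one illustrative sketch (not the authors' protocol): apply a causal LM once, feed its own greedy predictions back in as input, and check how far the new predictions drift from the old ones. The choice of GPT-2, the greedy re-feeding step, and the KL/agreement metrics are all assumptions made for this example.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def idempotence_drift(text):
    ids = tok(text, return_tensors="pt").input_ids
    p1 = F.softmax(model(ids).logits, dim=-1)    # f(x): predictions on the original text
    preds = p1.argmax(dim=-1)                    # the model's own greedy output
    p2 = F.softmax(model(preds).logits, dim=-1)  # f(f(x)): apply f to its own output
    kl = F.kl_div(p2.clamp_min(1e-9).log(), p1, reduction="batchmean")
    fixed_point = (p2.argmax(dim=-1) == preds).float().mean()  # does f keep its own output?
    return kl.item(), fixed_point.item()

print(idempotence_drift("The capital of France is Paris."))
```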

Cite this work

@misc{
  title={(HckPrj) Idempotent GPTs actually may provide robustness by design},
  author={Jamilya Erkenova and Sergei Kudriashov},
  date={7/28/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.


Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.


Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned that some models complied with manipulative requests at much higher rates, and we found that some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.