Sep 14, 2025

Foundation

Eduard Kapelko

This document presents the conceptual framework for establishing an organization named the "Foundation," whose primary goal is to mitigate the existential and systemic risks associated with advanced artificial intelligence. The text outlines a structure designed to ensure the safe and collaborative international development of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), thereby preventing threats such as the realization of the "Vulnerable World Hypothesis," the gradual disempowerment of humanity, and a destabilizing AI arms race. To achieve these goals, a governance model is proposed based on the principles of a narrow mandate, radical financial transparency, rotation of power, and the use of liquid democracy tools in a DAO format. A key security element is a multi-layered internal access control system, consisting of independent teams, which ensures a comprehensive audit of the models and the audit process itself. The document also describes an economic model that transforms the competitive race into a collaborative effort by pooling resources and creating a network effect that incentivizes participants to join the Foundation.
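
To make the "liquid democracy tools in a DAO format" idea concrete, the following minimal Python sketch shows one way delegated voting could be resolved and tallied. It is an illustrative assumption, not part of the original proposal; the member names and the helper functions (resolve_vote, tally) are hypothetical.

```python
# Minimal sketch of liquid-democracy vote counting in a DAO (hypothetical):
# each member either casts a direct vote or delegates their voting power to
# another member, and delegation chains are followed until a direct vote
# (or a cycle / dead end) is reached.

from typing import Dict, List, Optional

def resolve_vote(member: str,
                 direct_votes: Dict[str, str],
                 delegations: Dict[str, str]) -> Optional[str]:
    """Follow a member's delegation chain and return the final vote, if any."""
    seen = set()
    current = member
    while current not in direct_votes:
        if current in seen or current not in delegations:
            return None  # cycle or missing delegate: this vote is not counted
        seen.add(current)
        current = delegations[current]
    return direct_votes[current]

def tally(members: List[str],
          direct_votes: Dict[str, str],
          delegations: Dict[str, str]) -> Dict[str, int]:
    """Count one unit of voting power per member, resolved through delegation."""
    counts: Dict[str, int] = {}
    for m in members:
        choice = resolve_vote(m, direct_votes, delegations)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# Example: alice and bob vote directly; carol delegates her vote to alice.
members = ["alice", "bob", "carol"]
direct_votes = {"alice": "approve", "bob": "reject"}
delegations = {"carol": "alice"}
print(tally(members, direct_votes, delegations))  # {'approve': 2, 'reject': 1}
```

In a real DAO this resolution would run on-chain against recorded delegations, but the core mechanism of following delegation chains to a direct vote is the same.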

Reviewer's Comments


Key strengths of the approach

1) Well-structured and well fleshed out overall (although a fair bit longer than the 5-page limit!)

2) Strong thinking on the problems and limitations of Red Team and Purple Team models.

3) Clear effort to design transparent structures, set proper objectives, and define roles for each team.

4) Good attention to structural control and incentivization mechanisms, which shows thoughtful design.

Specific areas for improvement

1) The project does not directly engage with CBRN risks or explain how it addresses a specific CBRN threat. It also risks being perceived as "just another body to govern AI," echoing the mythical idea of a single international AI body.

2) The proposal misses an upfront framing of how the DAO could specifically mitigate CBRN risks and how this adds value compared to existing paradigms.

Suggestions for how to develop this into a stronger project

Reframe the DAO explicitly around CBRN functions. For example, highlight that one of its functions could be to mitigate CBRN risks, and provide specificity: what does DAO-based governance look like when facing a biothreat, a radiological incident, or a chemical accident?

Potential next steps or future directions

1) Develop an example case (e.g., the DAO's role during an accidental pathogen release or radiological material smuggling).

2) Explore integration with existing international CBRN regimes (e.g., OPCW, IAEA) rather than aiming to reinvent a broad global AI body, which is very tricky. If taking a bottom-up approach, try to show why AI companies would have an incentive to back or take part in it (for example, something like the Frontier Model Forum).

Cite this work

@misc{kapelko2025foundation,
  title={(HckPrj) Foundation},
  author={Eduard Kapelko},
  date={9/14/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.