Nov 2, 2025

Modeling the political process to forecast the outcomes of hypothetical AI governance proposals

Linh Le, David Williams-King

International cooperation on AI governance faces a fundamental trust problem: countries like the US and China struggle to assess whether proposed agreements would actually be implemented by their counterpart's domestic political system. This uncertainty undermines the credibility of commitments and hinders the establishment of safety regulations needed to prevent catastrophic AI risks. We address this challenge by developing a system that predicts whether AI safety legislation would gain support within a country's government, enabling both domestic policymakers and international partners to evaluate the political feasibility of proposed regulations. Our approach uses large language models to generate interpretable yes/no questions about legislative bills, then learns legislator-specific perspective representations that capture individual voting patterns on AI policy. We collect and analyze voting records from the 118th and 119th U.S. Congresses (2024-2025), identifying 146 AI safety-related bills. Our model significantly outperforms baseline approaches in forecasting senatorial votes on AI legislation. Additionally, we develop a suite of hypothetical AI governance policies ranging from strict to permissive, using our model to identify political feasibility thresholds: the boundaries between policies likely to pass and those likely to fail. This work provides a concrete tool for improving transparency and trust in international AI governance negotiations.
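For concreteness, here is a minimal sketch of the pipeline the abstract describes, written as an editorial illustration rather than the authors' implementation: the embedding dimension, the logistic scoring rule, and the majority-vote feasibility check are all assumptions.

import numpy as np

class VotePredictor:
    """Toy version of the approach: a bill is encoded as a 0/1 vector of LLM
    answers to interpretable yes/no questions (e.g. "Does the bill create a
    new federal AI oversight body?"), and each senator gets a learned
    perspective embedding; P(yea) = sigmoid(<bill latent, senator embedding>)."""

    def __init__(self, n_questions, n_senators, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_questions, dim))  # question weights
        self.S = rng.normal(scale=0.1, size=(n_senators, dim))   # senator embeddings

    def predict_proba(self, answers, senator_id):
        latent = answers @ self.W                   # bill's position in latent space
        logit = float(latent @ self.S[senator_id])  # alignment with this senator
        return 1.0 / (1.0 + np.exp(-logit))

def chamber_feasibility(model, answers, senator_ids):
    # A hypothetical policy "passes" this toy check if a majority of senators
    # are predicted to support it; sweeping policies from strict to permissive
    # and watching where this flips locates a feasibility threshold.
    yeas = sum(model.predict_proba(answers, s) > 0.5 for s in senator_ids)
    return yeas > len(senator_ids) / 2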

Reviewer's Comments


A novel and exciting approach - I’d be excited to see you work on this more!

However, I’m quite confused by how the model's accuracy is measured. As I understand it, the current setup works like this:

- When training, the model learns from sponsorship patterns.

- In testing, you ask whether a senator who is mentioned in the bill would support it. This feels odd to me - doesn’t the fact that a senator is mentioned in the bill mean that they already support it?

It would be beneficial to hold out entire bills from training and then test on those, so that the reported accuracy reflects generalization to unseen bills.
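One way to implement the reviewer's suggested bill-level holdout, sketched under the assumption that each vote is a record with bill_id, senator_id, and vote fields:

import random

def split_by_bill(records, test_frac=0.2, seed=0):
    # Hold out entire bills: every vote on a held-out bill is excluded from
    # training, so test accuracy measures generalization to unseen bills.
    bill_ids = sorted({r["bill_id"] for r in records})
    random.Random(seed).shuffle(bill_ids)
    held_out = set(bill_ids[:max(1, int(len(bill_ids) * test_frac))])
    train = [r for r in records if r["bill_id"] not in held_out]
    test = [r for r in records if r["bill_id"] in held_out]
    return train, test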

Currently, there’s no validation for the main use case: you generate predictions for the graded hypothetical policies, but there is no ground truth for them.

I also think the baseline comparison is flawed - gpt-oss is not a state-of-the-art AI forecasting tool (you could have compared to, for example, https://safe.ai/blog/forecasting), and its training cutoff date makes the comparison quite unfair.

Cite this work

@misc{
  title={(HckPrj) Modeling the political process to forecast the outcomes of hypothetical AI governance proposals},
  author={Linh Le and David Williams-King},
  date={2025-11-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.