AI-Powered Policymaking: Behavioral Nudges and Democratic Accountability

Michel Chbeir, Jana Dagher

This research explores AI-driven policymaking, behavioral nudges, and democratic accountability, focusing on how governments use AI to shape citizen behavior. It highlights key risks concerning transparency, cognitive security, and manipulation. Through a comparative analysis of the EU AI Act and Singapore’s AI Governance Framework, we assess how different governance models address AI safety and public trust. The study proposes policy solutions such as algorithmic impact assessments, AI safety-by-design principles, and cognitive security standards to ensure that AI-powered policymaking remains transparent, accountable, and aligned with democratic values.

Reviewer's Comments

Andreea Damien

Hi Michel and Jana, I think your project idea is good and relevant, and it could be published as a peer-reviewed article after further polishing. A few recommendations to strengthen your work:

(1) Overall, this is a very comprehensive paper that touches upon many aspects of the subject; in the future, I’d suggest going into more depth in each section. I’m aware you had limited time to achieve this during the hackathon, but I’m mentioning it in case you want to progress your work.

(2) Add citations in text, especially as you’re introducing a lot of definitions/concepts. Citations also help to clearly distinguish between what might be your perspective and that of the others/what is backed up by evidence.

(3) For section 1, a lot of examples were used, which I’d suggest keeping only if you’re going to extend the section and go more in depth.

(4) For section 2, start by explaining why you chose Europe vs. Singapore specifically; this makes it easier for the reader to follow that section. In the same section, it’s great that you added a table and a summary of the comparison.

(5) I’d suggest more integration and smoother transitions between sections. For example, while I understand the comparison, I’d like to see it integrated into section 3 so that the implications flow more naturally from it.

Anna Leshinskaya

A nicely described and motivated introduction to past research on nudging; it raises important concerns and risks around AI-driven/personalized nudging. This is a nicely written overview of the issues in this area. The paper does not present a research project proposal, however.

Bessie O’Dell

Thank you for submitting your work on ‘AI-Powered Policymaking: Behavioral Nudges and Democratic Accountability’ - it was interesting to read. Please see below for some feedback on your project:

1. Strengths of your proposal/ project:

- The abstract (and the paper generally) is clear and well structured - it is immediately clear to me what you are working on, why it is deemed important, and what your project is looking to do.

- Clear definitions are provided from the outset, which aids reader comprehension (e.g., ‘nudge’).

2. Areas for improvement:

- Large sections of the paper are unreferenced, so I would encourage the use of more evidence-based statements.

- Could you back up statements with examples? E.g., ‘AI-enhanced nudging offers unprecedented precision and scalability’ - how? Why?

- Key terms should be defined. E.g., you mention that ‘Transparency thus becomes a non negligible consideration’ (p.2) - how are you defining transparency?

- Overall, an explicitly outlined methodology, more critical analysis (vs. descriptive language), and the insertion of one or more threat models would greatly strengthen this paper.

Cite this work

@misc{chbeir2025aipowered,
  title={AI-Powered Policymaking: Behavioral Nudges and Democratic Accountability},
  author={Michel Chbeir and Jana Dagher},
  date={3/10/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

May 20, 2025

EscalAtion: Assessing Multi-Agent Risks in Military Contexts

Our project investigates the potential risks and implications of integrating multiple autonomous AI agents within national defense strategies, exploring whether these agents tend to escalate or de-escalate conflict situations. Through a simulation that models real-world international relations scenarios, our preliminary results indicate that AI models exhibit a tendency to escalate conflicts, posing a significant threat to maintaining peace and preventing uncontrollable military confrontations. The experiment and subsequent evaluations are designed to reflect established international relations theories and frameworks, aiming to provide a comprehensive and unbiased understanding of autonomous decision-making in military contexts.
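To make the kind of setup described above concrete, here is a minimal, hypothetical sketch of a turn-based escalation simulation. The action ladder, scoring, and random agent policy are placeholder assumptions, not the project's actual implementation; a real experiment would prompt LLM agents with the scenario and history instead.

```python
# Hypothetical sketch of a turn-based escalation simulation.
import random

# Placeholder action ladder, ordered from de-escalatory to escalatory.
ACTIONS = ["negotiate", "sanction", "mobilize", "blockade", "strike"]
ESCALATION_SCORE = {a: i for i, a in enumerate(ACTIONS)}  # 0 (calm) .. 4 (war)

def choose_action(agent_name: str, history: list[str]) -> str:
    """Stand-in for querying an LLM agent for its next move.
    A real run would prompt a model with the scenario and the history."""
    return random.choice(ACTIONS)

def run_episode(agents: list[str], turns: int = 10) -> list[int]:
    """Run one scenario and record a per-turn escalation index
    (here: the max escalation score across the agents' chosen actions)."""
    history: list[str] = []
    trajectory = []
    for _ in range(turns):
        moves = [choose_action(a, history) for a in agents]
        history.extend(moves)
        trajectory.append(max(ESCALATION_SCORE[m] for m in moves))
    return trajectory

if __name__ == "__main__":
    print("escalation trajectory:", run_episode(["nation_A", "nation_B"]))
    # A tendency to escalate would show up as an upward drift across turns.
```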


Apr 28, 2025

The Early Economic Impacts of Transformative AI: A Focus on Temporal Coherence

We investigate the economic potential of Transformative AI, focusing on "temporal coherence"—the ability to maintain goal-directed behavior over time—as a critical yet underexplored factor in task automation. We argue that temporal coherence represents a significant bottleneck distinct from computational complexity. Using a Large Language Model to estimate the 'effective time' (a proxy for temporal coherence) needed for humans to complete remote O*NET tasks, the study reveals a non-linear link between AI coherence and automation potential. A key finding is that an 8-hour coherence capability could potentially automate around 80-84% of the analyzed remote tasks.
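As a rough illustration of the arithmetic implied above, the sketch below computes the share of tasks whose estimated effective time fits within a given coherence horizon. The per-task hours are made-up placeholders, not the paper's data or its estimation pipeline.

```python
# Illustrative automation-share calculation (assumed reconstruction).

# Hypothetical per-task "effective time" estimates in hours, as an LLM
# might assign them to remote O*NET tasks.
effective_hours = [0.25, 0.5, 1.0, 2.0, 3.0, 6.0, 8.0, 12.0, 24.0, 40.0]

def automatable_share(task_hours: list[float], coherence_hours: float) -> float:
    """Fraction of tasks whose effective time fits within the model's
    coherence horizon, i.e. tasks it could plausibly automate."""
    return sum(h <= coherence_hours for h in task_hours) / len(task_hours)

for horizon in (1, 4, 8):
    share = automatable_share(effective_hours, horizon)
    print(f"{horizon}h coherence -> {share:.0%} of tasks")
# The non-linear link reported in the abstract would appear as uneven
# jumps in this share as the coherence horizon grows.
```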


Mar 31, 2025

Model Models: Simulating a Trusted Monitor

We offer initial investigations into whether the untrusted model can 'simulate' the trusted monitor: is U able to successfully guess what suspicion score T will assign in the APPS setting? We also offer a clean, modular codebase which we hope can be used to streamline future research into this question.
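The core question can be phrased as a simple comparison: how closely do U's guessed scores track the scores T actually assigns? The sketch below illustrates that comparison with made-up numbers and an assumed 0-10 suspicion scale; it is not the project's codebase.

```python
# Hedged sketch: can the untrusted model U predict the suspicion score
# the trusted monitor T will assign? (Scores and scale are hypothetical.)
from statistics import correlation  # available in Python 3.10+

t_scores = [1.0, 7.5, 3.0, 9.0, 2.5]   # what T actually assigned
u_guesses = [2.0, 6.0, 3.5, 8.5, 2.0]  # what U predicted T would assign

# If U can simulate T, its guesses should track T's scores closely.
mae = sum(abs(t - u) for t, u in zip(t_scores, u_guesses)) / len(t_scores)
print("mean abs error:", mae)
print("correlation:", round(correlation(t_scores, u_guesses), 3))
```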


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.