APART RESEARCH

Impactful AI safety research

Explore our projects, publications and pilot experiments

Our Approach

Our research focuses on critical paradigms in AI Safety, producing foundational work that enables the safe and beneficial development of advanced AI.

Safe AI

Publishing rigorous empirical work for safe AI: evaluations, interpretability and more

Novel Approaches

Our research is underpinned by novel approaches focused on neglected topics

Pilot Experiments

Apart Sprints have kickstarted hundreds of pilot experiments in AI Safety

Apart Sprint Pilot Experiments

Apr 15, 2025

CalSandbox and the CLEAR AI Act: A State-Led Vision for Responsible AI Governance

The CLEAR AI Act proposes a balanced, forward-looking framework for AI governance in California that protects public safety without stifling innovation. Centered around CalSandbox, a supervised testbed for high-risk AI experimentation, and a Responsible AI Incentive Program, the Act empowers developers to build ethical systems through both regulatory support and positive reinforcement. Drawing from international models like the EU AI Act and sandbox initiatives in Singapore and the UK, CLEAR AI also integrates public transparency and equitable infrastructure access to ensure inclusive, accountable innovation. Together, these provisions position California as a global leader in safe, values-driven AI development.

Read More

Apr 15, 2025

California Law for Ethical and Accountable Regulation of Artificial Intelligence (CLEAR AI) Act

The CLEAR AI Act proposes a balanced, forward-looking framework for AI governance in California that protects public safety without stifling innovation. Centered around CalSandbox, a supervised testbed for high-risk AI experimentation, and a Responsible AI Incentive Program, the Act empowers developers to build ethical systems through both regulatory support and positive reinforcement. Drawing from international models like the EU AI Act and sandbox initiatives in Singapore and the UK, CLEAR AI also integrates public transparency and equitable infrastructure access to ensure inclusive, accountable innovation. Together, these provisions position California as a global leader in safe, values-driven AI development.

Read More

Apr 15, 2025

Recommendation to Establish the California AI Accountability and Redress Act

We recommend that California enact the California AI Accountability and Redress Act (CAARA) to address emerging harms from the widespread deployment of generative AI, particularly large language models (LLMs), in public and commercial systems.

This policy would create the California AI Incident and Risk Registry (CAIRR) under the California Department of Technology to transparently track post-deployment failures. CAIRR would classify model incidents by severity, triggering remedies when a system meets the criteria for an "Unacceptable Failure" (e.g., one Critical incident or three unresolved Moderate incidents).

Critically, CAARA also protects developers and deployers by:

Setting clear thresholds for liability;

Requiring users to prove demonstrable harm and document reasonable safeguards (e.g., disclosures, context-sensitive filtering); and

Establishing a safe harbor for those who self-report and remediate in good faith.

CAARA reduces unchecked harm, limits arbitrary liability, and affirms California’s leadership in AI accountability.
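
To make the registry's trigger concrete, the following minimal Python sketch encodes the "Unacceptable Failure" rule described above: one Critical incident, or three unresolved Moderate incidents. All identifiers here are hypothetical illustrations, not part of CAARA, CAIRR, or any actual statutory text.

```python
# Illustrative sketch only: the names Severity, Incident, and
# is_unacceptable_failure are hypothetical, not drawn from CAARA/CAIRR.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    MODERATE = "moderate"
    CRITICAL = "critical"


@dataclass
class Incident:
    severity: Severity
    resolved: bool = False


def is_unacceptable_failure(incidents: list[Incident]) -> bool:
    """Apply the stated rule: >= 1 Critical, or >= 3 unresolved Moderate."""
    criticals = sum(1 for i in incidents if i.severity is Severity.CRITICAL)
    open_moderates = sum(
        1 for i in incidents
        if i.severity is Severity.MODERATE and not i.resolved
    )
    return criticals >= 1 or open_moderates >= 3


# Example: two unresolved Moderate incidents do not trigger remedies...
assert not is_unacceptable_failure(
    [Incident(Severity.MODERATE), Incident(Severity.MODERATE)]
)
# ...but a single Critical incident does.
assert is_unacceptable_failure([Incident(Severity.CRITICAL)])
```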

Read More

Apr 15, 2025

Round 1 Submission

Our round one submission for the UC Berkeley AI Policy Hackathon

Read More

Apr 14, 2025

FlexHEG Devices to Enable Implementation of AI IntelSat

The project proposes using Flexible Hardware Enabled Governance (FlexHEG) devices to support the IntelSat model for international AI governance. This framework aims to balance technological progress with responsible development by implementing tamper-evident hardware systems that enable reliable monitoring and verification among participating members. Two key policy approaches are outlined: 1) Treasury Department tax credits to incentivize companies to adopt FlexHEG-compliant hardware, and 2) NSF technical assistance grants to help smaller organizations implement these systems, preventing barriers to market entry while ensuring broad participation in the governance framework. The proposal builds on the successful Intelsat model (1964–2001), which balanced US leadership with international participation through weighted voting.

Read More

Apr 14, 2025

Smart Governance, Safer Innovation: A California AI Sandbox with Guardrails

California is home to global AI innovation, yet it also leads in regulatory experimentation with landmark bills like SB-1047 and SB-53. These laws establish a rigorous compliance regime for developers of powerful frontier models, including mandates for shutdown mechanisms, third-party audits, and whistle-blower protections. While crucial for public safety, these measures may unintentionally sideline academic researchers and startups that cannot meet such thresholds.

To reconcile innovation with public safety, we propose the California AI Sandbox: a public-private testbed where vetted developers can trial AI systems under tailored regulatory conditions. The sandbox will be aligned with CalCompute infrastructure and enforce key safeguards, including third-party audits and SB-53-inspired whistle-blower protections, to ensure that flexibility does not come at the cost of safety or ethics.

Read More

Apr 14, 2025

FlexHEG Devices to Enable Implementation of AI IntelSat

The FlexHEG policy proposal combines Treasury Department tax credits and NSF technical assistance grants to create the hardware-enabled verification infrastructure necessary for implementing an IntelSat-like governance model for AI, ensuring both large corporations and smaller organizations can participate in a framework that balances technological progress with responsible development.

Read More

Apr 14, 2025

A Call for ‘Green AI’ in California: Evaluation of Existing and Alternative Regulations

The rapid growth of artificial intelligence, especially large-scale foundation models, poses critical environmental challenges. Training compute-heavy AI models consumes massive amounts of energy and water, leading to significant carbon emissions. California, a leader in climate and AI policy, has no legislation explicitly addressing the strain that AI development places on these resources. To bridge this gap, we propose bipartisan legislative solutions that ensure AI development in California is environmentally sustainable without stifling innovation.

Read More

Apr 14, 2025

Data Trusts in AI Governance

Data trusts are an alternative model for data governance that prioritizes data subjects and creators in AI development.

Read More
