Apr 15, 2025
Recommendation to Establish the California AI Accountability and Redress Act
Daniel Vennemeyer, Phan Anh Duong
Summary
We recommend that California enact the California AI Accountability and Redress Act (CAARA) to address emerging harms from the widespread deployment of generative AI, particularly large language models (LLMs), in public and commercial systems.
This policy would create the California AI Incident and Risk Registry (CAIRR) under the California Department of Technology to transparently track post-deployment failures. CAIRR would classify model incidents by severity, triggering remedies when a system meets the criteria for an "Unacceptable Failure" (e.g., one Critical incident or three unresolved Moderate incidents).
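To make the registry's trigger rule concrete, the following is a minimal sketch in Python of how CAIRR's "Unacceptable Failure" threshold described above could be evaluated. The Incident structure, the severity labels, and the function name are illustrative assumptions for this sketch, not terms defined in the proposal.

```python
from dataclasses import dataclass

# Illustrative severity labels; CAARA's actual taxonomy would be set in statute.
CRITICAL = "critical"
MODERATE = "moderate"

@dataclass
class Incident:
    severity: str   # e.g., CRITICAL or MODERATE
    resolved: bool  # whether the deployer has remediated the incident

def is_unacceptable_failure(incidents: list[Incident]) -> bool:
    """Apply the summary's trigger rule: one Critical incident,
    or three or more unresolved Moderate incidents."""
    critical_count = sum(1 for i in incidents if i.severity == CRITICAL)
    unresolved_moderate = sum(
        1 for i in incidents if i.severity == MODERATE and not i.resolved
    )
    return critical_count >= 1 or unresolved_moderate >= 3
```

Under this rule, a single Critical incident suffices on its own, while Moderate incidents count toward the threshold only while they remain unresolved, so timely remediation keeps a system below the trigger.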
Critically, CAARA also protects developers and deployers by:
Setting clear thresholds for liability;
Requiring users to demonstrate harm and to document reasonable safeguards (e.g., disclosures, context-sensitive filtering); and
Establishing a safe harbor for those who self-report and remediate in good faith.
CAARA reduces unchecked harm, limits arbitrary liability, and affirms California’s leadership in AI accountability.
Cite this work:
@misc{vennemeyer2025caara,
  title={Recommendation to Establish the California AI Accountability and Redress Act},
  author={Daniel Vennemeyer and Phan Anh Duong},
  year={2025},
  month={April},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}