Sep 15, 2025
Policy Brief: Harnessing Open-Source Intelligence for AI Risk Management
Michał Kubiak
Artificial intelligence is advancing amid high uncertainty and diverse risks, ranging from malicious uses such as AI-driven cyberattacks and CBRN threats to failures like hallucinations and systemic impacts on labour markets and privacy. Yet only a small fraction of global AI research addresses safety, creating an evidence dilemma in which regulators must act with limited data or risk being overtaken by sudden capability leaps. Open-source intelligence (OSINT) platforms on AI risk offer a practical solution by aggregating technical documentation, model benchmarks, incident reports, and safety practices to enable continuous, transparent, and shareable risk assessment. Integrating these tools into policymaking, regulatory oversight (e.g., EU AI Act), national security planning, and public-sector innovation can enhance situational awareness, strengthen compliance, foster public trust, and guide safe AI research and deployment.
Key strengths of the approach
Clear articulation of the field and of why more evals are needed. Engages well with the literature and with existing OSINT resources. Overall, the paper is grounded in facts and presents a bold vision.
Specific areas for improvement
1) The paper lacks solution-orientedness. It discusses the need for evals and the OSINT space separately rather than connecting them, and the limitations of relying on OSINT are not addressed.
2) Limited discussion of CBRN-specific risks and of how this approach could help address them. Moreover, organizations such as the Partnership on AI already maintain an incident database with governments as stakeholders. The paper would be stronger if it explained how OSINT would help develop standards, or how it would outperform existing warning systems in speed or otherwise.
Suggestions for how to develop this into a stronger project
Narrow focus to one high-value CBRN use case to show real impact.
These papers could be of interest for developing reporting frameworks: [https://arxiv.org/html/2501.17037v1](https://arxiv.org/html/2501.17037v1), [https://fas.org/publication/establishing-an-ai-incident-reporting-system/](https://fas.org/publication/establishing-an-ai-incident-reporting-system/)
It could also be interesting to look into the UN High-Level Panel, which mentions horizon scanning in its report, and/or to leverage the global network of AISIs. Another direction: countries in the Global South could adopt this OSINT technology, since they often lack the means to develop an independent monitoring ecosystem of their own.
Cite this work
@misc{kubiak2025harnessing,
  title={(HckPrj) Policy Brief: Harnessing Open-Source Intelligence for AI Risk Management},
  author={Michał Kubiak},
  date={2025-09-15},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}