Jan 20, 2025
RestriktAI: Enhancing Safety and Control for Autonomous AI Agents
João Lucas Duim, Juan Belieni
This proposal addresses a critical gap in AI safety by mitigating the risks posed by autonomous AI agents. These systems often require access to sensitive resources, which exposes them to vulnerabilities, misuse, and exploitation. Current AI solutions lack robust mechanisms for enforcing granular access controls or for evaluating the safety of AI-generated scripts before they run. We propose a comprehensive solution that confines scripts to predefined, sandboxed environments with strict operational boundaries, ensuring controlled and secure interaction with system resources. An integrated auditor LLM also evaluates each script for potential vulnerabilities or malicious intent before execution, adding a critical layer of safety. Our solution runs on scalable, cloud-based infrastructure that adapts to diverse enterprise use cases.
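A minimal sketch of the audit-then-execute control flow described above. The call_llm helper, the JSON verdict format, and the subprocess-based confinement are illustrative assumptions rather than the authors' implementation; a production sandbox would need stronger OS-level isolation (containers, seccomp, or similar).

import json
import subprocess
import tempfile

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to the auditor LLM and return
    its raw text response. Wire this to your model provider."""
    raise NotImplementedError

AUDIT_PROMPT = """You are a security auditor. Review the script below for
vulnerabilities or malicious intent (file exfiltration, network abuse,
SQL injection, privilege escalation). Respond with JSON:
{{"verdict": "allow" | "deny", "reason": "<short explanation>"}}

Script:
{script}
"""

def audit_script(script: str) -> dict:
    """Ask the auditor LLM for an allow/deny verdict on a script."""
    response = call_llm(AUDIT_PROMPT.format(script=script))
    return json.loads(response)

def run_sandboxed(script: str, timeout: int = 10) -> subprocess.CompletedProcess:
    """Illustrative confinement only: a time-limited subprocess with a
    stripped environment. Real deployments need OS-level isolation."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    return subprocess.run(
        ["python3", "-I", path],   # -I: isolated mode, ignores PYTHON* env vars and user site
        env={},                    # no inherited environment variables
        capture_output=True,
        text=True,
        timeout=timeout,
    )

def execute_with_guardrails(script: str):
    """Gate execution on the auditor's verdict, then run inside the sandbox."""
    verdict = audit_script(script)
    if verdict["verdict"] != "allow":
        raise PermissionError(f"Auditor rejected script: {verdict['reason']}")
    return run_sandboxed(script)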
Finn
some good ideas in there, but I think the concepts could be a bit more clearly separated: what is just automated red teaming vs. what is the proposed technical solution for the sandbox environment? I would lean more strongly in one direction and provide more code. I would ask myself where my strengths are and which solution or approach (you also mentioned some governance elements in there) is best suited to be pursued by the team.
Jaime Raldua
I like the overall direction of RestriktAI, but I have mixed feelings about their approach. Running LLMs in sandboxes isn't really groundbreaking, and using another LLM as an auditor is something we've seen before. That said, I'm pretty excited about their idea of developing a custom scripting language specifically for AI use; that's actually a fresh take that could open up some interesting possibilities for safer AI interactions.
The SQL injection demo is cool and shows the system works against current threats. However, the project seems too focused on immediate issues like jailbreaking, which may be less relevant once we're dealing with more capable AI systems. While it's a solid starting point for improving current AI safety, I'm not fully convinced it will scale to address the deeper, more complex safety challenges.
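For context on the threat class the demo targets, a generic illustration (not the authors' demo code, whose details are not given here) of the pattern an auditor LLM would be expected to flag versus the safe equivalent it should allow:

import sqlite3

# Pattern the auditor should deny: user input concatenated into SQL.
def lookup_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"  # injectable
    return conn.execute(query).fetchall()

# Safe equivalent the auditor should allow: parameterized query,
# so the input is never parsed as SQL.
def lookup_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()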
Cite this work
@misc{duim2025restriktai,
  title={RestriktAI: Enhancing Safety and Control for Autonomous AI Agents},
  author={João Lucas Duim and Juan Belieni},
  date={2025-01-20},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}