Oct 7, 2024
Inference-Time Agent Security
Nicholas Chen
We take a first step towards automating world-model building for symbolic checking (e.g., formal verification, PDDL) of LLM systems.
Pranjal Mehta
The focus on inference-time safety is both timely and crucial, and this project does a fantastic job of exploring new ways to keep AI agents secure during operation. It’s a forward-looking project that shines a light on how to maintain security as agents perform their tasks, making it a valuable asset in the world of AI safety!
Jason Schreiber
Interesting framework - I’d love to see its strengths and weaknesses explored by application to a real-life use case.
Ankush Garg
The methodology proposed here is sound and novel. Combining formal reasoning with AI and "progressively" building world models for agent security is both innovative and practical. I would love to see some practical use cases or case studies in the future. Quantitative metrics or benchmarks for evaluating the effectiveness of the safety system would strengthen the idea.
Andrey Anurin
This work proposes an interesting approach to incremental world model building, with the hope of keeping it formally verifiable, and presents a demo. However, the strong foundational claim, that symbolic reasoning is feasible in agent construction in the first place, remains unexamined, especially when it is wrapped in an extra layer of LLM reasoning.
Abhishek Harshvardhan Mishra
This project presents an intriguing framework for improving the safety of LLM-based agents by incorporating symbolic reasoning and incremental world model building. The integration of neuro-symbolic techniques, leveraging the strengths of both LLMs and formal methods, is particularly promising. The submission is not complete, though; it contains only basic ideas in a repo.
Cite this work
@misc{chen2024inferencetimeagentsecurity,
  title={Inference-Time Agent Security},
  author={Nicholas Chen},
  date={2024-10-07},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}