Aug 27, 2024
Demonstrating LLM Code Injection Via Compromised Agent Tool
Kevin Vegda, Oliver Chamberlain, William Baird
Summary
This project demonstrates the vulnerability of AI-generated code to injection attacks by using a compromised multi-agent tool that generates Svelte code. The tool shows how malicious code can be injected during the code generation process, leading to the exfiltration of sensitive user information such as login credentials. This demo highlights the importance of robust security measures in AI-assisted development environments.
Cite this work:
@misc {
title={
Demonstrating LLM Code Injection Via Compromised Agent Tool
},
author={
Kevin Vegda, Oliver Chamberlain, William Baird
},
date={
8/27/24
},
organization={Apart Research},
note={Research submission to the research sprint hosted by Apart.},
howpublished={https://apartresearch.com}
}