This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
ApartSprints
AI capabilities and risks demo-jam: Creating visceral interactive demonstrations
66a7c53acd7d1c97a3b3dad0
AI capabilities and risks demo-jam: Creating visceral interactive demonstrations
August 26, 2024
Accepted at the 
66a7c53acd7d1c97a3b3dad0
 research sprint on 

Demonstrating LLM Code Injection Via Compromised Agent Tool

This project demonstrates the vulnerability of AI-generated code to injection attacks by using a compromised multi-agent tool that generates Svelte code. The tool shows how malicious code can be injected during the code generation process, leading to the exfiltration of sensitive user information such as login credentials. This demo highlights the importance of robust security measures in AI-assisted development environments.

By 
Kevin Vegda, Oliver Chamberlain, William Baird
🏆 
4th place
3rd place
2nd place
1st place
 by peer review
Thank you! Your submission is under review.
Oops! Something went wrong while submitting the form.

This project is private