Oct 6, 2024
LLM Agent Security: Jailbreaking Vulnerabilities and Mitigation Strategies
Mohammed Arsalan, Vishwesh Bhat
Summary
This project investigates jailbreaking vulnerabilities in Large Language Model (LLM) agents, analyzes their implications for agent security, and proposes mitigation strategies for building safer AI systems.
Cite this work:
@misc{arsalan2024llmagentsecurity,
  title={LLM Agent Security: Jailbreaking Vulnerabilities and Mitigation Strategies},
  author={Mohammed Arsalan and Vishwesh Bhat},
  date={2024-10-06},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}