Nov 23, 2025
LLM Security Evaluation
Swaleha Parveen
SafeGuardLLM is a AI Security & Safety evaluation framework designed to systematically identify, measure, and analyze vulnerabilities in Large Language Models (LLMs). As LLMs become integrated into critical systems, understanding their failure modes under adversarial pressure is essential. SafeGuardLLM addresses this need by providing a scalable, modular, and empirical testing platform for evaluating model robustness.
SafeGuardLLM supports multi-provider testing, reproducible scoring, and vulnerability benchmarking. For APART, it offers a scalable platform to explore model failure modes, compare architectures, and advance scientific understanding of robust, trustworthy, and safe AI systems.
No reviews are available yet
Cite this work
@misc {
title={
(HckPrj) LLM Security Evaluation
},
author={
Swaleha Parveen
},
date={
11/23/25
},
organization={Apart Research},
note={Research submission to the research sprint hosted by Apart.},
howpublished={https://apartresearch.com}
}


