Nov 23, 2025
ChaCha: A Control Plane for Longitudinal Threat Detection in LLM Applications
Taishi Nagae, Conor Gould
Most LLM guardrails still treat safety as a per-request classification problem, even though real incidents such as jailbreaks, reconnaissance, and data exfiltration typically emerge as behavioural patterns over time. ChaCha addresses this by acting as a longitudinal behavioural control plane: a lightweight SDK records messages, tool calls, chain-of-thought (CoT) snippets, and decisions, and streams them to a multi-tenant backend that maintains per-session and per-identity state. LLM-based detectors analyse these evolving histories to produce a compact risk view that can drive real-time policies (block, throttle, escalate) and longer-horizon monitoring. Developers integrate ChaCha with a few lines of code, while security teams use a dashboard and policy engine to understand and manage AI risk. ChaCha is already structured as a product-ready, model-agnostic platform, and we plan to validate and harden it with initial design partners who are deploying high-stakes LLM workflows.
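To make the longitudinal idea concrete, the following is a minimal, hypothetical sketch of the per-session state and policy loop the abstract describes. All names (`ControlPlane`, `SessionState`, `record`), event types, and risk weights are illustrative assumptions, not ChaCha's actual API; a real deployment would replace the static weights with LLM-based detectors running over the full event history.

```python
# Hypothetical sketch of a longitudinal control plane: events accumulate
# per session, and each new event re-evaluates a real-time policy.
# Names, event types, and thresholds are illustrative, not ChaCha's API.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class SessionState:
    # Longitudinal view: events persist across the session's lifetime,
    # so risk can emerge from a pattern rather than a single request.
    events: list = field(default_factory=list)
    risk_score: float = 0.0

class ControlPlane:
    """Toy control plane: record events per session, return a decision."""
    # Static per-event weights stand in for LLM-based detectors here.
    WEIGHTS = {"message": 0.0, "tool_call": 0.1, "jailbreak_signal": 0.5}

    def __init__(self, block_threshold: float = 1.0):
        self.sessions = defaultdict(SessionState)
        self.block_threshold = block_threshold

    def record(self, session_id: str, event_type: str, payload: str) -> str:
        """Record one event and return a policy decision for it."""
        state = self.sessions[session_id]
        state.events.append((event_type, payload))
        state.risk_score += self.WEIGHTS.get(event_type, 0.0)
        if state.risk_score >= self.block_threshold:
            return "block"
        if state.risk_score >= self.block_threshold / 2:
            return "throttle"
        return "allow"

cp = ControlPlane()
cp.record("s1", "message", "hello")           # benign, score stays 0.0
cp.record("s1", "jailbreak_signal", "...")    # score 0.5 -> throttle
decision = cp.record("s1", "jailbreak_signal", "...")  # score 1.0
print(decision)  # -> block
```

The point of the sketch is the accumulation step: no single event crosses the threshold, but the session's history does, which is exactly the class of incident a per-request classifier misses.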
Cite this work
@misc{nagae2025chacha,
  title={(HckPrj) ChaCha: A Control Plane for Longitudinal Threat Detection in LLM Applications},
  author={Taishi Nagae and Conor Gould},
  date={2025-11-23},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


