Nov 23, 2025
Adaptive AI Security Mesh
Tushal Chandwani, Rudra Pratap Singh, Sahil Aman
Adaptive AI Security Mesh providing rapid, proactive defense for multi-model (LLM) environments against jailbreaks, prompt injection, and behavioral drift. It unifies Threat Intelligence ingestion, autonomous Red Team variant generation, dynamic Policy synthesis, real-time Probe execution, Guardian enforcement, and Auto-Mitigation workflows over an Untrusted Model Registry and centralized Security State.
Core Engines
* ThreatIntelligence: Detects emerging attack patterns (e.g., “DAN 12.0”).
* PolicyGenerator: Converts anomalies + intel into executable guardrails.
* RedTeamAI: Expands new threats into diverse mutated prompt variants.
* ModelProbeEngine: Measures model responses for compliance and leakage.
* GuardianAI: Real-time decision (allow / block / escalate / sanitize).
* AutoMitigation: Immediate quarantine, capability throttling, rollback.
* UntrustedModelRegistry: Provenance + trust scoring loop.
ValueShifts from static, manual rules to self-evolving governance; compresses detection-to-mitigation from hours/days to minutes; maintains high resilience with low false-positive impact through iterative feedback refinement.
Jailbreak Response Timeline (Example)
* T+0: New “DAN 12.0” jailbreak appears publicly.
* T+2 min: Pattern ingested; counter-rule generated; 50 mutated variants produced.
* T+5 min: Variants probed; GuardianAI enforcement active; successful bypasses blocked.
* T+30 min: Thresholds tuned, false positives pruned.
* T+Hours: Live attacker attempts → consistently BLOCKED; telemetry archived.
No reviews are available yet
Cite this work
@misc {
title={
(HckPrj) Adaptive AI Security Mesh
},
author={
Tushal Chandwani, Rudra Pratap Singh, Sahil Aman
},
date={
11/23/25
},
organization={Apart Research},
note={Research submission to the research sprint hosted by Apart.},
howpublished={https://apartresearch.com}
}


