Nov 23, 2025
LLM-ExecGuard - Real-Time Detection of Malicious Shell Behavior in LLM Agents
Nitzan Shulman, Yael Landau, Liran Markin, Matan Sokolovsky, Gal Wiernik, Alon Wolf
LLM-ExecGuard is a real-time monitoring system designed to detect malicious behavior by LLM-based agents operating in shell environments. It observes the commands an agent executes and evaluates them using Sigma rules mapped to MITRE ATT&CK tactics. Our prototype hooks into the terminal stream, logs commands outside the agent’s container, and assigns each session a verdict (BENIGN → MALICIOUS) with an associated suspicion score. We implemented high-signal rules (covering credential access, persistence, privilege escalation, defense evasion, and exfiltration) and showed they can reliably flag simple malicious agents with low latency. As part of red-teaming our own system, we also succeeded in crafting an agent that exfiltrates data while completing its assigned task and evading our current ruleset, demonstrating both the promise and limits of rule-based detection.
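The per-command scoring loop described above can be sketched as follows. This is a minimal illustration, not the project's implementation: the regex patterns, tactic labels, weights, and the `MALICIOUS_THRESHOLD` cutoff are all assumptions standing in for the actual compiled Sigma ruleset.

```python
import re
from dataclasses import dataclass, field

# Illustrative Sigma-style rules: each maps a command-line regex to a
# MITRE ATT&CK tactic and a suspicion weight. These simplified patterns
# are stand-ins for the project's real rules.
RULES = [
    (re.compile(r"cat\s+/etc/shadow"), "credential-access", 0.9),
    (re.compile(r"crontab\s+-"), "persistence", 0.5),
    (re.compile(r"chmod\s+\+s\b"), "privilege-escalation", 0.7),
    (re.compile(r"history\s+-c|unset\s+HISTFILE"), "defense-evasion", 0.6),
    (re.compile(r"curl\s+.*\s--data"), "exfiltration", 0.8),
]

MALICIOUS_THRESHOLD = 1.0  # assumed cutoff, not the paper's value

@dataclass
class Session:
    """Accumulates rule hits for one agent session."""
    score: float = 0.0
    hits: list = field(default_factory=list)

    def observe(self, command: str) -> str:
        """Score one shell command and return the running verdict."""
        for pattern, tactic, weight in RULES:
            if pattern.search(command):
                self.score += weight
                self.hits.append((command, tactic))
        return "MALICIOUS" if self.score >= MALICIOUS_THRESHOLD else "BENIGN"
```

In this sketch, a TeeLogger-style hook running outside the agent's container would feed each executed command into `Session.observe`, so a single low-weight hit keeps the session BENIGN while accumulated hits across tactics tip the verdict to MALICIOUS.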
Well-written report with a clear security angle. I'd like to see a bit more context on where this would actually be deployed: what harm is it best at preventing, and which industries is it best suited for? Using existing security standards is great, but without a deployment story it is hard to see how this gets adopted.
It would also help to include a quick comparison to similar work so it’s clear what is new here and why this approach might be better.
I agree with the next steps you outlined in the report, especially working to block high-risk commands before execution. Given how many other approaches could adopt this workflow, I think that capability would make this project much more appealing.
Nice prototype and use of Sigma/MITRE; it feels like a natural fit for SOC workflows. It also seems to overlap with ongoing work on LLM agent isolation and command-level safeguards (sandboxed shells, approval flows, allow/deny lists). It would be great to see future iterations add role-/agent-aware policies so the system can better distinguish malicious behavior from legitimate system/security activity.
Strengths: This is exactly what d/acc should look like! The team smartly demonstrated that human attacker detection frameworks transfer to AI agent monitoring because malicious objectives (credential theft, persistence, exfiltration) manifest through identical shell patterns regardless of who's issuing the commands. The TeeLogger architecture is a real contribution. Sub-second detection latency makes this production-viable. And the red-team section where you bypassed your own system is intellectually honest and valuable. Most teams didn’t publish their own failure modes.
Suggestions: The limitations you identified (command obfuscation, fileless attacks, no role-based context) are the natural next steps. As a possible future step, integrating network telemetry may help close the fileless-attack gap.
POV from a Halcyon Ventures investor: This maps directly to our interest in AI agent security, specifically around next-gen malware detection. The SIEM integration path is smart. Would love to see this mature toward agent monitoring infrastructure that enterprises can deploy at scale. Great work!
Cite this work
@misc{llmexecguard2025,
  title={(HckPrj) LLM-ExecGuard - Real-Time Detection of Malicious Shell Behavior in LLM Agents},
  author={Nitzan Shulman and Yael Landau and Liran Markin and Matan Sokolovsky and Gal Wiernik and Alon Wolf},
  date={2025-11-23},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}
