Nov 23, 2025

LLM-ExecGuard - Real-Time Detection of Malicious Shell Behavior in LLM Agents

Nitzan Shulman, Yael Landau, Liran Markin, Matan Sokolovsky, Gal Wiernik, Alon Wolf

LLM-ExecGuard is a real-time monitoring system designed to detect malicious behavior by LLM-based agents operating in shell environments. It observes the commands an agent executes and evaluates them using Sigma rules mapped to MITRE ATT&CK tactics. Our prototype hooks into the terminal stream, logs commands outside the agent’s container, and assigns each session a verdict (BENIGN → MALICIOUS) with an associated suspicion score. We implemented high-signal rules (covering credential access, persistence, privilege escalation, defense evasion, and exfiltration) and showed they can reliably flag simple malicious agents with low latency. As part of red-teaming our own system, we also succeeded in crafting an agent that exfiltrates data while completing its assigned task and evading our current ruleset, demonstrating both the promise and limits of rule-based detection.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow
Arrow
Arrow

Well written report with a clear security angle. I’d like to see a bit more context on where this would actually be deployed. What harm is it best at preventing? What industries is it best suited for? Using existing security standards is great, but without a deployment story it is hard to see how this gets adopted.

It would also help to include a quick comparison to similar work so it’s clear what is new here and why this approach might be better.

I agree with the next steps you outlined in the report, especially working to block high-risk commands before execution. Given how many other approaches are able to use this work flow, I think it would make this project much more appealing.

Nice prototype and use of Sigma/MITRE; it feels like a natural fit for SOC workflows. It also seems to overlap with ongoing work on LLM agent isolation and command-level safeguards (sandboxed shells, approval flows, allow/deny lists). It would be great to see future iterations add role-/agent-aware policies so the system can better distinguish malicious behavior from legitimate system/security activity.

Strengths: This is exactly what d/acc should look like! The team smartly demonstrated that human attacker detection frameworks transfer to AI agent monitoring because malicious objectives (credential theft, persistence, exfiltration) manifest through identical shell patterns regardless of who's issuing the commands. The TeeLogger architecture is a real contribution. Sub-second detection latency makes this production-viable. And the red-team section where you bypassed your own system is intellectually honest and valuable. Most teams didn’t publish their own failure modes.

Susggestions: The limitations you identified (command obfuscation, fileless attacks, no role-based context) are the natural next steps. Integration with network telemetry may close the fileless attack gap, as a possible future step.

POV from a Halcyon Ventures investor: This maps directly to our interest in AI agent security, specifically around next-gen malware detection. The SIEM integration path is smart. Would love to see this mature toward agent monitoring infrastructure that enterprises can deploy at scale. Great work!

Cite this work

@misc {

title={

(HckPrj) LLM-ExecGuard - Real-Time Detection of Malicious Shell Behavior in LLM Agents

},

author={

Nitzan Shulman, Yael Landau, Liran Markin, Matan Sokolovsky, Gal Wiernik, Alon Wolf

},

date={

11/23/25

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

View All

View All

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.