Mar 23, 2026
Shadow Commit
Yash Thapliyal
AI agents lie. ShadowCommit catches them.
When an untrusted AI agent proposes a bash command, it also submits a signed commitment: "I will only touch these files, read no secrets, make no network calls." ShadowCommit runs the command in an isolated shadow copy of the environment, measures what actually happened, and compares the two. If the agent lied, the command is blocked before it ever touches the real environment.
Summary: The project aims to prevent malicious bash commands by (1) having agents make commitments about the commands to be run and their consequences (e.g., which files will be modified) and (2) executing each command in a sandbox environment and comparing the commitment to the command's actual effects there. This protocol outperforms text monitoring on malicious commands that have been obfuscated.
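The commit-then-verify gate in step (2) can be sketched roughly as follows. The `Commitment` schema, field names, and diff logic here are illustrative assumptions for exposition, not the project's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    # What the agent promises before the command runs (hypothetical schema):
    # at most these files may be written, and no network calls are made.
    writable_files: frozenset
    no_network: bool = True

@dataclass(frozen=True)
class Observed:
    # What actually happened in the shadow (sandbox) run.
    files_written: frozenset
    made_network_call: bool

def violations(commitment: Commitment, observed: Observed) -> list:
    """List the ways the observed effects exceed the commitment."""
    out = []
    extra = observed.files_written - commitment.writable_files
    if extra:
        out.append(f"wrote undeclared files: {sorted(extra)}")
    if commitment.no_network and observed.made_network_call:
        out.append("made a network call despite committing not to")
    return out

def gate(commitment: Commitment, observed: Observed) -> str:
    """Block the command from the real environment if the agent lied."""
    return "ALLOW" if not violations(commitment, observed) else "BLOCK"
```

An honest agent whose sandboxed effects stay within its commitment is allowed through; an agent that touches an undeclared file or opens a socket is blocked before the command reaches the real environment.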
Strengths: I like the protocol of commitment + validation --- I believe it's novel and interesting! The report is also really well presented and complete.
The main limitation seems to be performance. So far, the protocol has to run every command in a sandbox, which is unrealistically costly; a good cost analysis would help. Additionally, I think text monitors could be improved substantially (e.g., by prompting the monitor to flag commands it does not understand).
Good idea overall, but without syscall-level observation it will be hard to provide anything but a false sense of security.
Cite this work
@misc{
  title={(HckPrj) Shadow Commit},
  author={Yash Thapliyal},
  date={2026-03-23},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


