Mar 22, 2026
Grant Trust System
Shon Pan, Caleb Strom
This is the grant-system component of an effort to create a significantly more streamlined and automated research method that still centers on human intent. The basic idea is to use a system of costly signals, together with signs of intentionality, to filter out "AI slop" grant applications, and to use this filtering as a form of control to reduce misuse.
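One way to picture the filtering mechanism is as a weighted combination of costly signals gating each application. The sketch below is purely illustrative and is not the authors' implementation: the signal names, weights, caps, and threshold are all assumptions introduced for this example.

```python
# Illustrative sketch (NOT the authors' implementation): combining
# costly real-world signals into a trust score used to gate grant
# applications. All signal names, weights, and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class Application:
    shipped_projects: int      # count of verifiable shipped projects
    oss_contributions: int     # count of open-source contributions
    intent_statement_ok: bool  # passed an intentionality check


def trust_score(app: Application) -> float:
    """Weighted sum of costly signals; caps limit gaming any one signal."""
    score = 0.0
    score += min(app.shipped_projects, 5) * 0.4
    score += min(app.oss_contributions, 10) * 0.2
    if app.intent_statement_ok:
        score += 1.0
    return score


def passes_filter(app: Application, threshold: float = 2.0) -> bool:
    """Gate: only applications above the trust threshold proceed."""
    return trust_score(app) >= threshold
```

The caps reflect the general design intuition behind costly signals: each signal is expensive to fake, and bounding its contribution keeps any single channel from being farmed to dominate the score.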
The idea that costly real-world signals, such as shipped projects and open-source contributions, provide adversarial robustness that text-only monitoring cannot is interesting and worth developing further. The system architecture is clearly documented, and the red-team attack surface analysis in Appendix D shows good adversarial thinking about how each layer could be defeated.
The main concern is fit with the AI control hackathon theme: the system addresses AI-generated text detection and grant quality assurance rather than adversarial monitoring settings. On methodology, the four synthetic test cases constructed by the authors demonstrate the system's design logic but cannot establish generalisation; validating against a labelled corpus of real grant applications with ground-truth oversight levels, as the paper notes, would be the essential next step.
Cite this work
@misc{pan2026granttrust,
  title={(HckPrj) Grant Trust System},
  author={Pan, Shon and Strom, Caleb},
  year={2026},
  month={March},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


