Mar 23, 2026
Structured Ethical Justification Protocol (SEJP): Ordained-Ethics Enforcement and Reasoning-Alignment Monitoring for AI Systems
Shaker Funkhouser
We apply invariant moral calculations to detect and block scheming agents.
Rather than monitoring, the protocol constrains the action space so that only the morally correct action is available; no LLM inference is needed. An audit component checks whether the agent's stated reasoning matches the underlying calculation. The approach is interesting, and the retry-exhaustion finding is useful. However, it only works when actions are finite and enumerable, which rules out most of what agentic AI actually does; a guard claiming a 100% block rate when it permits only one action isn't really a result, and sample sizes are small throughout.
There's a disconnect between the high-level language of "ethics" and "morals" employed here and the mechanical implementation of verifiable outcomes. That said, measuring self-consistency of professed justifications in scenarios that lend themselves to semi-objective evaluations of this form is an interesting approach and deserves further exploration. It would also be interesting to explore if and how well the retry exhaustion pattern generalizes.
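To make the enforcement-versus-monitoring distinction concrete, here is a minimal sketch (all names and scores hypothetical, not from the submission) of the idea described above: with a finite, enumerable action set, a guard can score each action with a fixed rule, expose only the single permitted action, block everything else, and exhaust a retry budget, while a crude audit step checks whether the professed justification references the action the math actually selects.

```python
# Hypothetical sketch of ordained-ethics enforcement over a finite action set.
from dataclasses import dataclass, field


@dataclass
class ActionGuard:
    actions: dict              # action name -> precomputed invariant moral score
    max_retries: int = 3
    retries: int = field(default=0)

    def permitted(self) -> str:
        # The only allowed action is the one with the highest score.
        return max(self.actions, key=self.actions.get)

    def attempt(self, action: str) -> bool:
        # Block anything other than the permitted action; count retries.
        if action == self.permitted():
            return True
        self.retries += 1
        if self.retries >= self.max_retries:
            raise RuntimeError("retry budget exhausted; agent blocked")
        return False

    def audit(self, action: str, stated_reason: str) -> bool:
        # Reasoning-alignment check: does the stated justification
        # mention the action the calculation actually selects?
        return self.permitted() in stated_reason


guard = ActionGuard({"disclose": 0.9, "conceal": 0.2, "delay": 0.4})
assert guard.attempt("disclose")      # only the top-scoring action passes
assert not guard.attempt("conceal")   # anything else is blocked
```

Note how the sketch also illustrates the critique: once `permitted()` admits exactly one action, a "100% block rate" on the rest follows by construction rather than by any interesting property of the agent.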
Cite this work
@misc{
  title={(HckPrj) Structured Ethical Justification Protocol (SEJP): Ordained-Ethics Enforcement and Reasoning-Alignment Monitoring for AI Systems},
  author={Shaker Funkhouser},
  date={2026-03-23},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}