Jan 20, 2025

Neural Seal

Kailash Balasubramaniyam, Mohammed Areeb Aliku, Eden Simkins

Neural Seal is an AI transparency solution that creates a standardized labeling framework—akin to “nutrition facts” or “energy efficiency ratings”—to inform users how AI is deployed in products or services.

Reviewer's Comments


The proposal introduces Neural Seal, an AI transparency solution that creates a standardized labeling framework of "nutrition facts" informing users of the level of AI involvement in products or services. Visibility into AI involvement is limited but necessary in many domains, such as finance, healthcare, and social media, and Neural Seal aims to create a universal labeling standard for AI transparency. The solution involves a standardized structured evaluation: a multi-step questionnaire on the level of AI usage that generates a color-coded rating (A/B/C/D) representing AI involvement in a digestible manner. The team showcases a demo build of the questionnaire and mockups of the labeling system, along with a yearly plan for building a prototype, integrating interpretability techniques, and driving widespread adoption of the standardized metrics.

I think this is a great idea! The proposal highlights an important need and the team understands that the consumer-facing aspect of their product requires simple and intuitive metrics and ratings. Some things to consider that can greatly strengthen the proposal:

1. Different industries and use-cases can be classified as low/medium/high risk and might require different rubrics to assess the impact of AI involvement.

2. Some guidelines or information on how the level of AI involvement is categorized would be helpful for companies filling out the questionnaire.

3. Details on the breakdown of the different areas, how AI usage in each area is categorized, and how the impact score would be calculated would add a strong algorithmic component to the proposal (a minimal sketch of one possible scoring scheme is included after this list).

4. The vision of creating a singular standardized metric across all industries might require extended research. I would suggest starting with a few use cases and showing some proof of concept for those areas (which might require different metrics, language, and jargon that the area-specific target users are familiar with) and using the insights to inform what a universal standardized metric might look like.

5. Some visual indication of the AI involvement (like chemical/bio/fire hazard symbols on instruments and chemicals) can be an additional way to showcase the rating in an accessible manner.

6. On the technical side, a React-based framework would be relatively easy to implement and flexible to modify and build upon; using Claude could also help, since it can natively render React components.

7. For explainable AI methods, it might be important to consider other interpretability approaches that have shown great promise for large language models (like activation patching, probing, SAEs, etc.), since future systems will almost certainly include an LLM- or multimodal-AI-based agent, and compliance-supported explainable AI methods like LIME/SHAP might not be sufficient to explain the inner workings of these highly capable systems (a small probing sketch is also included after this list).
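
To make point 3 concrete, here is a minimal Python sketch of what a weighted, rubric-based impact score feeding the A/B/C/D rating could look like. The area names, weights, answer scale, and thresholds are purely illustrative assumptions on my part, not values from the proposal:

# Illustrative sketch only: area names, weights, and thresholds are
# assumptions for discussion, not values from the Neural Seal proposal.

AREAS = {
    # area: weight reflecting how strongly AI in that area affects the user
    "decision_making": 0.4,     # AI makes or heavily shapes decisions about the user
    "content_generation": 0.3,  # AI-generated content is shown to the user
    "data_processing": 0.2,     # AI analyses or profiles user data
    "assistance": 0.1,          # optional, user-initiated AI help
}

MAX_LEVEL = 3  # questionnaire answers: 0 (no AI) .. 3 (fully automated)


def impact_score(answers: dict[str, int]) -> float:
    """Weighted average of per-area involvement, normalised to [0, 1]."""
    total = sum(weight * answers.get(area, 0) for area, weight in AREAS.items())
    return total / MAX_LEVEL


def rating(score: float) -> str:
    """Map the numeric score to the colour-coded A/B/C/D label."""
    if score < 0.25:
        return "A"  # minimal AI involvement
    if score < 0.50:
        return "B"
    if score < 0.75:
        return "C"
    return "D"      # AI is central to outcomes affecting the user


example = {"decision_making": 2, "content_generation": 3, "data_processing": 1, "assistance": 0}
print(rating(impact_score(example)))  # prints "C" with these illustrative weights

Publishing the rubric (areas, weights, thresholds) alongside the label would also address point 2, since companies could see exactly how their questionnaire answers map to a rating.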
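
For point 7, here is a minimal probing sketch on LLM hidden states, assuming the transformers, torch, and scikit-learn packages; the model choice, layer index, and the toy "personal data use" labels are illustrative assumptions only, not part of the submission:

# Illustrative probing sketch: model, layer, and labels are assumptions.

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

texts = [
    "We profile your purchase history to set prices.",
    "The weather is sunny today.",
    "Your browsing data trains our recommendation model.",
    "This page lists our office locations.",
]
labels = [1, 0, 1, 0]  # hypothetical concept: does the text describe personal-data use?

features = []
with torch.no_grad():
    for text in texts:
        outputs = model(**tokenizer(text, return_tensors="pt"))
        # mean-pool hidden states from a middle layer as the probe input
        features.append(outputs.hidden_states[6].mean(dim=1).squeeze(0).numpy())

# A linear probe reads the concept directly off internal activations,
# rather than explaining input-output behaviour the way LIME/SHAP do.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.predict(features))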

I might have more comments and suggestions, so feel free to reach out if you need any further feedback. Good luck to the team!

I like the idea of pushing for this scale for end-consumer use. A few points: What are the incentives for the relevant stakeholders? It seems like the current solution is only a questionnaire, and right now the only reasons someone at a company would fill one out are compliance or PR. Neither is currently in place, so time might be best spent looking at where legislation on this stands, who might be closest to adopting such a standard, and what is in the way. I really like the mention of LIME and SHAP; it would have been very cool to have them included in the demo/product (a rough sketch of what that could look like follows below). That seems like the most interesting part of this project, unless you are very keen on and involved in the policy side of things. With this project there is definitely a risk of just becoming unnecessary overhead for the adoption of ordinary AI while not addressing actual existential risk (like the cookie banner), so that would need to be kept in mind. I would spend most of my energy finding someone for whom this is useful, maybe in policy, or maybe some internal AI governance/risk teams at large enterprises.
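
As a rough illustration of how SHAP could feed the label in a demo, here is a minimal sketch for a tabular model, assuming the shap, numpy, and scikit-learn packages; the dataset, model, and the "top factors" summary step are illustrative assumptions, not something from the submission:

# Illustrative sketch: dataset, model, and summary format are assumptions.

import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # (100, n_features) attributions

# Condense per-feature attributions into a short, consumer-readable list
# that could sit next to the A/B/C/D label in the demo.
mean_abs = np.abs(shap_values).mean(axis=0)
top_factors = sorted(zip(X.columns, mean_abs), key=lambda item: -item[1])[:3]
for name, value in top_factors:
    print(f"{name}: mean |SHAP| = {value:.1f}")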

Cite this work

@misc{neuralseal2025,
  title={Neural Seal},
  author={Kailash Balasubramaniyam and Mohammed Areeb Aliku and Eden Simkins},
  date={2025-01-20},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.