Feb 1, 2026
Cross-Border Agentic AI Compliance (CBAAC): Embedding Regulatory and Cultural Risk Compliance into Agentic Communication
Matt Pagett, Tomoko Mitsuoka
AI regulations (EU AI Act, Japan METI, Korea AI Basic Act) require providers to certify compliance — but how can businesses verify that the agents they use, and the sub-agents in the chain, actually comply? Current approaches rely on costly audits, additional external agreements, or trust — and do not scale to a world of millions of agents that can spawn on demand.
We propose demand-side verification: a protocol enabling agents to automatically check the compliance of other agents before sharing data. Our framework supports regulatory compliance (GDPR, AI Act, GPAI) and optional cultural/ethical benchmarks addressing behavioral risks such as unconsented user profiling and emotional manipulation. We extend Project NANDA's AgentFacts schema with self-certification questionnaires for the EU, Japan, and Korea, plus cultural competency assessments addressing behavioral risks documented in AI governance failures. We provide an open implementation and demo at [https://cross-border-agentic-compliance.solve.it.com/].
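To make the demand-side check concrete, the sketch below shows one way an agent might gate data sharing on a counterpart's self-certified attestations. This is an illustrative sketch only: the class and field names (`AgentFacts`, `Attestation`, `may_share_data`) are hypothetical stand-ins, not the actual AgentFacts schema or the paper's implementation.

```python
# Hypothetical sketch of demand-side verification: before sharing data,
# an agent inspects a counterpart's AgentFacts-style record for
# self-certified compliance attestations covering every jurisdiction
# the transfer touches. Names and fields are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Attestation:
    jurisdiction: str   # e.g. "EU", "JP", "KR"
    framework: str      # e.g. "GDPR", "EU AI Act", "METI AI Guidelines"
    self_certified: bool

@dataclass
class AgentFacts:
    agent_id: str
    attestations: list[Attestation] = field(default_factory=list)

def may_share_data(facts: AgentFacts, required: set[str]) -> bool:
    """Allow sharing only if the counterpart self-certifies compliance
    in every required jurisdiction."""
    certified = {a.jurisdiction for a in facts.attestations if a.self_certified}
    return required <= certified

# Usage: an EU-to-Japan transfer requires attestations for both.
partner = AgentFacts("agent-42", [
    Attestation("EU", "GDPR", True),
    Attestation("JP", "METI AI Guidelines", True),
])
print(may_share_data(partner, {"EU", "JP"}))  # True
print(may_share_data(partner, {"EU", "KR"}))  # False
```

In a full protocol, each `Attestation` would carry a cryptographic signature rather than a bare boolean, so the check verifies signed claims instead of trusting the record as-is.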
Keywords: Multi-agent alignment, AI security, compliance infrastructure
Well-scoped project! I liked the demand-side verification approach, and the cultural/behavioral safeguards angle is a nice addition that many governance frameworks miss. Next steps would be pushing toward even one fully functional attestation tier (with real signing, not mocked) and making the behavioral criteria testable. Great work for a sprint!
The weak evidence provided by self-certification is further degraded by agentic endpoints that do not themselves know how user data will actually be handled. I share the excitement about third-party-audited approaches. I'd like to understand better how TEE-based runtime attestation works, and how it would integrate with this approach.
I would be interested to understand what implications this approach might have for system latency; whether users will be left waiting while agents up and down the chain exchange attestations.
I'm not convinced that the full subjectivity of cultural compliance is appreciated here. There could be important differences in cultural expectations between contexts even within a region like Japan. With unlimited resources, I can imagine using agents to verify this through mock interactions before proceeding to actual users. I can also imagine a trusted and competent assistant acting as a cultural translator, both insulating the user from offense and informing downstream agents of cultural priorities.
Cite this work
@misc{pagett2026cbaac,
  title={(HckPrj) Cross-Border Agentic AI Compliance (CBAAC): Embedding Regulatory and Cultural Risk Compliance into Agentic Communication},
  author={Pagett, Matt and Mitsuoka, Tomoko},
  date={2026-02-01},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


