Feb 2, 2026

DOMAIN OWNERSHIP PROBING

Rijal Saepuloh, Ifeoma Ilechukwu, Valmik Nahata, Saahir Vazirani

We propose Domain Ownership Probing (DOP), a lightweight verification method that evaluates a model's internal representation structure rather than its stochastic text outputs. The DomainProbe pipeline embeds domain-specific statements, forms prototype centroids, and computes a domain-ownership win rate and a cohesion score to assess whether knowledge domains are consistently encoded. An auto-tuned layer search keeps the method effective for both encoder models and decoder-only LLMs, supporting practical AI governance and compliance verification without exposing training datasets.
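To make the two metrics concrete, the following is a minimal sketch of how a win rate and cohesion score could be computed from per-domain probe embeddings. It is an illustration under our own assumptions, not the authors' implementation: the dop_metrics helper, its input format, and the cosine-based scoring are hypothetical stand-ins, and the auto-tuned layer search is only gestured at in a comment.

import numpy as np

def normalize(x):
    # Unit-normalize rows so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def dop_metrics(embeddings_by_domain):
    """embeddings_by_domain maps a domain name to an (n, d) array of
    embedded probe statements taken from one model layer. Returns the
    domain-ownership win rate and the mean within-domain cohesion."""
    vecs = {d: normalize(np.asarray(v, dtype=float))
            for d, v in embeddings_by_domain.items()}
    # Prototype centroid per domain, re-projected onto the unit sphere.
    centroids = {d: normalize(v.mean(axis=0, keepdims=True))[0]
                 for d, v in vecs.items()}
    wins, total, cohesion = 0, 0, []
    for d, v in vecs.items():
        for x in v:
            sims = {c: float(x @ mu) for c, mu in centroids.items()}
            wins += max(sims, key=sims.get) == d   # probe "owned" by its domain
            total += 1
            cohesion.append(sims[d])               # similarity to own centroid
    return wins / total, float(np.mean(cohesion))

# The auto-tuned layer search would repeat this computation per hidden
# layer and keep the layer with the highest win rate.

An auditor would then compare the win rate against a pass threshold for each claimed domain, with the layer search choosing where in the network to read the representations.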

Reviewer's Comments

The team clearly shows technical aptitude: the pipeline is well constructed and the implementation appears sound. The core problem is that the method demonstrates that embeddings of topically similar statements are close together in embedding space, which is a well-established property of language model representations. It is not clear that this gives us a way to distinguish safe models from unsafe models, only good models from bad ones. Any competent model will cluster biology statements together; one that does not is simply poorly trained. My core recommendation would be to think about what kind of probes would yield actionable information for auditors, and to work backward from there.

Thank you for identifying embedding geometry as a stable, privacy-preserving signal for capability verification. I am excited about work that enables model assessment without exposing proprietary data or relying on gameable benchmarks. The main gap is in connecting this geometric signal to dangerous-capabilities evaluation: a model that clusters biology probes together has structured representations of biology, but that does not tell us whether the model can provide bioweapons development uplift.

Cite this work

@misc{
  title={(HckPrj) DOMAIN OWNERSHIP PROBING},
  author={Rijal Saepuloh and Ifeoma Ilechukwu and Valmik Nahata and Saahir Vazirani},
  date={2026-02-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
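For readers who want the authorization flow spelled out, here is a minimal software sketch of the described challenge-response protocol, assuming Ed25519 signatures via the cryptography package. The key handling, nonce size, and lease duration are illustrative assumptions; the actual design is a HardCaml hardware block, not Python.

import os, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

authority_key = Ed25519PrivateKey.generate()      # held by the external authority
chip_trusted_pubkey = authority_key.public_key()  # provisioned into the chip

def chip_generate_nonce():
    # Fresh challenge per round, so a replayed old signature cannot authorize.
    return os.urandom(16)

def authority_sign(nonce):
    # The authority signs the chip's nonce to grant one authorization period.
    return authority_key.sign(nonce)

def chip_verify_and_grant(nonce, signature, lease_seconds=60):
    # Verify the signature before granting time-limited permission to operate.
    try:
        chip_trusted_pubkey.verify(signature, nonce)
    except InvalidSignature:
        return None  # invalid authorization: outputs remain gated to zero
    return time.monotonic() + lease_seconds  # expiry of the permission window

# One authorization round:
nonce = chip_generate_nonce()
expiry = chip_verify_and_grant(nonce, authority_sign(nonce))
operating = expiry is not None and time.monotonic() < expiry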


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The design removes the need for any mutually trusted logic while still meeting the security needs of both the prover and the verifier.


Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.
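As a rough illustration of the kind of model described (not the paper's actual specification), the toy simulation below assumes each actor delivers a fraction f of a unit pledge, monitoring intensity m shrinks the noise on the delivery signal by a factor of 1/sqrt(m), and a sanction fires when the signal falls below a threshold. All functional forms and parameter values here are our own assumptions.

import numpy as np

rng = np.random.default_rng(0)

def noisy_signal(delivered, m, sigma0=1.0):
    # Monitoring intensity m > 0 sharpens the signal: noise std = sigma0 / sqrt(m).
    return delivered + rng.normal(0.0, sigma0 / np.sqrt(m))

def payoff(f, m, others=2.0, share=0.4, cost=1.0, penalty=3.0, threshold=0.8):
    # Private share of the joint compute pool, minus the private cost of
    # delivering fraction f of a unit pledge, minus a sanction when the
    # monitored signal falls below the pledge threshold.
    sanction = penalty if noisy_signal(f, m) < threshold else 0.0
    return share * (f + others) - cost * f - sanction

def expected_payoff(f, m, trials=10_000):
    # Monte Carlo estimate of the expected payoff for delivery fraction f.
    return np.mean([payoff(f, m) for _ in range(trials)])

for m in (0.1, 1.0, 4.0):
    best_f = max(np.linspace(0.0, 1.0, 11), key=lambda f: expected_payoff(f, m))
    print(f"monitoring intensity m={m}: best delivery fraction = {best_f:.1f}")

In this toy setup, weak monitoring leaves the sanction probability nearly flat in the delivered fraction, so free-riding dominates; only past a threshold of monitoring intensity does full delivery become optimal, which loosely echoes the finding that half-measures in verification are counterproductive.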


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.