Jul 28, 2025

Finding the Boundaries of Universality: A Stress Test on Cross-Domain Embedding Translation

Aksinya Bykova, Evegeni Golovanov

An investigation of the limits of the natural abstractions hypothesis via vec2vec

Reviewer's Comments


This project investigates geometric alignment between the text embedding spaces of models trained on data from different domains. I think the connection made to the idea of natural abstractions is interesting. Furthermore, this project relates to the important open question of the extent to which general-purpose capabilities and representations can arise from training on domain-specific data. I would have liked to see the project more deeply integrate methods from physics or AI alignment theory (such as natural latents).

This work tests the hypothesis that training models on distinct datasets will prevent leakage of unintended data from embedding-only databases.

I believe this security issue is not an issue for AI safety, and it would be resolved by normal market forces (i.e., the company would solve this problem to avoid being sued). Additionally, failure would not lead to catastrophic results.

With that said, we can focus on the security issue itself. The proposed solution is for an organization (providing potentially confidential information through embedding data) to have distinct datasets. However, if an attacker wants to learn confidential (e.g., medical) data from an embedding database, they can train their own model on general medical data to do so.
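
To make that threat model concrete, here is a minimal sketch. It simplifies by assuming the attacker has a small set of anchor texts whose victim-side embeddings they can observe (vec2vec itself aligns spaces unsupervised, without such pairs). All names, shapes, and data here are illustrative assumptions, not taken from the paper.

import numpy as np

def fit_translation(attacker_emb, victim_emb):
    # Least-squares linear map W such that attacker_emb @ W approximates victim_emb.
    # attacker_emb: (n, d_a) anchor texts embedded by the attacker's own medical model.
    # victim_emb:   (n, d_v) the same texts' embeddings from the leaked database.
    W, *_ = np.linalg.lstsq(attacker_emb, victim_emb, rcond=None)
    return W

# Shape-only demonstration with synthetic data standing in for real embeddings.
rng = np.random.default_rng(0)
A = rng.normal(size=(512, 384))                       # attacker-side embeddings
V = A @ rng.normal(size=(384, 768)) + 0.01 * rng.normal(size=(512, 768))
W = fit_translation(A, V)
print(np.linalg.norm(A @ W - V) / np.linalg.norm(V))  # small residual => spaces align

Fitting the reverse map (victim to attacker) would then let the attacker pull embeddings out of the victim database and match them against candidate medical texts in their own space, which is why training on a "distinct dataset" alone does not close the leak.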

A more direct method to prevent this security issue would look like a different project than this one.

In general, for a natural abstractions hypothesis paper, novel work can be done here, but with a greater focus on defining similarity metrics between the abstractions learned by different models; one candidate metric is sketched below.
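
As a hedged example rather than a prescription, linear CKA (Kornblith et al., 2019) compares representations of the same inputs under two models and is invariant to rotation and isotropic rescaling of either space:

import numpy as np

def linear_cka(X, Y):
    # X: (n, d1), Y: (n, d2) -- representations of the same n inputs under two models.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)  # 1.0 = identical geometry up to rotation/scale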

Table 2 is difficult to interpret, and I don't see translations of terms in the main text (I assume reading the Jha et al. paper would clarify it for me).

I didn't see the t-SNE projection; it would be good to include it as a figure.

There are some inconsistencies in the results: the conclusion states Top-1 ≈ 12.7% and cosine ≈ 0.28, yet Table 2 shows med→leg and leg→med with Top-1 = 0.00 and cosine ≈ 0.17–0.20. Additionally, the table caption references NQ (Natural Questions) rather than medical/legal, suggesting a copy-over or mislabel.

The paper does not specify dataset composition (corpora names, sizes), preprocessing, sampling, the evaluation set (token/phrase pairs? retrieval protocol?), number of runs/seeds, or variance/error bars. The “≥90% unrelated outputs” claim is not supported by a described annotation protocol.

Visualization claims also lack detail. The t-SNE “no linear relationship” result is plausible, but t-SNE can be misleading without stated parameters (perplexity, seeds) and controls, and none are described.

The result is, however, clearly relevant to AI safety, though not really physics-motivated.
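
For reference, here is a minimal sketch of the kind of evaluation protocol the paper should spell out: Top-1 retrieval accuracy and mean cosine similarity over row-aligned translated/target embedding matrices, plus explicitly reported t-SNE parameters. This is an assumption about what such a protocol could look like, not a reconstruction of the authors' code.

import numpy as np

def top1_and_cosine(translated, target):
    # translated, target: (n, d) row-aligned embedding matrices.
    t = translated / np.linalg.norm(translated, axis=1, keepdims=True)
    g = target / np.linalg.norm(target, axis=1, keepdims=True)
    sims = t @ g.T                                        # pairwise cosine similarities
    top1 = float((sims.argmax(axis=1) == np.arange(len(t))).mean())
    mean_cos = float(np.diag(sims).mean())
    return top1, mean_cos

# Any t-SNE figure should state its parameters explicitly, e.g.:
# from sklearn.manifold import TSNE
# proj = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)

Reporting these numbers over several seeds, with variance, would address the missing error bars as well.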

This is a fantastic idea! Establishing empirical evidence for the validity of non-overlapping training domains (i.e. expert general models) means that we can improve robustness to various structured attacks across implementation domains. Give me 100 more papers like this and we can establish a complete field of general tool-AI as replacements for insecure and unsafe AGI models like ChatGPT-4o and beyond.

Obvious considerations for extended work: whether the approach truly works when trained on larger datasets with more compute, and whether it generalizes to more capable models (which would be necessary to displace existing models in, e.g., law and medicine). Fantastic implications though, and I suggest you continue the work. Relating it methodologically to existing work is also great.

Cite this work

@misc{bykova2025universality,
  title={(HckPrj) Finding the Boundaries of Universality: A Stress Test on Cross-Domain Embedding Translation},
  author={Aksinya Bykova and Evegeni Golovanov},
  date={2025-07-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.