Mar 22, 2026

Beyond the Blocklist: Using Character Aliases to Bypass AI Image Safety

Anidipta Pal

Current text-to-image (T2I) safety protocols primarily rely on string-matching blocklists to prevent the generation of copyrighted entities, non-consensual deepfakes, and protected public figures. However, these syntactic defenses often fail to account for the deep relational knowledge embedded within multimodal models during pre-training. This paper investigates Semantic Alias Mapping—a red-teaming technique where protected entities are retrieved via indirect conceptual proxies, such as requesting "Peter Parker" to obtain high-fidelity likenesses of "Tom Holland." We provide a systematic evaluation of this vulnerability across six frontier generative models, demonstrating that while direct name-based requests are successfully filtered, alias-based prompts achieve a high success rate while preserving target identity. Our empirical results highlight a positive correlation between text-encoder capacity and vulnerability to alias evasion. These findings suggest that robust AI control requires deprecating surface-level string matching in favor of intent-aware latent interventions and output-space verification.
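The filtering failure the abstract describes can be illustrated with a toy sketch. The blocklist terms and prompts below are illustrative assumptions, not the paper's actual evaluation data: a purely syntactic filter catches the actor's name but passes the character alias, even though the model's learned associations may still render the actor's likeness.

```python
# Toy demonstration of a string-matching blocklist failing on a semantic alias.
# Blocklist entries and prompts are hypothetical examples, not the paper's data.

BLOCKLIST = {"tom holland", "scarlett johansson"}

def blocklist_allows(prompt: str) -> bool:
    """Naive syntactic filter: reject only if a blocked string appears verbatim."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

direct = "photorealistic portrait of Tom Holland"
alias = "photorealistic portrait of Peter Parker, live-action movie still"

print(blocklist_allows(direct))   # the direct name is caught by the filter
print(blocklist_allows(alias))    # the alias passes; the model may still render the actor
```

The point of the sketch is that the filter operates on surface strings while the model's knowledge is relational, which is exactly the gap Semantic Alias Mapping exploits.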

Reviewer's Comments

- At some level, being concept-blind is acceptable for copyright enforcement, though clearly not for NSFW control. The Spider-Man example depends on the additional prompts given: there have been multiple Spider-Man actors, and the character itself may also be copyrighted. The paper conflates generating a real person's likeness (a deepfake concern) with generating copyrighted characters (an IP concern); the alias trick really only matters for the former. That said, the entanglement is real, and this is an interesting, if narrow, threat model.

- How did you measure identity preservation, and how are your metrics calculated? S_ID is used but never properly defined: there is no mention of the face-recognition model, the actual threshold values, or how reference images were chosen. Released code, with a reference to it in the submission, would have addressed this.

- Nice presentation of Equation 1 and of the mathematical notation overall; it is intuitive and explains the concepts clearly. This is the paper's greatest strength, and the results are also expressed clearly. It reads like a proper paper: well structured, with clean figures. The figures could benefit from more explanation and, given a bit more time, a "so what".

- The "AI control" / "untrusted agent" framing is more dramatic than the evidence supports.

- It is unclear what the wider threat model is beyond this narrow example, and how much of a problem it poses. It "seems intuitive" that we need concept- or semantically-aware safety monitors, but the paper does not make clear how it contributes more broadly. The future-work section suggests ways to counteract the attack (and that seems like a well fleshed-out idea), but this is skipping ahead. What does alias mapping look like in other contexts? Can you think of simple cross-domain uses of T2I models where this is problematic? More context here, wider than the related work, would be valuable.

- The core insight (blocklists are syntactic; models think semantically) is sound but not especially novel: the related-work section itself cites papers making the same point, and "type the character name instead of the actor name" is something casual users have probably discovered on their own.

- AliasBench-50 seems like a reasonable contribution based on the work

The implementation of this project is well done. I think this work focused on one example of how steganography and collusion are important aspects to study in AI control, and it could be framed as an investigation of Schelling points in control protocols.
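One of the review's questions concerns how an identity-preservation score like S_ID is computed. A common construction, offered here purely as an illustrative assumption (the paper does not specify its embedding model or threshold), is the cosine similarity between face embeddings of the generated image and a reference photo:

```python
# Hypothetical sketch of an identity-preservation score: cosine similarity
# between L2-normalised face embeddings. The embedding dimensionality and
# the match threshold below are assumptions, not the paper's actual setup.
import numpy as np

def s_id(gen_embedding: np.ndarray, ref_embedding: np.ndarray) -> float:
    """Cosine similarity between a generated-image face embedding and a reference."""
    g = gen_embedding / np.linalg.norm(gen_embedding)
    r = ref_embedding / np.linalg.norm(ref_embedding)
    return float(g @ r)

MATCH_THRESHOLD = 0.6  # assumed value; real face-recognition models publish their own

# Demo with synthetic embeddings: a slightly perturbed copy of the reference
# should score well above the threshold.
rng = np.random.default_rng(0)
ref = rng.normal(size=512)
gen = ref + 0.1 * rng.normal(size=512)
print(s_id(gen, ref) > MATCH_THRESHOLD)
```

Reporting the embedding model, threshold, and reference-image selection alongside such a definition would resolve the reviewer's concern.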

Cite this work

@misc{pal2026blocklist,
  title={(HckPrj) Beyond the Blocklist: Using Character Aliases to Bypass AI Image Safety},
  author={Anidipta Pal},
  date={2026-03-22},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100% detection, 0% FPR, and perplexity 4.20. Robustness testing confirms detection above 96% even with 30% word replacement. The framework enables O(n) model-free detection, addressing EU AI Act Article 50 requirements. Code available at https://github.com/ChenghengLi/MCLW

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.