Jul 27, 2025

Spectral Regularization as a Safety-Critical Inductive Bias

Shivam Dubey

This project introduces Fourier Gradient Regularization (FGR), a novel, physics-inspired training method that directly addresses the adversarial vulnerability of neural networks. By penalizing the high-frequency components of the model's input gradients during training, in a manner analogous to a coarse-graining procedure in physics, FGR induces a "smoothness" prior, forcing the model to become less sensitive to the very perturbations adversaries exploit.
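The exact form of the regularizer is not reproduced on this page, but a minimal PyTorch sketch of a Fourier-domain input-gradient penalty of this kind, with an assumed radial high-pass mask and illustrative hyperparameters (the weight lam and the cutoff frequency are not taken from the submission), could look as follows:

import torch
import torch.nn.functional as F

def fgr_loss(model, x, y, lam=0.1, cutoff=0.25):
    # Cross-entropy plus a penalty on the high-frequency energy of the input gradient.
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)

    # Gradient of the loss with respect to the input pixels, kept in the graph
    # so the penalty can be backpropagated through the model parameters.
    grad = torch.autograd.grad(ce, x, create_graph=True)[0]

    # 2D FFT of the input gradient, shifted so low frequencies sit at the centre.
    spec = torch.fft.fftshift(torch.fft.fft2(grad, norm="ortho"), dim=(-2, -1))

    # Radial high-pass mask in normalised frequency units (fftfreq spans [-0.5, 0.5)).
    h, w = x.shape[-2:]
    fy = torch.fft.fftshift(torch.fft.fftfreq(h, device=x.device))
    fx = torch.fft.fftshift(torch.fft.fftfreq(w, device=x.device))
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    high_pass = (radius > cutoff).float()

    penalty = (spec.abs() ** 2 * high_pass).mean()
    return ce + lam * penalty

During training the model would simply minimize fgr_loss in place of plain cross-entropy; setting lam to zero recovers the baseline objective.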

Reviewer's Comments


The submission proposes and studies a novel regularization method for adversarial robustness, based on adding an auxiliary loss term constructed from the high-frequency components of the loss gradient. Adversarial robustness is an interesting topic, and the project finds some positive preliminary results for the proposed method. The project could be improved by discussing more explicitly the connection between the regularization of high-frequency components in the loss, and the high-frequency noise structure in data that’s thought to enable adversarial examples.

The author proposes that altering models' training objectives by adding a loss term penalizing their sensitivity to high-frequency input changes could make them more robust to adversarial perturbations without substantially penalizing performance. The intuition is that such a loss term would force models to learn smoother functions that are intrinsically less exploitable by pixel-scale noise. The author tested this idea on a ResNet model trained on CIFAR-10 and found that it achieved a 102% relative improvement in adversarial robustness with negligible accuracy loss.

This was perhaps the most mature of the entries that I reviewed. The paper presents a clear AI safety motivation (the problem of adversarial vulnerability), proposes to address it with a mathematically precise physics-motivated regularizer, implements it in code, reports a clear-cut improvement on safety metrics and suggests a well-motivated set of next steps (scaling to other datasets/models and head-to-head comparison with other adversarial defenses). I think it’s a promising candidate for continued research beyond this hackathon.

This is a tidy project that adds a high-frequency Fourier-mode regularizer to the loss, claiming striking gains in robustness compared to a baseline model. The physics connection is sound, but the framing could use some work: spectral bias is a training phenomenon, whereas your penalty acts on the final decision surface, reinforcing that bias by smoothing out wrinkles learned later in training. While the safety connection (robustness by design) is clear, more attention could have been paid to what kinds of adversarial examples this approach filters out. The author also missed relevant literature (i) questioning the spectral bias hypothesis and (ii) applying similar techniques to hidden layers or input gradients (e.g., Ross et al. 2017; Yin et al. 2019). This does not mean the work has already been done, but engaging with it would have helped to contextualize the results. Finally, the methodology and plots are largely unexplained (e.g., the regularizer weight and the type of attack), which undermines confidence in the results.
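For context, the input-gradient regularizer of Ross et al. (2017) referenced in this comment penalizes the total L2 energy of the loss gradient with respect to the input, with no spectral mask; a minimal sketch (the weight lam is illustrative, inputs assumed to be batched images) is:

import torch
import torch.nn.functional as F

def gradient_penalty_loss(model, x, y, lam=0.1):
    # Cross-entropy plus the squared L2 norm of the input gradient ("double backprop").
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(ce, x, create_graph=True)[0]
    return ce + lam * grad.pow(2).sum(dim=(1, 2, 3)).mean()

A Fourier-domain penalty of the kind described above differs from this baseline only in how it weights the gradient's frequency components before summing, which is why a head-to-head comparison would help isolate the value of the spectral mask.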

This project offers a highly effective, physics-inspired approach to enhance adversarial robustness, directly addressing a critical AI safety concern with strong empirical evidence.

An interesting approach that seems intuitively reasonable. The main shortcomings are in the experimental scope: a direct comparison with adversarial training of different types is important, as is comparison with conceptually similar regularizers (gradient penalties, Jacobian norm, Sobolev training, spectral norm constraints). Furthermore, one should benchmark robustness against stronger attacks (Carlini-Wagner, AutoAttack), as PGD is relatively weak.
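As a concrete starting point for such a benchmark, a minimal L-infinity PGD evaluation loop in PyTorch could look like the sketch below (standard CIFAR-10 attack settings are assumed: eps = 8/255, step size 2/255, 20 iterations, inputs in [0, 1]); a stronger evaluation would replace pgd_attack with a Carlini-Wagner or AutoAttack call.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    # Random start inside the eps-ball, then iterated signed-gradient steps with projection.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, device="cuda"):
    # Fraction of test examples still classified correctly under the attack.
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total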

Cite this work

@misc{
  title={(HckPrj) Spectral Regularization as a Safety-Critical Inductive Bias},
  author={Shivam Dubey},
  date={2025-07-27},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.