Mar 10, 2025

An Interpretable Classifier based on Large scale Social Network Analysis

Monojit Banerjee

Mechanistic interpretability of models is essential for understanding AI decision making: it supports safety, alignment with human values, model reliability, and further research. By revealing internal processes, it promotes transparency, mitigates risks, and fosters trust, ultimately leading to more effective and ethical AI systems in critical areas. In this study, we explore social network data from BlueSky and use Sparse Autoencoder (SAE) features of the posts to build a simple, easy-to-train, interpretable financial classifier. Finally, we visually explain the characteristics that drive its decisions.
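As an illustration of the pipeline the abstract describes, the sketch below trains an interpretable decision-tree classifier on SAE feature activations (the reviewer comments indicate decision trees were used). It is a minimal sketch, not the study's code: the SAE activations and financial labels are simulated with synthetic data, and all names (n_sae_features, sae_feature_i, etc.) are assumptions; in the real pipeline the activations would come from running BlueSky posts through a language model and a pre-trained sparse autoencoder.

# Minimal sketch (not the authors' code): train an interpretable decision-tree
# classifier on sparse-autoencoder (SAE) feature activations. The activations
# and financial-sentiment labels below are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in for SAE activations: n_posts x n_sae_features, mostly zero (sparse).
n_posts, n_sae_features = 2000, 512
X = rng.random((n_posts, n_sae_features)) * (rng.random((n_posts, n_sae_features)) < 0.05)

# Stand-in for financial labels (0 = bearish, 1 = bullish), driven by a few features.
y = (X[:, 10] + X[:, 42] > X[:, 7]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree keeps the classifier easy to train and its decision rule small
# enough to read directly.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# The learned splits reference individual SAE features, which can then be
# interpreted (e.g. via their maximally activating posts) to explain decisions.
print(export_text(clf, feature_names=[f"sae_feature_{i}" for i in range(n_sae_features)]))

Restricting the tree depth keeps the classifier readable, and each SAE feature it splits on can be characterized separately, which is what makes the overall decision rule explainable.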

Reviewer's Comments

The project addresses an important problem by trying to make financial sentiment analysis more interpretable, and the use of SAEs and decision trees is a reasonable approach for achieving interpretability. The paper is clearly written and structured, but it could use more detail on the implementation and on the threat model, which is currently somewhat vague; a more detailed discussion of potential failure modes and mitigation strategies would strengthen the AI safety aspect. It would also be valuable to include a more thorough error analysis and a discussion of the limitations of the approach. I'd encourage you to dive deeper into the mechanistic interpretability literature to help with the analysis of the SAE features, and/or into adversarial robustness in NLP to inform strategies for making similar systems more resilient to manipulation.

Good work over the hackathon, and an interesting approach using a novel social media platform; well done! I've seen references to current research on SAEs and decision-tree classifiers, but I'd recommend engaging with the available literature more thoroughly, potentially with a focus on finance. It's definitely interesting to think about the interpretability of sentiment analysis, but moving forward I'd spend some time detailing all potential applications of this work and its impacts on the AI safety space. As stated in the 'Discussion and Conclusion' section, future work should use data beyond BlueSky and social media to broaden the research; I think this should be a priority. It would also be interesting to compare these results to more traditional interpretability techniques.

Cite this work

@misc{banerjee2025interpretable,
  title={An Interpretable Classifier based on Large scale Social Network Analysis},
  author={Monojit Banerjee},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
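As a rough illustration of the authorization flow described above, here is a minimal software analogue in Python using the cryptography package. It is not the project's HardCaml design, and the names (SecurityBlock, PERMISSION_SECONDS, etc.) are assumptions made for the sketch.

# Illustrative software analogue of the challenge-response authorization loop:
# the chip issues a nonce, the external authority signs it, the chip verifies
# the signature and grants time-limited permission; otherwise outputs are zero.
import os, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The external authority holds the private key; the "chip" only knows the public key.
authority_key = Ed25519PrivateKey.generate()
CHIP_PUBLIC_KEY = authority_key.public_key()

PERMISSION_SECONDS = 60  # length of the time-limited permission window

class SecurityBlock:
    def __init__(self, public_key):
        self.public_key = public_key
        self.permission_expires = 0.0
        self.pending_nonce = b""

    def issue_nonce(self) -> bytes:
        # Chip generates a fresh nonce for the authority to sign.
        self.pending_nonce = os.urandom(32)
        return self.pending_nonce

    def submit_signature(self, signature: bytes) -> bool:
        # Chip verifies the authority's signature over the nonce before
        # granting a time-limited permission window.
        try:
            self.public_key.verify(signature, self.pending_nonce)
        except InvalidSignature:
            return False
        self.permission_expires = time.time() + PERMISSION_SECONDS
        return True

    def gate(self, output):
        # Without valid, unexpired authorization, outputs are gated to zero.
        return output if time.time() < self.permission_expires else 0

block = SecurityBlock(CHIP_PUBLIC_KEY)
nonce = block.issue_nonce()
assert block.submit_signature(authority_key.sign(nonce))
print(block.gate(42))  # 42 while authorized; 0 once the window expires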


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.


Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.