This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the ApartSprints Interpretability Hackathon research sprint.

An Informal Investigation of Indirect Object Identification in Mistral GPT2-Small Battlestar

This report presents an informal investigation of an indirect object identification (IOI) circuit within the Mistral GPT2-Small x49 Battlestar transformer model. It is inspired by the Interpretability in the Wild paper by Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt at Redwood Research, and by the mechanistic interpretability work of Neel Nanda.
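
To make the IOI task concrete, below is a minimal sketch (not code from the report itself) of an IOI prompt and the logit-difference measurement commonly used to study it. It assumes the Battlestar checkpoint is available on Hugging Face under the id stanford-crfm/battlestar-gpt2-small-x49 and uses the plain transformers library; the repo id and the prompt are illustrative assumptions.

```python
# Minimal IOI illustration: does the model prefer the indirect object ("Mary")
# over the subject ("John") as the next token? Assumes the Stanford CRFM
# checkpoint "stanford-crfm/battlestar-gpt2-small-x49" (GPT-2-small architecture).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "stanford-crfm/battlestar-gpt2-small-x49"  # assumed HF repo id
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

prompt = "When Mary and John went to the store, John gave a drink to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits at final position

io_token = tokenizer(" Mary")["input_ids"][0]  # indirect object token
s_token = tokenizer(" John")["input_ids"][0]   # subject token
print("logit diff (IO - S):", (logits[io_token] - logits[s_token]).item())
```

A positive logit difference indicates the model favours the indirect object, which is the behaviour the IOI circuit is meant to explain.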

By Chris Mathwin