This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the Interpretability Hackathon research sprint on November 15, 2022

Observing and Validating Induction Heads in SoLU-8l-old

We look for induction heads by feeding the model a random sequence of tokens repeated twice and identifying heads that, at the second copy of a token, attend back to the token immediately after that token's first occurrence. The goal is to test how well this detection method, which reads attention scores off a linear scale, generalizes across models, since it remains an open question whether the method itself is reproducible. We report observations on the mean attention score used to decide whether an attention head is an induction head in SoLU-8l-old, compared with GPT2-small.
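The repeated-random-token test can be sketched as follows. This is an illustrative reconstruction rather than the project's actual code: it assumes the TransformerLens library, its "solu-8l-old" model alias, and arbitrary choices of sequence length, batch size, and score threshold.

# Minimal sketch of the repeated-random-token test for induction heads.
# Assumptions (not taken from the project's code): the TransformerLens
# library, the "solu-8l-old" model alias, and illustrative values for the
# sequence length, batch size, and score threshold.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("solu-8l-old")

seq_len, batch = 50, 8
torch.manual_seed(0)
rand_tokens = torch.randint(0, model.cfg.d_vocab, (batch, seq_len))
tokens = torch.cat([rand_tokens, rand_tokens], dim=-1)  # same random sequence, twice

_, cache = model.run_with_cache(tokens)

# An induction head, sitting on the second copy of a token, attends back to the
# token just after that token's first occurrence: source = dest - (seq_len - 1).
# That is the diagonal of the attention pattern at offset 1 - seq_len.
for layer in range(model.cfg.n_layers):
    pattern = cache["pattern", layer]                      # [batch, head, dest, src]
    stripe = pattern.diagonal(offset=1 - seq_len, dim1=-2, dim2=-1)
    score = stripe.mean(dim=(0, -1))                       # mean attention per head
    for head, s in enumerate(score.tolist()):
        if s > 0.4:                                        # illustrative threshold
            print(f"layer {layer}, head {head}: mean induction attention {s:.2f}")

Swapping the model name for "gpt2" would give the GPT2-small side of the comparison; the threshold and sequence length here are placeholder choices, not values from the write-up.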

By Brian Muhia
