This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the Interpretability Hackathon research sprint on November 15, 2022

Neurons and Attention Heads that Look for Sentence Structure in GPT2

GPT2 shows strong capabilities with respect to punctuation, grammar, and sentence structure. We set out to investigate specifically how GPT2 accomplishes accurate capitalization of words after full stops, question marks, and exclamation points. We did this by analyzing both neurons and attention heads across a wide variety of inputs and a couple of models. We narrowed our search to two attention heads of interest, including one that specifically attends to punctuation marks that must be followed by a capitalized letter.
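The project's own code is not included here, but the head-scoring idea in the abstract can be sketched as follows: for each attention head, measure how much attention mass it places on sentence-ending punctuation tokens, averaged over query positions. The function and toy data below are our own illustrative assumptions (random weights stand in for real GPT-2 attention patterns), not the authors' implementation.

```python
import numpy as np

def punctuation_attention_score(attn, punct_positions):
    """Average attention mass each head places on punctuation tokens.

    attn: array of shape (n_heads, seq_len, seq_len); each row is an
    attention distribution summing to 1.
    punct_positions: indices of '.', '?', '!' tokens in the sequence.
    Returns one score per head; a head that "looks for" sentence
    boundaries scores high.
    """
    # Attention mass flowing *to* punctuation positions, per query token.
    mass = attn[:, :, punct_positions].sum(axis=-1)  # (n_heads, seq_len)
    # Average over query positions to get a single score per head.
    return mass.mean(axis=-1)                        # (n_heads,)

# Toy example: 4 heads, 8 tokens, with a full stop at position 3.
# Real usage would pull attn from GPT-2 (e.g. output_attentions=True
# in HuggingFace transformers) for one layer.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8, 8))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
scores = punctuation_attention_score(attn, [3])
print(scores.shape)  # (4,)
```

Ranking heads by such a score across many inputs is one way to surface candidates like the punctuation-focused head the abstract describes.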

By 
Harvey Mannering, James Harding, Praveen Selvaraj
