This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Deception Detection Hackathon: Preventing AI deception
July 1, 2024
Accepted at the Deception Detection Hackathon: Preventing AI deception research sprint.

An Exploration of Current Theory of Mind Evals

We evaluated the performance of a prominent large language model from Anthropic on the Theory of Mind evaluation developed by the AI Safety Institute (AISI). Our investigation revealed issues with the dataset used by AISI for this evaluation.

By John Henderson, Alan Fung, Bachar Moustapha
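
The abstract describes scoring an Anthropic model on AISI's Theory of Mind questions. As a rough illustration only, and not the authors' code, the sketch below shows one way to query an Anthropic model on a Sally-Anne-style item and score the answer by substring match; the model name, question wording, and scoring rule are assumptions.

```python
# Minimal sketch, not the authors' implementation: query an Anthropic model
# on a single Sally-Anne-style Theory of Mind item and score by substring
# match. The model name, question, and expected answer are assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

item = {
    "prompt": (
        "Sally puts her marble in the basket and leaves the room. "
        "While she is away, Anne moves the marble to the box. "
        "When Sally returns, where will she look for her marble? "
        "Answer with a single word."
    ),
    "expected": "basket",
}

response = client.messages.create(
    model="claude-3-opus-20240229",  # assumed model; the report does not name one
    max_tokens=20,
    messages=[{"role": "user", "content": item["prompt"]}],
)

answer = response.content[0].text.strip().lower()
print("model answer:", answer)
print("correct:", item["expected"] in answer)
```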
