Nov 25, 2024
Analyzing Dataset Bias with SAEs
Nick Jiang, Joseph Tey
We use SAEs to study biases in datasets.
Mateusz Dziemian
I think that using SAE features to find different biases and information about a dataset is a worthwhile direction. However, I'm struggling to understand what each of the things you did is trying to achieve. I think this comes from using too many methods, which blurs the end goal of the final result.
Jason Schreiber
I really like the application idea here. It would be amazing to better understand the downstream implications of some of the identified biases on model performance, and to work out in detail the threat model being addressed here. This seems worthy of follow-up work.
Simon Lermen
This paper is about identifying issues in the quality of pre-training datasets. The authors look for features that light up for certain classes, such as spam or buggy code, using contrastive search. They then check whether these features are really meaningful. (One typo: it should presumably be "causally" rather than "casually" in "causally ties a predicted output with activated features.")

The basic idea seems to be: we have two classes of text, buggy code vs. safe code, and we use contrastive search to find features that separate them. After this step, we try to figure out exactly what those features fire for; potentially, we may find features that shouldn't be there. Imagined success could look like this: we train the model on a dataset in which there is some spurious correlation between buggy code and something else. (For example, imagine a programmer in a company writes a lot of buggy code and also has a habit of using a certain library. A spurious correlation could cause models to flag any code that includes this library.) We may then want to remove either that data or the feature. As far as I can tell, the authors don't explain a process for automatically detecting such correlations.

It is, however, not clear to me that there really is an established problem here. It would be good to at least demonstrate or cite issues with spurious correlations in current models, unless I missed this.
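To make the contrastive-search step concrete, here is a minimal sketch of how such a feature comparison might look. This is an assumption about the approach, not the authors' actual code; in particular, the `sae.encode` and `get_residual_acts` interfaces are hypothetical placeholders.

```python
import torch

def contrastive_features(sae, get_residual_acts, class_a_texts, class_b_texts, top_k=10):
    """Rank SAE features by how strongly their mean activation differs
    between two classes of text (e.g. buggy vs. safe code)."""
    def mean_feature_acts(texts):
        per_example = []
        for text in texts:
            resid = get_residual_acts(text)        # [tokens, d_model] residual-stream activations (hypothetical)
            feats = sae.encode(resid)              # [tokens, n_features] SAE feature activations (hypothetical)
            per_example.append(feats.mean(dim=0))  # average over tokens
        return torch.stack(per_example).mean(dim=0)  # average over examples

    # Features with the largest mean-activation gap are the most
    # distinctive of class A relative to class B.
    diff = mean_feature_acts(class_a_texts) - mean_feature_acts(class_b_texts)
    values, indices = torch.topk(diff, top_k)
    return indices, values
```

A feature that scores highly here but, on inspection, turns out to fire for an unrelated library import would be exactly the kind of spurious correlation described above.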
Cite this work
@misc{jiang2024analyzing,
  title={Analyzing Dataset Bias with SAEs},
  author={Nick Jiang and Joseph Tey},
  date={2024-11-25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}