This paper is about identifying quality issues in pre-training datasets. The authors use contrastive search to look for features that light up for certain classes, such as spam or buggy code, and then check whether these features are really meaningful. A probable typo: the paper writes "casually" where it presumably means "causally," as in "causally ties a predicted output with activated features."

The basic idea seems to be: given two classes of text, say buggy code vs. safe code, use contrastive search to find features that separate them (see the first sketch below). After this step, the authors try to figure out exactly what those features fire for; potentially, this surfaces features that shouldn't be there. The imagined success case could look like this: the model is trained on a dataset with a spurious correlation between buggy code and something else. (For example, imagine a programmer at a company who writes a lot of buggy code and also has a habit of using a certain library; the spurious correlation could cause models to flag any code that uses this library.) One might then want to remove either that data or the feature. As far as I can tell, the authors don't describe a process for automatically detecting such correlations (the second sketch below gives one naive version of what that could look like).

For me, it is, however, not clear that there really is an established problem here. It would be good to at least demonstrate, or cite, issues with spurious correlations in current models, unless I missed this.
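To make the first step concrete, here is a minimal sketch of what a contrastive feature search might look like, assuming SAE-style per-example feature activations. This is my reading of the idea, not the paper's actual implementation; all names (`acts_buggy`, `sae.encode`, etc.) are hypothetical.

```python
import numpy as np

def contrastive_feature_search(acts_class_a, acts_class_b, top_k=10):
    """Rank features by how strongly they separate two text classes.

    acts_class_a, acts_class_b: (num_examples, num_features) arrays of
    feature activations (e.g., from a sparse autoencoder) on the two
    classes -- say buggy vs. safe code.
    Returns indices of the top_k features whose mean activation is
    highest on class A relative to class B.
    """
    mean_a = acts_class_a.mean(axis=0)
    mean_b = acts_class_b.mean(axis=0)
    # Simple separation score: difference of class means. A real
    # implementation might normalize by variance (a t-statistic) or
    # use a linear classifier's feature weights instead.
    scores = mean_a - mean_b
    return np.argsort(scores)[::-1][:top_k]

# Hypothetical usage: features that fire for buggy but not safe code.
# acts_buggy = sae.encode(model_acts_on_buggy_code)   # (N, F)
# acts_safe  = sae.encode(model_acts_on_safe_code)    # (M, F)
# suspect_features = contrastive_feature_search(acts_buggy, acts_safe)
```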
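And for the missing second step, one naive way to automatically screen for spurious correlations would be to check whether a "buggy-code" feature correlates more strongly with some metadata attribute (library used, author, source repo) than with the buggy/safe label itself. Again, this is purely my speculation about what such a process could look like, not anything the paper proposes.

```python
import numpy as np

def flag_spurious_features(acts, labels, attributes):
    """Screen contrastive features for possible confounders.

    acts:       (num_examples, num_features) feature activations
    labels:     (num_examples,) binary class labels (1 = buggy code)
    attributes: dict mapping attribute name -> (num_examples,) binary
                metadata vector (e.g., "uses library X")
    For each feature, report any attribute it correlates with more
    strongly than with the label itself -- a hint that the feature
    tracks the confounder rather than the class.
    """
    suspects = {}
    for f in range(acts.shape[1]):
        # Pearson correlation; constant columns would need guarding
        # against NaNs in a real implementation.
        label_corr = np.corrcoef(acts[:, f], labels)[0, 1]
        for name, attr in attributes.items():
            attr_corr = np.corrcoef(acts[:, f], attr)[0, 1]
            if abs(attr_corr) > abs(label_corr):
                suspects.setdefault(f, []).append(name)
    return suspects
```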