Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark

CounterFact+ extends the CounterFact benchmark with a dynamic component and a KL divergence-based metric. Model editing techniques, which aim to correct false associations in LLMs, can suffer from low specificity and introduce unwanted side effects. Robust benchmarks are crucial for identifying and mitigating these issues.

Recent model editing techniques promise to mitigate the problem of LLMs memorizing false or outdated associations during training. However, we show that these techniques can introduce large unwanted side effects that are not detected by existing specificity benchmarks. We extend the existing CounterFact benchmark with a dynamic component and dub the resulting benchmark CounterFact+. Additionally, we extend the metrics used for measuring specificity with a principled KL divergence-based metric. We use this improved benchmark to evaluate recent model editing techniques and find that they suffer from low specificity. Our findings highlight the need for improved specificity benchmarks that identify and prevent unwanted side effects.
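The abstract does not spell out the metric, but one natural reading of a KL divergence-based specificity measure is to compare the edited and unedited models' next-token distributions on prompts that should be unaffected by the edit, and average the divergence over those prompts. The PyTorch sketch below illustrates that reading; the function name `kl_specificity` and the dummy logits are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def kl_specificity(logits_base: torch.Tensor, logits_edited: torch.Tensor) -> float:
    """KL(P_base || P_edited) between next-token distributions for one prompt.

    A larger value means the edit changed the model's behaviour more on a
    prompt that should have been unaffected, i.e. lower specificity.
    """
    log_p = F.log_softmax(logits_base, dim=-1)    # pre-edit distribution (log)
    log_q = F.log_softmax(logits_edited, dim=-1)  # post-edit distribution (log)
    # KL(P || Q) = sum_x P(x) * (log P(x) - log Q(x))
    return torch.sum(log_p.exp() * (log_p - log_q)).item()

# Example with dummy vocabulary-sized logit vectors standing in for the
# final-position logits of the base and edited models on the same prompt:
vocab_size = 50257
base = torch.randn(vocab_size)
edited = base + 0.1 * torch.randn(vocab_size)   # a mildly perturbed "edited" model
print(kl_specificity(base, edited))
```

In practice the score would be averaged over a set of neighbourhood prompts unrelated to the edited fact, so that a single aggregate number reflects how much the edit bleeds into unrelated behaviour.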
