Sep 1, 2024
Amplified Wise Simulations for Safe Training and Deployment
Chris Leong
Summary
Conflict of interest declaration: I advised Fazl on a funding request he was working on.
Regarding publication: this PDF would require further modifications before publication.
I want to train (amplified) imitation agents of wise people to provide advice on navigating conflicting considerations when deciding how to train and deploy AI safely.
Path to Impact: Train wise AI advisors -> organisations make better decisions about how to train and deploy AI -> safer AGI -> better outcomes for humanity
What is wisdom? Why focus on increasing wisdom? (See the figure in the original PDF.)
Why use amplified imitation learning?
Attempting to train directly on wisdom suffers from the usual problem of the optimisation process adversarially exploiting your blind spots, only worse, because wisdom is an especially fuzzy concept.
Attempting to understand wisdom from first principles and build wise AI directly would require at least 50 years and iteration through multiple research paradigms.
In contrast, if our objective is to imitate people who are wise, we have a target that we can optimise hard on. Instead of using reinforcement learning to go beyond human level, we use amplification techniques like debate or iterated amplification, as the sketch below illustrates.
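To make the two-stage structure concrete, here is a minimal, purely illustrative Python sketch of "imitate, then amplify". The model name and the generate/debate calls are placeholders invented for illustration, not a real API or the proposal's actual method; debate is reduced to a toy self-debate loop.

from dataclasses import dataclass


@dataclass
class Advice:
    answer: str
    reasoning: str


def generate(model: str, prompt: str) -> Advice:
    # Placeholder for sampling from the imitation model; a real system
    # would call a language model fine-tuned on transcripts of people
    # judged to be wise.
    return Advice(answer=f"[{model}] answer: {prompt}",
                  reasoning=f"[{model}] reasoning: {prompt}")


def debate(model: str, question: str, rounds: int = 2) -> Advice:
    # Amplification via self-debate: the same imitation model argues both
    # sides, then judges which case held up better across the transcript.
    transcript = []
    for _ in range(rounds):
        pro = generate(model, f"argue FOR a stance on: {question}")
        con = generate(model, f"argue AGAINST that stance on: {question}")
        transcript.append((pro, con))
    summary = " | ".join(f"{p.answer} vs {c.answer}" for p, c in transcript)
    return generate(model, f"judge the debate and advise on: {summary}")


if __name__ == "__main__":
    advice = debate("wise-imitator-v0", "Should we gate this model release?")
    print(advice.answer)

The point of the structure, rather than the toy internals, is that the hard optimisation pressure is applied to the well-defined imitation target, while going beyond human level is left to the amplification wrapper.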
How will these agents advise on decisions?
The humans will ultimately make the decisions. The agents don't have to directly tell the humans what to do; they simply have to inspire the humans to make better decisions. I expect that these agents will be most useful in helping humans figure out how to navigate conflicting principles or frameworks; the sketch below shows one way this advisory-only role could be structured.
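One way to operationalise "advise without deciding" is to constrain the agent's output schema so it can only surface considerations and framings, never a verdict. The schema below is a hypothetical illustration, not part of the original proposal; the field names are invented.

from dataclasses import dataclass, field
from typing import List


@dataclass
class AdvisoryOutput:
    # Deliberately no `decision` field: the human makes the final call.
    question: str
    considerations_for: List[str] = field(default_factory=list)
    considerations_against: List[str] = field(default_factory=list)
    framings: List[str] = field(default_factory=list)  # lenses, not verdicts


report = AdvisoryOutput(
    question="Should we delay deployment for another red-teaming round?",
    considerations_for=["unresolved jailbreaks surfaced in the last evaluation"],
    considerations_against=["delay has real costs and may be pre-empted"],
    framings=["reversibility: which choice is easier to undo later?"],
)
print(report)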
Cite this work:
@misc{leong2024amplified,
  title={Amplified Wise Simulations for Safe Training and Deployment},
  author={Chris Leong},
  date={2024-09-01},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}