Feb 10, 2023

-

Feb 13, 2023

Online & In-Person

Scale Oversight for Machine Learning Hackathon

Join us for the fifth Alignment Jam where we get to spend 48 hours of intense research on how we can measure and monitor the safety of large-scale machine learning models. Work on safety benchmarks, models detecting faults in other models, self-monitoring systems , and  so much else!

00:00:00:00

00:00:00:00

00:00:00:00

00:00:00:00

Join us for the fifth Alignment Jam where we get to spend 48 hours of intense research on how we can measure and monitor the safety of large-scale machine learning models. Work on safety benchmarks, models detecting faults in other models, self-monitoring systems , and  so much else!

This event is ongoing.

This event has concluded.

24

Sign Ups

Overview

Resources

Overview

Arrow

Hosted by Esben Kran, pseudobison, Zaki, fbarez, ruiqi-zhong · #alignmentjam

Join us for the fifth Alignment Jam where we get to spend 48 hours of intense research on how we can measure and monitor the safety of large-scale machine learning models. Work on safety benchmarks, models detecting faults in other models, self-monitoring systems , and  so much else!

🏆$2,000 on the line

Join the hackathon Discord

Measuring and monitoring safety

To make sure large machine learning models follow what we want them to do, we have to have people monitoring their safety. BUT, it is indeed very hard for just one person to monitor all the outputs of ChatGPT...

The objective of this hackathon is to research scalable solutions to this problem!

  • Can we create good benchmarks that run independently of human oversight?

  • Can we train AI models themselves to find faults in other models?

  • Can we create ways for one human to monitor a much larger amount of data?

  • Can we reduce the misgeneralization of the original model using some novel method?

These are all very interesting questions that we're excited to see your answers for during theses 48 hours

Reading group

Join the Discord above to be a part of the reading group where we read up on the research within scaling oversight! The current pieces are:

24

Sign Ups

Overview

Resources

Overview

Arrow

Hosted by Esben Kran, pseudobison, Zaki, fbarez, ruiqi-zhong · #alignmentjam

Join us for the fifth Alignment Jam where we get to spend 48 hours of intense research on how we can measure and monitor the safety of large-scale machine learning models. Work on safety benchmarks, models detecting faults in other models, self-monitoring systems , and  so much else!

🏆$2,000 on the line

Join the hackathon Discord

Measuring and monitoring safety

To make sure large machine learning models follow what we want them to do, we have to have people monitoring their safety. BUT, it is indeed very hard for just one person to monitor all the outputs of ChatGPT...

The objective of this hackathon is to research scalable solutions to this problem!

  • Can we create good benchmarks that run independently of human oversight?

  • Can we train AI models themselves to find faults in other models?

  • Can we create ways for one human to monitor a much larger amount of data?

  • Can we reduce the misgeneralization of the original model using some novel method?

These are all very interesting questions that we're excited to see your answers for during theses 48 hours

Reading group

Join the Discord above to be a part of the reading group where we read up on the research within scaling oversight! The current pieces are:

24

Sign Ups

Overview

Resources

Overview

Arrow

Hosted by Esben Kran, pseudobison, Zaki, fbarez, ruiqi-zhong · #alignmentjam

Join us for the fifth Alignment Jam where we get to spend 48 hours of intense research on how we can measure and monitor the safety of large-scale machine learning models. Work on safety benchmarks, models detecting faults in other models, self-monitoring systems , and  so much else!

🏆$2,000 on the line

Join the hackathon Discord

Measuring and monitoring safety

To make sure large machine learning models follow what we want them to do, we have to have people monitoring their safety. BUT, it is indeed very hard for just one person to monitor all the outputs of ChatGPT...

The objective of this hackathon is to research scalable solutions to this problem!

  • Can we create good benchmarks that run independently of human oversight?

  • Can we train AI models themselves to find faults in other models?

  • Can we create ways for one human to monitor a much larger amount of data?

  • Can we reduce the misgeneralization of the original model using some novel method?

These are all very interesting questions that we're excited to see your answers for during theses 48 hours

Reading group

Join the Discord above to be a part of the reading group where we read up on the research within scaling oversight! The current pieces are:

24

Sign Ups

Overview

Resources

Overview

Arrow

Hosted by Esben Kran, pseudobison, Zaki, fbarez, ruiqi-zhong · #alignmentjam

Join us for the fifth Alignment Jam where we get to spend 48 hours of intense research on how we can measure and monitor the safety of large-scale machine learning models. Work on safety benchmarks, models detecting faults in other models, self-monitoring systems , and  so much else!

🏆$2,000 on the line

Join the hackathon Discord

Measuring and monitoring safety

To make sure large machine learning models follow what we want them to do, we have to have people monitoring their safety. BUT, it is indeed very hard for just one person to monitor all the outputs of ChatGPT...

The objective of this hackathon is to research scalable solutions to this problem!

  • Can we create good benchmarks that run independently of human oversight?

  • Can we train AI models themselves to find faults in other models?

  • Can we create ways for one human to monitor a much larger amount of data?

  • Can we reduce the misgeneralization of the original model using some novel method?

These are all very interesting questions that we're excited to see your answers for during theses 48 hours

Reading group

Join the Discord above to be a part of the reading group where we read up on the research within scaling oversight! The current pieces are: