Apart News: Finn, Cyber Offense & Johns Hopkins


Apart News is our newsletter to keep you up to date.

October 27, 2024


Dear Apart Community,

Welcome to our newsletter - Apart News!

At Apart Research there is so much to share: brilliant research, great events, and countless community updates.

This week's edition of Apart News introduces our new AI Safety Startup project which had its first roundtable with would-be founders, we share our work on cyber offense evaluations for superintelligent AI, and our AI Policy Hackathon is finally kicking off this weekend.

Inaugural AI Safety Startup Roundtable

Finn Metz, of Apart Research, has been working hard on our ​AI Safety Founders​ project. Ensuring that advanced AI systems behave safely and benevolently is one of the most important challenges of our time. We believe that it takes ambitious entrepreneurs to solve problems in the real world and ensure a safe future for everyone.

AI Safety Founders had its first roundtable this week and will go from strength to strength through the coming months. Want to get involved?

Fill out our interest form ​here​. Sign up to the newsletter to keep abreast of all the events ​here​. Follow all their work on ​LinkedIn​, too.

Cyber Evaluations for Superintelligent AIs

We believe that a superintelligent AI performing autonomous cyber operations would pose a major risk to humanity.

That is why Apart Research created 3CB, a new benchmark designed to test the offensive cyber capabilities of LLM agents.

LLMs with strong offensive skills could be weaponized, potentially leading to large-scale cyber incidents, like infrastructure disruption or data theft. And so, it is crucial we develop robust evaluation tools to prevent catastrophic misuse. Read the paper ​here​.

AI Policy Hackathon this Weekend

Our AI Policy ​Hackathon​ is happening this weekend in Washington DC, with the Johns Hopkins Institute for Assured Autonomy! There is still time to sign up ​here​.

We thought we’d take the time here to introduce our brilliant Speakers and Judges below.

Our Keynote Speaker is Gabriella Waters, Director of the Cognitive & Neurodiversity AI (CoNA) Lab and a Research Associate at the ​National Institute of Standards and Technology (NIST)​.

We also have ​Dr. Anna Broughel​, who works on energy transition policy at ​The Johns Hopkins University​ Institute for Assured Autonomy (IAA) and will be speaking to participants over the weekend.

Next, we have ​Monica Lopez, PhD​, CEO of ​Cognitive Insights​, who will also give a talk to our participants. Finally, we have ​James Bellingham​, Executive Director of the ​JHU Institute for Assured Autonomy​.

There is still time to sign up if you haven't already - you won't want to miss this one! Come work with us on AI policy for a safer, more beneficial future for humanity.

Opportunities

  • Sign up ​here​ for our Reprogramming AI Models Hackathon next month.
  • Luleå University of Technology (LTU) is ​offering​ scholarships for two Postdoctoral Fellows in the Machine Learning group, focusing on Sustainable Machine Learning.

Have a great week and let’s keep working towards safe AI.

‘We are an AI safety lab - our mission is to ensure AI systems are safe and beneficial.’
