Dear Apart Community,
Welcome to our newsletter - Apart News!
At Apart Research there is so much brilliant research, great events, and countless community updates to share.
In this week's edition of Apart News, we are super excited to announce our brand new AI Safety program: Apart Lab Studio. And of course, it wouldn't be an edition of Apart News if we didn't have a new paper to share, too.
Apart Lab Studio
Our new program is designed to bridge the gap between weekend hackathon projects and a fully-fledged AI Safety research career. It helps participants realize the impact of their hackathon work, both for their own careers and for the wider world.
This innovative 8-week program offers promising researchers worldwide the opportunity to develop their ideas and test their fit for research, regardless of their current location or employment status:
- While polishing their hackathon work and ideating on impactful research directions, participants learn from experienced colleagues in AI safety.
- As members of Apart Lab, participants have access to our resources and become part of our global community.
- If a participant's work is a great fit for our fellowship - which offers a deeper dive into their topic - they may be invited to the Apart Lab Fellowship program at the end of the Studio.
Read our blog on the topic. Our Co-Director, Jason, has a brilliant thread, too - and there's one from Apart as well!
Why now?
The AI safety field moves quickly, and timing is crucial. We’ve seen many brilliant and timely ideas emerge during our weekend research hackathons.
However, these insights often require rapid follow-up to maximize their impact. Apart Lab Studio was created to address this need, providing immediate expert support to selected teams within days of their hackathon participation.
Our hackathons produce an incredible variety of research ideas: some achieve their maximum impact as a well-crafted blog post, while others are well placed to become months-long research projects.
Apart Lab Studio offers the required flexibility and helps teams identify the optimal format and scope for their work, ensuring that promising ideas receive the right level of support and development.
- Dissemination Phase: Success in research isn't just about generating ideas and getting results - it's about communicating your insights effectively. During this phase, participants transform their hackathon projects into polished write-ups and share them with the research community. They benefit from the guidance of, and collaboration with, experienced researchers who help refine promising ideas and build invaluable connections within the AI safety community.
- Ideation Phase: With their initial ideas documented and shared, participants then take a step back to explore new possibilities. This phase encourages participants to examine variations of their ideas, discover new research directions, and develop proposals that could attract research funding.
Participants who a) have promising projects that are a great fit for our lab and b) want a career in AI Safety research may be invited to our Apart Lab Fellowship.
Our traditional Apart Lab has already demonstrated remarkable success. Our fellows have published their research in prestigious venues including NeurIPS, ACL, and ICLR - all starting from weekend hackathon projects.
The new Apart Lab Studio program addresses several critical additional needs in the AI safety research community:
- Timely support: Selected teams receive expert support within days of their hackathon participation, maintaining crucial momentum.
- Flexible involvement: Researchers can explore their potential without committing to a lengthy fellowship upfront.
- Globally accessible: Talented individuals worldwide can participate, regardless of their current location or commitments.
- Tailored assistance: Perhaps most importantly, projects receive tailored support based on their specific needs and potential impact.
Join us!
Ready to begin your AI safety research journey? Start by joining our hackathons - sign up to our newsletter here to stay informed about upcoming events. Or if you have already been involved, maybe forward this newsletter to a friend or two?
Whether your project leads to a compelling blog post, a conference paper, or a full research career, come be a part of our global research community.
For more information about Apart Lab Studio and our other programs, visit apartresearch.com/lab.
Rethinking CyberSecEval Paper
This paper is authored by Suhas Hariharan, Zainab Ali Majid, Jaime Raldua Veuthey, and Jacob Haimes.
The risk posed by cyber-offensive capabilities of AI agents has been consistently referenced - by the National Cyber Security Centre, AI Safety Institute, and frontier labs - as a critical domain to monitor.
A key development in assessing the potential impact of AI agents in the cybersecurity space is the work carried out by Meta, through their CyberSecEval approach. While this work is a useful contribution to a nascent field, there are features that limit its utility.
The authors explore the insecure code detection part of Meta's methodology, detailed in Meta's first paper, focusing on its limitations and using this exploration as a test case for LLM-assisted benchmark analysis. Read the write-up on our website. Paper here.
Opportunities
- Want to work with us on mechanistic interpretability and feature manipulation? By joining our Hackathon on Mechanistic Interpretability with Goodfire AI, you could end up being one of our many Lab Fellows who are working on & publishing AI safety research after participating in our sprints. Sign up here.
Have a great week and let’s keep working towards safe AI.
‘We are an AI safety lab - our mission is to ensure AI systems are safe and beneficial.’