Dear Apart Community,
Welcome to our newsletter - Apart News!
At Apart Research, there is so much brilliant research, so many great events, and countless community updates to share. In this week's edition of Apart News, we ask just how impactful a donation to Apart Research is and take a look at the ability of LLMs to predict neuroscience results.
How impactful is a donation to us?
As you'll most likely know as an avid reader of Apart News, Apart Research is a non-profit research and community-building lab with a strategic focus on high-volume frontier technical research.
Apart is currently raising a round to run the lab through 2025 and 2026, but here we'll describe what your marginal donation could enable. Donate to Apart Research.
In just two years, Apart Research has established itself as a unique and efficient part of the AI safety ecosystem. Our research output includes 13 peer-reviewed papers published since 2023 at top venues including NeurIPS, ICLR, ACL, and EMNLP, with six main conference papers and nine workshop acceptances.
Our work has been cited by OpenAI's (since-disbanded) Superalignment team, and our team members have contributed to significant publications such as Anthropic's 'Sleeper Agents' paper.
With this track record, we're able to build on our position as an AI safety lab and direct our work toward impactful frontiers of technical research in governance, research methodology, and AI control.
Besides accelerating a Lab Fellow's research career at an average direct cost of around $3k, supporting research sprint participants for as little as $30, and helping local groups grow at similarly high impact-per-dollar ratios, your marginal donation can fund further impactful projects:
- Improved access to our programs ($7k-$25k): A professional revamp of our website and documentation would make our programs and research outputs more accessible to talented researchers worldwide.
- Higher conference attendance support ($20k): Currently, we only support one fellow per team to attend conferences. Additional funding would enable a second team member to attend, at approximately $2k per person.
- Improving worldview diversity in AI safety ($10k-$20k): We now work across every continent and see real value in our approach of enabling international and underrepresented professional talent. This funding would support more targeted outreach from Apart and help existing lab members attend conferences to discuss and represent AI safety among otherwise underrepresented professional groups.
- Continuing impactful research projects ($15k-$30k): We will be able to extend timely and critical research projects. For instance, we're looking to port our cyber-evaluations work to Inspect, making it a permanent part of the UK AISI's catastrophic risk evaluations (see the sketch after this list). Our recent paper also introduces novel methods for testing whether LLMs game public benchmarks, and we'd like to extend this test to other high-impact benchmarks while making the results more accessible.
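For readers curious what a port to Inspect involves, here is a minimal sketch of an Inspect task written in Python against the open-source `inspect_ai` package. The dataset file, solver, and scorer below are illustrative placeholders, not Apart's actual cyber-evaluations.

```python
# Hypothetical minimal Inspect task (a sketch, not Apart's real evaluation).
# Assumes a JSONL dataset with "input" and "target" fields.
from inspect_ai import Task, task
from inspect_ai.dataset import json_dataset
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def cyber_eval():
    return Task(
        dataset=json_dataset("cyber_questions.jsonl"),  # placeholder dataset
        solver=[generate()],  # one model completion per sample
        scorer=match(),       # score by matching the target answer
    )
```

Saved as `cyber_eval.py`, this could be run with `inspect eval cyber_eval.py --model <provider/model>`; a real port would add task-specific solvers and scorers.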
You'll be supporting a growing organization: the Apart Lab fellowship doubled from Q1'24 to Q3'24 (17 to 35 fellows), and our research sprints have moved thousands of participants closer to working on AI safety.
Given current AGI development timelines, the need to scale and improve safety research is urgent. In our view, Apart seems like one of the better investments to reduce AI risk.
If this sounds interesting and you'd like to hear more (or have a specific marginal project you'd like to see happen), our inbox is open. Please think about donating to Apart Research.
What we've been reading
A new paper in Nature Human Behaviour - 'Large language models surpass human experts in predicting neuroscience results' - finds that LLMs can outperform human experts at predicting the outcomes of neuroscience experiments.
As the authors put it, 'LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts.' They call their benchmark for predicting neuroscience results 'BrainBench'.
The paper finds that LLMs 'surpass experts in predicting experimental outcomes. BrainGPT, an LLM [they] tuned on the neuroscience literature, performed better yet. Like human experts, when LLMs indicated high confidence in their predictions, their responses were more likely to be correct, which presages a future where LLMs assist humans in making discoveries.'
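To make the setup concrete, here is a minimal sketch of a BrainBench-style comparison, assuming the basic recipe of choosing between an original abstract and an altered one by model perplexity. The model choice and helper names are our own illustrative stand-ins, not the paper's code.

```python
# Sketch of a BrainBench-style judgment (illustrative, not the paper's code).
# The model picks whichever abstract version it finds less surprising; the
# perplexity gap doubles as a rough confidence signal.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; BrainGPT was tuned on the neuroscience literature
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (lower = less surprising)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

def predict(original: str, altered: str) -> tuple[str, float]:
    p_orig, p_alt = perplexity(original), perplexity(altered)
    choice = "original" if p_orig < p_alt else "altered"
    confidence = abs(p_orig - p_alt)  # bigger gap ~ higher confidence
    return choice, confidence
```

The paper's observation that high-confidence answers were more often correct corresponds here to larger perplexity gaps being more reliable.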
At Apart, we see increasingly powerful AI becoming genuinely good at research assistance. We believe it will also become very good at assisting AI development specifically, and AI-powered R&D is already attracting huge cash injections.
AI safety researchers need to wake up to this and get creative with AI-assisted research to keep pace with capabilities, so this is an area we'd like to see more work go into.
Opportunities
- *Never* miss a Hackathon by keeping up to date here.
- Final call: applications for the UK AI Safety Institute's Fast Grants programme close soon! Don’t miss the chance to secure up to £200k in funding to advance critical AI safety research.
Have a great week and let’s keep working towards safe and beneficial AI.