
Reprogramming AI Models Hackathon

November 22, 2024, 4:00 PM to November 25, 2024, 3:00 AM (UTC)
This event is finished. It occurred between November 22, 2024 and November 25, 2024.

Join us for a groundbreaking exploration of AI model internals, where we'll dive deep into mechanistic interpretability and feature manipulation. In partnership with Goodfire, we're bringing you unprecedented access to state-of-the-art tools for understanding and steering AI behavior.

Whether you're an AI researcher, a curious developer, or passionate about making AI systems more transparent and controllable, this hackathon is for you. As a participant, you will:

  • Collaborate with experts to create novel AI observability tools
  • Learn about mechanistic interpretability from industry leaders
  • Contribute to solving real-world challenges in AI safety and reliability
  • Compete for prizes and the opportunity to influence the future of AI development

Register now and be part of the movement towards more transparent, reliable, and beneficial AI systems. We provide access to Goodfire's SDK/API and research preview playground, enabling participation regardless of prior experience with AI observability.

Why This Matters

As AI models become more powerful and widespread, understanding their internal mechanisms isn't just academic curiosity—it's crucial for building reliable, controllable AI systems. Mechanistic interpretability gives us the tools to peek inside these "black boxes" and understand how they actually work, neuron by neuron and feature by feature.

What You'll Get

  • Exclusive Access: Use Goodfire's API to access an interpretable 8B or 70B model with efficient inference
  • Cutting-Edge Tools: Experience Goodfire's SDK/API for feature steering and manipulation
  • Advanced Capabilities: Work with conditional feature interventions and sophisticated development flows
  • Free Resources: Compute credits for every team to ensure you can pursue ambitious projects
  • Expert Guidance: Direct mentorship from industry leaders throughout the weekend

Project Tracks

1. Feature Investigation

  • Map and analyze feature phenomenology in large language models
  • Discover and validate useful feature interventions
  • Research the relationship between feature weights and intervention success
  • Develop metrics for intervention quality assessment

2. Tooling Development

  • Build tools for automated feature discovery
  • Create testing frameworks for intervention reliability
  • Develop integration tools for existing ML frameworks
  • Improve auto-interpretation techniques

3. Visualization & Interface

  • Design intuitive visualizations for feature maps
  • Create interactive tools for exploring model internals
  • Build dashboards for monitoring intervention effects
  • Develop user interfaces for feature manipulation

4. Novel Research

  • Investigate improvements to auto-interpretation
  • Study feature interaction patterns
  • Research intervention transfer between models
  • Explore new approaches to model steering

Why Goodfire's Tools?

While participants are welcome to use their existing setups, Goodfire's API is the recommended primary option for this hackathon.

Goodfire provides:

  • Access to a 70B parameter model via API (with efficient inference)
  • Feature steering capabilities made simple through the SDK/API
  • Advanced development workflows including conditional feature interventions

The hackathon serves as a unique opportunity for Goodfire to gather valuable feedback from the developer community on their API/SDK. To ensure all participants can pursue ambitious research projects without constraints, Goodfire is providing free compute credits to every team.

Previous Participant Experiences

"I learned so much about AI Safety and Computational Mechanics. It is a field I have never heard of, and it combines two of my interests - AI and Physics. Through the hackathons, I gained valuable connections and learned a lot from researchers with extensive experience." - Doroteya Stoyanova, Computer Vision Intern

Speakers & Collaborators

Tom McGrath

Chief Scientist at Goodfire, previously Senior Research Scientist at Google DeepMind, where he co-founded the interpretability team
Organiser and Judge

Neel Nanda

Team lead of the mechanistic interpretability team at Google DeepMind and a prolific advocate for open-source interpretability research.
Speaker & Judge

Dan Balsam

CTO at Goodfire, previously Founding Engineer and Head of AI at RippleMatch. Goodfire makes interpretability products for safe and reliable generative AI models.
Organiser and Mentor

Myra Deng

Founding PM at Goodfire, Stanford MBA and MS CS graduate previously building modeling platforms at Two Sigma
Organiser

Archana Vaidheeswaran

Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Organiser

Jaime Raldua

Jaime has 8+ years of experience in the tech industry. He started his own data consultancy to support EA organisations and currently works at Apart Research as a Research Engineer.
Organiser and Judge

Joseph Bloom

Joseph co-founded Decode Research, a non-profit organization aiming to accelerate progress in AI safety research infrastructure, and is a mechanistic interpretability researcher.
Speaker

Alana X

Member of Technical Staff at Magic, leading initiatives on model evaluations. Previously a Research Intern at METR
Judge

Callum McDougall

ARENA Director and SERI-MATS alumnus specializing in mechanistic interpretability and AI alignment education
Speaker

Mateusz Dziemian

Member of Technical Staff at Gray Swan AI, recently worked on a UK AISI collaboration. Previous participant in an Apart Sprint that led to a NeurIPS workshop paper
Judge

To ensure you're well-equipped for the Reprogramming AI Models Hackathon, we've compiled a set of resources to support your participation:

  1. Goodfire's SDK/API with hosted inference: Your primary toolkit for the hackathon. Familiarize yourself with our framework for understanding and modifying AI model behavior.
  2. Goodfire's research preview playground: A sandbox environment to experiment with AI model internals.
  3. Tutorial: Visualizing AI Model Internals: Watch this video to understand how to use Goodfire's tools to map and visualize AI model behavior.

  4. The Cognitive Revolution Podcast, Episode on Interpretability: In this episode of The Cognitive Revolution, we delve into the science of understanding AI models' inner workings, recent breakthroughs, and the potential impact on AI safety and control.
  5. Auto-interp Paper (https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html): This paper applies automation to the problem of scaling an interpretability technique to all the neurons in a large language model.

📍 Registered jam sites

Besides remote and virtual participation, our amazing organisers also host local hackathon sites where you can meet up in person and connect with others in your area.

Reprogramming AI Models Hackathon: Edinburgh

AISIG - Decode AI's Black Box & Engineer Model Behavior Hackathon

Join us for the Hackathon in Hereplein 4, 9711GA, Groningen!

Reprogramming AI Models Hackathon

This is a collaboration between Warwick AI and Warwick Effective Altruism. We will be hosting groups that wish to participate in the hackathon for the weekend.

Reprogramming AI Models Hackathon: CAISH

Cambridge hub for hosting the Reprogramming AI hackathon. Office available with monitors and snacks!

Reprogramming AI Models Hackathon: EPFL hub

A local hub for the hackathon on the EPFL campus (luma coming soon). We will provide a room, snacks and drinks for the participants.

🏠 Register a location

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community. Read more about organizing.



Use this template for your submission [Required]

Submission Requirements

Each team should submit a research paper that includes:

  1. Project title and team members
  2. Executive summary (max 250 words)
  3. Introduction and problem statement
  4. Methodology and approach
  5. Results and analysis
  6. Discussion of implications for AI interpretability
  7. Conclusion and future work
  8. References

Additionally, teams should provide:

  • A link to their code repository (e.g., GitHub)
  • Any demo materials or visualizations (if applicable)

Evaluation Criteria

Submissions will be judged based on the following criteria:

  1. Interpretability Advancement
    • Does the project contribute to the field of AI interpretability?
    • Does it provide new insights into understanding or steering AI model behavior?
    • How well does it align with the hackathon's focus on reprogramming AI models?
  2. Research Quality
    • How original and innovative is the approach?
    • Does it present novel ideas or combine existing techniques in unique ways?
    • Do we expect the results to generalize beyond the specific case(s) presented in the submission?
  3. Technical Implementation
    • How well is the project executed from a technical standpoint?
    • Is the code well-structured, documented, and reproducible?
    • How effectively does it utilize Goodfire's SDK/API and other provided resources?
  4. Presentation and Communication
    • How clearly and effectively is the research presented in the paper?
    • Quality of visualizations and demos (if applicable)
    • Clarity of methodology explanation and results interpretation

Key Points to Remember:

  • Focus on creating tools that provide meaningful insights into model behavior or enable precise modifications.
  • Consider how your approach might scale to larger, more complex AI systems.
  • Ethical considerations should be at the forefront of your design process.

💡 Frequently Asked Questions

General Questions

Q: What is the timeline for the hackathon?

A: The hackathon runs from Friday, 22nd November, through Monday, 25th November (3 AM UTC). There will be two brainstorming sessions (Tuesday and Thursday), with API access being granted starting Thursday morning.

Q: What team size is recommended?

A: Teams should be 2-4 people, with a maximum of 5. While individual submissions are allowed, team participation is encouraged for better project outcomes.

Q: How do we get access to the API?

A:

  1. Form your team
  2. Have the team lead fill out the registration form
  3. Receive API access credentials (starting Thursday morning 21st November)

Technical Questions

Q: Which models are available through the SDK?

A: The SDK provides access to:

  • Llama 3 8B
  • Llama 3 70B

Q: What are the main capabilities of the Goodfire SDK?

A: The SDK provides:

  • SAE feature inspection
  • Feature interventions
  • Feature search functionality
  • Contrast analysis between datasets
  • Feature activation analysis

Q: Are there rate limits for the API?

A: Yes, there are rate limits, but they can be increased upon request with justification. The SDK implements exponential backoff to help manage these limits.
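As an illustration of the retry pattern described above, here is a minimal, generic exponential-backoff sketch in Python. It is not the SDK's actual implementation; the function name and defaults are hypothetical:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=60.0,
                 sleep=time.sleep):
    """Retry `call` with exponential backoff and jitter.

    `call` is any zero-argument function that raises on a
    rate-limit error (e.g. an HTTP 429 response).
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            # Re-raise if this was the final attempt.
            if attempt == max_retries - 1:
                raise
            # Delay doubles each attempt, capped at max_delay,
            # with random jitter to avoid synchronized retries.
            delay = min(base_delay * (2 ** attempt), max_delay)
            sleep(delay * (0.5 + random.random() / 2))
```

You would wrap each rate-limited API call in a small lambda and pass it to `with_backoff`; the jitter term spreads retries out when many teams hit the limit at once.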

Q: Can we access raw neuron activations?

A: No, the SDK only provides access to feature-level information, not raw neuron activations.

Q: Can we combine Goodfire's SDK with other tools?

A: Yes, you're encouraged to use any additional tools alongside the SDK to achieve your project goals.

Feature-Related Questions

Q: How can we find relevant features for our task?

A: The SDK provides several methods:

  1. Search: Query features using string searches
  2. Inspect: Analyze feature activations for specific inputs
  3. Contrast: Compare features between different datasets
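To make the three discovery methods concrete, here is a toy in-memory mock in Python. The feature labels, function names, and thresholds are hypothetical and do not reflect the actual SDK interface:

```python
# Hypothetical feature dictionary: feature id -> human-readable label.
FEATURES = {
    0: "mentions of dogs",
    1: "formal legal language",
    2: "mentions of cats",
}

def search(query, features=FEATURES):
    """Search: string match over feature labels."""
    return [fid for fid, label in features.items() if query in label]

def inspect(activations, threshold=0.5):
    """Inspect: features whose activation on an input exceeds a threshold."""
    return [fid for fid, act in activations.items() if act > threshold]

def contrast(acts_a, acts_b, margin=0.3):
    """Contrast: features that fire notably more on dataset A than B."""
    return [fid for fid in acts_a
            if acts_a[fid] - acts_b.get(fid, 0.0) > margin]
```

The real SDK performs these operations against a hosted model's SAE features rather than a plain dict, but the workflow (search for candidates, inspect them on concrete inputs, contrast across datasets) is the same.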

Q: Can we modify feature activations?

A: Yes, you can perform feature interventions to modify model behavior, including conditional interventions.
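Conceptually, a conditional intervention clamps a feature's value only when some condition on the current activations holds. A minimal Python sketch, with hypothetical names (the real SDK operates on model internals, not plain dicts):

```python
def apply_intervention(activations, feature_id, value, condition=None):
    """Clamp one feature's activation, optionally only when `condition`
    (a predicate over the current activations) is satisfied."""
    out = dict(activations)  # leave the input untouched
    if condition is None or condition(activations):
        out[feature_id] = value
    return out
```

For example, you might suppress feature 7 only on inputs where feature 3 is strongly active, leaving all other inputs unmodified.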

Q: Are the SAE features proprietary?

A: Yes, Goodfire's SAE features are proprietary but may be open-sourced in the future.

Support & Resources

Q: Where can we get help during the hackathon?

A: You can:

  1. Use the help desk channel on Discord
  2. Contact the technical support team directly
  3. Attend office hours (Saturday/Sunday)
  4. Participate in hack talks

Q: What if we need increased rate limits?

A: Submit a request through the provided form with justification, and the team will review it case by case.

Q: Are there example notebooks available?

A: Yes, Goodfire provides several example notebooks to help you get started.

Q: How can we submit feature requests or feedback?

A: While immediate changes during the hackathon may not be possible, Goodfire welcomes all feedback and feature requests for future improvements.
