Submission Requirements
Each team should submit a research paper that includes:
- Project title and team members
- Executive summary (max 250 words)
- Introduction and problem statement
- Methodology and approach
- Results and analysis
- Discussion of implications for AI interpretability
- Conclusion and future work
- References
Additionally, teams should provide:
- A link to their code repository (e.g., GitHub)
- Any demo materials or visualizations (if applicable)
Evaluation Criteria
Submissions will be judged based on the following criteria:
- Interpretability Advancement
  - Does the project contribute to the field of AI interpretability?
  - Does it provide new insights into understanding or steering AI model behavior?
  - How well does it align with the hackathon's focus on reprogramming AI models?
- Research Quality
  - How original and innovative is the approach?
  - Does it present novel ideas or combine existing techniques in unique ways?
  - Are the results likely to generalize beyond the specific case(s) presented in the submission?
- Technical Implementation
  - How well is the project executed from a technical standpoint?
  - Is the code well-structured, documented, and reproducible?
  - How effectively does it utilize Goodfire's SDK/API and other provided resources?
- Presentation and Communication
  - How clearly and effectively is the research presented in the paper?
  - How high is the quality of the visualizations and demos (if applicable)?
  - How clearly is the methodology explained, and how well are the results interpreted?