Jan 20, 2025
Cite2Root
Dong Chen
Regain information autonomy by bringing people closer to the source of truth.
Edward Yee
There is a real issue in the market, but it's not clear how scalable the solution is. The go-to-market strategy isn't clear either. It is also unclear whether they are able to identify hallucinations, or whether this just works as a "reverse" Perplexity or a wrapper around something that can solve some of the challenges around . This does not tackle AI safety risks as traditionally understood, but it is definitely a pertinent area.
Pablo Sanzo
I enjoyed the thorough introduction to the problem space and to recent innovations like Takehiko et al. When it comes to the solution, the phases of production are clearly defined. I couldn't dive deeper into the MVP because the GitHub repo seems broken (https://github.com/Piswearpants/cite2root).
A proposal for the team is to also consider how to productize the solution, how to bring it to market, and how to design dynamics and incentives for its adoption.
Also, it reads as if the solution focuses on validating existing citations; what about identifying claims that require a citation but don't have one yet? Wikipedia worked on something similar: https://meta.wikimedia.org/wiki/Future_Audiences/Experiment:Citation_Needed
Michaël Trazzi
This approach currently doesn't seem to apply closely enough to core AI risk challenges, and given the lack of empirical results in this paper, it's unclear whether the method will scale or whether this intervention actually works. The concrete tooling is appreciated, though.
Cite this work
@misc {
title={
Cite2Root
},
author={
Dong Chen
},
date={
1/20/25
},
organization={Apart Research},
note={Research submission to the research sprint hosted by Apart.},
howpublished={https://apartresearch.com}
}