Jul 29, 2024

LLM Research Collaboration Recommender

David McSharry

A tool that searches for other researchers with similar research interests/complementary skills to your own to make finding a high-quality research collaborator more likely.
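The submission does not spell out the matching method, so purely as an illustration, here is a minimal sketch of one way such a recommender could rank candidate collaborators, assuming plain-text interest profiles (for example, scraped LW bios or tag lists) and sentence-transformers embeddings compared by cosine similarity. The profiles, model choice, and function names below are hypothetical, not taken from the project.

# Hypothetical sketch: rank potential collaborators by similarity of
# research-interest text. Profiles and model choice are illustrative only.
from sentence_transformers import SentenceTransformer
import numpy as np

profiles = {
    "alice": "Mechanistic interpretability of transformer attention heads.",
    "bob": "Scalable oversight and debate protocols for LLM alignment.",
    "carol": "Sparse autoencoders for feature discovery in language models.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")

def recommend(query_profile: str, top_k: int = 2):
    """Return the top_k researchers whose profiles are most similar to the query."""
    names = list(profiles)
    embeddings = model.encode([query_profile] + [profiles[n] for n in names])
    query, others = embeddings[0], embeddings[1:]
    # Cosine similarity between the query profile and every candidate profile.
    sims = others @ query / (np.linalg.norm(others, axis=1) * np.linalg.norm(query))
    return sorted(zip(names, sims), key=lambda x: -x[1])[:top_k]

print(recommend("I work on interpreting attention circuits in LLMs."))

In practice the ranking could also weight complementary skills (e.g. theory vs. engineering) rather than pure similarity, which is closer to what the project description suggests.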

Reviewer's Comments

I like the idea of the project, and I like how references are provided for how the skills and research interests can be complementary. I would like to see more features that reduce the number of steps the user has to take from getting a recommendation to starting a collaboration.

This is a cool project; I like that it could be beneficial for all researchers, rather than only more junior ones. I like that the interface is clean and intuitive and builds off the existing tag system on LW. Have you considered trying to add this as a feature to LW? There could be some nice integrations, for example users could mark themselves as open or closed to new collaborators, which would help improve the likelihood of recommendations leading to collaborations.

Love this, and I could see a version of this being extremely useful for AIS researchers. It would be great to reach out to LW or AF and ask whether they’d be up for testing a prototype on their page.

Love this! It’s a tool that I think could be implemented across the entire research pipeline, triggered whenever a researcher asks for a critique of an agenda or grant proposal: the LLM proactively suggests people the author should consider reaching out to, either to collaborate on the project or to ask for feedback. You could even include a pre-written message to reduce the friction of contacting potential collaborators.

I think what would make this project work exceptionally well is integrating it into things that researchers in the alignment ecosystem already do, so that they don’t even have to remember to seek out collaborators.

In addition, I think it would be beneficial to add people who are outside of LW (authors of relevant papers, for example) in order to broaden the search and improve cross-pollination and strength-matching. The tool could grab their email and provide a pre-written email for the authors, to lower friction.

People who actively want to show they are looking for collaborators could have a specific account bio that they are prompted to update periodically with what they are currently interested in. You could use an LLM to draft that bio from their existing work to help them get started, and they can then edit it however they like.

Overall, thank you for getting started on building this!

PS: You could use the Alignment Research Dataset to get around the rate limit.
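To make the reviewer’s pre-written-bio idea concrete, here is a minimal sketch assuming the OpenAI chat completions client; the model name, prompt wording, and example posts are placeholders rather than anything from the original project.

# Hypothetical sketch of the reviewer's suggestion: draft a collaborator-facing
# bio from a researcher's recent writing, which they can then edit by hand.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_bio(recent_posts: list[str]) -> str:
    """Ask an LLM for a short 'looking for collaborators' bio based on recent posts."""
    prompt = (
        "Write a 3-sentence bio describing this researcher's current interests "
        "and what kind of collaborator they are looking for.\n\n"
        + "\n---\n".join(recent_posts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_bio(["Post on SAE feature splitting...", "Post on circuit-level evals..."]))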

Cite this work

@misc{
  title={LLM Research Collaboration Recommender},
  author={David McSharry},
  date={7/29/24},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.
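The abstract reports deception rates per model without spelling out the computation; one plausible reading (a sketch under assumed data, not the authors’ actual pipeline) is the fraction of questions a model answers correctly at baseline but gets wrong once the adversarial content is injected.

# Hypothetical sketch: per-model deception rate as the share of questions where
# the model is correct at baseline but follows the injected claim under the
# adversarial condition. Field names and data are illustrative only.
from collections import defaultdict

# Each record: (model, question_id, condition, recommended_correct_product)
results = [
    ("gpt-4.1-mini", 1, "baseline", True),
    ("gpt-4.1-mini", 1, "adversarial", False),
    ("gpt-4.1", 1, "baseline", True),
    ("gpt-4.1", 1, "adversarial", True),
]

def deception_rates(records):
    by_key = {(m, q, c): ok for m, q, c, ok in records}
    counts = defaultdict(lambda: [0, 0])  # model -> [deceived, eligible]
    for (m, q, c), ok in by_key.items():
        if c != "baseline" or not ok:
            continue  # only count questions the model gets right without injection
        counts[m][1] += 1
        if not by_key.get((m, q, "adversarial"), True):
            counts[m][0] += 1
    return {m: deceived / eligible for m, (deceived, eligible) in counts.items() if eligible}

print(deception_rates(results))  # e.g. {"gpt-4.1-mini": 1.0, "gpt-4.1": 0.0}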


Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.


Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. Some models complied with manipulative requests at much higher rates than others, and some readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.