Mar 10, 2025

Medical Agent Controller

Quentin Marquet, Ouafae Moudni, Shakthivel Murugavel, Xavier Charles, Elise Racine

The Medical Agent Controller (MAC) is a multi-agent governance framework designed to safeguard AI-powered medical chatbots by intercepting unsafe recommendations in real time.

It employs a dual-phase approach, using red-team simulations during testing and a controller agent during production to monitor and intervene when necessary.

By integrating advanced medical knowledge and adversarial testing, MAC enhances patient safety and provides actionable feedback for continuous improvement in medical AI systems.
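The production-phase interception described above, a controller agent screening a chatbot's draft reply before it reaches the patient, can be sketched in outline. This is a hypothetical illustration only: the pattern list, the `Verdict` type, and the function names are invented for this sketch, and the actual MAC framework uses LLM-based agents rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the controller's safety policy; the real
# framework would delegate this judgment to a medically informed agent.
UNSAFE_PATTERNS = ("double the dose", "stop taking", "no need to see a doctor")

@dataclass
class Verdict:
    safe: bool
    reason: str

def controller_review(draft_reply: str) -> Verdict:
    """Screen a chatbot's draft reply before it is shown to the patient."""
    lowered = draft_reply.lower()
    for pattern in UNSAFE_PATTERNS:
        if pattern in lowered:
            return Verdict(False, f"matched unsafe pattern: {pattern!r}")
    return Verdict(True, "no unsafe pattern detected")

def guarded_reply(chatbot, user_message: str) -> str:
    """Wrap an arbitrary chatbot: intercept unsafe drafts and substitute a
    safe fallback, letting safe drafts pass through unchanged."""
    draft = chatbot(user_message)
    verdict = controller_review(draft)
    if verdict.safe:
        return draft
    return "I can't advise on that safely. Please consult a healthcare professional."
```

Wrapping the chatbot call rather than modifying the chatbot itself is what makes the approach composable with an arbitrary medical chatbot, as one reviewer notes below.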

Reviewer's Comments

Strengths:

- Comprehensive Literature Review: The paper demonstrates a strong understanding of existing literature, particularly around AI hallucinations, biases, and medical AI risks.

- Novel Methodology: The multi-agent controller framework is innovative and addresses a critical gap in regulating medical AI chatbots.

- Clear Methodology: The experimental design and methodology are well-documented, with detailed explanations of the red team and controller agents.

- Real-World Impact: The framework was tested in a controlled production environment, demonstrating its practical applicability and low-latency performance.

- Societal Relevance: The paper effectively connects AI safety challenges to real-world implications in healthcare, such as patient safety and ethical concerns.

Areas for Improvement:

- Limitations and Negative Consequences: The paper does not explicitly discuss the limitations of the framework or potential negative consequences of its implementation. Adding this would strengthen the analysis.

- Conclusion Expansion: The conclusion could be expanded to include more actionable recommendations and future research directions, particularly around scalability and cross-cultural applicability.

Suggestions for Future Work:

- Conduct larger-scale experiments to validate the findings and improve generalizability.

- Explore cross-disciplinary approaches (e.g., ethics, policy) to broaden the societal impact of the research.

- Investigate the framework’s performance in diverse cultural and regulatory contexts.

Thank you for submitting your work on ‘Medical Agent Controller’; it was interesting to read. Please see below for some feedback on your project:

1. Strengths of your proposal/project:

- The paper is well referenced and well structured

- This is a topical area, which is in need of more attention from researchers and people in industry

- This appears to be a novel contribution

2. Areas for improvement

- This paper needs to delineate more strongly between ‘AI-driven medical chatbots’, which are regulated as medical devices in the UK (a point that isn’t mentioned), and general-purpose AI models that aren’t regulated as medical devices. Could you expand upon why ‘many health chatbots remain unclassified as medical devices’? (p.2).

- It is good that you provide a threat model. This could be more fleshed out (e.g., with a Theory of Change and/or examples), and it would be helpful if you outlined the implications of these threats. For example, what is the impact or implication of bias and discrimination?

- More specifics on methods would be helpful, e.g., ‘Through thousands of simulated conversations orchestrated by the Red Team Agent’: how many thousand? What kinds of conversations? Can you provide an example?

- It would be helpful to understand a bit more about the background of MAC. Has it been used elsewhere?

Well written and clear. I like the pragmatic approach of designing a tool that seems like it could be combined with an arbitrary medical chatbot.

I would be even more excited if the project had tried a more original approach; it feels a bit like you are playing it safe, sticking to rather established methods and well-known challenges. That said, the problem is real and important, and the project seems well executed.

Cite this work

@misc{
  title={Medical Agent Controller},
  author={Quentin Marquet and Ouafae Moudni and Shakthivel Murugavel and Xavier Charles and Elise Racine},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.