Mar 10, 2025

AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages

Peace Silly, Zineb Ibnou Cheikh, Gracia Kaglan

This project explores AI hallucinations in healthcare across cross-cultural and linguistic contexts, focusing on English, French, Arabic, and a low-resource language, Ewe. We analyse how large language models like GPT-4, Claude, and Gemini generate and disseminate inaccurate health information, emphasising the challenges faced by low-resource languages.
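As an illustration of the kind of cross-lingual probe described above, the following minimal Python sketch (not taken from the paper) poses the same health question in each study language to each model and stores the responses for later rating. The query_model function, the question wording, and the translation placeholders are hypothetical, not the authors' code.

# Hypothetical sketch of a cross-lingual hallucination probe; not the authors' code.
LANGUAGES = ["English", "French", "Arabic", "Ewe"]
MODELS = ["gpt-4", "claude", "gemini"]

# One factual health question per language. Only the English version is filled in here;
# in practice the other versions would be supplied by fluent or native speakers.
QUESTION = {
    "English": "What is the recommended treatment for uncomplicated malaria?",
    "French": "",  # translation supplied by a fluent speaker
    "Arabic": "",  # translation supplied by a fluent speaker
    "Ewe": "",     # translation supplied by a native speaker
}

def query_model(model: str, prompt: str) -> str:
    """Placeholder for the real API call to GPT-4, Claude, or Gemini."""
    raise NotImplementedError

def collect_responses() -> list[dict]:
    """Collect one response per (model, language) pair for later human rating."""
    rows = []
    for model in MODELS:
        for lang in LANGUAGES:
            rows.append({
                "model": model,
                "language": lang,
                "response": query_model(model, QUESTION[lang]),
            })
    return rows

Each stored response would then be rated along the dimensions mentioned in the reviews below: factual accuracy, safety of the recommendation, and transparency about uncertainty.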

Reviewer's Comments


Strengths:

- Transparency and Reproducibility: The methodology is clearly documented, and the study can be easily replicated, which is a significant strength.

- Relevance: The paper addresses a critical issue in AI safety—hallucinations in low-resource languages—with clear implications for healthcare.

- Systematic Approach: The use of a multi-step evaluation framework (factual accuracy, safety recommendations, uncertainty transparency) is well-structured and effective.

- Policy Recommendations: The inclusion of policy recommendations adds practical value to the study.

Areas for Improvement:

- Limitations and Negative Consequences: The paper does not explicitly discuss the limitations of the study or potential negative consequences of its findings. Adding this would strengthen the analysis.

- Mitigation Strategies: While the paper investigates risks, it does not propose concrete solutions or mitigation strategies. Including this would enhance its impact.

- Discussion and Conclusion: The discussion section is brief and does not explore future implications, next steps, or shortcomings. Expanding this would provide a more comprehensive conclusion.

- Comparative Analysis: Including a more thorough comparison between high-resource and low-resource languages would add depth to the analysis.

Suggestions for Future Work:

- Conduct larger-scale experiments with more languages and questions to validate the findings and improve generalizability.

- Explore mitigation strategies for hallucinations and biases in low-resource languages.

- Investigate the framework’s performance in diverse cultural and regulatory contexts.

- Address practical constraints and limitations in future iterations of the study.

This project seeks to assess risks from hallucination in healthcare settings as they vary by language ("low" vs "high" resource). This is an important topic with clear safety implications for detecting hallucinations in impactful settings. The cross-lingual checker is a really neat tool with practical utility.

I have several suggestions for how to improve the reported work.

- the introduction refers to psychology and cognition but it is unclear how those connect to what is studied here

- a definition of a 'high-resource' language should be provided (is it about the amount of training data, or something else?)

- the question set is relatively small; it could also be tied more closely to the naturalistic queries used in healthcare settings

- while hallucination is well defined for factual questions, reasoning and cultural specificity are much more subjective and therefore seem like a very different construct, perhaps alignment rather than hallucination; those items should not be interpreted as reflecting hallucination

- it should be described how the test items were selected and validated

- for LLM response ratings, the number of raters, the instructions given to them, and their inter-rater agreement should be reported

- analyses (e.g., comparisons of models) should be performed with statistical tests (see the sketch after this list)

- Plots should show data aggregated across individual prompts to better reveal trends. Bars for different languages should be presented side by side for easier comparison; right now the differences are difficult to see.

- Data for English and French are not shown.
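To make the points about rater agreement and statistical testing concrete, here is a minimal Python sketch under assumed data: two hypothetical raters label each response as hallucinated or not, and invented per-model counts are compared with a chi-square test. It uses cohen_kappa_score from scikit-learn and chi2_contingency from SciPy; none of the numbers come from the study.

# Hypothetical example: inter-rater agreement and a cross-model comparison.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency

# Invented binary labels (1 = hallucinated, 0 = accurate) from two raters
# over the same eight responses.
rater_a = [1, 0, 0, 1, 1, 0, 1, 0]
rater_b = [1, 0, 1, 1, 1, 0, 0, 0]
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa (inter-rater agreement): {kappa:.2f}")

# Invented contingency table: rows are models, columns are
# (hallucinated, accurate) response counts.
counts = [
    [12, 38],  # model A
    [18, 32],  # model B
    [25, 25],  # model C
]
chi2, p_value, dof, _ = chi2_contingency(counts)
print(f"Chi-square across models: chi2={chi2:.2f}, dof={dof}, p={p_value:.3f}")

A significant p-value in such a test would indicate that hallucination rates differ across models by more than chance alone would explain.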

Thank you for submitting your work on AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages - it was interesting to read. Please see below for some feedback on your project:

1. Strengths of your proposal/ project:

- Your paper quickly outlines how this project contributes to advancing AI safety.

- The paper is interesting, timely and topical, and focused on an under-researched area

2. Areas for improvement:

- It would be helpful if you defined/ expanded upon some of the key terms that you are using (e.g., global catastrophic risks; AI hallucinations), as well as contexts (e.g., ‘AI-driven healthcare decisions’).

- A clearer threat model and examples would greatly strengthen the paper. E.g., you mention that ‘AI hallucinations in healthcare, which poses significant risks to global health security’ - how/in what ways? Which areas of healthcare? Could you provide citations for the results/implications you mention under Threat Model and Safety Implications (p. 2)?

- Some sentences need to be expanded - e.g., ‘Existing research on AI hallucinations often focuses on technical aspects’ - what does this mean? Technical aspects of what?

- The methods section could be a bit clearer, which would help anyone looking to replicate your methods.

Cite this work

@misc{
  title={AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages},
  author={Peace Silly, Zineb Ibnou Cheikh, Gracia Kaglan},
  date={3/10/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.


Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.


Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings: some models complied with manipulative requests at much higher rates, and some readily followed operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.