AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages

Peace Silly, Zineb Ibnou Cheikh, Gracia Kaglan

This project explores AI hallucinations in healthcare across cross-cultural and linguistic contexts, focusing on English, French, Arabic, and a low-resource language, Ewe. We analyse how large language models like GPT-4, Claude, and Gemini generate and disseminate inaccurate health information, emphasising the challenges faced by low-resource languages.
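To make the setup concrete, here is a minimal sketch (in Python) of the kind of cross-lingual probing described above; it is illustrative only, not the project's actual harness. The query_model stub, the model identifiers, and the example question and translations are assumptions standing in for real GPT-4, Claude, and Gemini API calls.

# Minimal sketch of a cross-lingual health-question probe (illustrative only).
# `query_model` is a hypothetical stub; a real harness would call each
# provider's API and store the responses for rubric-based review.

QUESTION_EN = "What is the recommended first-line treatment for uncomplicated malaria?"

# Translations of the same question; the non-English entries are placeholders
# that a real study would source from qualified translators.
QUESTIONS = {
    "en": QUESTION_EN,
    "fr": "...",  # French translation here
    "ar": "...",  # Arabic translation here
    "ee": "...",  # Ewe translation here
}

MODELS = ["gpt-4", "claude", "gemini"]  # placeholder model identifiers


def query_model(model: str, prompt: str) -> str:
    """Hypothetical placeholder for a real API call."""
    return f"[{model} answer to: {prompt[:40]}...]"


def collect_responses() -> dict:
    """Ask every model the same question in every language."""
    return {
        (model, lang): query_model(model, question)
        for lang, question in QUESTIONS.items()
        for model in MODELS
    }


if __name__ == "__main__":
    for (model, lang), answer in collect_responses().items():
        print(model, lang, "->", answer)

In a setup like this, the collected responses would then be rated against an evaluation rubric, with discrepancies between languages flagged as potential hallucination or consistency failures.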

Reviewer's Comments


Ziba Atak

Strengths:

- Transparency and Reproducibility: The methodology is clearly documented, and the study can be easily replicated, which is a significant strength.

- Relevance: The paper addresses a critical issue in AI safety—hallucinations in low-resource languages—with clear implications for healthcare.

- Systematic Approach: The use of a multi-step evaluation framework (factual accuracy, safety recommendations, uncertainty transparency) is well-structured and effective (a sketch follows this list).

- Policy Recommendations: The inclusion of policy recommendations adds practical value to the study.
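One way the three-part rubric mentioned above (factual accuracy, safety of recommendations, transparency about uncertainty) could be encoded is sketched below. This is a hedged illustration, not the authors' implementation; the field names, score ranges, and unweighted sum are assumptions.

from dataclasses import dataclass


@dataclass
class RubricScore:
    """One rater's judgement of a single model response (0-2 per criterion)."""
    factual_accuracy: int          # 0 = hallucinated, 1 = partly correct, 2 = correct
    safety_of_advice: int          # 0 = harmful, 1 = incomplete, 2 = safe
    uncertainty_transparency: int  # 0 = overconfident, 1 = mixed, 2 = clearly hedged

    def total(self) -> int:
        # Unweighted sum; a real study might weight safety more heavily.
        return self.factual_accuracy + self.safety_of_advice + self.uncertainty_transparency


# Example: a factually correct and safe answer that expresses no uncertainty.
score = RubricScore(factual_accuracy=2, safety_of_advice=2, uncertainty_transparency=0)
print(score.total())  # 4 out of a possible 6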

Areas for Improvement:

- Limitations and Negative Consequences: The paper does not explicitly discuss the limitations of the study or potential negative consequences of its findings. Adding this would strengthen the analysis.

- Mitigation Strategies: While the paper investigates risks, it does not propose concrete solutions or mitigation strategies. Including this would enhance its impact.

- Discussion and Conclusion: The discussion section is brief and does not explore future implications, next steps, or shortcomings. Expanding this would provide a more comprehensive conclusion.

- Comparative Analysis: Including a more thorough comparison between high-resource and low-resource languages would add depth to the analysis.

Suggestions for Future Work:

- Conduct larger-scale experiments with more languages and questions to validate the findings and improve generalizability.

- Explore mitigation strategies for hallucinations and biases in low-resource languages.

- Investigate the framework’s performance in diverse cultural and regulatory contexts.

- Address practical constraints and limitations in future iterations of the study.

Anna Leshinskaya

This project seeks to assess risks from hallucination in healthcare settings as they vary across languages ("low-" vs. "high-resource"). This is an important topic with clear safety implications for detecting hallucinations in impactful settings. The cross-lingual checker is a really neat tool with practical utility.

I have several suggestions for how to improve the reported work.

- the introduction refers to psychology and cognition but it is unclear how those connect to what is studied here

- a definition of a 'high resource' language should be provided (is it about the amount of training data, or something else?)

- the question set is relatively small; it could also be more closely tied to the naturalistic queries used in healthcare settings

- while hallucination is well defined for factual questions, reasoning and cultural specificity are much more subjective and therefore seem like a very different construct, perhaps closer to alignment than hallucination. Those items should not be interpreted as reflecting hallucination.

- it should be described how the test items were selected and validated

- for LLM response ratings, the number of raters, the instructions given to them, and their inter-rater agreement should be reported

- analyses (e.g., comparisons of models) should be performed with statistical tests.

- Plots should show data aggregated over specific prompts to better reveal trends in the data. Bars for different languages should be presented side by side for easier comparison; right now the differences are difficult to see (see the sketch after this list).

- Data for English and French are not shown.
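Two of the suggestions above, statistical comparison of models and side-by-side bars per language, could be implemented roughly as follows. Every count and rate in the sketch is a made-up placeholder, not data from the study.

# Sketch of a statistical model comparison and a grouped bar chart with
# languages side by side. All numbers below are illustrative placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi2_contingency

# Hallucinated vs. non-hallucinated answer counts per model (placeholders).
counts = np.array([
    [12, 38],  # model A
    [20, 30],  # model B
    [15, 35],  # model C
])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")

# Grouped bars: hallucination rate per model, with languages side by side.
languages = ["English", "French", "Arabic", "Ewe"]
models = ["Model A", "Model B", "Model C"]
rates = np.array([
    [0.10, 0.12, 0.20, 0.35],  # model A, per language (placeholders)
    [0.15, 0.18, 0.25, 0.40],  # model B
    [0.08, 0.10, 0.22, 0.30],  # model C
])

x = np.arange(len(models))
width = 0.2
fig, ax = plt.subplots()
for i, lang in enumerate(languages):
    ax.bar(x + i * width, rates[:, i], width, label=lang)
ax.set_xticks(x + 1.5 * width)
ax.set_xticklabels(models)
ax.set_ylabel("Hallucination rate")
ax.legend()
plt.show()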

Bessie O’Dell

Thank you for submitting your work on AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages - it was interesting to read. Please see below for some feedback on your project:

1. Strengths of your proposal/project:

- Your paper quickly outlines how this project contributes to advancing AI safety.

- The paper is interesting, timely and topical, and focused on an under-researched area.

2. Areas for improvement:

- It would be helpful if you defined/expanded upon some of the key terms that you are using (e.g., global catastrophic risks; AI hallucinations), as well as contexts (e.g., ‘AI-driven healthcare decisions’).

- A clearer threat model and examples would greatly strengthen the paper. E.g., you mention that ‘AI hallucinations in healthcare, which poses significant risks to global health security’ - how, and in what ways? Which areas of healthcare? Could you provide citations for the results/implications you mention under Threat Model and Safety Implications (p. 2)?

- Some sentences need to be expanded - e.g., ‘Existing research on AI hallucinations often focuses on technical aspects’ - what does this mean? Technical aspects of what?

- The methods section could be a bit clearer, which would help anyone looking to replicate your methods.

Cite this work

@misc{
  title={AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages},
  author={Peace Silly, Zineb Ibnou Cheikh, Gracia Kaglan},
  date={3/10/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

May 20, 2025

EscalAtion: Assessing Multi-Agent Risks in Military Contexts

Our project investigates the potential risks and implications of integrating multiple autonomous AI agents within national defense strategies, exploring whether these agents tend to escalate or deescalate conflict situations. Through a simulation that models real-world international relations scenarios, our preliminary results indicate that AI models exhibit a tendency to escalate conflicts, posing a significant threat to maintaining peace and preventing uncontrollable military confrontations. The experiment and subsequent evaluations are designed to reflect established international relations theories and frameworks, aiming to understand the implications of autonomous decision-making in military contexts comprehensively and unbiasedly.

Read More

Apr 28, 2025

The Early Economic Impacts of Transformative AI: A Focus on Temporal Coherence

We investigate the economic potential of Transformative AI, focusing on "temporal coherence"—the ability to maintain goal-directed behavior over time—as a critical, yet underexplored, factor in task automation. We argue that temporal coherence represents a significant bottleneck distinct from computational complexity. Using a Large Language Model to estimate the 'effective time' (a proxy for temporal coherence) needed for humans to complete remote O*NET tasks, the study reveals a non-linear link between AI coherence and automation potential. A key finding is that an 8-hour coherence capability could potentially automate around 80-84% of the analyzed remote tasks.

Read More

Mar 31, 2025

Model Models: Simulating a Trusted Monitor

We offer initial investigations into whether the untrusted model can 'simulate' the trusted monitor: is U able to successfully guess what suspicion score T will assign in the APPS setting? We also offer a clean, modular codebase which we hope can be used to streamline future research into this question.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.