Mar 10, 2025
AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages
Peace Silly, Zineb Ibnou Cheikh, Gracia Kaglan
This project explores AI hallucinations in healthcare across cross-cultural and linguistic contexts, focusing on English, French, Arabic, and a low-resource language, Ewe. We analyse how large language models like GPT-4, Claude, and Gemini generate and disseminate inaccurate health information, emphasising the challenges faced by low-resource languages.
Ziba Atak
Strengths:
-Transparency and Reproducibility: The methodology is clearly documented, and the study can be easily replicated, which is a significant strength.
-Relevance: The paper addresses a critical issue in AI safety—hallucinations in low-resource languages—with clear implications for healthcare.
-Systematic Approach: The use of a multi-step evaluation framework (factual accuracy, safety recommendations, uncertainty transparency) is well-structured and effective (a rough sketch of such a rubric appears after this list).
-Policy Recommendations: The inclusion of policy recommendations adds practical value to the study.
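As a rough illustration of how a three-dimension rubric like the one described above could be aggregated per model and language: the class name, field names, and 0/1 scoring scale below are assumptions made for illustration, not the authors' code.

    from dataclasses import dataclass
    from collections import defaultdict
    from statistics import mean

    @dataclass
    class RatedResponse:
        model: str                     # e.g. "GPT-4", "Claude", "Gemini"
        language: str                  # e.g. "English", "French", "Arabic", "Ewe"
        factual_accuracy: int          # 0 = hallucinated, 1 = factually correct
        safety: int                    # 0 = unsafe recommendation, 1 = safe
        uncertainty_transparency: int  # 0 = overconfident, 1 = flags uncertainty

    def summarize(responses: list[RatedResponse]) -> dict:
        """Aggregate mean rubric scores per (model, language) pair."""
        groups = defaultdict(list)
        for r in responses:
            groups[(r.model, r.language)].append(r)
        return {
            key: {
                "factual_accuracy": mean(r.factual_accuracy for r in rs),
                "safety": mean(r.safety for r in rs),
                "uncertainty_transparency": mean(r.uncertainty_transparency for r in rs),
            }
            for key, rs in groups.items()
        }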
Areas for Improvement:
-Limitations and Negative Consequences: The paper does not explicitly discuss the limitations of the study or potential negative consequences of its findings. Adding this would strengthen the analysis.
-Mitigation Strategies: While the paper investigates risks, it does not propose concrete solutions or mitigation strategies. Including this would enhance its impact.
-Discussion and Conclusion: The discussion section is brief and does not explore future implications, next steps, or shortcomings. Expanding this would provide a more comprehensive conclusion.
-Comparative Analysis: Including a more thorough comparison between high-resource and low-resource languages would add depth to the analysis.
Suggestions for Future Work:
-Conduct larger-scale experiments with more languages and questions to validate the findings and improve generalizability.
-Explore mitigation strategies for hallucinations and biases in low-resource languages.
-Investigate the framework’s performance in diverse cultural and regulatory contexts.
-Address practical constraints and limitations in future iterations of the study.
Anna Leshinskaya
This project seeks to assess risks from hallucination in healthcare settings as they vary by language ("low-" vs. "high-resource" languages). This is an important topic with clear safety implications for detecting hallucinations in impactful settings. The cross-lingual checker is a really neat tool with practical utility.
I have several suggestions for how to improve the reported work.
- the introduction refers to psychology and cognition but it is unclear how those connect to what is studied here
- a definition of a 'high-resource' language should be provided (is it about the amount of training data, or something else?)
- the question set is relatively small; it could also be more closely aligned with the naturalistic queries used in healthcare settings
- while hallucination is well defined for factual questions, reasoning and cultural specificity are much more subjective and therefore seem like a very different construct, perhaps closer to alignment than to hallucination; those items should not be interpreted as reflecting hallucination
- it should be described how the test items were selected and validated
- for LLM response ratings, the number of raters, the instructions given to them, and their inter-rater agreement should be reported
- analyses (e.g., comparisons of models) should be supported by statistical tests (a minimal sketch of this point and the previous one appears after this list)
- Plots should show data aggregated over individual prompts to better reveal trends. Bars for different languages should be presented side by side for easier comparison; right now the differences are difficult to see.
- Data for English and French are not shown.
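A minimal sketch of what the last two suggestions could look like in practice, assuming binary hallucination labels. The example labels, the contingency counts, and the choice of Cohen's kappa and a chi-square test are illustrative assumptions, not results or methods from the paper.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from scipy.stats import chi2_contingency

    # Two raters' binary hallucination labels for the same set of responses.
    rater_a = [1, 0, 1, 1, 0, 0, 1, 0]
    rater_b = [1, 0, 1, 0, 0, 0, 1, 1]
    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"Cohen's kappa (inter-rater agreement): {kappa:.2f}")

    # Contingency table: rows = models, columns = (hallucinated, not hallucinated).
    counts = np.array([
        [12, 38],  # hypothetical counts for model A
        [18, 32],  # hypothetical counts for model B
        [25, 25],  # hypothetical counts for model C
    ])
    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"Chi-square test across models: chi2={chi2:.2f}, p={p_value:.3f}")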
Bessie O’Dell
Thank you for submitting your work on AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages - it was interesting to read. Please see below for some feedback on your project:
1. Strengths of your proposal/ project:
- Your paper quickly outlines how this project contributes to advancing AI safety.
- The paper is interesting, timely and topical, and focused on an under-researched area.
2. Areas for improvement:
- It would be helpful if you defined or expanded upon some of the key terms that you are using (e.g., global catastrophic risks; AI hallucinations), as well as contexts (e.g., 'AI-driven healthcare decisions').
- A clearer threat model and examples would greatly strengthen the paper. E.g., you mention 'AI hallucinations in healthcare, which poses significant risks to global health security' - how, and in what ways? Which areas of healthcare? Could you provide citations for the results/implications you mention under Threat Model and Safety Implications (p. 2)?
- Some sentences need to be expanded - e.g., ‘Existing research on AI hallucinations often focuses on technical aspects’ - what does this mean? Technical aspects of what?
- The methods section could be a bit clearer, which would help anyone looking to replicate your methods.
Cite this work
@misc{
  title={AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages},
  author={Peace Silly and Zineb Ibnou Cheikh and Gracia Kaglan},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}