Mar 10, 2025

AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages

Peace Silly, Zineb Ibnou Cheikh, Gracia Kaglan

This project explores AI hallucinations in healthcare across cross-cultural and linguistic contexts, focusing on English, French, Arabic, and a low-resource language, Ewe. We analyse how large language models like GPT-4, Claude, and Gemini generate and disseminate inaccurate health information, emphasising the challenges faced by low-resource languages.

Reviewer's Comments


Strengths:

- Transparency and Reproducibility: The methodology is clearly documented, and the study can be easily replicated, which is a significant strength.

- Relevance: The paper addresses a critical issue in AI safety—hallucinations in low-resource languages—with clear implications for healthcare.

- Systematic Approach: The use of a multi-step evaluation framework (factual accuracy, safety recommendations, uncertainty transparency) is well-structured and effective.

- Policy Recommendations: The inclusion of policy recommendations adds practical value to the study.

Areas for Improvement:

- Limitations and Negative Consequences: The paper does not explicitly discuss the limitations of the study or potential negative consequences of its findings. Adding this would strengthen the analysis.

- Mitigation Strategies: While the paper investigates risks, it does not propose concrete solutions or mitigation strategies. Including these would enhance its impact.

- Discussion and Conclusion: The discussion section is brief and does not explore future implications, next steps, or shortcomings. Expanding it would provide a more comprehensive conclusion.

- Comparative Analysis: A more thorough comparison between high-resource and low-resource languages would add depth to the analysis.

Suggestions for Future Work:

- Conduct larger-scale experiments with more languages and questions to validate the findings and improve generalizability.

- Explore mitigation strategies for hallucinations and biases in low-resource languages.

- Investigate the framework's performance in diverse cultural and regulatory contexts.

- Address practical constraints and limitations in future iterations of the study.

This project seeks to assess risks from hallucination in healthcare settings as they vary across languages of "low" versus "high" resource. This is an important topic with clear safety implications for detecting hallucinations in high-stakes settings. The cross-lingual checker is a neat tool with practical utility.

I have several suggestions for how to improve the reported work.

- the introduction refers to psychology and cognition, but it is unclear how those connect to what is studied here

- a definition of a "high-resource" language should be provided (is it about the amount of training data, or something else?)

- the question set is relatively small; it could also be made more representative of naturalistic queries used in healthcare settings

- while hallucination is well defined for factual questions, reasoning and cultural specificity are much more subjective and therefore seem like a very different construct—perhaps alignment rather than hallucination. Those items should not be interpreted as reflecting hallucination.

- how the test items were selected and validated should be described

- for LLM response ratings, the number of raters, the instructions given to them, and their inter-rater agreement should be reported

- analyses (e.g., comparisons of models) should be performed with statistical tests

- plots should show data aggregated over specific prompts to better reveal trends; bars for different languages should be presented side by side for easier comparison, as the differences are currently difficult to see

- data for English and French are not shown

Thank you for submitting your work on "AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages"—it was interesting to read. Please see below for some feedback on your project:

1. Strengths of your proposal/ project:

- Your paper quickly outlines how this project contributes to advancing AI safety.

- The paper is interesting, timely and topical, and focused on an under-researched area.

2. Areas for improvement:

- It would be helpful if you defined or expanded upon some of the key terms you use (e.g., global catastrophic risks; AI hallucinations), as well as their contexts (e.g., "AI-driven healthcare decisions").

- A clearer threat model and examples would greatly strengthen the paper. For example, you mention "AI hallucinations in healthcare, which poses significant risks to global health security"—how, and in what ways? Which areas of healthcare? Could you provide citations for the results and implications you mention under Threat Model and Safety Implications (p. 2)?

- Some sentences need to be expanded—e.g., "Existing research on AI hallucinations often focuses on technical aspects." What does this mean? Technical aspects of what?

- The methods section could be a bit clearer, which would help anyone looking to replicate your methods.

Cite this work

@misc{
  title={AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages},
  author={Peace Silly, Zineb Ibnou Cheikh, Gracia Kaglan},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.