AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages

Peace Silly, Zineb Ibnou Cheikh, Gracia Kaglan

This project explores AI hallucinations in healthcare across cross-cultural and linguistic contexts, focusing on English, French, Arabic, and a low-resource language, Ewe. We analyse how large language models like GPT-4, Claude, and Gemini generate and disseminate inaccurate health information, emphasising the challenges faced by low-resource languages.
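The evaluation pipeline itself is not described in this summary. As a minimal, hypothetical sketch (placeholder helper names, not the authors' cross-lingual checker), the same health question could be posed in each study language and the answers collected for later rating on factual accuracy, safety of recommendations, and transparency about uncertainty:

```python
# Hypothetical sketch only: `query_model` stands in for whichever chat API
# (GPT-4, Claude, Gemini) is being probed; it is not the authors' tool.
from dataclasses import dataclass

@dataclass
class Probe:
    question_id: str
    language: str   # e.g. "en", "fr", "ar", "ee" (Ewe)
    prompt: str

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: call the relevant provider's chat-completion API here."""
    raise NotImplementedError

# Illustrative probes: the same question translated into each study language.
PROBES = [
    Probe("q1", "en", "What is the recommended first-line treatment for malaria in children?"),
    Probe("q1", "fr", "Quel est le traitement de première intention recommandé du paludisme chez l'enfant ?"),
    # ... Arabic and Ewe versions of the same question
]

def collect_answers(models: list[str]) -> list[dict]:
    rows = []
    for model in models:
        for probe in PROBES:
            answer = query_model(model, probe.prompt)
            rows.append({"model": model, "question": probe.question_id,
                         "language": probe.language, "answer": answer})
    # Answers are then rated for factual accuracy, safety of recommendations,
    # and transparency about uncertainty.
    return rows
```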

Reviewer's Comments

Ziba Atak

Strengths:

- Transparency and Reproducibility: The methodology is clearly documented, and the study can be easily replicated, which is a significant strength.

- Relevance: The paper addresses a critical issue in AI safety—hallucinations in low-resource languages—with clear implications for healthcare.

- Systematic Approach: The use of a multi-step evaluation framework (factual accuracy, safety recommendations, uncertainty transparency) is well-structured and effective.

- Policy Recommendations: The inclusion of policy recommendations adds practical value to the study.

Areas for Improvement:

- Limitations and Negative Consequences: The paper does not explicitly discuss the limitations of the study or potential negative consequences of its findings. Adding this would strengthen the analysis.

- Mitigation Strategies: While the paper investigates risks, it does not propose concrete solutions or mitigation strategies. Including this would enhance its impact.

- Discussion and Conclusion: The discussion section is brief and does not explore future implications, next steps, or shortcomings. Expanding this would provide a more comprehensive conclusion.

- Comparative Analysis: Including a more thorough comparison between high-resource and low-resource languages would add depth to the analysis.

Suggestions for Future Work:

- Conduct larger-scale experiments with more languages and questions to validate the findings and improve generalizability.

- Explore mitigation strategies for hallucinations and biases in low-resource languages.

- Investigate the framework’s performance in diverse cultural and regulatory contexts.

- Address practical constraints and limitations in future iterations of the study.

Anna Leshinskaya

This project seeks to assess how hallucination risk in healthcare settings varies across languages ("low-" vs. "high-resource"). This is an important topic with clear safety implications for detecting hallucinations in impactful settings. The cross-lingual checker is a really neat tool with practical utility.

I have several suggestions for how to improve the reported work.

- the introduction refers to psychology and cognition but it is unclear how those connect to what is studied here

- a definition of a 'high resource' language should be provided (is it about the amount of training data, or something else?)

- the question set is relatively small; it could also more closely reflect the naturalistic queries used in healthcare settings

- while hallucination is well defined for factual questions, reasoning and cultural specificity are much more subjective and therefore seem like a very different construct, perhaps alignment rather than hallucination. Those items should not be interpreted as reflecting hallucination.

- it should be described how the test items were selected and validated

- for LLM response ratings, the number of raters, the instructions given to them, and their inter-rater agreement should be reported

- analyses (e.g., comparisons of models) should be performed with statistical tests.

- Plots should show data aggregated over prompts to better reveal trends in the data. Bars for different languages should be presented side by side for easier comparison; right now the differences are difficult to see.

- Data for English and French are not shown.

Bessie O’Dell

Thank you for submitting your work on AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages - it was interesting to read. Please see below for some feedback on your project:

1. Strengths of your proposal/ project:

- Your paper quickly outlines how this project contributes to advancing AI safety.

- The paper is interesting, timely and topical, and focused on an under-researched area.

2. Areas for improvement:

- It would be helpful if you defined/expanded upon some of the key terms that you are using (e.g., global catastrophic risks; AI hallucinations), as well as contexts (e.g., ‘AI-driven healthcare decisions’).

- A clearer threat model and examples would greatly strengthen the paper. E.g., you mention that ‘AI hallucinations in healthcare, which poses significant risks to global health security’ - how/in what ways? Which areas of healthcare? Could you provide citations for the results/implications you mention under Threat Model and Safety Implications (p.2)?

- Some sentences need to be expanded - e.g., ‘Existing research on AI hallucinations often focuses on technical aspects’ - what does this mean? Technical aspects of what?

- The methods section could be a bit clearer, which would help anyone looking to replicate your methods.

Cite this work

@misc{silly2025aihallucinations,
  title={AI Hallucinations in Healthcare: Cross-Cultural and Linguistic Risks of LLMs in Low-Resource Languages},
  author={Peace Silly and Zineb Ibnou Cheikh and Gracia Kaglan},
  date={3/10/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at \url{github.com/ilijalichkovski/apart-physics}.
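The paper's own pipeline is not reproduced here; as a hedged sketch of the kind of alignment analysis described (assuming per-checkpoint LLC estimates and logged per-component rewards are already available), one could flag checkpoints where an LLC spike coincides with a sharp shift in a reward component:

```python
import numpy as np

def spikes(series: np.ndarray, z_thresh: float = 2.0) -> np.ndarray:
    """Indices where the step-to-step change exceeds z_thresh standard deviations."""
    diffs = np.diff(series)
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-8)
    return np.where(np.abs(z) > z_thresh)[0] + 1

# llc[t]            : LLC estimate at checkpoint t (assumed precomputed elsewhere)
# reward_correct[t] : mean correctness reward component at checkpoint t
# reward_concise[t] : mean conciseness reward component at checkpoint t
def coincident_milestones(llc, reward_correct, reward_concise, window=2):
    llc_spikes = spikes(np.asarray(llc))
    hits = []
    for comp_name, comp in [("correctness", np.asarray(reward_correct)),
                            ("conciseness", np.asarray(reward_concise))]:
        comp_spikes = spikes(comp)
        for t in llc_spikes:
            if np.any(np.abs(comp_spikes - t) <= window):
                hits.append((int(t), comp_name))
    # Checkpoints where an LLC spike coincides with a behavioral shift.
    return hits
```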

Read More

Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.

In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents.

By solving systems of ODEs describing the systems with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.

We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.

This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox on AI safety.

We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.
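For readers unfamiliar with the compartment structure, here is a minimal sketch of a SEIR model for an agent population, integrated with plain scipy rather than the physics-informed neural networks used in the work; the parameter values are placeholders, not the estimates reported:

```python
import numpy as np
from scipy.integrate import solve_ivp

# SEIR for an agent population: Susceptible, Exposed (compromised but dormant),
# Infectious (actively propagating adversarial behavior), Recovered (patched).
def seir(t, y, beta, sigma, gamma):
    s, e, i, r = y
    n = s + e + i + r
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return [ds, de, di, dr]

# Placeholder parameters: attack rate, activation rate, detection/patching rate.
beta, sigma, gamma = 0.4, 0.2, 0.1
y0 = [9_990, 0, 10, 0]                      # 10 initially compromised agents
sol = solve_ivp(seir, (0, 160), y0, args=(beta, sigma, gamma), dense_output=True)

t = np.linspace(0, 160, 200)
s, e, i, r = sol.sol(t)
print(f"peak infectious fraction ≈ {i.max() / sum(y0):.2%}")
```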

Read More

Jul 28, 2025

Momentum–Point-Perplexity Mechanics in Large Language Models

This work analyzes the hidden states of twenty different open-source transformer language models, ranging from small to medium size and covering five major architectures. The key discovery is that these models show signs of "energy conservation" during inference—meaning a certain measure combining changes in hidden states and token unpredictability stays almost constant as the model processes text.

The authors developed a new framework inspired by physics to jointly analyze how hidden states and prediction confidence evolve over time. They propose that transformers' behavior can be understood as following certain mechanical principles, much like how physical systems follow rules like conservation of energy.

Their experiments show that this conserved quantity varies very little between tokens, especially in untrained (random-weight) models, where it's extremely stable. In pre-trained models, the average energy drops more due to training, but there are larger relative fluctuations from token to token.

They also introduce a new method, based on this framework, for controlling transformer outputs by "steering" the hidden states. This method achieves good results—producing completions rated as higher in semantic quality, while still maintaining the same kind of energy stability.

Overall, the findings suggest that viewing transformer models through the lens of physical mechanics gives new, principled ways to interpret and control their behavior. It also highlights a key difference: random models behave more like balanced systems, while trained models make quicker, more decisive state changes at the cost of less precise energy conservation.
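The exact conserved quantity is defined in the paper itself; as a hedged illustration of its two ingredients, per-token hidden-state change and next-token surprisal, one could compute them with any small open-source causal LM (the final combination below is a placeholder, not the paper's energy functional):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small causal LM works for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = "The conserved quantity stays nearly constant during inference."
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    out = model(ids, output_hidden_states=True)

h = out.hidden_states[-1][0]                     # final-layer hidden states, (seq, dim)
velocity = (h[1:] - h[:-1]).norm(dim=-1)         # hidden-state change per step

logprobs = torch.log_softmax(out.logits[0, :-1], dim=-1)
surprisal = -logprobs.gather(-1, ids[0, 1:, None]).squeeze(-1)  # -log p(next token)

# Placeholder combination (NOT the paper's energy functional):
energy = 0.5 * velocity**2 + surprisal
print(energy.std() / energy.mean())              # relative fluctuation across tokens
```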

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.