Jun 2, 2025
Reliability Judge: Enhancing LLM Reliability Through Multi-Model Judging
Guido Ernesto Bergman, Tobias Bersia, Emanuel Pablo Ruzak, Gonzalo Agustin Heredia
We present a multi-model judging framework that extends the Think Twice before Trusting (T3) paradigm to enhance answer reliability across large language models (LLMs). Our approach is most suitable for high-stakes contexts (e.g., healthcare, legal, or safety-critical systems), where even minor accuracy gains can have significant consequences. The proposed system uses cross-model evaluation to identify the most accurate and well-justified response from a set of candidates. Each model acts both as an answer generator and as a judge under two modes: as a neutral evaluator or as a participant evaluating its own response. Judgments consider factual correctness, reasoning clarity, and confidence-justification alignment. Our results show that the best judge-based approach outperforms all the monolithic (i.e., single-model) versions. This work contributes a transparent method for selecting high-confidence outputs from black-box models, advancing practical AI reliability and alignment. The code and data are available at https://github.com/GuidoBergman/reliability-judge.
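The selection mechanism described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation (their prompts, models, and scoring live in the linked repository); all function and variable names here are hypothetical, and the stub `judge` stands in for prompting a judge model to score a candidate on factual correctness, reasoning clarity, and confidence-justification alignment.

```python
# Hedged sketch of cross-model judge-based answer selection.
# All names are hypothetical; `judge` is a stub for an LLM call.
from statistics import mean

def judge(judge_model, candidate):
    # Placeholder: a real implementation would prompt `judge_model` to rate
    # the candidate's factual correctness, reasoning clarity, and
    # confidence-justification alignment, returning a numeric score.
    return len(candidate["answer"]) % 5  # stub score for illustration

def select_best(candidates, judges, allow_self_judging=False):
    """Return the candidate with the highest mean score across judges.

    In "neutral evaluator" mode a judge never scores its own answer;
    in "participant" mode it may (allow_self_judging=True).
    """
    best, best_score = None, float("-inf")
    for cand in candidates:
        scores = [
            judge(j, cand)
            for j in judges
            if allow_self_judging or j != cand["model"]
        ]
        score = mean(scores)
        if score > best_score:
            best, best_score = cand, score
    return best

candidates = [
    {"model": "model_a", "answer": "Paris"},
    {"model": "model_b", "answer": "Paris, the capital of France"},
]
best = select_best(candidates, judges=["model_a", "model_b"])
print(best["model"])
```

The key design point is that each model appears in both roles: its answer is a candidate, and it also serves as a judge for the other models' answers (or its own, in participant mode).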
Philip Quirke
Thank you for your submission. It is well written and clear. The results are simple and solid.
In the EO context, the EO implementer gets to select the LLM used as a judge in training. So knowledge of which LLM judges best and whether it can be trusted to judge its own output is useful.
Curt Tigges
Seems like a good way to implement judging, though it's unclear how novel of a finding this is from a safety perspective.
Narmeen
Constructive feedback:
Strength:
Nice problem selection: T3 paradigm for improving performance
Nice ensemble-of-judges setting compared to a monolithic/fixed judge across models: having the judge vary per model in the set helps prevent preference bias towards a model's own answers; works on Claude 3.5 Haiku.
Weakness:
Tested on one model and one dataset (limited result)
Future direction:
Could be paired with a couple of judge-selection algorithms and more datasets/types of judges (e.g., a CoT-based judge)
Expert Orchestration: 4
MI: 1
Tech Imp and rep: 3 (Due to results being a bit limited)
Anosha Rahim
This is an important area, as you rightly highlight; however, your methodology could use some more work. Why use Claude 3.5 Haiku as opposed to smaller, more specialist models in a specific domain, such as medicine or legal?
Cite this work
@misc{bergman2025reliabilityjudge,
  title={Reliability Judge: Enhancing LLM Reliability Through Multi-Model Judging},
  author={Guido Ernesto Bergman and Tobias Bersia and Emanuel Pablo Ruzak and Gonzalo Agustin Heredia},
  date={2025-06-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}