arxiv:2412.14140

GLIDER: Grading LLM Interactions and Decisions using Explainable Ranking

Published on Dec 18, 2024

Abstract

The LLM-as-judge paradigm is increasingly being adopted for the automated evaluation of model outputs. While LLM judges have shown promise on constrained evaluation tasks, closed-source LLMs display critical shortcomings when deployed in real-world applications because they lack fine-grained metrics and explainability, while task-specific evaluation models lack cross-domain generalization. We introduce GLIDER, a powerful 3B evaluator LLM that can score any text input and associated context on arbitrary user-defined criteria. GLIDER shows a higher Pearson's correlation than GPT-4o on FLASK and greatly outperforms prior evaluation models, achieving performance comparable to LLMs 17x its size. GLIDER supports fine-grained scoring, multilingual reasoning, and span highlighting, and was trained on 685 domains and 183 criteria. Extensive qualitative analysis shows that GLIDER scores are highly correlated with human judgments, with 91.3% human agreement. We have open-sourced GLIDER to facilitate future research.
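Since GLIDER is open-sourced, a natural way to try it is through Hugging Face transformers. The sketch below is a minimal, hedged example: the Hub model ID (`PatronusAI/glider`), the rubric wording, and the prompt layout are assumptions for illustration; the released model card documents the exact pass-criteria and rubric format the model was trained on.

```python
# Hedged sketch: scoring a response with GLIDER via Hugging Face transformers.
# The model ID and the prompt fields below are assumptions; consult the
# released model card for the exact rubric format GLIDER expects.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "PatronusAI/glider"  # assumed Hub ID for the open-sourced 3B judge

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# User-defined criteria and rubric: GLIDER scores arbitrary text against these.
prompt = """Analyze the following pass criteria and rubric, then score the response.

Pass criteria: Is the answer factually accurate and concise?

Rubric:
1: The answer is inaccurate or rambling.
5: The answer is accurate and concise.

Context: The Eiffel Tower is in Paris, France.
Response: The Eiffel Tower is located in Paris.
"""

messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# GLIDER emits reasoning, highlighted spans, and a final score; decode only
# the newly generated tokens.
output = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```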

Community

Hello, congrats on the work.
Is there any reason why you reported the F1 score instead of accuracy for the pairwise ranking datasets (e.g., RewardBench)?

·

Hello @jmprcp, happy to see your interest in GLIDER and its evaluation. Since the pairwise datasets are mostly balanced, the F1 scores in the paper should be very similar to the model's accuracy scores.
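The claim above is easy to verify numerically: on a balanced binary task, F1 and accuracy track each other closely. The sketch below uses synthetic labels (not the paper's actual RewardBench runs) to illustrate the point.

```python
# Minimal sketch: on (near-)balanced binary labels, F1 and accuracy agree
# closely. The labels and predictions here are synthetic, for illustration
# only; they are not the paper's RewardBench results.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)   # balanced pairwise labels
flip = rng.random(10_000) < 0.1            # a judge that errs ~10% of the time
y_pred = np.where(flip, 1 - y_true, y_true)

print(f"accuracy = {accuracy_score(y_true, y_pred):.4f}")
print(f"macro F1 = {f1_score(y_true, y_pred, average='macro'):.4f}")
# Both land near 0.90; the two metrics diverge only as class balance skews.
```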
