Abstract
Assessing Large Language Models (LLMs) with human evaluations is crucial for ensuring their safety and effectiveness in healthcare. This study reviews the existing literature on human evaluation methodologies for LLMs in healthcare, identifying gaps in reliability, generalizability, and applicability. To address these gaps, a comprehensive framework, QUEST, is proposed, covering the planning, implementation, adjudication, scoring, and review phases of evaluation. QUEST incorporates five evaluation principles: Quality of Information, Understanding and Reasoning, Expression Style and Persona, Safety and Harm, and Trust and Confidence.
Publisher
npj Digital Medicine
Published On
Sep 28, 2024
Authors
Thomas Yu Chow Tam, Sonish Sivarajkumar, Sumit Kapoor, Alisa V. Stolyar, Katelyn Polanska, Karleigh R. McCarthy, Hunter Osterhoudt, Xizhi Wu, Shyam Visweswaran, Sunyang Fu, Piyush Mathur, Giovanni E. Cacciamani, Cong Sun, Yifan Peng, Yanshan Wang
Tags
Large Language Models
healthcare
human evaluation
safety
evaluation framework
QUEST
trust