A framework for human evaluation of large language models in healthcare derived from literature review

Medicine and Health

T. Y. C. Tam, S. Sivarajkumar, et al.

This study by Thomas Yu Chow Tam, Sonish Sivarajkumar, and colleagues examines how Large Language Models are evaluated by human raters in healthcare. It highlights the need for robust human evaluation methodologies and proposes the QUEST framework to address existing gaps, supporting safer and more effective AI applications in health.

Abstract
Assessing Large Language Models (LLMs) with human evaluations is crucial for ensuring safety and effectiveness in healthcare. This study reviews existing literature on human evaluation methodologies for LLMs in healthcare, identifying gaps in reliability, generalizability, and applicability. To address these gaps, a comprehensive framework, QUEST, is proposed, covering planning, implementation, adjudication, scoring, and review phases. QUEST incorporates five evaluation principles: Quality of Information, Understanding and Reasoning, Expression Style and Persona, Safety and Harm, and Trust and Confidence.
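As a rough illustration of how the five QUEST principles might be operationalized in a human-evaluation workflow, the sketch below represents each principle as a set of rater-scored items. This is a minimal sketch, not the authors' implementation: the 1-5 Likert scale, the rater and response identifiers, and the mean-score aggregation are all illustrative assumptions rather than details from the published framework.

```python
# Minimal sketch (not the published QUEST implementation) of a human-evaluation
# rubric built around the five QUEST principles. The 1-5 scale and the
# mean aggregation across raters are hypothetical assumptions.
from dataclasses import dataclass, field
from statistics import mean

QUEST_PRINCIPLES = [
    "Quality of Information",
    "Understanding and Reasoning",
    "Expression Style and Persona",
    "Safety and Harm",
    "Trust and Confidence",
]

@dataclass
class Rating:
    principle: str   # one of QUEST_PRINCIPLES
    rater_id: str    # clinician or domain expert performing the evaluation
    score: int       # assumed 1 (poor) to 5 (excellent) Likert score

@dataclass
class ResponseEvaluation:
    response_id: str
    ratings: list[Rating] = field(default_factory=list)

    def add_rating(self, principle: str, rater_id: str, score: int) -> None:
        # Validate against the assumed rubric before recording the rating.
        if principle not in QUEST_PRINCIPLES:
            raise ValueError(f"Unknown QUEST principle: {principle}")
        if not 1 <= score <= 5:
            raise ValueError("Score must be on the assumed 1-5 scale")
        self.ratings.append(Rating(principle, rater_id, score))

    def summary(self) -> dict[str, float]:
        """Mean score per principle across raters (illustrative aggregation)."""
        return {
            p: mean(r.score for r in self.ratings if r.principle == p)
            for p in QUEST_PRINCIPLES
            if any(r.principle == p for r in self.ratings)
        }

# Example usage: two raters scoring a single LLM response
ev = ResponseEvaluation("resp-001")
ev.add_rating("Quality of Information", "rater-A", 4)
ev.add_rating("Quality of Information", "rater-B", 5)
ev.add_rating("Safety and Harm", "rater-A", 3)
print(ev.summary())
```

In practice, the planning, implementation, adjudication, scoring, and review phases described in the abstract would sit around such a rubric, for example to resolve disagreements between raters before scores are aggregated.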
Publisher
npj Digital Medicine
Published On
Sep 28, 2024
Authors
Thomas Yu Chow Tam, Sonish Sivarajkumar, Sumit Kapoor, Alisa V. Stolyar, Katelyn Polanska, Karleigh R. McCarthy, Hunter Osterhoudt, Xizhi Wu, Shyam Visweswaran, Sunyang Fu, Piyush Mathur, Giovanni E. Cacciamani, Cong Sun, Yifan Peng, Yanshan Wang
Tags
Large Language Models
healthcare
human evaluation
safety
evaluation framework
QUEST
trust
Listen, Learn & Level Up
Over 10,000 hours of research content in 25+ fields, available in 12+ languages.
No more digging through PDFs, just hit play and absorb the world's latest research in your language, on your time.
listen to research audio papers with researchbunny