Efficient Detection of Stigmatizing Language in Electronic Health Records via In-Context Learning: A Comparative Analysis and Validation Study

Medicine and Health

H. Chen, M. Alfred, et al.

This study by Hongbo Chen, Myrtede Alfred, and Eldan Cohen examines the effectiveness of in-context learning (ICL) for detecting stigmatizing language in Electronic Health Records. ICL outperformed zero-shot and few-shot baselines and matched a supervised model while using far less labeled data, highlighting its potential for reducing bias in healthcare documentation.

Abstract
This study investigates the efficacy of in-context learning (ICL) in detecting stigmatizing language in Electronic Health Records (EHRs) under data-scarce conditions. The performance of ICL, using four prompting strategies (Generic, Chain of Thought, Clue and Reasoning Prompting, and a novel Stigma Detection Heuristic Prompt), was compared to zero-shot (textual entailment), few-shot (SetFit), and supervised fine-tuning approaches. ICL significantly outperformed zero-shot and few-shot methods, achieving an F1 score comparable to the supervised model despite using significantly less data. Fairness evaluation revealed that supervised fine-tuning models exhibited greater bias than ICL models.
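The ICL setup described in the abstract places a handful of labeled examples directly in the model's prompt rather than updating its weights. A minimal sketch of such a few-shot prompt is below; the example sentences, labels, and prompt wording are illustrative assumptions, not the paper's actual prompts or data.

```python
# Sketch of a generic few-shot ICL prompt for stigma detection.
# The examples and instruction text are hypothetical placeholders,
# not drawn from the study's EHR dataset or prompting strategies.

FEW_SHOT_EXAMPLES = [
    ("Patient insists she is in severe pain.", "stigmatizing"),
    ("Patient reports ongoing severe pain.", "non-stigmatizing"),
    ("He claims he took his medication.", "stigmatizing"),
]

def build_icl_prompt(sentence: str) -> str:
    """Assemble a few-shot classification prompt for a language model."""
    lines = [
        "Classify whether the clinical note sentence contains stigmatizing "
        "language. Answer 'stigmatizing' or 'non-stigmatizing'.",
        "",
    ]
    # Each labeled demonstration is appended before the query sentence.
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The model is expected to complete the final label.
    lines.append(f"Sentence: {sentence}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_icl_prompt("Patient is non-compliant with treatment.")
print(prompt)
```

The resulting string would be sent to a language model as-is; because no gradient updates are involved, only these few in-prompt examples are needed, which is why ICL suits the data-scarce conditions the study targets.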
Publisher
JMIR Medical Informatics
Published On
Nov 20, 2024
Authors
Hongbo Chen, Myrtede Alfred, Eldan Cohen
Tags
in-context learning
stigmatizing language
Electronic Health Records
bias evaluation
data-scarce conditions