"Why is this misleading?": Detecting News Headline Hallucinations with Explanations

Computer Science

"Why is this misleading?": Detecting News Headline Hallucinations with Explanations

J. Shen, J. Liu, et al.

Discover ExHalder, a groundbreaking framework designed to detect news headline hallucinations. This innovative approach, developed by researchers from Google Research, utilizes insights from public natural language inference datasets to enhance news understanding and generate clear explanations for its findings.

Abstract
Automatic headline generation enables users to comprehend ongoing news events promptly and has recently become an important task in web mining and natural language processing. With the growing need for news headline generation, we argue that the hallucination issue, namely the generated headlines not being supported by the original news stories, is a critical challenge for the deployment of this feature in web-scale systems. Meanwhile, due to the infrequency of hallucination cases and the careful reading required for raters to reach the correct consensus, it is difficult to acquire a large dataset for training a hallucination detection model through human curation. In this work, we present a new framework named ExHalder to address this challenge for headline hallucination detection. ExHalder adapts knowledge from public natural language inference datasets into the news domain and learns to generate natural language sentences that explain the hallucination detection results. To evaluate model performance, we carefully collect a dataset with more than six thousand labeled ⟨article, headline⟩ pairs. Extensive experiments on this dataset and six public ones demonstrate that ExHalder can identify hallucinated headlines accurately and justify its predictions with human-readable natural language explanations.
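The core idea, checking whether an article entails its headline and explaining the verdict in natural language, can be illustrated with a toy sketch. ExHalder itself uses learned NLI models and a trained explanation generator; the stand-in below merely flags headline content words the article never mentions and turns them into a simple explanation. All names and the word-overlap heuristic are illustrative assumptions, not the paper's method.

```python
# Toy stand-in for entailment-style headline hallucination detection.
# The real ExHalder framework uses NLI classifiers and a learned
# explainer; here, "unsupported" just means a headline content word
# that never appears in the article text.
import re

# Minimal stopword list for the illustration.
STOPWORDS = {"a", "an", "the", "in", "on", "of", "to", "for",
             "and", "is", "are", "was", "were", "at", "by", "with"}

def content_words(text):
    """Lowercased non-stopword tokens of a text."""
    return {w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOPWORDS}

def detect_hallucination(article, headline):
    """Return (is_hallucinated, explanation) for a toy lexical check."""
    unsupported = content_words(headline) - content_words(article)
    if unsupported:
        explanation = ("The headline is not supported by the article: "
                       "it introduces " + ", ".join(sorted(unsupported))
                       + ", which the article never mentions.")
        return True, explanation
    return False, "Every content word in the headline appears in the article."

article = ("The city council approved a new budget "
           "for public parks on Tuesday.")
ok, why_ok = detect_hallucination(article, "Council approved new budget for parks")
bad, why_bad = detect_hallucination(article, "Council cancels all park funding")
```

A faithful headline passes the check, while the contradictory one is flagged with a readable reason. A real system would replace the lexical overlap test with an entailment score from an NLI model fine-tuned on news data, as the paper describes.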
Publisher
Proceedings of the ACM Web Conference 2023 (WWW'23)
Published On
May 01, 2023
Authors
Jiaming Shen, Jialu Liu, Dan Finnie, Negar Rahmati, Michael Bendersky, Marc Najork
Tags
news headline hallucination
natural language inference
framework
ExHalder
explanations
state-of-the-art performance
dataset