Zero-shot visual reasoning through probabilistic analogical mapping

Psychology

T. Webb, S. Fu, et al.

The visiPAM model, developed by Taylor Webb, Shuhao Fu, Trevor Bihl, Keith J. Holyoak, and Hongjing Lu, applies a human-inspired approach to visual reasoning: without task-specific training, it performs analogical mapping between visual inputs, outperforming a state-of-the-art deep learning model and closely matching human performance.

Abstract
Human reasoners excel at identifying abstract commonalities across dissimilar visual inputs, whereas current algorithms require extensive training that limits their generalization. This paper introduces visiPAM (visual Probabilistic Analogical Mapping), which combines learned representations extracted directly from visual inputs with a similarity-based mapping operation drawn from cognitive science. Without any direct training, visiPAM outperforms a state-of-the-art deep learning model on an analogical mapping task and closely matches the pattern of human performance on a novel task involving mapping between 3D objects.
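To make the core idea of similarity-based mapping concrete, the sketch below pairs parts of one object with parts of another by comparing learned feature vectors and choosing the assignment with the highest total similarity. This is a minimal illustration only, not the authors' visiPAM implementation: the embeddings, shapes, cosine similarity, and the use of a simple optimal-assignment solver are all assumptions for the sake of the example.

```python
# Illustrative sketch of similarity-based mapping between two sets of
# learned part embeddings. NOT the authors' visiPAM code; all names and
# choices here (cosine similarity, optimal assignment) are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_nodes(source_emb: np.ndarray, target_emb: np.ndarray) -> list[tuple[int, int]]:
    """Map each source part to a target part by maximizing feature similarity.

    source_emb: (n, d) array of learned features for source-object parts.
    target_emb: (m, d) array of learned features for target-object parts.
    """
    # Cosine similarity between every source/target pair.
    s = source_emb / np.linalg.norm(source_emb, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    sim = s @ t.T  # (n, m) similarity matrix

    # Find the one-to-one assignment that maximizes total similarity
    # (linear_sum_assignment minimizes cost, so negate the similarities).
    rows, cols = linear_sum_assignment(-sim)
    return list(zip(rows.tolist(), cols.tolist()))

# Example: map 3 source parts to a permuted, slightly noisy copy of themselves.
rng = np.random.default_rng(0)
src = rng.normal(size=(3, 4))
tgt = src[[2, 0, 1]] + 0.01 * rng.normal(size=(3, 4))
print(map_nodes(src, tgt))  # recovers the permutation: [(0, 1), (1, 2), (2, 0)]
```

In the paper's setting, the feature vectors would come from representations learned over visual inputs rather than random arrays, and the mapping operation is probabilistic rather than a single hard assignment; the sketch only conveys the general principle of mapping by similarity rather than by task-specific training.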
Publisher
Nature Communications
Published On
Aug 24, 2023
Authors
Taylor Webb, Shuhao Fu, Trevor Bihl, Keith J. Holyoak, Hongjing Lu
Tags
analogical mapping
visual reasoning
machine learning
cognitive science
deep learning
3D object mapping