Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning

Computer Science

A. Alamia, V. Gauducheau, et al.

This study by Andrea Alamia, Victor Gauducheau, Dimitri Paisios, and Rufin VanRullen compares how well feedforward and recurrent neural networks mimic human behavior during artificial grammar learning. Discover how recurrent networks match human performance more closely, especially on simpler grammars, highlighting their potential for modeling explicit learning processes.

Abstract
This study investigates which neural network architecture (feedforward or recurrent) best matches human behavior in artificial grammar learning. Human subjects, feedforward networks, and recurrent networks were tested on four grammars of varying complexity. Both architectures learned the grammars, but recurrent networks performed closer to human behavior, particularly on the simpler, more explicit grammars. This suggests that recurrent networks better model explicit learning, while feedforward networks may capture implicit learning dynamics.
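To make the paradigm concrete: in artificial grammar learning, strings are generated by a finite-state grammar, and learners (human or network) must later judge whether new strings are grammatical. The sketch below uses a toy Reber-style grammar invented for illustration; the transition table, state numbering, and function names are assumptions, not the four grammars used in the paper.

```python
import random

# Toy finite-state grammar for illustration only (not one of the paper's
# four grammars). Each state maps to a list of (symbol, next_state) moves.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}
END_STATE = 5  # reaching this state ends the string


def generate_string(rng=random):
    """Produce a grammatical string by walking from state 0 to END_STATE."""
    state, out = 0, []
    while state != END_STATE:
        symbol, state = rng.choice(GRAMMAR[state])
        out.append(symbol)
    return "".join(out)


def is_grammatical(s):
    """Return True if the grammar can produce string s (depth-first search)."""
    def step(state, i):
        if i == len(s):
            return state == END_STATE
        return any(sym == s[i] and step(nxt, i + 1)
                   for sym, nxt in GRAMMAR.get(state, []))
    return step(0, 0)
```

In a study like this one, grammatical strings from such a generator would form the training set, and the test set would mix held-out grammatical strings with violations; the networks' accept/reject judgments can then be compared string-by-string with human responses.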
Publisher
Scientific Reports
Published On
Dec 17, 2020
Authors
Andrea Alamia, Victor Gauducheau, Dimitri Paisios, Rufin VanRullen
Tags
artificial grammar learning
neural networks
recurrent architecture
feedforward architecture
human behavior