Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning

Computer Science

A. Alamia, V. Gauducheau, et al.

This study by Andrea Alamia, Victor Gauducheau, Dimitri Paisios, and Rufin VanRullen compares how well feedforward and recurrent neural networks mimic human behavior during artificial grammar learning. Recurrent networks track human performance more closely than feedforward ones, especially on simpler grammars, highlighting their potential for modeling explicit learning processes.

Abstract
In recent years, artificial neural networks have achieved performance close to or better than humans in several domains: tasks that were previously human prerogatives, such as language processing, have witnessed remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates comparison between different neural networks and human performance, deepening our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies showed that artificial grammars can be learnt by human subjects after little exposure, often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both architectures can "learn" (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar's complexity level. Moreover, just as feedforward and recurrent architectures have been related to unconscious and conscious processes in visual perception, the difference in performance between the two architectures across ten regular grammars shows that simpler and more explicit grammars are better learnt by recurrent architectures. This supports the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks may better capture the dynamics involved in implicit learning.
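The abstract does not include the authors' actual grammars, architectures, or training details, but the kind of comparison it describes can be sketched. Below is a minimal PyTorch sketch, assuming a toy Reber-style finite-state grammar and a grammaticality-judgment task: a feedforward network sees each string all at once, an Elman recurrent network reads it symbol by symbol, and both are trained by error back-propagation. The grammar transitions, network sizes, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not the authors' code): train a feedforward net and a
# simple recurrent net, both via back-propagation, to judge whether
# fixed-length strings come from a toy Reber-style finite-state grammar.
# The grammar, string length, and network sizes are illustrative assumptions.
import random
import torch
import torch.nn as nn

random.seed(0)
torch.manual_seed(0)

# Transitions of a toy finite-state grammar: state -> [(symbol, next_state)]
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 2)],
    2: [("V", 3), ("T", 2)],
    3: [("E", None)],  # terminal symbol ends the string
}
SYMBOLS = ["T", "P", "S", "X", "V", "E"]
SYM_IDX = {s: i for i, s in enumerate(SYMBOLS)}
MAX_LEN = 8

def sample_string():
    """Random walk through the grammar; pad/truncate to MAX_LEN."""
    state, out = 0, []
    while state is not None and len(out) < MAX_LEN:
        sym, state = random.choice(GRAMMAR[state])
        out.append(sym)
    out += ["E"] * (MAX_LEN - len(out))  # pad with the terminal symbol
    return out

def violate(s):
    """Corrupt one random position (may rarely stay grammatical; fine here)."""
    s = list(s)
    i = random.randrange(MAX_LEN)
    s[i] = random.choice([x for x in SYMBOLS if x != s[i]])
    return s

def encode(strings):
    """One-hot encode strings as a (batch, MAX_LEN, n_symbols) tensor."""
    x = torch.zeros(len(strings), MAX_LEN, len(SYMBOLS))
    for b, s in enumerate(strings):
        for t, sym in enumerate(s):
            x[b, t, SYM_IDX[sym]] = 1.0
    return x

def make_set(n):
    gram = [sample_string() for _ in range(n // 2)]
    ungram = [violate(s) for s in gram]
    y = torch.cat([torch.ones(n // 2), torch.zeros(n // 2)])
    return encode(gram + ungram), y

class Feedforward(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                               # whole string at once
            nn.Linear(MAX_LEN * len(SYMBOLS), 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )
    def forward(self, x):
        return self.net(x).squeeze(-1)

class Recurrent(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(len(SYMBOLS), 32, batch_first=True)  # Elman RNN
        self.out = nn.Linear(32, 1)
    def forward(self, x):
        _, h = self.rnn(x)             # final hidden state summarizes string
        return self.out(h[-1]).squeeze(-1)

def train_and_test(model, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    x_tr, y_tr = make_set(512)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x_tr), y_tr).backward()   # error back-propagation
        opt.step()
    x_te, y_te = make_set(512)
    with torch.no_grad():
        return ((model(x_te) > 0).float() == y_te).float().mean().item()

print("feedforward accuracy:", train_and_test(Feedforward()))
print("recurrent accuracy:  ", train_and_test(Recurrent()))
```

In this kind of comparison, any advantage for the recurrent network comes from processing the string sequentially through a shared hidden state, rather than receiving it as a single flattened input as the feedforward network does.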
Publisher
Scientific Reports
Published On
Dec 17, 2020
Authors
Andrea Alamia, Victor Gauducheau, Dimitri Paisios, Rufin VanRullen
Tags
artificial grammar learning
neural networks
recurrent architecture
feedforward architecture
human behavior