Abstract
This research uses artificial recurrent neural networks (RNNs) trained with deep reinforcement learning (DRL) to simulate insect-like odour plume tracking. The agents successfully locate odour sources in dynamic, turbulent environments, exhibiting behaviours similar to real insects. Analysis reveals that the RNNs learn to compute task-relevant variables and display distinct dynamic structures in population activity, providing insights into memory requirements and neural dynamics in odour plume tracking.
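For illustration only, below is a minimal sketch of the kind of recurrent policy the abstract describes: a GRU-based network that consumes a short history of egocentric observations and outputs actions plus a value estimate suitable for actor-critic DRL training. The observation layout (local wind velocity and odour concentration), hidden size, and action dimensionality are assumptions for the sketch, not details taken from the paper or its released code.

```python
# Hypothetical sketch of an RNN plume-tracking policy (not the authors' implementation).
# Assumed observations per timestep: [wind_x, wind_y, odour_concentration]
# Assumed actions: 2D continuous (e.g. turn rate, forward speed).
import torch
import torch.nn as nn

class RNNPolicy(nn.Module):
    def __init__(self, obs_dim=3, hidden_dim=64, action_dim=2):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.actor = nn.Linear(hidden_dim, action_dim)   # mean of the action distribution
        self.critic = nn.Linear(hidden_dim, 1)           # state-value estimate for actor-critic training

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, time, obs_dim); the hidden state carries memory across timesteps,
        # which is what lets the agent integrate intermittent odour encounters over time.
        out, h = self.rnn(obs_seq, h0)
        return self.actor(out), self.critic(out), h

# Example forward pass on a dummy observation sequence of 10 timesteps.
policy = RNNPolicy()
obs = torch.randn(1, 10, 3)
action_mean, value, hidden = policy(obs)
print(action_mean.shape, value.shape)  # torch.Size([1, 10, 2]) torch.Size([1, 10, 1])
```

In a full training loop, such a policy would be optimised with a standard DRL algorithm (e.g. an actor-critic method) against a simulated turbulent plume environment; the recurrent hidden state is the component whose population dynamics and memory content the paper analyses.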
Publisher
Nature Machine Intelligence
Published On
Jan 25, 2023
Authors
Satpreet H. Singh, Floris van Breugel, Rajesh P. N. Rao, Bingni W. Brunton
Tags
recurrent neural networks
deep reinforcement learning
odour plume tracking
insect behaviour
dynamic environments
neural dynamics
memory requirements