This research uses artificial recurrent neural networks (RNNs) trained with deep reinforcement learning (DRL) to simulate insect-like odour plume tracking. The agents successfully locate odour sources in dynamic, turbulent environments, exhibiting behaviours similar to real insects. Analysis reveals that the RNNs learn to compute task-relevant variables and display distinct dynamic structures in population activity, providing insights into memory requirements and neural dynamics in odour plume tracking.
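To make the setup concrete, here is a minimal sketch of what a recurrent policy for plume tracking might look like. This is purely illustrative and not the paper's implementation: the observation layout (odour concentration plus wind direction), the hidden size, the single turn-rate action, and the random weights (standing in for parameters that DRL training would learn) are all assumptions.

```python
import numpy as np

# Hypothetical recurrent policy for odour plume tracking (illustrative only).
# Observation: [odour concentration, wind x, wind y]; action: a turn rate.
rng = np.random.default_rng(0)
OBS_DIM, HIDDEN_DIM = 3, 16

# Random weights stand in for parameters a DRL algorithm would learn.
W_in = rng.normal(0.0, 0.1, (HIDDEN_DIM, OBS_DIM))
W_rec = rng.normal(0.0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(0.0, 0.1, (1, HIDDEN_DIM))

def rnn_policy_step(obs, h):
    """One step of a vanilla RNN policy: update hidden state, emit action.

    The hidden state h is the agent's memory; it lets the policy integrate
    past odour encounters, which pure feedforward policies cannot do.
    """
    h_new = np.tanh(W_in @ obs + W_rec @ h)
    action = np.tanh(W_out @ h_new)  # turn rate, squashed into [-1, 1]
    return action, h_new

# Roll the policy forward through a short synthetic episode:
# decaying odour signal under a steady wind from the +x direction.
h = np.zeros(HIDDEN_DIM)
for t in range(5):
    obs = np.array([np.exp(-t), 1.0, 0.0])
    action, h = rnn_policy_step(obs, h)

print(action.shape, h.shape)
```

Because the hidden state carries information across timesteps, population activity in `h` is the kind of recurrent dynamics the paper analyses when probing memory requirements.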
Publisher: Nature Machine Intelligence
Published on: Jan 25, 2023
Authors: Satpreet H. Singh, Floris van Breugel, Rajesh P. N. Rao, Bingni W. Brunton
Tags: recurrent neural networks, deep reinforcement learning, odour plume tracking, insect behaviour, dynamic environments, neural dynamics, memory requirements