MASTERING MEMORY TASKS WITH WORLD MODELS

Computer Science

M. R. Samsami, A. Zholus, et al.

Model-based RL agents struggle with long-term dependencies. Recall to Imagine (R2I) addresses this by integrating a new family of state space models into world models to improve long-term memory and long-horizon credit assignment. R2I sets a new state of the art on memory and credit-assignment benchmarks such as BSuite and POPGym, achieves superhuman results on Memory Maze, matches performance on Atari and DMC, and converges faster than DreamerV3. This research was conducted by Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran, and Sarath Chandar.

Abstract
Current model-based reinforcement learning (MBRL) agents struggle with long-term dependencies. This limits their ability to effectively solve tasks involving extended time gaps between actions and outcomes, or tasks demanding recall of distant observations to inform current actions. To improve temporal coherence, we integrate a new family of state space models (SSMs) into the world models of MBRL agents to present a new method, Recall to Imagine (R2I). This integration aims to enhance both long-term memory and long-horizon credit assignment. Through a diverse set of illustrative tasks, we systematically demonstrate that R2I not only establishes a new state of the art for challenging memory and credit-assignment RL tasks, such as BSuite and POPGym, but also showcases superhuman performance in the complex memory domain of Memory Maze. At the same time, it upholds comparable performance in classic RL tasks, such as Atari and DMC, suggesting the generality of our method. We also show that R2I is faster than the state-of-the-art MBRL method, DreamerV3, resulting in faster wall-clock convergence.
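To give a rough intuition for the sequence layer the abstract refers to, the sketch below implements a toy diagonal linear state space model recurrence (h_t = A·h_{t-1} + B·x_t, y_t = C·h_t). This is a minimal illustration of the general SSM family, not the paper's actual R2I architecture; the dimensions and matrices A, B, C here are arbitrary assumptions chosen for the example.

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Run the linear SSM recurrence over a sequence.

    h_t = A * h_{t-1} + B @ x_t   (A is diagonal, stored as a vector)
    y_t = C @ h_t

    A: (d,) diagonal transition, B: (d, m), C: (k, d), xs: (T, m).
    Returns ys: (T, k).
    """
    d = A.shape[0]
    h = np.zeros(d)                  # hidden state carries long-range context
    ys = []
    for x in xs:
        h = A * h + B @ x            # O(d) recurrent update per step
        ys.append(C @ h)             # linear readout of the state
    return np.stack(ys)

# Toy usage with arbitrary sizes (illustrative only).
rng = np.random.default_rng(0)
d, m, k, T = 8, 3, 2, 16
A = 0.9 * np.ones(d)                 # |A| < 1 keeps the dynamics stable
B = rng.normal(size=(d, m))
C = rng.normal(size=(k, d))
xs = rng.normal(size=(T, m))
ys = ssm_scan(A, B, C, xs)
print(ys.shape)  # (16, 2)
```

Because the recurrence is linear, such layers can also be computed in parallel over the sequence (e.g., with an associative scan), which is what makes SSMs attractive for long-horizon world models compared with step-by-step RNN updates.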
Publisher
International Conference on Learning Representations (ICLR) 2024
Published On
Authors
Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran, Sarath Chandar
Tags
Model-based reinforcement learning
State space models (SSMs)
Long-term memory
Long-horizon credit assignment
World models
Recall to Imagine (R2I)
Benchmarks: BSuite, POPGym, Memory Maze