Operator World Models for Reinforcement Learning

Computer Science

P. Novelli, M. Pontil, et al.

Policy Mirror Descent (PMD) is powerful but hard to apply in Reinforcement Learning because action-value functions are not directly accessible. This work learns a world model of the environment via conditional mean embeddings and, using operator-theoretic matrix operations, derives closed-form action-value estimates. Combining these with PMD yields POWR, an RL algorithm with proven global convergence. This research was conducted by Pietro Novelli, Massimiliano Pontil, Marco Pratticò, and Carlo Ciliberto.
Abstract
Policy Mirror Descent (PMD) is a powerful and theoretically sound methodology for sequential decision-making. However, it is not directly applicable to Reinforcement Learning (RL) due to the inaccessibility of explicit action-value functions. We address this challenge by introducing a novel approach based on learning a world model of the environment using conditional mean embeddings. Leveraging tools from operator theory, we derive a closed-form expression of the action-value function in terms of the world model via simple matrix operations. Combining these estimators with PMD leads to POWR, a new RL algorithm for which we prove convergence rates to the global optimum. Preliminary experiments in finite and infinite state settings support the effectiveness of our method.
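To make the recipe concrete, here is a minimal sketch of one POWR-style iteration in the finite-state setting. It uses an empirical tabular world model as a stand-in for the paper's conditional mean embedding estimator; all names (estimate_world_model, action_values, pmd_step, step_size) are illustrative assumptions, not the authors' implementation.

import numpy as np

def estimate_world_model(transitions, n_states, n_actions):
    # Tabular stand-in for the conditional-mean-embedding world model:
    # empirical transition probabilities P[s, a, s'] and mean rewards r[s, a],
    # estimated from a batch of (s, a, r, s') tuples.
    counts = np.zeros((n_states, n_actions, n_states))
    reward_sums = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    for s, a, r, s_next in transitions:
        counts[s, a, s_next] += 1.0
        reward_sums[s, a] += r
        visits[s, a] += 1.0
    visits = np.maximum(visits, 1.0)  # avoid division by zero for unseen pairs
    return counts / visits[:, :, None], reward_sums / visits

def action_values(P, r, policy, gamma=0.99):
    # Closed-form policy evaluation via matrix operations:
    # Q^pi = (I - gamma * P_pi)^{-1} r, where
    # P_pi[(s, a), (s', a')] = P[s, a, s'] * pi(a' | s').
    n_states, n_actions = r.shape
    P_pi = (P[:, :, :, None] * policy[None, None, :, :]).reshape(
        n_states * n_actions, n_states * n_actions)
    Q = np.linalg.solve(np.eye(n_states * n_actions) - gamma * P_pi, r.ravel())
    return Q.reshape(n_states, n_actions)

def pmd_step(policy, Q, step_size=1.0):
    # KL-regularized Policy Mirror Descent update:
    # pi_{t+1}(a | s) is proportional to pi_t(a | s) * exp(step_size * Q(s, a)).
    logits = np.log(policy + 1e-12) + step_size * Q
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum(axis=1, keepdims=True)

Starting from a uniform policy, one would alternate action_values and pmd_step until the policy stabilizes. In the paper, the analogous closed-form evaluation is carried out with conditional mean embedding operators in a reproducing kernel Hilbert space, which is what extends the approach beyond finite state spaces.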
Publisher
NeurIPS 2024 (Conference on Neural Information Processing Systems)
Authors
Pietro Novelli, Massimiliano Pontil, Marco Pratticò, Carlo Ciliberto
Tags
Policy Mirror Descent
Reinforcement Learning
conditional mean embeddings
world model
operator theory
action-value function
POWR