Reinforcement Learning in Changing Environments: Statistical Complexity and Efficient Algorithms
Haolin Liu
UVA CS
Time: 2025-03-26, 12:00 - 13:00 ET
Location: Olsson 105 and Zoom
Abstract
Real-world agents are expected to continually learn in dynamic environments, effectively interact within multi-agent systems, and align with human values. Interestingly, many methods developed to address these challenges share a unified theoretical framework – reinforcement learning in changing environments. This framework assumes that the underlying Markov decision process evolves over time, requiring algorithms that enable agents to adapt their policies dynamically to minimize regret, thereby capturing the non-stationary nature of continual learning. It also serves as a foundation for multi-agent learning, as an agent's reward is influenced by others' actions, introducing non-stationarity and necessitating robust algorithms. Furthermore, recent research has explored the game-theoretic formulation of alignment in reinforcement learning with human feedback (RLHF), reducing the problem to reinforcement learning in changing environments.
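As a concrete (if simplified) illustration of regret minimization in a changing environment, the sketch below runs a sliding-window variant of UCB on a piecewise-stationary two-armed bandit, where the better arm switches partway through. The function names, window size, and change point are illustrative assumptions for this sketch, not details from the talk.

```python
import math
import random

def sliding_window_ucb(reward_fn, n_arms, horizon, window=200, c=1.0, seed=0):
    # Sliding-window UCB: estimate each arm's mean from only the last
    # `window` pulls, so estimates can track a non-stationary environment.
    rng = random.Random(seed)
    history = []   # recent (arm, reward) pairs, at most `window` of them
    total = 0.0
    for t in range(horizon):
        counts = [0] * n_arms
        sums = [0.0] * n_arms
        for arm, r in history:
            counts[arm] += 1
            sums[arm] += r
        if min(counts) == 0:
            # Re-explore any arm unseen within the window; this forced
            # exploration is what lets the learner notice a change.
            arm = counts.index(0)
        else:
            n = len(history)
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + c * math.sqrt(math.log(n) / counts[a]))
        r = reward_fn(t, arm, rng)
        history.append((arm, r))
        if len(history) > window:
            history.pop(0)
        total += r
    return total / horizon

def changing_bandit(t, arm, rng):
    # Piecewise-stationary environment: the better arm switches at t = 1000.
    means = [0.8, 0.2] if t < 1000 else [0.2, 0.8]
    return 1.0 if rng.random() < means[arm] else 0.0

avg_reward = sliding_window_ucb(changing_bandit, n_arms=2, horizon=2000)
```

A stationary UCB learner would keep trusting its stale estimate of arm 0 after the switch; discarding old observations trades some statistical efficiency for adaptivity, which is the basic tension the talk's complexity measures formalize.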
In this talk, I will present our recent results on reinforcement learning in changing environments from a theoretical perspective. Specifically, for high-dimensional state-action spaces, I will first discuss the structural assumptions that are necessary and sufficient for statistically efficient learning, extending standard reinforcement learning complexity measures. Next, I will introduce computationally efficient algorithms tailored to specific cases. Finally, I will highlight open problems in this direction, outlining key challenges and future research opportunities.
Bio
Haolin Liu is a PhD candidate in Computer Science at the University of Virginia, advised by Professor Chen-Yu Wei. His research focuses on reinforcement learning theory and its application to language models, with work published at NeurIPS and ICLR, including spotlight recognition. More details can be found on his homepage: https://liuhl2000.github.io/