A Patterns Framework for Incorporating Structure in Deep Reinforcement Learning
- Authored by
- Aditya Mohan, Amy Zhang, Marius Lindauer
- Abstract
Reinforcement Learning (RL), empowered by Deep Neural Networks (DNNs) for function approximation, has achieved notable success in diverse applications. However, its applicability to real-world scenarios with complex dynamics, noisy signals, and large state and action spaces remains limited, owing to challenges in data efficiency, generalization, safety guarantees, and interpretability, among other factors. One promising avenue for overcoming these challenges is to incorporate additional structural information about the problem into the RL learning process. Various sub-fields of RL have proposed methods for incorporating such inductive biases. We amalgamate these diverse methodologies under a unified framework, shedding light on the role of structure in the learning problem, and classify them into distinct patterns of incorporating structure that address different auxiliary objectives. Leveraging this comprehensive framework, we provide insights into the challenges of integrating structure into RL and lay the groundwork for a design-pattern perspective on RL research. This novel perspective paves the way for future advances and aids the development of more effective and efficient RL algorithms that can better handle real-world scenarios.
- Organisation(s)
-
Machine Learning Section
Institute of Artificial Intelligence
- External Organisation(s)
-
Meta AI
University of Texas at Austin
- Type
- Conference abstract
- Publication date
- 17.09.2023
- Publication status
- Accepted/In press
- Peer reviewed
- Yes
- Sustainable Development Goals
- SDG 3 - Good Health and Well-being
- Electronic version(s)
-
https://openreview.net/forum?id=KkKWsPLlAx (Access: Open)