Approximate reinforcement learning: An overview


Reference:
L. Busoniu, D. Ernst, B. De Schutter, and R. Babuska, "Approximate reinforcement learning: An overview," Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2011), Paris, France, pp. 1-8, Apr. 2011.

Abstract:
Reinforcement learning (RL) allows agents to learn how to optimally interact with complex environments. Fueled by recent advances in approximation-based algorithms, RL has achieved impressive successes in robotics, artificial intelligence, control, operations research, and other fields. However, the scarcity of survey papers on approximate RL makes it difficult for newcomers to grasp this intricate field. With the present overview, we take a step toward alleviating this situation. We review methods for approximate RL, starting from their dynamic programming roots and organizing them into three major classes: approximate value iteration, approximate policy iteration, and policy search. Each class is subdivided into representative categories, highlighting, among others, offline and online algorithms, policy gradient methods, and simulation-based techniques. We also compare the different categories of methods and outline possible ways to enhance the reviewed algorithms.
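
To make the first of these classes concrete, the sketch below implements a batch fitted Q-iteration loop (a representative approximate value iteration algorithm surveyed in this line of work), using an ensemble-of-trees regressor as the function approximator. This is a minimal, assumed rendering of the general scheme, not the paper's own pseudocode: the function name, the data layout, the discount factor, and the choice of regressor are illustrative.

# Minimal illustrative sketch of approximate value iteration in the
# fitted Q-iteration style; all names and settings are assumptions.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, n_actions, gamma=0.95, n_iterations=50):
    # transitions: list of (state, action, reward, next_state) tuples,
    # with states as 1-D feature arrays and actions as integer indices.
    s = np.array([t[0] for t in transitions])
    a = np.array([t[1] for t in transitions])
    r = np.array([t[2] for t in transitions])
    s_next = np.array([t[3] for t in transitions])
    x = np.column_stack([s, a])  # regressor input: (state features, action)
    q = None
    for _ in range(n_iterations):
        if q is None:
            targets = r  # first iteration: Q_1 is the immediate reward
        else:
            # Bellman backup: max over actions of the previous approximation
            # of Q, evaluated at the observed next states.
            q_next = np.column_stack([
                q.predict(np.column_stack([s_next, np.full(len(s_next), u)]))
                for u in range(n_actions)
            ])
            targets = r + gamma * q_next.max(axis=1)
        # Re-fit a fresh regressor to the updated targets.
        q = ExtraTreesRegressor(n_estimators=50).fit(x, targets)
    return q

Given a batch of transitions collected under any exploration policy, a near-optimal greedy policy is then recovered by taking, in each state, the action with the largest predicted Q-value.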


Downloads:
 * Corresponding technical report: pdf file (201 KB)


Bibtex entry:

@inproceedings{BusMun:11-008,
        author={L. Bu{\c{s}}oniu and D. Ernst and B. {D}e Schutter and R. Babu{\v{s}}ka},
        title={Approximate reinforcement learning: An overview},
        booktitle={Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2011)},
        address={Paris, France},
        pages={1--8},
        month=apr,
        year={2011}
}





This page is maintained by Bart De Schutter. Last update: March 21, 2022.