Article

A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients

Ivo Grondman, Lucian Buşoniu, Gabriel A. D. Lopes, and Robert Babuška.
IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42 (6): 1291-1307 (November 2012)
DOI: 10.1109/TSMCC.2012.2218595

Abstract

Policy-gradient-based actor-critic algorithms are among the most popular algorithms in the reinforcement learning framework. Their ability to search for optimal policies using low-variance gradient estimates has made them useful in several real-life applications, such as robotics, power control, and finance. Although general surveys on reinforcement learning techniques already exist, none is dedicated specifically to actor-critic algorithms. This paper therefore describes the state of the art of actor-critic algorithms, with a focus on methods that can work in an online setting and use function approximation to deal with continuous state and action spaces. After a discussion of the concepts of reinforcement learning and the origins of actor-critic algorithms, the paper describes the workings of the natural gradient, which has made its way into many actor-critic algorithms over the past few years. A review of several standard and natural actor-critic algorithms is given, and the paper concludes with an overview of application areas and a discussion of open issues.
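
As a concrete illustration of the class of methods the abstract refers to (not code from the paper itself), the following is a minimal sketch of a one-step actor-critic with linear function approximation on a toy one-dimensional continuous-state task. The environment, feature map, Gaussian policy, and learning rates are illustrative assumptions; the critic's temporal-difference error is used as a low-variance estimate of the advantage in a standard (non-natural) policy-gradient update.

    import numpy as np

    # Toy 1-D continuous-state task (assumed for illustration):
    # the state drifts toward the action taken, reward penalises
    # distance from the origin.
    def step(s, a):
        s_next = 0.9 * s + a + 0.1 * np.random.randn()
        r = -s_next ** 2
        return s_next, r

    def features(s):
        # Simple polynomial features as a stand-in for the generic
        # function approximators discussed in the survey.
        return np.array([1.0, s, s ** 2])

    rng = np.random.default_rng(0)
    w = np.zeros(3)        # critic weights (state-value function)
    theta = np.zeros(3)    # actor weights (mean of a Gaussian policy)
    sigma = 0.5            # fixed exploration noise
    alpha_w, alpha_theta, gamma = 0.05, 0.01, 0.95

    s = 0.0
    for t in range(5000):
        phi = features(s)
        mu = theta @ phi                        # policy mean
        a = mu + sigma * rng.standard_normal()  # sample action

        s_next, r = step(s, a)

        # Critic: TD(0) error, also serving as the advantage estimate
        # in the actor's policy-gradient step.
        delta = r + gamma * (w @ features(s_next)) - (w @ phi)
        w += alpha_w * delta * phi

        # Actor: log-likelihood gradient of the Gaussian policy,
        # scaled by the TD error (standard policy gradient).
        theta += alpha_theta * delta * ((a - mu) / sigma ** 2) * phi

        s = s_next

    print("learned policy weights:", theta)

A natural actor-critic variant, as surveyed in the paper, would additionally precondition the actor update with the inverse Fisher information matrix of the policy; the sketch above uses the plain ("vanilla") gradient.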
