Y. Burda, H. Edwards, D. Pathak, A. Storkey, T. Darrell, and A. Efros. (2018). arXiv:1808.04355. Comment: First three authors contributed equally and ordered alphabetically. Website at https://pathak22.github.io/large-scale-curiosity/.
Abstract
Reinforcement learning algorithms rely on carefully engineering environment
rewards that are extrinsic to the agent. However, annotating each environment
with hand-designed, dense rewards is not scalable, motivating the need for
developing reward functions that are intrinsic to the agent. Curiosity is a
type of intrinsic reward function which uses prediction error as reward signal.
In this paper: (a) We perform the first large-scale study of purely
curiosity-driven learning, i.e. without any extrinsic rewards, across 54
standard benchmark environments, including the Atari game suite. Our results
show surprisingly good performance, and a high degree of alignment between the
intrinsic curiosity objective and the hand-designed extrinsic rewards of many
game environments. (b) We investigate the effect of using different feature
spaces for computing prediction error and show that random features are
sufficient for many popular RL game benchmarks, but learned features appear to
generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We
demonstrate limitations of the prediction-based rewards in stochastic setups.
Game-play videos and code are at
https://pathak22.github.io/large-scale-curiosity/
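
The abstract's central idea, using the prediction error of a forward-dynamics model as an intrinsic ("curiosity") reward, optionally computed in a fixed random feature space, can be illustrated with a minimal sketch. This is not the authors' implementation (which trains convolutional embeddings and PPO agents at scale); the dimensions, the frozen random projection, and the linear forward model below are illustrative assumptions only.

# Minimal sketch (illustrative, not the paper's code) of a prediction-error
# intrinsic reward computed in a fixed random feature space.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, FEAT_DIM, ACT_DIM = 16, 32, 4

# Frozen random projection: the "random features" variant discussed in the abstract.
W_phi = rng.normal(size=(OBS_DIM, FEAT_DIM)) / np.sqrt(OBS_DIM)

def phi(obs):
    """Embed an observation into the frozen random feature space."""
    return np.tanh(obs @ W_phi)

# A learned forward-dynamics model f(phi(s), a) -> predicted phi(s');
# here just one linear layer trained by SGD, purely for illustration.
W_fwd = rng.normal(size=(FEAT_DIM + ACT_DIM, FEAT_DIM)) * 0.01

def forward_model(feat, action_onehot):
    return np.concatenate([feat, action_onehot]) @ W_fwd

def curiosity_reward(obs, action_onehot, next_obs):
    """Intrinsic reward = squared forward-model error in feature space."""
    err = forward_model(phi(obs), action_onehot) - phi(next_obs)
    return 0.5 * float(err @ err)

def update_forward_model(obs, action_onehot, next_obs, lr=1e-2):
    """One SGD step on the forward-model prediction loss."""
    global W_fwd
    x = np.concatenate([phi(obs), action_onehot])
    err = x @ W_fwd - phi(next_obs)
    W_fwd -= lr * np.outer(x, err)

# Toy usage: the reward shrinks as the model learns a repeated transition.
s, a, s_next = rng.normal(size=OBS_DIM), np.eye(ACT_DIM)[1], rng.normal(size=OBS_DIM)
for step in range(3):
    print(step, curiosity_reward(s, a, s_next))
    update_forward_model(s, a, s_next)

Keeping W_phi frozen corresponds to the random-features variant; replacing phi with a trained encoder would correspond to the learned-features variant that the abstract reports generalizes better (e.g. to novel Super Mario Bros. levels).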
Description
[1808.04355] Large-Scale Study of Curiosity-Driven Learning
%0 Generic
%1 burda2018largescale
%A Burda, Yuri
%A Edwards, Harri
%A Pathak, Deepak
%A Storkey, Amos
%A Darrell, Trevor
%A Efros, Alexei A.
%D 2018
%K curiosity deeplearning learning reinforcement
%T Large-Scale Study of Curiosity-Driven Learning
%U http://arxiv.org/abs/1808.04355
%X Reinforcement learning algorithms rely on carefully engineering environment
rewards that are extrinsic to the agent. However, annotating each environment
with hand-designed, dense rewards is not scalable, motivating the need for
developing reward functions that are intrinsic to the agent. Curiosity is a
type of intrinsic reward function which uses prediction error as reward signal.
In this paper: (a) We perform the first large-scale study of purely
curiosity-driven learning, i.e. without any extrinsic rewards, across 54
standard benchmark environments, including the Atari game suite. Our results
show surprisingly good performance, and a high degree of alignment between the
intrinsic curiosity objective and the hand-designed extrinsic rewards of many
game environments. (b) We investigate the effect of using different feature
spaces for computing prediction error and show that random features are
sufficient for many popular RL game benchmarks, but learned features appear to
generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We
demonstrate limitations of the prediction-based rewards in stochastic setups.
Game-play videos and code are at
https://pathak22.github.io/large-scale-curiosity/
@misc{burda2018largescale,
abstract = {Reinforcement learning algorithms rely on carefully engineering environment
rewards that are extrinsic to the agent. However, annotating each environment
with hand-designed, dense rewards is not scalable, motivating the need for
developing reward functions that are intrinsic to the agent. Curiosity is a
type of intrinsic reward function which uses prediction error as reward signal.
In this paper: (a) We perform the first large-scale study of purely
curiosity-driven learning, i.e. without any extrinsic rewards, across 54
standard benchmark environments, including the Atari game suite. Our results
show surprisingly good performance, and a high degree of alignment between the
intrinsic curiosity objective and the hand-designed extrinsic rewards of many
game environments. (b) We investigate the effect of using different feature
spaces for computing prediction error and show that random features are
sufficient for many popular RL game benchmarks, but learned features appear to
generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We
demonstrate limitations of the prediction-based rewards in stochastic setups.
Game-play videos and code are at
https://pathak22.github.io/large-scale-curiosity/},
added-at = {2018-11-27T11:14:56.000+0100},
author = {Burda, Yuri and Edwards, Harri and Pathak, Deepak and Storkey, Amos and Darrell, Trevor and Efros, Alexei A.},
biburl = {https://www.bibsonomy.org/bibtex/2981f3783d55ca525404ecbfa50b3adcd/thoni},
description = {[1808.04355] Large-Scale Study of Curiosity-Driven Learning},
interhash = {d6334dcfe10afa045c724c835f521ac5},
intrahash = {981f3783d55ca525404ecbfa50b3adcd},
keywords = {curiosity deeplearning learning reinforcement},
note = {arXiv:1808.04355. Comment: First three authors contributed equally and ordered alphabetically. Website at https://pathak22.github.io/large-scale-curiosity/},
timestamp = {2018-11-27T11:14:56.000+0100},
title = {Large-Scale Study of Curiosity-Driven Learning},
url = {http://arxiv.org/abs/1808.04355},
year = 2018
}