Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech, and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65 percent,
which is 0.09 percent better and 1.05x faster than the previous
state-of-the-art model that used a similar architectural scheme. On the Penn
Treebank dataset, our model can compose a novel recurrent cell that outperforms
the widely used LSTM cell and other state-of-the-art baselines. Our cell
achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6
perplexity points better than the previous state-of-the-art model. The cell can
also be transferred to the character language modeling task on PTB and achieves
a state-of-the-art perplexity of 1.214.
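
To make the training signal concrete: the controller defines a distribution
p_θ(a_{1:T}) over token sequences that describe an architecture, and the
objective J(θ) = E_{p_θ}[R] is maximized with the standard REINFORCE
estimator,

    ∇_θ J(θ) = E_{a ∼ p_θ} [ (R − b) Σ_t ∇_θ log p_θ(a_t | a_{1:t−1}) ],

where R is the validation accuracy of the trained child network and b is a
baseline that reduces the variance of the estimate. The PyTorch sketch below
is a minimal illustration of this loop under simplified assumptions, not the
paper's implementation: the toy VOCAB search space, the Controller class, the
dummy child_accuracy reward, and all hyperparameters are placeholders, and in
the real method the reward step trains a full child network on the target
dataset.

import torch
import torch.nn as nn
from torch.distributions import Categorical

# Toy search space: in the paper, tokens describe filter sizes, strides, etc.
VOCAB = ["filter_3x3", "filter_5x5", "filter_7x7", "stride_1", "stride_2"]
SEQ_LEN = 6   # number of architecture decisions sampled per architecture
HIDDEN = 64

class Controller(nn.Module):
    """RNN policy that emits one architecture-describing token per step."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), HIDDEN)
        self.cell = nn.LSTMCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, len(VOCAB))

    def sample(self):
        h = torch.zeros(1, HIDDEN)
        c = torch.zeros(1, HIDDEN)
        tok = torch.zeros(1, dtype=torch.long)  # token 0 doubles as a start token
        log_probs, tokens = [], []
        for _ in range(SEQ_LEN):
            h, c = self.cell(self.embed(tok), (h, c))
            dist = Categorical(logits=self.out(h))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            tokens.append(VOCAB[tok.item()])
        return tokens, torch.stack(log_probs).sum()

def child_accuracy(tokens):
    """Placeholder for 'build the child network, train it, and return its
    validation accuracy'. In the real method this step dominates the cost."""
    return sum(len(t) for t in tokens) / (SEQ_LEN * 10.0)  # dummy reward

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=3e-4)
baseline = 0.0  # exponential moving average of rewards, used as b

for step in range(100):
    tokens, log_prob = controller.sample()
    reward = child_accuracy(tokens)
    baseline = 0.95 * baseline + 0.05 * reward
    loss = -(reward - baseline) * log_prob  # REINFORCE with a baseline
    opt.zero_grad()
    loss.backward()
    opt.step()

In this sketch the moving-average baseline plays the role of b in the gradient
above; without it, the raw accuracy reward makes the estimate much noisier.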
@misc{zoph2016neural,
  author = {Zoph, Barret and Le, Quoc V.},
  keywords = {2016 deep-learning neural-networks reinforcement-learning},
  note = {cite arxiv:1611.01578},
title = {Neural Architecture Search with Reinforcement Learning},
url = {http://arxiv.org/abs/1611.01578},
year = 2016
}