Evaluating the Search Phase of Neural Architecture Search
C. Sciuto, K. Yu, M. Jaggi, C. Musat, and M. Salzmann (2019). arXiv:1902.08142. Comment: "We find that a random policy in NAS works amazingly well and propose an evaluation framework to have a fair comparison." 8 pages.
Abstract
Neural Architecture Search (NAS) aims to facilitate the design of deep
networks for new tasks. Existing techniques rely on two stages: searching over
the architecture space and validating the best architecture. NAS algorithms are
currently evaluated solely by comparing their results on the downstream task.
While intuitive, this fails to explicitly evaluate the effectiveness of their
search strategies. In this paper, we present a NAS evaluation framework that
includes the search phase. To this end, we compare the quality of the solutions
obtained by NAS search policies with that of random architecture selection. We
find that: (i) On average, the random policy outperforms state-of-the-art NAS
algorithms; (ii) The results and candidate rankings of NAS algorithms do not
reflect the true performance of the candidate architectures; and (iii) The
widely used weight sharing strategy negatively impacts the training of good
architectures, thus reducing the effectiveness of the search process. We
believe that following our evaluation framework will be key to designing NAS
strategies that truly discover superior architectures.
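The core comparison the abstract describes, evaluating a NAS search policy against random architecture selection under the same budget, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual protocol: the search space, the operation names, and the `evaluate` scoring stub are all assumptions made up for illustration.

```python
import random

# Hypothetical search space: an architecture is a list of operation
# choices, one per layer (names are illustrative only).
OPS = ["conv3x3", "conv5x5", "maxpool", "skip"]
NUM_LAYERS = 4

def sample_random_architecture(rng):
    """Random policy: pick each layer's operation uniformly at random."""
    return [rng.choice(OPS) for _ in range(NUM_LAYERS)]

def evaluate(architecture):
    """Stand-in for training the architecture from scratch and
    returning its validation accuracy (here: a toy deterministic
    score in (0, 1], NOT a real training run)."""
    return sum(len(op) for op in architecture) / (NUM_LAYERS * 7)

def best_of_n_random(n, seed=0):
    """The random baseline: sample n architectures, evaluate each
    independently, and keep the best one found."""
    rng = random.Random(seed)
    candidates = [sample_random_architecture(rng) for _ in range(n)]
    scored = [(evaluate(a), a) for a in candidates]
    return max(scored)  # (best_score, best_architecture)

score, arch = best_of_n_random(10)
print(score, arch)
```

A NAS algorithm would replace `sample_random_architecture` with its learned search policy while keeping the evaluation budget `n` fixed; the paper's point is that this comparison, rather than downstream accuracy alone, reveals whether the search phase actually helps.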
@misc{sciuto2019evaluating,
abstract = {Neural Architecture Search (NAS) aims to facilitate the design of deep
networks for new tasks. Existing techniques rely on two stages: searching over
the architecture space and validating the best architecture. NAS algorithms are
currently evaluated solely by comparing their results on the downstream task.
While intuitive, this fails to explicitly evaluate the effectiveness of their
search strategies. In this paper, we present a NAS evaluation framework that
includes the search phase. To this end, we compare the quality of the solutions
obtained by NAS search policies with that of random architecture selection. We
find that: (i) On average, the random policy outperforms state-of-the-art NAS
algorithms; (ii) The results and candidate rankings of NAS algorithms do not
reflect the true performance of the candidate architectures; and (iii) The
widely used weight sharing strategy negatively impacts the training of good
architectures, thus reducing the effectiveness of the search process. We
believe that following our evaluation framework will be key to designing NAS
strategies that truly discover superior architectures.},
added-at = {2019-09-02T11:38:02.000+0200},
author = {Sciuto, Christian and Yu, Kaicheng and Jaggi, Martin and Musat, Claudiu and Salzmann, Mathieu},
biburl = {https://www.bibsonomy.org/bibtex/2a2efd7616fc84152d66820eacc964535/alpy94},
description = {Evaluating the Search Phase of Neural Architecture Search},
interhash = {5d34acad72df9feaafc831670b6390c7},
intrahash = {a2efd7616fc84152d66820eacc964535},
keywords = {Architecture Learning Machine Neural},
  note = {arXiv:1902.08142. Comment: We find that a random policy in NAS works amazingly well and propose an evaluation framework to have a fair comparison. 8 pages},
timestamp = {2019-09-02T11:38:02.000+0200},
title = {Evaluating the Search Phase of Neural Architecture Search},
url = {http://arxiv.org/abs/1902.08142},
year = 2019
}