@article{2872944e4e864ceeb0fb4ba913b231b8,
abstract = {The performance of an algorithm often critically depends on its parameter configuration. While a variety of automated algorithm configuration methods have been proposed to relieve users from the tedious and error-prone task of manually tuning parameters, there is still a lot of untapped potential as the learned configuration is static, i.e., parameter settings remain fixed throughout the run. However, it has been shown that some algorithm parameters are best adjusted dynamically during execution, e.g., to adapt to the current part of the optimization landscape. Thus far, this is most commonly achieved through hand-crafted heuristics. A promising recent alternative is to automatically learn such dynamic parameter adaptation policies from data. In this article, we give the first comprehensive account of this new field of automated dynamic algorithm configuration (DAC), present a series of recent advances, and provide a solid foundation for future research in this field. Specifically, we (i) situate DAC in the broader historical context of AI research; (ii) formalize DAC as a computational problem; (iii) identify the methods used in prior-art to tackle this problem; (iv) conduct empirical case studies for using DAC in evolutionary optimization, AI planning, and machine learning.},
added-at = {2023-03-16T14:23:40.000+0100},
author = {Adriaensen, Steven and Biedenkapp, Andr{\'e} and Shala, Gresa and Awad, Noor and Eimer, Theresa and Lindauer, Marius and Hutter, Frank},
biburl = {https://www.bibsonomy.org/bibtex/2b8a601f6a428218aa010c820ebf47211/ail3s},
day = 27,
interhash = {856430fff2f6e397f1b8c17a77d69070},
intrahash = {b8a601f6a428218aa010c820ebf47211},
issn = {1076-9757},
journal = {Journal of Artificial Intelligence Research},
keywords = {automl leibnizailab myown},
language = {English},
month = may,
publisher = {Morgan Kaufmann Publishers, Inc.},
timestamp = {2023-03-16T14:23:40.000+0100},
title = {Automated Dynamic Algorithm Configuration},
year = 2022
}