Y. Kim, K. Stratos, and D. Kim. Domain Attention with an Ensemble of Experts. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 643–653, Vancouver, Canada, July 2017. Association for Computational Linguistics.
DOI: 10.18653/v1/P17-1060
Abstract
An important problem in domain adaptation is to quickly generalize to a new domain with limited supervision given K existing domains. One approach is to retrain a global model across all K + 1 domains using standard techniques, for instance Daumé III (2009). However, it is desirable to adapt without having to re-estimate a global model from scratch each time a new domain with potentially new intents and slots is added. We describe a solution based on attending an ensemble of domain experts. We assume K domain specific intent and slot models trained on respective domains. When given domain K + 1, our model uses a weighted combination of the K domain experts' feedback along with its own opinion to make predictions on the new domain. In experiments, the model significantly outperforms baselines that do not use domain adaptation and also performs better than the full retraining approach.
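The combination step described above can be sketched in a few lines. This is a minimal illustration, not the paper's architecture (which builds on BiLSTM encoders for intent classification and slot tagging): it assumes each of the K experts exposes a key vector and a feedback (logit) vector, attends over the experts with a softmax of dot-product scores, and adds the weighted feedback to the new-domain model's own scores. All function and variable names here are hypothetical.

```python
import math

def domain_attention(own_logits, expert_logits, query, expert_keys):
    """Combine a new-domain model's own scores with attention-weighted
    feedback from K pre-trained domain experts (illustrative sketch).

    own_logits:    list[float] of length C, the new model's own opinion
    expert_logits: K lists[float], each of length C, the experts' feedback
    query:         list[float] of length d, representation of the input
    expert_keys:   K lists[float], each of length d, one key per expert
    """
    # dot-product attention score between the query and each expert's key
    scores = [sum(q * k for q, k in zip(query, key)) for key in expert_keys]
    # softmax over the K experts (numerically stabilized)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # attention-weighted sum of the experts' feedback vectors
    feedback = [
        sum(w * logits[c] for w, logits in zip(weights, expert_logits))
        for c in range(len(own_logits))
    ]
    # final prediction scores: own opinion plus expert feedback
    return [o + f for o, f in zip(own_logits, feedback)]
```

Because the weights form a convex combination, an expert whose key matches the query dominates the feedback term, which is the mechanism the abstract relies on for generalizing to domain K + 1 with limited supervision.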
%0 Conference Paper
%1 kim2017domain
%A Kim, Young-Bum
%A Stratos, Karl
%A Kim, Dongchan
%B Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%C Vancouver, Canada
%D 2017
%I Association for Computational Linguistics
%K attention blstm classification domain ensemble intent lstm network neural plk slot tagging
%P 643--653
%R 10.18653/v1/P17-1060
%T Domain Attention with an Ensemble of Experts
%U https://aclanthology.org/P17-1060
%X An important problem in domain adaptation is to quickly generalize to a new domain with limited supervision given K existing domains. One approach is to retrain a global model across all K + 1 domains using standard techniques, for instance Daumé III (2009). However, it is desirable to adapt without having to re-estimate a global model from scratch each time a new domain with potentially new intents and slots is added. We describe a solution based on attending an ensemble of domain experts. We assume K domain specific intent and slot models trained on respective domains. When given domain K + 1, our model uses a weighted combination of the K domain experts' feedback along with its own opinion to make predictions on the new domain. In experiments, the model significantly outperforms baselines that do not use domain adaptation and also performs better than the full retraining approach.
@inproceedings{kim2017domain,
abstract = {An important problem in domain adaptation is to quickly generalize to a new domain with limited supervision given K existing domains. One approach is to retrain a global model across all K + 1 domains using standard techniques, for instance Daum{\'e} III (2009). However, it is desirable to adapt without having to re-estimate a global model from scratch each time a new domain with potentially new intents and slots is added. We describe a solution based on attending an ensemble of domain experts. We assume K domain specific intent and slot models trained on respective domains. When given domain K + 1, our model uses a weighted combination of the K domain experts{'} feedback along with its own opinion to make predictions on the new domain. In experiments, the model significantly outperforms baselines that do not use domain adaptation and also performs better than the full retraining approach.},
added-at = {2023-10-24T08:47:41.000+0200},
address = {Vancouver, Canada},
author = {Kim, Young-Bum and Stratos, Karl and Kim, Dongchan},
biburl = {https://www.bibsonomy.org/bibtex/27f4445076396a1d289196871b4133cf3/jaeschke},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
doi = {10.18653/v1/P17-1060},
interhash = {6ec0b4bf3a089fb797d2ba93b7ee0438},
intrahash = {7f4445076396a1d289196871b4133cf3},
keywords = {attention blstm classification domain ensemble intent lstm network neural plk slot tagging},
month = jul,
pages = {643--653},
publisher = {Association for Computational Linguistics},
timestamp = {2023-10-24T08:47:41.000+0200},
title = {Domain Attention with an Ensemble of Experts},
url = {https://aclanthology.org/P17-1060},
year = 2017
}