Deep Neural Networks (DNNs) are powerful models that have achieved excellent
performance on difficult learning tasks. Although DNNs work well whenever large
labeled training sets are available, they cannot be used to map sequences to
sequences. In this paper, we present a general end-to-end approach to sequence
learning that makes minimal assumptions on the sequence structure. Our method
uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to
a vector of a fixed dimensionality, and then another deep LSTM to decode the
target sequence from the vector. Our main result is that on an English to
French translation task from the WMT'14 dataset, the translations produced by
the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's
BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did
not have difficulty on long sentences. For comparison, a phrase-based SMT
system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM
to rerank the 1000 hypotheses produced by the aforementioned SMT system, its
BLEU score increased to 36.5, which is close to the previous best result on
this task. The LSTM also learned sensible phrase and sentence representations
that are sensitive to word order and are relatively invariant to the active and
the passive voice. Finally, we found that reversing the order of the words in
all source sentences (but not target sentences) improved the LSTM's performance
markedly, because doing so introduced many short-term dependencies between the
source and the target sentence, which made the optimization problem easier.
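
The core recipe in the abstract (one deep LSTM reads the source sentence into a
fixed-dimensional vector, a second deep LSTM generates the target sentence from
that vector, and source words are fed in reversed order) can be sketched
compactly. The sketch below is a minimal illustration assuming PyTorch; the
class name Seq2Seq, the toy vocabulary sizes, the small layer widths, and the
greedy decoding loop are assumptions for illustration, not the paper's setup,
which used much larger multilayer LSTMs and beam-search decoding.

import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID, LAYERS = 100, 100, 32, 64, 2
SOS = 1  # hypothetical start-of-sentence token id

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        # Multilayered encoder LSTM: its final hidden and cell states are the
        # fixed-dimensional vector summarizing the whole source sentence.
        self.encoder = nn.LSTM(EMB, HID, LAYERS, batch_first=True)
        # A second deep LSTM decodes the target sequence from that vector.
        self.decoder = nn.LSTM(EMB, HID, LAYERS, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src_ids, max_len=20):
        # Reverse the source words (but not the target), as described above,
        # to create short-term source-target dependencies for the optimizer.
        src_rev = torch.flip(src_ids, dims=[1])
        _, state = self.encoder(self.src_emb(src_rev))
        # Greedy decoding from the encoder's final state; a simplification of
        # the paper's beam-search decoding.
        tok = torch.full((src_ids.size(0), 1), SOS, dtype=torch.long)
        outputs = []
        for _ in range(max_len):
            dec_out, state = self.decoder(self.tgt_emb(tok), state)
            tok = self.out(dec_out[:, -1]).argmax(dim=-1, keepdim=True)
            outputs.append(tok)
        return torch.cat(outputs, dim=1)

# Toy usage: "translate" a batch of two random source sentences of length 5.
model = Seq2Seq()
print(model(torch.randint(0, SRC_VOCAB, (2, 5))).shape)  # torch.Size([2, 20])

In the paper's actual pipeline, decoding used a left-to-right beam search and,
in one experiment, reranking of an SMT system's 1000-best lists; the greedy
loop above is only the simplest stand-in for that step.
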
Description: Sequence to Sequence Learning with Neural Networks
@misc{sutskever2014sequence,
added-at = {2019-11-15T09:02:08.000+0100},
author = {Sutskever, Ilya and Vinyals, Oriol and Le, Quoc V.},
biburl = {https://www.bibsonomy.org/bibtex/2c9dcc176d9b66adfa84402bfab5e45ed/treemage},
description = {Sequence to Sequence Learning with Neural Networks},
interhash = {f051dfa019b32d710821fc5b4219bd49},
intrahash = {c9dcc176d9b66adfa84402bfab5e45ed},
keywords = {deep_learning final sequential_learning thema:pointer_networks},
timestamp = {2019-11-15T09:02:08.000+0100},
title = {Sequence to Sequence Learning with Neural Networks},
url = {http://arxiv.org/abs/1409.3215},
year = 2014
}