Distributed Representations of Sentences and Documents
Q. Le and T. Mikolov. Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1188--1196, Beijing, China. PMLR, June 2014.
Abstract
Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common representations is bag-of-words. Despite their popularity, bag-of-words models have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose an unsupervised algorithm that learns vector representations of sentences and text documents. This algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that our technique outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
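As a concrete illustration of the approach the abstract describes, here is a minimal sketch using gensim's Doc2Vec, a widely used third-party implementation of the Paragraph Vector algorithm introduced in this paper. The toy corpus and hyperparameter values below are illustrative assumptions, not the settings used in the paper's experiments.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus; each document gets a tag, and the model learns one dense
# vector per tag, trained jointly with word vectors to predict the
# words that occur in that document.
corpus = [
    "powerful strong machine learning models",
    "paris is the capital of france",
    "strong and powerful text representations",
]
tagged = [TaggedDocument(words=doc.split(), tags=[i])
          for i, doc in enumerate(corpus)]

# dm=1 selects the distributed-memory variant (PV-DM), where the
# document vector is combined with context word vectors to predict the
# next word; dm=0 gives the distributed bag-of-words variant (PV-DBOW).
model = Doc2Vec(tagged, vector_size=50, window=2, min_count=1,
                epochs=40, dm=1)

# For an unseen document, word vectors stay fixed and only a new
# document vector is fitted, yielding a fixed-length representation.
new_vec = model.infer_vector("powerful strong text".split())
print(new_vec.shape)  # (50,)

The resulting vectors can then be fed to any classifier that expects fixed-length inputs, which is how the paper evaluates them on text classification and sentiment analysis tasks.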
%0 Conference Paper
%1 le2014distributed
%A Le, Quoc
%A Mikolov, Tomas
%B Proceedings of the 31st International Conference on Machine Learning
%C Beijing, China
%D 2014
%E Xing, Eric P.
%E Jebara, Tony
%I PMLR
%K NLP word2vec
%N 2
%P 1188--1196
%T Distributed Representations of Sentences and Documents
%U https://proceedings.mlr.press/v32/le14.html
%V 32
%X Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common representations is bag-of-words. Despite their popularity, bag-of-words models have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose an unsupervised algorithm that learns vector representations of sentences and text documents. This algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that our technique outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
@inproceedings{le2014distributed,
abstract = {Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common representations is bag-of-words. Despite their popularity, bag-of-words models have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose an unsupervised algorithm that learns vector representations of sentences and text documents. This algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that our technique outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.},
address = {Beijing, China},
author = {Le, Quoc and Mikolov, Tomas},
booktitle = {Proceedings of the 31st International Conference on Machine Learning},
editor = {Xing, Eric P. and Jebara, Tony},
keywords = {NLP word2vec},
month = jun,
number = 2,
pages = {1188--1196},
pdf = {http://proceedings.mlr.press/v32/le14.pdf},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
title = {Distributed Representations of Sentences and Documents},
url = {https://proceedings.mlr.press/v32/le14.html},
volume = 32,
year = 2014
}