PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks
J. Tang, M. Qu, and Q. Mei. Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15, (2015). arXiv: 1508.00200.
DOI: 10.1145/2783258.2783307
Abstract
Unsupervised text embedding methods, such as Skip-gram and Paragraph Vector, have been attracting increasing attention due to their simplicity, scalability, and effectiveness. However, comparing to sophisticated deep learning architectures such as convolutional neural networks, these methods usually yield inferior results when applied to particular machine learning tasks. One possible reason is that these text embedding methods learn the representation of text in a fully unsupervised way, without leveraging the labeled information available for the task. Although the low dimensional representations learned are applicable to many different tasks, they are not particularly tuned for any task. In this paper, we fill this gap by proposing a semi-supervised representation learning method for text data, which we call the predictive text embedding (PTE). Predictive text embedding utilizes both labeled and unlabeled data to learn the embedding of text. The labeled information and different levels of word co-occurrence information are first represented as a large-scale heterogeneous text network, which is then embedded into a low dimensional space through a principled and efficient algorithm. This low dimensional embedding not only preserves the semantic closeness of words and documents, but also has a strong predictive power for the particular task. Compared to recent supervised approaches based on convolutional neural networks, predictive text embedding is comparable or more effective, much more efficient, and has fewer parameters to tune.
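To make the pipeline in the abstract concrete, the following is a minimal, hypothetical Python sketch (not the authors' implementation): it builds the three bipartite networks the paper describes (word-word, word-document, word-label, the last one carrying the supervision) and jointly embeds their vertices with a simplified LINE/SGNS-style edge-sampling objective. All data, identifiers, and hyperparameters below are invented for illustration.

# Hypothetical sketch of a PTE-style heterogeneous text network embedding.
import random
from collections import Counter
from itertools import combinations

import numpy as np

docs = [
    ("cheap flights to paris", "travel"),
    ("book a hotel in rome", "travel"),
    ("python lists and dicts", "coding"),
    ("debugging a python script", "coding"),
]

# Heterogeneous text network: three weighted bipartite edge lists.
ww, wd, wl = Counter(), Counter(), Counter()
vocab = set()
for doc_id, (text, label) in enumerate(docs):
    words = text.split()
    vocab.update(words)
    for a, b in combinations(words, 2):      # word-word (window = whole sentence here)
        ww[(a, b)] += 1
    for w in words:
        wd[(w, f"doc{doc_id}")] += 1         # word-document
        wl[(w, label)] += 1                  # word-label (the supervised part)

nodes = sorted(vocab | {f"doc{i}" for i in range(len(docs))} | {lab for _, lab in docs})
idx = {n: i for i, n in enumerate(nodes)}

dim, lr, n_neg, n_updates = 16, 0.05, 3, 20000
rng = np.random.default_rng(0)
emb = rng.normal(scale=0.1, size=(len(nodes), dim))   # vertex embeddings
ctx = np.zeros((len(nodes), dim))                     # context embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_edge(u, v):
    """One stochastic update on edge (u, v) with negative sampling."""
    ui = idx[u]
    pairs = [(v, 1.0)] + [(random.choice(nodes), 0.0) for _ in range(n_neg)]
    for target, label in pairs:
        ti = idx[target]
        g = lr * (label - sigmoid(emb[ui] @ ctx[ti]))
        grad_u = g * ctx[ti]
        ctx[ti] += g * emb[ui]
        emb[ui] += grad_u

# Joint training: alternate weighted edge samples from the three networks.
networks = [list(ww.items()), list(wd.items()), list(wl.items())]
for _ in range(n_updates):
    edges = random.choice(networks)
    (u, v), _w = random.choices(edges, weights=[w for _, w in edges], k=1)[0]
    train_edge(u, v)

# A document vector can be read off directly (or averaged from its word vectors)
# and fed to a downstream classifier for the labeled task.
print(emb[idx["doc0"]][:4])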
Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15
Pages
1165--1174
shorttitle
PTE
language
en
file
Tang et al - PTE ~ Predictive text embedding through Large-scale Heterogeous Text Networks.pdf:C\:\\Users\\Admin\\Documents\\Research\\_Paperbase\\Graph Embeddings\\Tang et al - PTE ~ Predictive text embedding through Large-scale Heterogeous Text Networks.pdf:application/pdf
%0 Conference Proceedings
%1 tang_pte_2015
%A Tang, Jian
%A Qu, Meng
%A Mei, Qiaozhu
%D 2015
%B Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15
%K Embedding_Algorithm Node_Embeddings Skip-Gram Word_Embeddings
%P 1165--1174
%R 10.1145/2783258.2783307
%T PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks
%U http://arxiv.org/abs/1508.00200
%X Unsupervised text embedding methods, such as Skip-gram and Paragraph Vector, have been attracting increasing attention due to their simplicity, scalability, and effectiveness. However, comparing to sophisticated deep learning architectures such as convolutional neural networks, these methods usually yield inferior results when applied to particular machine learning tasks. One possible reason is that these text embedding methods learn the representation of text in a fully unsupervised way, without leveraging the labeled information available for the task. Although the low dimensional representations learned are applicable to many different tasks, they are not particularly tuned for any task. In this paper, we fill this gap by proposing a semi-supervised representation learning method for text data, which we call the predictive text embedding (PTE). Predictive text embedding utilizes both labeled and unlabeled data to learn the embedding of text. The labeled information and different levels of word co-occurrence information are first represented as a large-scale heterogeneous text network, which is then embedded into a low dimensional space through a principled and efficient algorithm. This low dimensional embedding not only preserves the semantic closeness of words and documents, but also has a strong predictive power for the particular task. Compared to recent supervised approaches based on convolutional neural networks, predictive text embedding is comparable or more effective, much more efficient, and has fewer parameters to tune.
@inproceedings{tang_pte_2015,
abstract = {Unsupervised text embedding methods, such as Skip-gram and Paragraph Vector, have been attracting increasing attention due to their simplicity, scalability, and effectiveness. However, comparing to sophisticated deep learning architectures such as convolutional neural networks, these methods usually yield inferior results when applied to particular machine learning tasks. One possible reason is that these text embedding methods learn the representation of text in a fully unsupervised way, without leveraging the labeled information available for the task. Although the low dimensional representations learned are applicable to many different tasks, they are not particularly tuned for any task. In this paper, we fill this gap by proposing a semi-supervised representation learning method for text data, which we call the predictive text embedding (PTE). Predictive text embedding utilizes both labeled and unlabeled data to learn the embedding of text. The labeled information and different levels of word co-occurrence information are first represented as a large-scale heterogeneous text network, which is then embedded into a low dimensional space through a principled and efficient algorithm. This low dimensional embedding not only preserves the semantic closeness of words and documents, but also has a strong predictive power for the particular task. Compared to recent supervised approaches based on convolutional neural networks, predictive text embedding is comparable or more effective, much more efficient, and has fewer parameters to tune.},
added-at = {2020-02-21T16:09:44.000+0100},
author = {Tang, Jian and Qu, Meng and Mei, Qiaozhu},
biburl = {https://www.bibsonomy.org/bibtex/25448ec3c4767a37bf1a4ad7dac66e8e0/tschumacher},
doi = {10.1145/2783258.2783307},
file = {Tang et al - PTE ~ Predictive text embedding through Large-scale Heterogeous Text Networks.pdf:C\:\\Users\\Admin\\Documents\\Research\\_Paperbase\\Graph Embeddings\\Tang et al - PTE ~ Predictive text embedding through Large-scale Heterogeous Text Networks.pdf:application/pdf},
interhash = {3469e41453f3c7cb32b4b96052ee2c2e},
intrahash = {5448ec3c4767a37bf1a4ad7dac66e8e0},
booktitle = {Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15},
keywords = {Embedding_Algorithm Node_Embeddings Skip-Gram Word_Embeddings},
language = {en},
note = {arXiv: 1508.00200},
pages = {1165--1174},
shorttitle = {{PTE}},
timestamp = {2020-02-21T16:09:44.000+0100},
title = {{PTE}: {Predictive} {Text} {Embedding} through {Large}-scale {Heterogeneous} {Text} {Networks}},
url = {http://arxiv.org/abs/1508.00200},
urldate = {2020-02-02},
year = 2015
}