Computing semantic relatedness using Wikipedia-based explicit semantic analysis
E. Gabrilovich and S. Markovitch. Proceedings of the 20th International Joint Conference on Artificial Intelligence, volume 7, pages 1606--1611. (2007)
Abstract
Computing semantic relatedness of natural language texts requires access to vast amounts of common-sense and domain-specific world knowledge. We propose Explicit Semantic Analysis (ESA), a novel method that represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia. We use machine learning techniques to explicitly represent the meaning of any text as a weighted vector of Wikipedia-based concepts. Assessing the relatedness of texts in this space amounts to comparing the corresponding vectors using conventional metrics (e.g., cosine). Compared with the previous state of the art, using ESA results in substantial improvements in correlation of computed relatedness scores with human judgments: from r = 0.56 to 0.75 for individual words and from r = 0.60 to 0.72 for texts. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.
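The core idea in the abstract can be illustrated with a minimal sketch: map each text to a weighted vector over Wikipedia-style concepts, then score relatedness with cosine similarity. The toy `inverted_index` (word to concept weights) and the concept names below are invented for illustration; the paper builds such an index from TF-IDF weights over the full Wikipedia corpus.

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse vectors (dicts: concept -> weight).
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def esa_vector(text, inverted_index):
    # Represent a text as the summed concept weights of its words
    # (hypothetical index: word -> {concept: TF-IDF-like weight}).
    vec = {}
    for word in text.lower().split():
        for concept, weight in inverted_index.get(word, {}).items():
            vec[concept] = vec.get(concept, 0.0) + weight
    return vec

# Invented toy index with two "concepts"; real ESA uses millions of
# Wikipedia articles as the concept space.
index = {
    "bank":  {"Finance": 0.9, "River": 0.4},
    "money": {"Finance": 1.0},
    "water": {"River": 1.0},
}
v1 = esa_vector("bank money", index)
v2 = esa_vector("bank water", index)
print(cosine(v1, v2))
```

Because the concept dimensions are named Wikipedia-like articles rather than latent factors, the resulting scores are directly interpretable, which is the explainability advantage the abstract highlights.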
@inproceedings{gabrilovich2007computing,
abstract = {Computing semantic relatedness of natural language texts requires access to vast amounts of common-sense and domain-specific world knowledge. We propose Explicit Semantic Analysis (ESA), a novel method that represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia. We use machine learning techniques to explicitly represent the meaning of any text as a weighted vector of Wikipedia-based concepts. Assessing the relatedness of texts in this space amounts to comparing the corresponding vectors using conventional metrics (e.g., cosine). Compared with the previous state of the art, using ESA results in substantial improvements in correlation of computed relatedness scores with human judgments: from \emph{r} = 0.56 to 0.75 for individual words and from \emph{r} = 0.60 to 0.72 for texts. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.},
author = {Gabrilovich, Evgeniy and Markovitch, Shaul},
biburl = {https://www.bibsonomy.org/bibtex/2ac8dfa90bd30a5aa5ba28026c884f4cf/becker},
booktitle = {Proceedings of the 20th International Joint Conference on Artificial Intelligence},
keywords = {diss eva21 inthesis proposal semantics},
pages = {1606--1611},
title = {Computing semantic relatedness using {Wikipedia}-based explicit semantic analysis},
volume = 7,
year = 2007
}