If you use the code, please cite the following paper:
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, Xuan Zhu. Learning Entity and Relation Embeddings for Knowledge Graph Completion. The 29th AAAI Conference on Artificial Intelligence (AAAI'15).
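For reference, a minimal sketch of the TransR scoring idea from that paper: entities live in an entity space, and each relation r has its own projection matrix M_r that maps entities into the relation's space before the usual translation-based distance is measured. The names, shapes, and toy data below are illustrative, not the authors' released code.

```python
import numpy as np

def transr_score(h, r, t, M_r):
    """TransR plausibility score for a triple (h, r, t).

    h, t : entity embeddings, shape (k,)
    r    : relation embedding, shape (d,)
    M_r  : relation-specific projection matrix, shape (k, d)

    Entities are projected into the relation space first; a lower
    score means a more plausible triple.
    """
    h_r = h @ M_r  # project head entity into relation space
    t_r = t @ M_r  # project tail entity into relation space
    return np.sum((h_r + r - t_r) ** 2)  # squared translation distance ||h_r + r - t_r||^2

# Toy usage with random vectors
k, d = 4, 3
rng = np.random.default_rng(0)
print(transr_score(rng.normal(size=k), rng.normal(size=d),
                   rng.normal(size=k), rng.normal(size=(k, d))))
```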
A repository that tracks progress in Natural Language Processing (NLP), including datasets and the current state of the art for the most common NLP tasks.
This is a list of 100 important natural language processing (NLP) papers that serious students and researchers working in the field should probably know about and read.
The Natural Language Decathlon (decaNLP) is a new benchmark for studying general NLP models that can perform a variety of complex, natural language tasks.
ConceptNet Numberbatch consists of state-of-the-art semantic vectors (also known as word embeddings) that can be used directly as a representation of word meanings or as a starting point for further machine learning.
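The English vectors are distributed in the standard word2vec text format, so they can be loaded directly with gensim. A minimal sketch, assuming you have downloaded a release file such as numberbatch-en.txt (the filename is illustrative):

```python
from gensim.models import KeyedVectors

# Assumes numberbatch-en.txt was downloaded from the ConceptNet
# Numberbatch releases; the exact filename varies by version.
vectors = KeyedVectors.load_word2vec_format("numberbatch-en.txt", binary=False)

print(vectors.most_similar("coffee", topn=5))  # nearest neighbors by cosine similarity
print(vectors.similarity("cat", "dog"))        # pairwise cosine similarity
```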
A Wikipedia-based question answering system for open-topic natural language questions (Knowledge Enhanced Embodied Cognitive Interaction Technology).
In this tutorial we look at the word2vec model by Mikolov et al. This model is used for learning vector representations of words, called "word embeddings".
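As a companion to the tutorial, here is a minimal skip-gram example using gensim's Word2Vec implementation; the corpus and hyperparameters are toy values chosen for illustration:

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus: one tokenized sentence per list.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["dogs", "and", "cats", "are", "pets"],
]

# sg=1 selects the skip-gram architecture described by Mikolov et al.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

print(model.wv["king"][:5])           # first dimensions of a learned embedding
print(model.wv.most_similar("king"))  # neighbors in the toy vector space
```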
A dependency parser analyzes the grammatical structure of a sentence, establishing relationships between "head" words and words which modify those heads.
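For a concrete picture of head/modifier relations, a short sketch with spaCy (one popular dependency parser, not necessarily the tool this entry refers to; assumes the en_core_web_sm model is installed):

```python
import spacy

# One-time model install: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog.")

# Each token points to its syntactic head via a labeled dependency.
for token in doc:
    print(f"{token.text:<6} --{token.dep_:<6}--> {token.head.text}")
```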
Stanford CoreNLP provides a set of human language technology tools. It can give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and syntactic dependencies, indicate which noun phrases refer to the same entities, indicate sentiment, extract particular or open-class relations between entity mentions, extract quotes, etc.
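As a sketch of how the pipeline is typically driven from Python, the stanza package ships a client that talks to a running CoreNLP server; the annotator list and memory setting below are assumptions, and CORENLP_HOME must point at a local CoreNLP distribution for the server to auto-start:

```python
from stanza.server import CoreNLPClient

text = "Stanford University was founded in 1885. It is located in California."

# The client starts (or connects to) a CoreNLP server and returns
# annotations as a protobuf Document.
with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "lemma", "ner"],
                   timeout=30000, memory="4G") as client:
    ann = client.annotate(text)
    for sentence in ann.sentence:
        for token in sentence.token:
            print(token.word, token.pos, token.ner)
```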
In natural language understanding, there is a hierarchy of lenses through which we can extract meaning: from words to sentences to paragraphs to documents. At the document level, one of the most useful ways to understand text is by analyzing its topics.
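A minimal topic-modeling sketch using one widely used approach, latent Dirichlet allocation via gensim; the corpus and topic count are illustrative, and real input would be cleaned and lemmatized first:

```python
from gensim import corpora
from gensim.models import LdaModel

# Tiny pre-tokenized corpus for illustration.
docs = [
    ["stock", "market", "trading", "shares"],
    ["market", "economy", "shares", "bank"],
    ["football", "match", "goal", "team"],
    ["team", "players", "season", "goal"],
]

dictionary = corpora.Dictionary(docs)           # word <-> id mapping
corpus = [dictionary.doc2bow(d) for d in docs]  # bag-of-words counts per document

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)  # each topic as a weighted word mixture
```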
I gave an introductory talk on word embeddings a while back, and this write-up is an extended version of the part about the philosophical ideas behind word vectors.
This post is meant as a summary of many of the concepts that I learned in Marti Hearst's Natural Language Processing class at the UC Berkeley School of Information.
2020-2021 International Conferences in Artificial Intelligence, Machine Learning, Computer Vision, Data Mining, Natural Language Processing and Robotics
L. Flek. Returning the N to NLP: Towards Contextually Personalized Classification Models. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7828--7838. Online, Association for Computational Linguistics, (July 2020)
S. Blodgett, S. Barocas, H. Daumé III, and H. Wallach. Language (Technology) is Power: A Critical Survey of "Bias" in NLP. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5454--5476. Online, Association for Computational Linguistics, (July 2020)
M. Peters, M. Neumann, R. Logan, R. Schwartz, V. Joshi, S. Singh, and N. Smith. Knowledge Enhanced Contextual Word Representations. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 43--54. Hong Kong, China, Association for Computational Linguistics, (November 2019)
S. Bordia and S. Bowman. Identifying and Reducing Gender Bias in Word-Level Language Models. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pp. 7--15. Minneapolis, Minnesota, Association for Computational Linguistics, (June 2019)