In natural language understanding, there is a hierarchy of lenses through which we can extract meaning: from words to sentences to paragraphs to documents. At the document level, one of the most useful ways to understand a text is to analyze its topics.
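As a rough illustration of what document-level topic analysis looks like in practice, here is a minimal sketch using scikit-learn's LDA implementation. The toy corpus and the choice of two topics are my own assumptions for demonstration, not something taken from the original text.

```python
# Minimal sketch: document-level topic analysis with LDA (scikit-learn).
# The corpus and topic count below are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat with another cat",
    "stock markets fell as investors sold shares",
    "the dog chased the cat across the yard",
    "the central bank raised interest rates again",
]

# Bag-of-words counts per document.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit a two-topic LDA model (topic count chosen for illustration).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-document topic mixtures

# Print the top words for each topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {top_words}")
```

Each row of `doc_topics` is a probability distribution over topics for one document, which is the document-level "lens" on meaning described above.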
I gave an introductory talk on word embeddings some time ago, and this write-up is an extended version of the part about the philosophical ideas behind word vectors.