A very common workflow is to index some data by its embeddings and then, given a new query embedding, retrieve the most similar examples with k-Nearest Neighbor (kNN) search. For example, imagine embedding a large collection of papers by their abstracts; given a new paper of interest, you retrieve the papers most similar to it.
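As a minimal sketch of the kNN retrieval step (function name and use of cosine similarity are my choices, not something fixed by the workflow itself):

```python
import numpy as np

def knn_retrieve(query, embeddings, k=5):
    """Return indices of the k embeddings most similar to the query."""
    # normalize everything so the dot product equals cosine similarity
    q = query / np.linalg.norm(query)
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = E @ q                     # cosine similarity to each row
    return np.argsort(-sims)[:k]    # indices, most similar first
```

For large collections you would swap the exhaustive dot product for an approximate index (e.g. FAISS), but the ranking logic is the same.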
TL;DR: in my experience it almost always works better to use an SVM instead of kNN, if you can afford the slight computational hit.
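One common way to do this (a sketch, assuming scikit-learn; the hyperparameters here are illustrative, not prescribed by the text): fit a linear SVM with the query embedding as the single positive example and the whole dataset as negatives, then rank the dataset by the SVM's decision function. Unlike kNN, the SVM learns a weight vector that emphasizes the directions that discriminate the query from the rest of the data.

```python
import numpy as np
from sklearn import svm

def svm_retrieve(query, embeddings, k=5):
    """Rank embeddings by a linear SVM trained with the query as the
    lone positive and every dataset row as a negative."""
    X = np.concatenate([query[None, :], embeddings])
    y = np.zeros(len(X))
    y[0] = 1  # only the query is labeled positive
    clf = svm.LinearSVC(class_weight="balanced", C=0.1, max_iter=10000)
    clf.fit(X, y)
    scores = clf.decision_function(embeddings)  # higher = more query-like
    return np.argsort(-scores)[:k]
```

The "slight computational hit" is the SVM fit per query; for moderate dataset sizes this is fast, and the exemplar-SVM-style ranking often beats raw cosine similarity.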