A very common workflow is to index some data based on its embeddings and then, given a new query embedding, retrieve the most similar examples with k-Nearest Neighbor (kNN) search. For example, you can imagine embedding a large collection of papers by their abstracts and then, given a new paper of interest, retrieving the papers most similar to it.
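For concreteness, here is a minimal sketch of that retrieval step. The data is made up for illustration: it assumes `embeddings` is an (N, D) array of L2-normalized document embeddings and `query` is a single L2-normalized query embedding, so the dot product is cosine similarity.

```python
import numpy as np

# Hypothetical data: 1000 documents embedded into 256 dims, plus one query.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 256))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
query = rng.standard_normal(256)
query /= np.linalg.norm(query)

# kNN retrieval: score every document against the query, take the top k.
k = 10
similarities = embeddings @ query        # cosine similarity (unit-norm vectors)
top_k = np.argsort(-similarities)[:k]    # indices of the k most similar documents
print(top_k)
```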
TLDR: in my experience it ~always works better to use an SVM instead of kNN, if you can afford the slight computational hit.
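One way to do this (a sketch, not the only formulation) is an exemplar SVM: treat the query as the single positive example and all indexed embeddings as negatives, train a linear SVM, and rank documents by the classifier's decision function instead of raw cosine similarity. The snippet below reuses `embeddings`, `query`, and `k` from above; the hyperparameters (`C=0.1`, `max_iter`, `tol`) are illustrative choices, not tuned values.

```python
from sklearn import svm
import numpy as np

# Build a tiny binary "dataset": the query is the lone positive (row 0),
# every indexed document is a negative.
x = np.concatenate([query[None, :], embeddings])   # (N+1, D)
y = np.zeros(len(x))
y[0] = 1

# Train an exemplar SVM; class_weight='balanced' compensates for the 1-vs-N imbalance.
clf = svm.LinearSVC(class_weight="balanced", C=0.1, max_iter=10000, tol=1e-6)
clf.fit(x, y)

# Rank documents by signed distance to the learned hyperplane.
scores = clf.decision_function(embeddings)
top_k = np.argsort(-scores)[:k]
print(top_k)
```

Intuitively, the SVM learns a direction that separates the query from the bulk of the data, which weights the discriminative dimensions of the query embedding more heavily than a plain dot product does; the cost is one small linear-SVM fit per query.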