A very common workflow is to index some data by its embeddings and then, given a new query embedding, retrieve the most similar examples with k-Nearest Neighbor (kNN) search. For example, you can imagine embedding a large collection of papers by their abstracts and then, given a new paper of interest, retrieving the papers most similar to it.
TLDR: in my experience it ~always works better to use an SVM instead of kNN, if you can afford the slight computational hit.
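Concretely, the idea is to treat the query embedding as the single positive example and all indexed embeddings as negatives, fit a linear SVM, and rank by its decision function instead of by raw cosine similarity. Here is a minimal sketch with NumPy and scikit-learn; the random embeddings stand in for real data, and the hyperparameters (`C=0.1`, `class_weight='balanced'`) are illustrative choices, not tuned values:

```python
import numpy as np
from sklearn import svm

np.random.seed(42)
query = np.random.randn(1024)              # the query embedding
embeddings = np.random.randn(1000, 1024)   # 1000 indexed documents

# L2-normalize so dot products are cosine similarities
query = query / np.linalg.norm(query)
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# --- kNN retrieval: rank by cosine similarity to the query ---
knn_similarities = embeddings @ query
knn_ranked = np.argsort(-knn_similarities)  # indices of most similar docs first

# --- SVM retrieval: query is the lone positive, all docs are negatives ---
x = np.concatenate([query[None, :], embeddings])  # (1001, 1024)
y = np.zeros(1001)
y[0] = 1  # only the query is labeled positive
clf = svm.LinearSVC(class_weight='balanced', max_iter=10000, tol=1e-6, C=0.1)
clf.fit(x, y)

# rank everything by signed distance to the learned hyperplane
svm_similarities = clf.decision_function(x)
svm_ranked = np.argsort(-svm_similarities)
```

Note that `svm_ranked[0]` is the query itself (index 0 in `x`), so the retrieved neighbors start at `svm_ranked[1]`. Intuitively, the SVM learns a direction that separates the query from everything else, which weights the discriminative dimensions of the query rather than all dimensions equally.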