This talk explores the integration of Knowledge Graphs (KGs) and Large Language Models (LLMs) to harness their combined power for improved natural language understanding. By pairing the structured knowledge of KGs with the text comprehension abilities of language models, we can combine domain-specific (and potentially sensitive) data with the general knowledge of LLMs.
We also examine how language models can enhance KGs through knowledge extraction and refinement. Integrating these technologies opens up opportunities across domains, from question answering to chatbots, enabling more intelligent and context-aware applications.
I recently built a demo for some prospective clients of mine, showing how to use Large Language Models (LLMs) together with graph databases like Neo4j.
The two interact in interesting ways. Most notably, you can now build knowledge graphs more easily than ever before by having an LLM extract the graph's entities and relationships from your unstructured data, rather than modeling everything by hand.
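To make that concrete, here is a minimal sketch of that extraction step, assuming the OpenAI Python client and the official `neo4j` driver. The prompt wording, the JSON schema, the `gpt-4o-mini` model name, and the local connection details are all illustrative choices, not part of the actual demo:

```python
import json

from neo4j import GraphDatabase  # pip install neo4j
from openai import OpenAI        # pip install openai

# Illustrative prompt; the JSON schema here is an assumption, not a standard.
PROMPT = (
    "Extract entities and relationships from the text below. Respond with JSON "
    'of the form {"entities": [{"name": "...", "label": "..."}], '
    '"relationships": [{"from": "...", "type": "...", "to": "..."}]}.\n\nText:\n'
)

def extract_graph(text: str) -> dict:
    """Ask the LLM to pull entities and relationships out of free text."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model will do
        messages=[{"role": "user", "content": PROMPT + text}],
        response_format={"type": "json_object"},  # force parseable JSON output
    )
    return json.loads(response.choices[0].message.content)

def load_into_neo4j(graph: dict, uri: str = "bolt://localhost:7687",
                    auth: tuple = ("neo4j", "password")) -> None:
    """MERGE the extracted nodes and edges so repeated runs stay idempotent."""
    driver = GraphDatabase.driver(uri, auth=auth)
    with driver.session() as session:
        for e in graph["entities"]:
            session.run("MERGE (n:Entity {name: $name}) SET n.label = $label",
                        name=e["name"], label=e["label"])
        for r in graph["relationships"]:
            # Cypher cannot parameterize relationship types, so the type is
            # stored as a property on a generic RELATED edge instead.
            session.run("MATCH (a:Entity {name: $src}) "
                        "MATCH (b:Entity {name: $dst}) "
                        "MERGE (a)-[:RELATED {type: $type}]->(b)",
                        src=r["from"], dst=r["to"], type=r["type"])
    driver.close()
```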
On top of that, graph databases have some advantages for Retrieval-Augmented Generation (RAG) applications compared to pure vector search, which is currently the prevailing approach to RAG.
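As an illustration of what graph-backed retrieval can look like, here is a hedged sketch that pulls the one-hop neighbourhood of an entity and hands those facts to the LLM as context. The `graph_context` and `answer` helpers and the `RELATED` schema are hypothetical and carry over from the previous snippet:

```python
from neo4j import GraphDatabase
from openai import OpenAI

def graph_context(entity: str, uri: str = "bolt://localhost:7687",
                  auth: tuple = ("neo4j", "password")) -> str:
    """Fetch the one-hop neighbourhood of an entity as plain-text facts."""
    driver = GraphDatabase.driver(uri, auth=auth)
    with driver.session() as session:
        records = session.run(
            "MATCH (a:Entity {name: $name})-[r:RELATED]-(b:Entity) "
            "RETURN a.name AS src, r.type AS rel, b.name AS dst",
            name=entity)
        facts = [f"{rec['src']} {rec['rel']} {rec['dst']}" for rec in records]
    driver.close()
    return "\n".join(facts)

def answer(question: str, entity: str) -> str:
    """Answer a question grounded only in facts retrieved from the graph."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, as above
        messages=[
            {"role": "system", "content": "Answer using only the facts provided."},
            {"role": "user",
             "content": f"Facts:\n{graph_context(entity)}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

One thing this sketch highlights is that graph retrieval is an explicit, inspectable query: you can see exactly which facts were handed to the model, rather than relying on embedding similarity to surface the right chunks.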