This article discusses the role of government regulation in AI ethics, arguing that responsible development of AI systems requires combining bottom-up community involvement with top-down regulation.
The 'gpt-researcher' GitHub repository by assafelovic hosts GPT Researcher, an autonomous agent that uses GPT models to conduct online research on a given topic and compile the findings into a report.
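A rough usage sketch based on the repo's README (the exact API may have changed since; the query string is a placeholder, and OpenAI/search API keys are assumed to be set in the environment):

```python
import asyncio

from gpt_researcher import GPTResearcher


async def main():
    # Placeholder query; the agent plans sub-queries, scrapes sources, and aggregates them
    researcher = GPTResearcher(query="What are the latest trends in RAG?",
                               report_type="research_report")
    await researcher.conduct_research()   # gather and curate sources
    report = await researcher.write_report()  # synthesize the findings into a report
    print(report)


asyncio.run(main())
```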
A captivating visualization by the Financial Times that provides an in-depth understanding of how transformers work in Generative AI. It offers insight into the mechanics and intricacies of the transformer architecture that powers today's Large Language Models (LLMs).
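The core operation the visualization walks through is self-attention. A minimal numpy sketch of scaled dot-product attention, with toy dimensions and random weights, purely for illustration:

```python
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V                                    # weighted mix of value vectors


# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                               # token embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))  # random projection weights
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one context-aware vector per token
```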
An article discussing the importance of ranking models in search engines and how Weaviate, an open-source vector database, has introduced a new feature that lets users plug in their own ranking models to re-score search results.
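As a rough illustration of the pattern (not necessarily the exact API the article describes), here is how reranking can be invoked with the v3 Weaviate Python client, assuming a local instance with a vectorizer and a reranker module enabled and an `Article` class already populated:

```python
import weaviate

# Assumes a local Weaviate instance with vectorizer + reranker modules configured
client = weaviate.Client("http://localhost:8080")

result = (
    client.query
    .get("Article", ["title", "content"])
    .with_near_text({"concepts": ["vector search"]})      # initial vector retrieval
    # The rerank operator re-scores the candidates with the configured ranking model
    .with_additional('rerank(property: "content" query: "vector search") { score }')
    .with_limit(10)
    .do()
)
print(result)
```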
This documentation details the SubQuestionQueryEngine in the LlamaIndex library. The query engine decomposes a complex query into multiple sub-questions, routes each one to its target query engine for execution, and then synthesizes the sub-answers into the final response.
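A minimal usage sketch in the spirit of the LlamaIndex docs (the `./data` path and tool name are placeholders, an OpenAI API key is assumed in the environment, and exact import paths vary across LlamaIndex versions):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.tools import QueryEngineTool, ToolMetadata

# Build a base query engine over some local documents ("./data" is a placeholder)
docs = SimpleDirectoryReader("./data").load_data()
base_engine = VectorStoreIndex.from_documents(docs).as_query_engine()

# Wrap it as a tool so the sub-question engine knows where to route sub-questions
tools = [
    QueryEngineTool(
        query_engine=base_engine,
        metadata=ToolMetadata(
            name="docs",
            description="Answers questions about the local document set",
        ),
    )
]

# Decomposes the query into sub-questions, runs each against its target tool,
# then synthesizes the sub-answers into one final response
engine = SubQuestionQueryEngine.from_defaults(query_engine_tools=tools)
response = engine.query("Compare the two reports and summarize the key differences")
print(response)
```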
The paper discusses the capabilities of large pre-trained language models and their limitations in accessing and precisely manipulating knowledge. The authors introduce retrieval-augmented generation (RAG) models, which combine pre-trained parametric memory (a seq2seq model) with non-parametric memory (a dense vector index accessed by a neural retriever) for language generation. The study evaluates RAG models on a range of knowledge-intensive NLP tasks and compares them with other architectures.
Without RAG, an LLM is only as smart as the data it was trained on. Meaning, LLMs can only generate text based purely on what they've "seen", rather than pulling in new information from after the training cut-off. Sam Altman stated, "the right way to think of the models that we create is a reasoning engine, not a fact database." Essentially, we should use the language model only for its reasoning ability, not for the knowledge it stores.
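The pattern the paper and this argument describe boils down to a few lines: retrieve relevant text from the non-parametric memory (an index), then let the model reason over it. A self-contained sketch with a toy character-histogram retriever and a stubbed generator standing in for the real LLM call:

```python
import numpy as np

# Toy corpus acting as the non-parametric memory
corpus = [
    "Weaviate is an open-source vector database.",
    "RAG combines retrieval with a generative model.",
    "Transformers use self-attention to mix token information.",
]


def embed(text: str) -> np.ndarray:
    # Stand-in embedding: a character histogram (a real system uses a learned encoder)
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)


def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity against every document; return the top-k passages
    q = embed(query)
    scores = [q @ embed(doc) for doc in corpus]
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]


def generate(prompt: str) -> str:
    # Placeholder for the parametric memory (the LLM call); just echoes the prompt here
    return f"[LLM would answer from]\n{prompt}"


query = "What does RAG do?"
context = "\n".join(retrieve(query))
# The model reasons over retrieved facts instead of relying on training data alone
print(generate(f"Context:\n{context}\n\nQuestion: {query}"))
```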