A captivating visualization by the Financial Times that provides an in-depth look at how transformers work in Generative AI. It offers insights into the mechanics and intricacies of transformer architectures, showcasing the beauty of today's Large Language Models (LLMs).
The documentation details the SubQuestionQueryEngine in the LlamaIndex library. This query engine breaks a complex query down into multiple sub-questions, each of which is routed to a target query engine for execution. The sub-question responses are then synthesized into the final response.
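The decomposition-and-synthesis flow described above can be sketched in plain Python. This is an illustration of the pattern only, not LlamaIndex's actual API; the function names, the toy per-source engines, and the sample data are all hypothetical.

```python
# Sketch of the sub-question pattern: route each sub-question to its
# designated engine, collect the answers, then combine them.
# All names and data here are illustrative, not the LlamaIndex API.

def sub_question_query(sub_questions, engines):
    """Execute each (sub_question, engine_name) pair against its engine,
    then synthesize the answers into one response."""
    answers = []
    for sub_q, engine_name in sub_questions:
        answers.append((sub_q, engines[engine_name](sub_q)))
    # A real engine would call an LLM to synthesize; here we just join.
    return " | ".join(f"{q} -> {a}" for q, a in answers)

# Toy per-source "query engines": plain functions over hard-coded facts.
engines = {
    "uber_10k": lambda q: "Uber revenue grew 33%.",
    "lyft_10k": lambda q: "Lyft revenue grew 21%.",
}

result = sub_question_query(
    [("What was Uber's revenue growth?", "uber_10k"),
     ("What was Lyft's revenue growth?", "lyft_10k")],
    engines,
)
print(result)
```

In the real library, an LLM both generates the sub-questions from the original query and performs the final synthesis step; the sketch hard-codes both to keep the routing logic visible.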
Multi-Column Markdown is a document formatting plugin for the ObsidianMD note-taking application. It was created by Cameron Robinson to fill a gap in Obsidian's functionality, and it has been released as an official plugin for Obsidian.
Without RAG, an LLM is only as smart as the data it was trained on. In other words, an LLM can only generate text based on what it has "seen" during training; it cannot pull in new information after the training cut-off. Sam Altman stated "the right way to think of the models that we create is a reasoning engine, not a fact database." Essentially, we should use the language model for its reasoning ability, not for the knowledge it holds.
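The core RAG move, retrieving fresh context and placing it in the prompt so the model reasons over facts it never saw in training, can be sketched in a few lines. The term-overlap retriever and the sample corpus below are toy stand-ins, not any particular library's implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank documents by
# term overlap with the query, then stuff the best match into the prompt.
# The corpus and scoring function are illustrative assumptions.

def retrieve(query, corpus, k=1):
    """Rank documents by how many lowercased query terms they share."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend retrieved context so the LLM acts as a reasoning engine
    over supplied facts rather than a fact database."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The Eiffel Tower was repainted in 2024.",
    "Transformers use self-attention to process tokens in parallel.",
]
prompt = build_prompt("When was the Eiffel Tower repainted?", corpus)
print(prompt)
```

Production systems replace the term-overlap scorer with embedding similarity over a vector store, but the prompt-assembly step is the same idea.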
This article discusses the role of government regulation in AI ethics, emphasizing the need for a combined community and top-down approach for the development of AI systems.