In this video, the Weaviate team introduces the latest version of Verba, an open-source Retrieval Augmented Generation (RAG) application. Verba 1.0 now integrates with Ollama, letting users run large language models (LLMs) locally and for free. The video explains the benefits of RAG systems, such as reduced hallucinations, easily updatable data, and increased transparency. Verba lets users customize the UI, cache conversations, and inspect the context the LLM used to generate each answer. The team demonstrates Verba's application in the healthcare domain and on company-specific data, showcasing its ability to handle sensitive information and generate accurate responses. Installation options include installing via pip, cloning the open-source repo, or using Docker. The video provides a step-by-step guide to setting up Verba and configuring it to use a local Ollama model. The team encourages viewers to contribute to the open-source project and explore career opportunities at Weaviate.
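
Since the setup in the video involves pointing Verba at a locally running Ollama instance, a quick sanity check of that endpoint can save debugging time. The sketch below is a minimal, hedged example: the environment variable names (OLLAMA_URL, OLLAMA_MODEL) and the default model name are assumptions for illustration rather than a confirmed Verba configuration; /api/tags is Ollama's endpoint for listing locally pulled models.

```python
import json
import os
import urllib.request

# Assumed defaults: Ollama serves on localhost:11434, and Verba reads the
# endpoint and model name from environment variables. The variable names and
# default model below are illustrative -- check Verba's README for exact keys.
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "llama3")


def ollama_is_ready(url: str, model: str) -> bool:
    """Return True if the local Ollama server is up and has `model` pulled."""
    try:
        # /api/tags returns {"models": [{"name": "llama3:latest", ...}, ...]}
        with urllib.request.urlopen(f"{url}/api/tags", timeout=5) as resp:
            tags = json.load(resp)
    except OSError:
        return False
    names = [m.get("name", "") for m in tags.get("models", [])]
    return any(name.startswith(model) for name in names)


if __name__ == "__main__":
    if ollama_is_ready(OLLAMA_URL, OLLAMA_MODEL):
        print(f"Ollama reachable at {OLLAMA_URL}; '{OLLAMA_MODEL}' is available.")
    else:
        print("Ollama not reachable or model missing; try `ollama serve` "
              "and `ollama pull <model>` before launching Verba.")
```

If the check fails, the usual fixes are starting the Ollama server (`ollama serve`) and pulling the desired model (`ollama pull llama3`) before starting Verba.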

Weaviate • Vector Database
June 12, 2024
GitHub: Verba