In this video, the host of code_your_own_AI examines how well commercially available legal AI RAG (Retrieval-Augmented Generation) systems actually perform, drawing on an evaluation by Stanford University. The video opens by addressing a comment on a previous video about the reliability of legal AI systems that claim to be free of hallucinations. The host references a study by Stanford and Yale researchers that evaluates three popular legal AI systems, focusing on their accuracy and the frequency of hallucinations. The study finds that, despite being marketed as hallucination-free, these systems still hallucinate in a significant share of cases.

The host then explains the technology behind these systems and the limitations of current RAG implementations: how RAG is applied in legal AI, and the role of vector stores and embeddings in retrieving relevant documents (a minimal retrieval sketch follows below). According to Stanford's findings, even the best commercial legal AI systems reach accuracy rates of only 19% to 65%, with hallucinations occurring in at least 1 in 6 queries. The host discusses what this means for sensitive fields like law, where accuracy is crucial.

The video also explores the inherent limitations of RAG systems and the potential benefits of newer approaches such as grokked Transformers, which excel at reasoning and causal logic. The host concludes by encouraging viewers to weigh these technological advances and the need for more reliable AI systems in legal and other critical domains.
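
To make the vector-store-and-embeddings idea concrete, here is a minimal retrieval sketch in Python. It is illustrative only: the embed function, the sample documents, and the in-memory similarity search are assumptions for demonstration, not the pipeline of any of the commercial tools evaluated in the study.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical embedding function: a real system would call an embedding
    # model here. We use a pseudo-random unit vector derived from the text
    # (stable within a single run) so the example is self-contained.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

# A tiny in-memory "vector store": each document chunk is stored together
# with its embedding.
documents = [
    "Case A: the court held that the statute of limitations was tolled.",
    "Case B: summary judgment was granted for lack of standing.",
    "Case C: the appellate court reversed on procedural grounds.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank stored chunks by cosine similarity (dot product of unit vectors)
    # against the query embedding and return the top-k chunks.
    q = embed(query)
    scored = sorted(index, key=lambda item: float(q @ item[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

context = retrieve("Which case discusses standing?")
# The retrieved chunks are then placed into the LLM prompt as context.
# Note: the generation step is where hallucinations can still arise,
# e.g. if the retrieved passages are irrelevant or misinterpreted.
print(context)
```

The retrieval step itself only finds similar text; as the video stresses, nothing in this pattern guarantees that the model's final answer is faithful to the retrieved sources, which is why the evaluated tools still hallucinate.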