The latest Llama-3 news reveals impressive advances in open AI models. Unlike many companies that release open models and then retreat behind paywalls, Meta and xAI continue to ship top-tier open-source models: xAI has open-sourced Grok and teased Grok-1.5 Vision, while Meta has introduced Llama-3 with strong benchmark numbers.

Llama-3's highlight is not its architecture but its training methodology. The series includes two open-sourced models, 8B and 70B, with a third model of roughly 400 billion parameters still in training. Both the pre-trained and instruct versions were evaluated on a range of benchmarks with impressive results: the instruct model comes close to GPT-4 Turbo and Claude 3 in performance, trailing mainly in coding and mathematical skills.

Meta's training approach, which pushed far beyond the Chinchilla-optimal token budget with 15 trillion tokens for the 8B model, shows that smaller models can keep improving well past the traditionally "optimal" amount of training data (a rough calculation of this gap appears after this summary). This extensive training has resulted in the 8B model outperforming larger models such as Mixtral and GPT-3.5 Turbo. Meta's decision to train on high-quality data, including 10 million human-annotated examples, has significantly strengthened the model's reasoning capabilities.

Despite the high cost of the R&D, Meta's open-source strategy aims to build an ecosystem that can optimize and run the models more cheaply, potentially saving billions in the long run. The video also mentions Meta's integration of Llama-3 into its platforms, offering web browsing and image generation capabilities. The open-source approach, despite its cost, is framed as a way to foster innovation and competition in the AI landscape.
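To put the Chinchilla point in perspective, here is a minimal back-of-the-envelope sketch. It assumes the widely cited ~20 tokens-per-parameter heuristic from the Chinchilla scaling work, which is an approximation I am supplying for illustration, not a figure from the video:

```python
# Back-of-the-envelope: how far past the Chinchilla-optimal token budget
# Llama-3 8B was trained. The ~20 tokens-per-parameter ratio is the common
# rule of thumb attributed to the Chinchilla paper (Hoffmann et al., 2022);
# treat it as an approximation, not an exact law.

TOKENS_PER_PARAM = 20  # assumed Chinchilla heuristic: ~20 training tokens per parameter

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal training-token count for a given model size."""
    return n_params * TOKENS_PER_PARAM

llama3_8b_params = 8e9    # 8 billion parameters
llama3_8b_tokens = 15e12  # 15 trillion training tokens, per Meta's announcement

optimal = chinchilla_optimal_tokens(llama3_8b_params)
print(f"Chinchilla-optimal budget: {optimal / 1e9:.0f}B tokens")   # ~160B
print(f"Actual training budget:    {llama3_8b_tokens / 1e12:.0f}T tokens")
print(f"Overtraining factor:       ~{llama3_8b_tokens / optimal:.0f}x")  # ~94x
```

The roughly 94x gap this exposes is the crux of Meta's strategy: Chinchilla optimality minimizes training compute for a given loss, but an "overtrained" small model is cheaper to serve, so the extra training cost can be recouped at inference time across the ecosystem.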