In this video, Matthew Berman tests DeepSeek R1, a large language model with 671 billion parameters, showcasing its capabilities in reasoning, coding, and problem-solving. He puts the model through a range of tasks, including coding games like Snake and Tetris, and evaluates its logical reasoning and the accuracy of the code it generates. Matthew highlights the model’s human-like thought process and efficiency, demonstrating its potential for developers and AI enthusiasts.

Matthew Berman
August 12, 2025
DeepSeek R1 Model
15:10