Inside everything from computer graphics and neural networks to quantum physics runs a deceptively simple mathematical operation: matrix multiplication. Because it underlies so many computations in engineering and physics, researchers have long sought more efficient ways to multiply matrices together. The standard algorithm, taught to students of linear algebra, computes each entry of the product as a sum of products, requiring n³ scalar multiplications for two n×n matrices, a cubic cost that quickly becomes unwieldy as matrices grow.

In 1969, the German mathematician Volker Strassen discovered an algorithm that multiplies two 2×2 matrices with seven multiplications instead of the usual eight. Applied recursively to large matrices split into 2×2 blocks, that single saved multiplication yields significant computational savings. It was later proven that six or fewer multiplications are impossible for 2×2 matrices, so Strassen's algorithm is optimal in that case, and for over fifty years it remained the best known approach for small matrices.

In October 2022, researchers at Google's DeepMind revealed algorithms that surpass Strassen's for certain matrix sizes, discovered by an AI system called AlphaTensor. Built on the reinforcement learning system AlphaZero, AlphaTensor was trained to decompose the 3D tensor that describes matrix multiplication into as few rank-1 tensors as possible, with each rank-1 term corresponding to one scalar multiplication. It quickly rediscovered Strassen's algorithm and then went further, finding faster algorithms for various matrix sizes. Most notably, AlphaTensor found a way to multiply two 4×4 matrices in arithmetic modulo 2 using only 47 multiplications, compared with the 49 required by applying Strassen's algorithm recursively.

The breakthrough also highlights the potential for AI to assist in mathematical research. Soon after the announcement, the mathematicians Manuel Kauers and Jakob Moosbauer built on AlphaTensor's results, reducing its 96-multiplication algorithm for 5×5 matrices (again modulo 2) to 95. This collaboration between AI and human mathematicians points toward significant advances in the many fields that rely on matrix multiplication, from physics and engineering to computer science.
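To make the eight-versus-seven count concrete, here is a sketch of Strassen's 2×2 scheme in Python (the article contains no code, so this is an illustration, not DeepMind's or Strassen's original presentation). The naive method performs one multiplication per term of each of the four entries of the product, eight in all; Strassen's seven products m1 through m7 are combined with only additions and subtractions to recover the same four entries.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products
    instead of the naive 8."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]

    # Strassen's seven multiplications
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    # Recombine with additions and subtractions only
    return np.array([
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4,           m1 - m2 + m3 + m6],
    ])
```

Applied recursively to matrices split into 2×2 blocks (where each "entry" above is itself a matrix), this reduces the overall complexity from O(n³) to roughly O(n^2.81), since multiplications of blocks dominate the cost while the extra additions are cheap.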
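The tensor-decomposition framing can also be sketched in Python. The code below (an illustrative reconstruction, not AlphaTensor's own code) builds the 4×4×4 tensor describing 2×2 matrix multiplication and checks that Strassen's algorithm, written as seven rank-1 triples, sums exactly to that tensor. Finding such a decomposition with fewer terms is precisely the game AlphaTensor was trained to play.

```python
import numpy as np

# The 2x2 matrix-multiplication tensor T: T[i, j, k] = 1 exactly when
# entry i of A times entry j of B contributes to entry k of C, with each
# matrix's entries flattened row-major (0=(0,0), 1=(0,1), 2=(1,0), 3=(1,1)).
T = np.zeros((4, 4, 4), dtype=int)
for r in range(2):
    for s in range(2):
        for t in range(2):
            T[2 * r + s, 2 * s + t, 2 * r + t] = 1

# Strassen's algorithm as a rank-7 decomposition: each triple (u, v, w)
# encodes one scalar multiplication m = (u . vec(A)) * (v . vec(B)),
# and entry k of the product collects w[k] * m.
strassen_factors = [
    ([1, 0, 0, 1],  [1, 0, 0, 1],  [1, 0, 0, 1]),   # m1 = (a+d)(e+h)
    ([0, 0, 1, 1],  [1, 0, 0, 0],  [0, 0, 1, -1]),  # m2 = (c+d)e
    ([1, 0, 0, 0],  [0, 1, 0, -1], [0, 1, 0, 1]),   # m3 = a(f-h)
    ([0, 0, 0, 1],  [-1, 0, 1, 0], [1, 0, 1, 0]),   # m4 = d(g-e)
    ([1, 1, 0, 0],  [0, 0, 0, 1],  [-1, 1, 0, 0]),  # m5 = (a+b)h
    ([-1, 0, 1, 0], [1, 1, 0, 0],  [0, 0, 0, 1]),   # m6 = (c-a)(e+f)
    ([0, 1, 0, -1], [0, 0, 1, 1],  [1, 0, 0, 0]),   # m7 = (b-d)(g+h)
]

# Summing the seven rank-1 outer products reconstructs T exactly;
# the number of rank-1 terms is the number of scalar multiplications.
reconstruction = sum(
    np.einsum('i,j,k->ijk', np.array(u), np.array(v), np.array(w))
    for u, v, w in strassen_factors
)
print(np.array_equal(reconstruction, T))
```

AlphaTensor searched for decompositions like this one for larger multiplication tensors, which is how a 47-term decomposition translates directly into a 47-multiplication algorithm for 4×4 matrices.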