Since their introduction in 2017, transformers have revolutionized natural language processing (NLP) and are now finding applications across many areas of deep learning, including computer vision (CV), reinforcement learning (RL), generative adversarial networks (GANs), speech, and even biology. Transformers have enabled powerful language models such as GPT-3 and played a pivotal role in DeepMind’s recent AlphaFold2, which tackles the protein folding problem. This speaker series, offered in collaboration with Stanford Online, delves into how transformers work, the different kinds of transformers, and their applications in different fields, featuring guest lecturers at the forefront of transformer research across these domains.