In this episode of Machine Learning Street Talk, the hosts discuss a groundbreaking paper, “What’s the Magic Word? A Control Theory of LLM Prompting,” with authors Aman Bhargava from Caltech and Cameron Witkowski from the University of Toronto. The paper frames large language models (LLMs) as discrete stochastic dynamical systems and applies control theory to understand and manipulate their outputs. The researchers explore the ‘reachable set’ of outputs for an LLM — the set of outputs that some prompt can elicit — and demonstrate that prompt engineering can significantly influence LLM behavior. They discuss the surprising flexibility of LLMs and the potential for adversarial prompts to drastically alter outputs. The conversation covers both the theoretical and empirical sides of the work, including the development of a control theory framework and experiments demonstrating the controllability of LLMs.

The researchers also delve into broader topics such as collective intelligence, biomimetic intelligence, and the potential for decentralized AI systems. They introduce the Society for the Pursuit of AGI, a student organization aimed at exploring innovative ideas in AI. The episode concludes with reflections on the challenges of the peer review process and future research directions.
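To make the ‘reachable set’ idea concrete, here is a minimal toy sketch — not the paper’s method or models — using a hypothetical deterministic next-token map over a three-token vocabulary. It brute-forces all control prompts up to a given length and records which next tokens they can steer the system to, mirroring the control-theoretic question of what outputs are reachable from a fixed state:

```python
from itertools import product

# Toy stand-in for an LLM: a deterministic next-token map over a tiny
# vocabulary. Purely illustrative; the paper studies real LLMs.
VOCAB = ["a", "b", "c"]

def next_token(prompt):
    # Hypothetical dynamics: the next token depends on the last two tokens.
    key = sum(VOCAB.index(t) for t in prompt[-2:]) % len(VOCAB)
    return VOCAB[key]

def reachable_set(state, k):
    """All outputs reachable from `state` under some control prompt u, |u| <= k."""
    reachable = set()
    for length in range(k + 1):
        for u in product(VOCAB, repeat=length):
            # The control prompt u is prepended to the fixed state sequence.
            reachable.add(next_token(list(u) + list(state)))
    return reachable

print(reachable_set(["a"], 0))  # no control prompt: only one output
print(reachable_set(["a"], 2))  # short prompts already reach every token
```

Even in this toy system, a short control prompt expands the reachable set from a single output to the whole vocabulary — a miniature version of the paper’s finding that short prompts can steer LLM outputs surprisingly far.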