In this talk, Ramin Hasani and Daniela Rus from MIT introduce Liquid Time-Constant (LTC) Networks, a novel continuous-time neural network model. Rather than stacking static layers, these networks are built from linear first-order dynamical systems modulated by nonlinear, interlinked gates, which yields stable, bounded behavior and greater expressivity than other neural ordinary differential equation models (the state equation is sketched below).

The presentation argues that artificial intelligence needs new ideas, and that inspiration from natural systems can lead to more compact, sustainable, and explainable models. By comparing neural activity in biological brains with activations in artificial networks, the speakers highlight differences in representation learning and robustness. They argue that natural brains' continual interaction with their environment affords an understanding and control that statistical machine learning often lacks, since it generalizes reliably only within the training distribution.

Inspired by these biological processes, LTC networks combine continuous-time dynamics with nonlinear synaptic interactions, producing models that are more expressive and better at capturing the causal structure of data. They show improved performance on tasks such as time-series prediction and autonomous driving, along with robustness to input perturbations.

The talk also covers implementation: the models are integrated with numerical differential-equation solvers (a minimal solver step is sketched after this summary), and the speakers discuss the advantages of continuous-time processing over discrete-time updates. LTC networks are shown to be universal approximators and can realize both explicit and implicit memory mechanisms. The speakers present empirical evidence of LTCs outperforming advanced recurrent neural networks on a range of tasks and explain the theoretical underpinnings of their expressivity.

They conclude that the causal structure and robustness of LTC networks make them well suited to modeling real-world decision-making, and they close with a call to further explore the intersection of neuroscience and machine learning, leveraging structured insights from biological systems to advance AI models.
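For concreteness, the LTC state equation from the accompanying paper (Hasani et al., "Liquid Time-constant Networks", AAAI 2021) can be written, up to notation, as follows, where x(t) is the hidden state, I(t) the input, τ a vector of base time constants, A a bias vector, and f a bounded nonlinearity applied elementwise:

$$
\frac{d\mathbf{x}(t)}{dt} \;=\; -\left[\frac{1}{\tau} + f\bigl(\mathbf{x}(t), \mathbf{I}(t), t, \theta\bigr)\right] \odot \mathbf{x}(t) \;+\; f\bigl(\mathbf{x}(t), \mathbf{I}(t), t, \theta\bigr) \odot A
$$

Because f is bounded and non-negative, the effective time constant of each unit varies with the input (hence "liquid") while the state itself stays bounded, which is the source of the stability claim above.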
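As a minimal sketch of how such a model might be stepped with a solver, the NumPy snippet below implements a fused semi-implicit Euler update of the equation above, treating the state linearly (implicitly) and the gate explicitly. The parameter names (`W`, `W_in`, `mu`, `A`) and the sigmoid form of the gate are illustrative assumptions, not the authors' exact parameterization:

```python
import numpy as np

def ltc_fused_step(x, I, dt, tau, W, W_in, mu, A):
    """One fused semi-implicit Euler step of an LTC cell.

    x:   hidden state, shape (n,)
    I:   input at this step, shape (m,)
    tau: base time constants, shape (n,)
    W, W_in, mu, A: illustrative gate parameters and bias vector.
    """
    # Bounded nonlinear gate f(x, I): a sigmoid of recurrent and
    # input synaptic drive (one common choice; details vary).
    f = 1.0 / (1.0 + np.exp(-(W @ x + W_in @ I + mu)))
    # Fused update: solving the linear part implicitly gives
    # x_next = (x + dt * f * A) / (1 + dt * (1/tau + f)),
    # which keeps the state bounded even for large inputs or steps.
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy usage: unroll the cell over a random input sequence.
rng = np.random.default_rng(0)
n, m, T, dt = 8, 3, 50, 0.1
params = dict(tau=np.ones(n),
              W=0.1 * rng.standard_normal((n, n)),
              W_in=0.1 * rng.standard_normal((n, m)),
              mu=np.zeros(n),
              A=np.ones(n))
x = np.zeros(n)
for t in range(T):
    x = ltc_fused_step(x, rng.standard_normal(m), dt, **params)
print(x)  # state remains bounded regardless of input scale
```

The fused step avoids the stiffness problems a plain explicit Euler step would have here, which is one reason the talk emphasizes solver choice when implementing continuous-time models.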