The Foundations of AI Seminar Series is dedicated to topics of interest in artificial intelligence and machine learning, both empirical and theoretical, as well as related areas. Our goal is for these meetings to serve as a forum for discussion and the quick dissemination of results. We invite anyone interested in the latest advances in AI/ML to join us!
Towards a theory of scaling in deep learning
Speaker: Dr. Leena Chennuru Vankadara
Date: 11-02-2026, 11am–12pm (BST)
Location: Department of Computer Science CS1.04, University of Warwick, Coventry, UK

Abstract
Scaling computational resources is essential for advancing modern deep learning, enabling both quantitative improvements and the qualitative emergence of new behaviors. This talk presents recent theoretical progress on using scaling limits to derive principled scaling rules, demonstrating their impact on stability, feature learning, and hyperparameter predictability. We further address discrepancies between infinite-width theory and practice under standard parameterization (SP), revealing a novel “controlled divergence” regime that explains empirical phenomena, including stable feature evolution and learning-rate scaling behavior.
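As background for the learning-rate scaling discussed above: the talk itself defines the parameterizations precisely, but a minimal illustrative sketch of how per-layer scales can change with width is shown below. The function `init_and_lr`, the constants, and the simplified rules are assumptions for illustration only (e.g., the widely cited muP prescription under Adam-style updates scales hidden-layer learning rates like 1/width, while SP keeps them width-independent); they are not the speaker's results.

```python
def init_and_lr(width, base_lr=0.1, param="SP"):
    """Illustrative per-layer scales for a hidden weight matrix.

    SP:  init std ~ 1/sqrt(width), learning rate constant in width.
    muP: init std ~ 1/sqrt(width), hidden-layer learning rate ~ base_lr / width
         (simplified; assumes Adam-style updates, and the real rules
         are per-layer-type).
    """
    std = width ** -0.5
    lr = base_lr if param == "SP" else base_lr / width
    return std, lr

# Under SP the tuned learning rate drifts as width grows; under the
# muP-style rule the base learning rate transfers across widths.
for w in (256, 1024, 4096):
    _, lr_sp = init_and_lr(w, param="SP")
    _, lr_mup = init_and_lr(w, param="muP")
    print(f"width={w:5d}  SP lr={lr_sp:.4f}  muP-style lr={lr_mup:.6f}")
```

This is only a cartoon of the scaling-rule idea; the talk develops the scaling-limit theory from which such rules are actually derived.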
About Dr. Leena Chennuru Vankadara
Leena Chennuru Vankadara is a Lecturer at the Gatsby Computational Neuroscience Unit at UCL. Previously, she was an Applied Scientist at Amazon Research. She earned her Ph.D. in the theory of machine learning from the International Max Planck Institute for Intelligent Systems. Her research focuses on the theory of deep learning, including scaling, generalization, and learning dynamics.