Dr. Antonio Vergari
Reader (Associate Professor), University of Edinburgh
Speaker: Dr. Antonio Vergari
Date: 18-02-2025, 2pm-3pm (BST)
Location: Mathematical Sciences Building, MB0.08, University of Warwick, Coventry, UK
Subtractive Mixture Models: Representation, Learning and Inference
Abstract
Mixture models are traditionally represented and learned by adding several distributions as components. Allowing mixtures to subtract probability mass or density can drastically reduce the number of components needed to model complex distributions. However, learning such subtractive mixtures while ensuring they still encode a non-negative function is challenging. We investigate how to learn and perform inference on deep subtractive mixtures by squaring them. We do this in the framework of probabilistic circuits, which enables us to represent tensorized mixtures and to generalize several other subtractive models, such as positive semi-definite kernel models and Born machines. We theoretically prove that the class of sum-of-squares circuits, which allows subtractions, can be exponentially more expressive than traditional additive mixtures. We empirically show this increased expressiveness on a series of real-world distribution estimation tasks and discuss which inference scenarios are tractable with this new class of circuits. Finally, I will talk about how to use these subtractive mixtures for approximate inference when plugged into Monte Carlo and importance sampling estimators.
About Dr. Antonio Vergari
Antonio Vergari loves probabilistic machine learning and equally loves to tease the probabilistic machine learning community (much easier with the verification community these days) about how desperately we need efficient and reliable machine learning systems. He would like to find a unifying framework for complex reasoning. On many days, he believes this can be done with some form of circuits.