Invited Talks and Tutorials

Invited Speakers

The chairs of PGM 2016 are pleased to announce the following invited speakers.

Guido Consonni

Bayesian Selection of Essential Graphical Models

Guido Consonni is Full Professor of Statistics at Università Cattolica del Sacro Cuore, Milan, Italy. His current research topics include model selection, graphical models and Bayesian networks, objective Bayesian analysis, models for high-dimensional data, and hierarchical models. He is the author of over 50 papers and has published in, among others, the Annals of Statistics, the Journal of the Royal Statistical Society B, Biometrika, the Journal of the American Statistical Association, Statistical Science, the Scandinavian Journal of Statistics, the Journal of Multivariate Analysis, and Statistica Sinica. He is Chair of the Objective Bayes Section of the International Society for Bayesian Analysis (ISBA) (2016–2018).

Fabio Cozman

The Finite Model Theory of Bayesian Networks: An Invitation

Finite model theory studies the connection between the syntax of formal (usually logical) languages and their finite semantics. A similar study is possible for Bayesian networks, particularly when we look at their relational variants. For instance: What is the connection between a specification language and the complexity of inferences? How hard is it to verify that a specification in fact defines a distribution? How can we quantify the expressivity and learnability of specification languages? If a language allows the size of networks to grow, can we prove convergence results for probabilities? Results on these issues are scattered throughout the literature. We propose here a unified take on the finite model theory of Bayesian networks; a complete synthesis would be very useful in guiding the development of representation formalisms, and also a major theoretical accomplishment.

Fabio G. Cozman is Full Professor at Universidade de São Paulo, Brazil, where he works on probabilistic reasoning and machine learning, with a special interest in formalisms that extend probability theory. He received his PhD in Robotics from Carnegie Mellon University, and has served, among other activities, as Program and General Chair of the Conference on Uncertainty in Artificial Intelligence, Area Chair (Machine Learning track) of the International Joint Conference on Artificial Intelligence, member of the Editorial Board of the Artificial Intelligence Journal, and Associate Editor of the Journal of Artificial Intelligence Research and of the International Journal of Approximate Reasoning.

Adnan Darwiche

Learning Arithmetic Circuits with Background Knowledge

Over the past few decades, various approaches have been introduced for learning probabilistic models, depending on whether the examples are labeled or unlabeled, and whether they are complete or incomplete. In this talk, I will introduce an orthogonal class of machine learning problems, which has not previously been treated as systematically. In these problems, one has access to Boolean constraints that characterize examples known to be impossible (e.g., due to known domain physics). The task is then to learn a tractable probabilistic model that is guaranteed to assign a zero probability to each impossible example.

I will describe a new class of Arithmetic Circuits, the PSDD, for addressing this class of learning problems. The PSDD is based on advances from both machine learning and logical reasoning and can be learned under Boolean constraints. I will also provide a number of results on learning PSDDs. First, I will contrast PSDD learning with approaches that ignore known constraints, showing how it can learn more accurate models. Second, I will show that PSDDs can be utilized to learn, in a domain-independent manner, distributions over combinatorial objects, such as rankings, game traces and routes on a map. Third, I will show how PSDDs can be learned from a new type of dataset, in which examples are specified using arbitrary Boolean expressions. A number of preliminary case studies will be illustrated throughout the talk, including the unsupervised learning of preference rankings and the supervised learning of classifiers for routes and game traces.
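To make the zero-probability guarantee concrete (without presuming anything about the PSDD machinery itself), here is a minimal Python sketch of constraint-aware density estimation over a toy Boolean domain; the constraint and data are hypothetical:

```python
# Illustrative sketch only, not the PSDD algorithm: a Laplace-smoothed
# maximum-likelihood estimate over a small Boolean domain, restricted to
# assignments permitted by a known constraint. Every impossible example
# receives exactly zero probability, by construction.
from itertools import product
from collections import Counter

def constraint(x):
    # Hypothetical domain knowledge: x0 and x1 can never both be true.
    return not (x[0] and x[1])

def learn(data, n_vars, alpha=1.0):
    feasible = [x for x in product((False, True), repeat=n_vars) if constraint(x)]
    counts = Counter(data)
    weights = {x: counts.get(x, 0) + alpha for x in feasible}
    total = sum(weights.values())
    # Assignments violating the constraint are simply absent: probability zero.
    return {x: w / total for x, w in weights.items()}

data = [(False, True), (True, False), (False, False), (False, True)]
model = learn(data, n_vars=2)
print(model.get((True, True), 0.0))  # 0.0 -- the impossible example
```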

Adnan Darwiche is a professor and former chairman of the computer science department at UCLA. He earned his PhD and MS degrees in computer science from Stanford University in 1993 and 1989, respectively. His research focuses on the theory and practice of symbolic and probabilistic reasoning, with recent applications to machine learning. Professor Darwiche served as editor-in-chief of the Journal of Artificial Intelligence Research (JAIR) and is an AAAI Fellow. He is also the author of “Modeling and Reasoning with Bayesian Networks,” published by Cambridge University Press in 2009.

Tutorials

Cassio P. de Campos and Johan Kwisthout

Computational Complexity of Bayesian Networks and Markov Random Fields

Download slides and lecture notes.

Computations such as evaluating posterior probability distributions and finding joint value assignments with maximum posterior probability are of great importance in practical applications of probabilistic graphical models. These computations, however, are intractable in general, both when the results are computed exactly and when they are approximated. In order to successfully apply probabilistic graphical models in practical situations, it is crucial to understand what does and what does not make such computations (exact or approximate) hard. In this tutorial we give an overview of the necessary theoretical concepts, such as probabilistic Turing machines, oracles, and approximation strategies, and guide the audience through some of the most important computational complexity proofs. After the tutorial, participants will have gained insight into the boundary between ‘tractable’ and ‘intractable’ in Bayesian networks and Markov Random Fields.
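To illustrate where these complexity questions come from, here is a minimal Python sketch (a toy network with hypothetical CPTs, not material from the tutorial) of exact posterior computation by brute-force enumeration, whose cost grows exponentially in the number of variables:

```python
# Minimal sketch: exact posterior computation by brute-force enumeration
# in a toy network A -> B, A -> C with hypothetical CPTs. The summation
# ranges over all joint assignments, so it grows exponentially with the
# number of variables -- the blowup that the complexity results formalize.
from itertools import product

p_a = {True: 0.3, False: 0.7}                   # P(A)
p_b = {True: {True: 0.9, False: 0.1},
       False: {True: 0.2, False: 0.8}}          # p_b[a][b] = P(B=b | A=a)
p_c = {True: {True: 0.5, False: 0.5},
       False: {True: 0.1, False: 0.9}}          # p_c[a][c] = P(C=c | A=a)

def joint(a, b, c):
    return p_a[a] * p_b[a][b] * p_c[a][c]

def posterior_a(c_obs):
    # P(A | C = c_obs), summing the full joint over the hidden variables.
    scores = {True: 0.0, False: 0.0}
    for a, b in product((True, False), repeat=2):
        scores[a] += joint(a, b, c_obs)
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

print(posterior_a(True))  # {True: ~0.68, False: ~0.32}
```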

Cassio P. de Campos is a Reader at Queen’s University Belfast, UK. He obtained his PhD in 2006 with a thesis on algorithms and computational complexity in Bayesian and credal networks, and his Habilitation in 2013 on the same topics, both from the University of São Paulo, Brazil. He works on probabilistic graphical models, imprecise probability, computational complexity, and bioinformatics. He serves as a committee member of multiple conferences in machine learning and artificial intelligence, and is currently an at-large member of the executive committee of the Society for Imprecise Probability: Theories and Applications (SIPTA).

Johan Kwisthout is a senior staff member in the Artificial Intelligence Program at Radboud University and a postdoc at the Donders Centre for Cognition. He obtained his PhD in 2009 with a thesis on the computational complexity of numerous problems in Bayesian networks and has continued to publish in the field. He currently co-teaches an undergraduate course on Bayesian networks and machine learning, and is developing a graduate course on the theoretical foundations of (Bayesian) inference to the best explanation. He taught a mini-course on computational complexity in cognitive models at the “IK 2013” spring school in AI and cognitive science and is currently co-authoring a textbook on this topic (to be published by Cambridge University Press).

Nicola Di Mauro and Antonio Vergari

Learning Sum-Product Networks

Slides available.

Inference is the hardest part of learning Probabilistic Graphical Models (PGMs), since it is the core subroutine of learning algorithms. Learning complex and expressive models therefore relies on approximate inference methods, often leading to inaccurate density estimators. The need for feasible inference in PGMs has led to tractable models like Sum-Product Networks (SPNs). SPNs encode probability distribution functions in a structure that supports exact inference. They are potentially deep networks composed of sum nodes with weighted inputs, product nodes, and leaves that are functions of the modeled random variables. The tutorial audience will gain an overview of what SPNs are, how to use them, and why. SPNs will be introduced along with their properties, capabilities, and expressive power. Their relations to classical PGMs and other tractable density estimators will be highlighted. A special focus will be on state-of-the-art algorithms for learning both their parameters and their structure. Finally, a more practical perspective will be given through recent applications of SPNs in the literature.
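As a rough illustration of the evaluation semantics described above (an assumed toy structure, not one of the tutorial’s algorithms), the following Python sketch builds a tiny valid SPN and computes a joint and a marginal probability in a single bottom-up pass:

```python
# Minimal SPN sketch: leaves are indicators over Boolean variables,
# product nodes multiply their children, sum nodes take convex
# combinations. One bottom-up pass computes an exact (possibly
# marginal) probability in time linear in the size of the network.

class Leaf:
    def __init__(self, var, value):
        self.var, self.value = var, value
    def eval(self, evidence):
        # Marginalized-out variables (absent from evidence) evaluate to 1.
        if self.var not in evidence:
            return 1.0
        return 1.0 if evidence[self.var] == self.value else 0.0

class Product:
    def __init__(self, children):
        self.children = children
    def eval(self, evidence):
        out = 1.0
        for c in self.children:
            out *= c.eval(evidence)
        return out

class Sum:
    def __init__(self, weighted_children):  # (weight, node) pairs, weights sum to 1
        self.weighted_children = weighted_children
    def eval(self, evidence):
        return sum(w * c.eval(evidence) for w, c in self.weighted_children)

# A tiny valid SPN over X and Y: a mixture of two fully factored components.
spn = Sum([
    (0.6, Product([Leaf('X', True), Leaf('Y', True)])),
    (0.4, Product([Leaf('X', False), Leaf('Y', False)])),
])
print(spn.eval({'X': True, 'Y': True}))  # joint probability: 0.6
print(spn.eval({'X': True}))             # exact marginal P(X=True): 0.6
```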

This tutorial is an ECCAI invited talk supported by the ECCAI conference sponsorship program.

Nicola Di Mauro is an Assistant Professor in the Department of Computer Science at the University of Bari “Aldo Moro”, Italy. He received his Ph.D. from the University of Bari in 2005. His research interests are in artificial intelligence, machine learning, inductive logic programming, learning with probabilistic graphical models, and statistical relational learning. He serves regularly as a program committee member for many artificial intelligence and machine learning conferences, and he has published more than 110 peer-reviewed papers.

Antonio Vergari received his M.Sc. (Laurea) in Computer Science from the University of Bari in 2014. He is a PhD student in the Department of Computer Science, University of Bari, studying probabilistic graphical models and representation learning.

Ola Svensson

Algorithms with provable guarantees for clustering (joint event with OR days)

In this talk, we give an overview of the current best approximation algorithms for fundamental clustering problems, such as k-center, k-median, k-means, and facility location. We focus on recent progress and point out several important open problems.

For the uncapacitated versions, a variety of algorithmic methodologies, such as LP-rounding and the primal-dual method, have been applied to a standard linear programming relaxation. This has given a uniform way of addressing these problems, resulting in small constant-factor approximation guarantees.

In spite of this impressive progress, it remains a challenging open problem to give tight guarantees. Moreover, this collection of powerful algorithmic techniques is not easily applicable to the capacitated setting. In fact, there is no simple strong convex relaxation known for the capacitated versions. As a result, our understanding of these problems is significantly weaker and several fundamental questions remain open.