UAI 2015 - Tutorials
The tutorials will be held on July 12th, 2015. The final schedule of the tutorials will be announced soon. Here is the list of tutorials for this year:
- Optimal Algorithms for Learning Bayesian Network Structures (Changhe Yuan, James Cussens, Brandon Malone)
- Computational Complexity of Bayesian Networks (Johan Kwisthout, Cassio De Campos)
- Belief functions for the working scientist (Thierry Denoeux, Fabio Cuzzolin)
- Non-parametric Causal Models (Robin Evans, Thomas Richardson)
Optimal Algorithms for Learning Bayesian Network Structures
Please find the slides here: part I and part II.
Abstract
Early research on learning Bayesian networks (BNs) mainly focused on developing approximation methods such as greedy hill climbing, as the problem is known to be NP-hard. In recent years, several exact algorithms have been developed that can learn optimal BNs for complete datasets with dozens or even hundreds of discrete variables. This tutorial will provide an in-depth discussion of the theoretical foundations and algorithmic developments in this important research topic, especially the recent optimal algorithms. We will start with a brief introduction to the score-based approach to learning Bayesian network structures. We will then discuss two recent approaches to exact learning: one based on admissible heuristic search, and the other based on integer linear programming. This tutorial should be of significant interest to all researchers working on probabilistic graphical models and machine learning.
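To make the score-based formulation concrete, here is a minimal sketch (not part of the tutorial material; the dataset, variable names and the choice of the BIC score are illustrative assumptions). Both the heuristic-search and the integer-linear-programming approaches build on decomposable local scores: each variable gets a score for every candidate parent set, and learning amounts to choosing one parent set per variable so that the total score is maximal and the resulting graph is acyclic.

import itertools
import math
from collections import Counter

# Toy complete dataset over three binary variables (columns: A, B, C).
# The data and names are made up purely for illustration.
data = [
    (0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1),
    (0, 0, 0), (1, 1, 1), (1, 0, 0), (0, 1, 1),
]
names = ["A", "B", "C"]

def bic_local_score(child, parents, data):
    """BIC score of one variable given a candidate parent set (binary variables)."""
    n = len(data)
    counts = Counter()         # (parent configuration, child value) -> count
    parent_counts = Counter()  # parent configuration -> count
    for row in data:
        cfg = tuple(row[p] for p in parents)
        counts[(cfg, row[child])] += 1
        parent_counts[cfg] += 1
    loglik = sum(c * math.log(c / parent_counts[cfg])
                 for (cfg, _), c in counts.items())
    n_params = (2 - 1) * (2 ** len(parents))  # free parameters of the CPT
    return loglik - 0.5 * n_params * math.log(n)

# Enumerate candidate parent sets for variable C and keep the best-scoring one;
# exact algorithms optimise such local scores for all variables jointly,
# subject to the acyclicity constraint.
candidates = [ps for k in range(3) for ps in itertools.combinations((0, 1), k)]
best = max(candidates, key=lambda ps: bic_local_score(2, ps, data))
print("Best parent set for C:", [names[p] for p in best])

Roughly speaking, the heuristic-search approach explores the space of variable subsets guided by these local scores and an admissible heuristic, while the ILP approach encodes the choice of parent sets as binary decision variables with acyclicity constraints.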



Computational Complexity of Bayesian Networks
Please find the slides here: parts I and III, and part II, as well as the lecture notes.
Abstract
Computations such as computing posterior probability distributions and finding joint value assignments with maximum posterior probability are of great importance in practical applications of Bayesian networks. These computations, however, are intractable in general, both when the results are computed exactly and when they are approximated. In order to successfully apply Bayesian networks in practical situations, it is crucial to understand what does and what does not make such computations (exact or approximate) hard. In this tutorial we will give an overview of the necessary theoretical concepts, such as probabilistic Turing machines, oracles, and approximation strategies, and guide the audience through some of the most important computational complexity proofs. After the tutorial, the participants will have gained insight into the boundary between 'tractable' and 'intractable' in Bayesian networks.
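As a rough illustration of where the hardness comes from (a toy sketch, not taken from the tutorial; the network and its probabilities are invented), naive exact computation of a posterior marginal sums the factorised joint distribution over all value assignments of the unobserved variables:

# A tiny binary chain network A -> B -> C, specified by its CPTs
# (the numbers are made up for the example).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.3, (1, 1): 0.7}    # key: (b, a)
p_c_given_b = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.25, (1, 1): 0.75}  # key: (c, b)

def joint(a, b, c):
    """Factorised joint probability P(A=a, B=b, C=c)."""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

def posterior_a_given_c(c_obs):
    """P(A | C = c_obs) by brute-force enumeration of the hidden variable B."""
    unnormalised = {a: sum(joint(a, b, c_obs) for b in (0, 1)) for a in (0, 1)}
    z = sum(unnormalised.values())
    return {a: v / z for a, v in unnormalised.items()}

print(posterior_a_given_c(1))

With n unobserved binary variables, the analogous sum ranges over 2^n assignments; this blow-up is the intuition behind the general intractability results discussed in the tutorial.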


Belief functions for the working scientist
Please find the slides here.
Abstract
The theory of belief functions, sometimes referred to as evidence theory or Dempster-Shafer theory, was first introduced by Arthur P. Dempster in the context of statistical inference and later developed by Glenn Shafer as a general framework for modelling epistemic uncertainty. The methodology is now well established as a general framework for reasoning with uncertainty, with well-understood connections to related frameworks such as probability, possibility, random set and imprecise probability theories. Importantly, in recent years the number of papers published on the theory and application of belief functions has been booming (reaching over 800 in 2014 alone), with particularly strong growth in the East Asian community and among practitioners working on multi-criteria decision making, earth sciences, and sensor fusion. Belief functions are a natural tool to cope with heavy uncertainty, lack of evidence and missing data, and extremely rare events.

An early debate on the rationale of belief functions contributed strongly to the growth and success of the UAI community and its conference series in the Eighties and Nineties, thanks to scientists of the caliber of Glenn Shafer, Judea Pearl, Philippe Smets and Prakash Shenoy, among others. Since then the UAI and BELIEF communities have somewhat diverged, and the proposers' recent efforts have been directed towards restoring a closer relationship and exchange of ideas between the two communities. This was one of the aims of the recent BELIEF 2014 International Conference, of which the proposers were General Chair and member of the Steering Committee, respectively. A number of books are being published on the subject as we speak, and the impact of the belief function approach to uncertainty is growing. The proposed tutorial aims at bridging the gap between researchers in the field and the wider AI and uncertainty theory community, with the longer-term goal of a more fruitful collaboration and dissemination of ideas.
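For readers unfamiliar with the formalism, a minimal illustration (the frame of discernment and the mass values below are invented for the example): a mass function distributes belief mass over subsets of the frame, and the belief and plausibility of an event are the total mass of subsets contained in it and of subsets compatible with it, respectively, bracketing the unknown probability of the event.

# Frame of discernment and an example mass function over its subsets.
# Masses are non-negative, sum to one, and no mass is placed on the empty set;
# the values here are purely illustrative.
frame = frozenset({"rain", "snow", "dry"})
mass = {
    frozenset({"rain"}): 0.4,
    frozenset({"rain", "snow"}): 0.3,
    frame: 0.3,  # mass on the whole frame expresses ignorance
}

def belief(event):
    """Bel(A): total mass of focal sets entirely contained in A."""
    return sum(m for s, m in mass.items() if s <= event)

def plausibility(event):
    """Pl(A): total mass of focal sets that intersect A."""
    return sum(m for s, m in mass.items() if s & event)

wet = frozenset({"rain", "snow"})
print(belief(wet), plausibility(wet))  # Bel(A) <= P(A) <= Pl(A) brackets the unknown probability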


Non-parametric Causal Models
Please find the slides here: part I and part II.
Abstract
Causality and the learning of statistical models from data are foundational ideas in statistics. In this tutorial we will give an overview of some modern non-parametric approaches to causal inference, using Bayesian networks with hidden variables. We will begin with a discussion of Pearl's do-calculus, Tian's algorithm for checking whether causal effects are identified, and estimation. The 'identification problem' solved by Tian's algorithm is intimately related to generalized conditional independence constraints implied by Bayesian networks with hidden variables. These Verma constraints, discovered independently by Robins and later by Verma and Pearl, inspired the nested Markov model for non-parametric causal modelling.
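As a standard example of the kind of identification result the do-calculus yields (the classical back-door adjustment, quoted here for illustration rather than taken from the tutorial slides): if a set Z of observed covariates blocks every back-door path from X to Y, then the interventional distribution is identified from observational quantities as

P(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z),

and algorithms such as Tian's decide systematically when identifying formulas of this kind exist in graphs with hidden variables.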
We will define the nested model via a 'fixing' operation; this corresponds to identifiable and causally interpretable reweightings in a DAG with hidden variables, and can be used to simplify the identification problem. The first half will finish by showing how constraints can be read from a graph using fixing.
In the second part, the fixing operation is used to parameterize nested Markov models, and we will present some algorithms for fitting these models to data. The discrete nested model is shown, in an algebraic sense, to be a very good approximation to the latent variable model under study. It is also much easier to analyse, and we use it to give some smoothness results for Bayesian networks with hidden variables. Applications to structure searches and model equivalence will be considered.
Finally, nested models fail to account for inequality constraints such as Bell's inequalities or Pearl's instrumental inequality. We will see that recent results give systematic methods for deriving inequality constraints from Bayesian networks with hidden variables, and show that every missing edge in a non-parametric causal graphical model corresponds to a testable constraint.
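As a concrete instance of such a testable constraint (this is the standard statement of Pearl's instrumental inequality for a discrete instrument Z, treatment X and outcome Y with unobserved confounding between X and Y; it is quoted here for illustration): every distribution compatible with the instrumental-variable graph must satisfy

\max_{x} \sum_{y} \max_{z} P(x, y \mid z) \le 1,

and a violation of an inequality of this kind falsifies the assumed causal structure.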
BIO
Robin Evans is an Associate Professor in Statistics at the University of Oxford. He has worked on mixed graphical models to represent causal systems in the presence of unmeasured confounding. This includes a first graphical method for deriving general inequality constraints, and a derivation of the maximal dimension of non-parametric hidden variable models.
Thomas Richardson is Professor and Chair of the Department of Statistics at the University of Washington. Amongst other things he has worked on causal models with cycles and feedback, ancestral graphs for selection and confounding, and acyclic directed mixed graphs. Richardson presented a tutorial at NIPS in December 2014 on unifying the graphical and counterfactual approaches to causal modelling via Single World Intervention Graphs (SWIGs). (There will be no overlap in content with the proposed UAI tutorial.) He has delivered short courses on graphical and causal modeling in various places, including the University of Helsinki; University of Umea; ETH Zurich; University of Washington, Seattle; Universidad Torcuato di Tella, Buenos Aires; and NIPS 2014.