# UAI 2015 - Tutorials

The tutorials will be held on **July 12th, 2015**. The final schedule of the tutorials will be announced soon. Here is the list of tutorials for this year:

- Optimal Algorithms for Learning Bayesian Network Structures (Changhe Yuan, James Cussens, Brandon Malone)
- Computational Complexity of Bayesian Networks (Johan Kwisthout, Cassio de Campos)
- Belief functions for the working scientist (Thierry Denoeux, Fabio Cuzzolin)
- Non-parametric Causal Models (Robin Evans, Thomas Richardson)

### Optimal Algorithms for Learning Bayesian Network Structures

Please find the slides here: part I and part II.

##### Abstract

Early research on learning Bayesian networks (BNs) mainly focused on developing approximation methods such as greedy hill climbing, as the problem is known to be NP-hard. In recent years, several exact algorithms have been developed that can learn optimal BNs for complete datasets with dozens or even hundreds of discrete variables. This tutorial will provide an in-depth discussion of the theoretical foundations and algorithmic developments in this important research topic, especially the recent optimal algorithms. We will start with a brief introduction to the score-based approach to learning Bayesian network structures. We will then discuss two recent approaches to exact learning algorithms: one based on admissible heuristic search, and the other based on integer linear programming. This tutorial should be of significant interest to all researchers who are interested in probabilistic graphical models and machine learning.

##### BIO
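To make the score-based formulation concrete, here is a minimal, illustrative sketch (not any presenter's actual algorithm, and all data and function names are hypothetical): an exhaustive order-based search that, for each variable ordering, picks each variable's best-scoring parent set among its predecessors under a BIC score, and returns the best DAG overall. Exact algorithms such as those covered in the tutorial search this same space far more cleverly, via admissible heuristics or integer programming.

```python
import itertools
import math
from collections import Counter

def local_bic(data, child, parents):
    """BIC score (to be minimized) of `child` given `parents` for binary data:
    negative log-likelihood plus 0.5 * (#parameters) * log(n)."""
    n = len(data)
    counts = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    ll = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in counts.items())
    n_params = len(parent_counts)  # one free parameter per observed parent configuration
    return -ll + 0.5 * n_params * math.log(n)

def best_network(data, variables):
    """Exhaustive order-based search: for each ordering, each variable takes its
    best parent set among its predecessors; return the best-scoring DAG."""
    best_score, best_parents = math.inf, None
    for order in itertools.permutations(variables):
        total, parents_of = 0.0, {}
        for i, v in enumerate(order):
            candidates = order[:i]
            score, pa = min(
                (local_bic(data, v, ps), ps)
                for k in range(len(candidates) + 1)
                for ps in itertools.combinations(candidates, k)
            )
            total += score
            parents_of[v] = pa
        if total < best_score:
            best_score, best_parents = total, parents_of
    return best_score, best_parents

# Toy dataset over three binary variables: X and Y perfectly correlated, Z independent.
data = [{"X": 0, "Y": 0, "Z": 0}, {"X": 0, "Y": 0, "Z": 1},
        {"X": 1, "Y": 1, "Z": 0}, {"X": 1, "Y": 1, "Z": 1}] * 25
score, parents = best_network(data, ("X", "Y", "Z"))
# The recovered structure links X and Y and leaves Z isolated.
```

Note the decomposability of the score into per-variable local terms: this is what both the heuristic-search and the integer-programming formulations exploit.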

**Changhe Yuan** is an Associate Professor of Computer Science at CUNY Queens College and the director of the Uncertainty Reasoning Laboratory (URL Lab). His research focuses on developing methods and algorithms for learning, inference, and decision making under uncertainty using probabilistic graphical models. Among others, his research has led to the development of URLearning (You Are Learning!) for learning Bayesian network structures from data. He also has a keen interest in interdisciplinary research and has applied his work to areas such as quantitative finance, supply chain risk analysis, and computational biology. He received his PhD from the Intelligent Systems Program at the University of Pittsburgh in 2006.

**James Cussens** gained his PhD in Philosophy of Probability from King's College, London in 1991. He is currently a Senior Lecturer at the University of York where he teaches probabilistic graphical models, machine learning and philosophy, which (happily!) are also amongst his research interests. He also works on statistical relational learning (SRL). He is one of the developers of GOBNILP, which uses integer programming for Bayesian network structure learning. He has been an Action Editor for the Machine Learning journal since 2010, serves on the PC for most main AI/ML conferences and will be co-programme chair for ILP-2016.

**Brandon Malone** is a postdoctoral researcher at the Max Planck Institute for Biology of Ageing. Much of his research has addressed improving the scalability of exact Bayesian network structure learning using admissible heuristic search. He has also worked with interdisciplinary groups to investigate biological problems, such as processes related to ageing, using probabilistic models. He received his PhD from Mississippi State University in 2012.

### Computational Complexity of Bayesian Networks

Please find the slides here: parts I and III, and part II, as well as the lecture notes.

##### Abstract

Computations such as computing posterior probability distributions and finding joint value assignments with maximum posterior probability are of great importance in practical applications of Bayesian networks. These computations, however, are intractable in general, both when the results are computed exactly and when they are approximated. In order to successfully apply Bayesian networks in practical situations, it is crucial to understand what does and what does not make such computations (exact or approximate) hard. In this tutorial we give an overview of the necessary theoretical concepts, such as probabilistic Turing machines, oracles, and approximation strategies, and we will guide the audience through some of the most important computational complexity proofs. After the tutorial the participants will have gained insight into the boundary between 'tractable' and 'intractable' in Bayesian networks.

##### BIO
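As a small illustration of why exact inference is hard in general, consider computing a posterior by brute-force enumeration, which sums over every assignment of the unobserved variables and therefore scales exponentially in their number. The network below is a hypothetical three-variable chain, not an example from the tutorial:

```python
# Hypothetical chain network A -> B -> C over binary variables,
# represented by its conditional probability tables.
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.1, 1: 0.9}}

def joint(a, b, c):
    """Factorized joint probability P(a, b, c) = P(a) P(b|a) P(c|b)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def posterior_a_given_c(c):
    """P(A | C=c) by brute-force enumeration: marginalize out B by summing
    the joint over all of its values. With n hidden binary variables this
    summation has 2^n terms, which is the source of the intractability."""
    num = {a: sum(joint(a, b, c) for b in (0, 1)) for a in (0, 1)}
    z = num[0] + num[1]
    return {a: num[a] / z for a in (0, 1)}

post = posterior_a_given_c(1)
```

Exact algorithms such as variable elimination avoid the full enumeration when the network's treewidth is small, but the tutorial's complexity results show that no algorithm escapes the exponential blow-up in the worst case (unless P = NP).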

**Johan Kwisthout** is a senior staff member in the Artificial Intelligence Program at Radboud University, and a postdoc in the Donders Center for Cognition. He obtained his PhD in 2009 with a thesis describing the computational complexity of numerous problems in Bayesian networks and has continued to publish in the field. He currently co-teaches an undergraduate course on Bayesian networks and machine learning, and is developing a graduate course on the theoretical foundations of (Bayesian) inference to the best explanation. He taught a mini-course on computational complexity in cognitive models at the "IK 2013" spring school in AI and cognitive science and is currently co-authoring a textbook (to be published with Cambridge University Press) on this topic.

**Cassio P. de Campos** is a Reader at Queen's University Belfast, UK. He obtained his PhD in 2006 with a thesis about algorithms and computational complexity in Bayesian and credal networks, and his Habilitation in 2013 on the same topics, both from the University of Sao Paulo, Brazil. He is the author of more than 70 publications in probabilistic graphical models, imprecise probability, computational complexity and bioinformatics. He serves as a program committee member of prestigious conferences in machine learning and artificial intelligence, and is currently an at-large member of the executive committee of the Society for Imprecise Probability.

### Belief functions for the working scientist

Please find the slides here.

##### Abstract

The theory of belief functions, sometimes referred to as evidence theory or Dempster-Shafer theory, was first introduced by Arthur P. Dempster in the context of statistical inference, and was later developed by Glenn Shafer into a general framework for modelling epistemic uncertainty. The methodology is now well established as a general framework for reasoning with uncertainty, with well-understood connections to related frameworks such as probability, possibility, random set and imprecise probability theories. Importantly, in recent years the number of papers published on the theory and application of belief functions has been booming (reaching over 800 in 2014 alone), displaying strong growth in particular in the East Asian community and among practitioners working on multi-criteria decision making, earth sciences, and sensor fusion. Belief functions are a natural tool to cope with heavy uncertainty, lack of evidence and missing data, and extremely rare events.

An early debate on the rationale of belief functions contributed strongly to the growth and success of the UAI community and conference series in the Eighties and Nineties, thanks to the contributions of scientists of the caliber of Glenn Shafer, Judea Pearl, Philippe Smets and Prakash Shenoy, among others. Since then, the UAI and BELIEF communities have somewhat diverged, and the proposers' efforts have recently been directed towards re-establishing a closer relationship and exchange of ideas between the two communities. This was one of the aims of the recent BELIEF 2014 International Conference, of which the proposers were General Chair and member of the Steering Committee, respectively. A number of books are being published on the subject as we speak, and the impact of the belief function approach to uncertainty is growing.
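The core computational tool of the theory is Dempster's rule of combination, which pools two independent bodies of evidence by intersecting their focal elements and renormalizing away the conflicting mass. The sketch below is a minimal illustration on a toy frame of discernment; the sensor masses are invented for the example:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements to
    mass) with Dempster's rule: multiply masses of every pair of focal
    elements, assign the product to their intersection, and renormalize by
    1 - K, where K is the total mass landing on the empty set (conflict)."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence: combination undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical sensors reporting on the frame {a, b, c}; mass on a
# non-singleton set expresses ignorance between its elements.
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.3, frozenset("abc"): 0.1}
m2 = {frozenset("a"): 0.5, frozenset("bc"): 0.3, frozenset("abc"): 0.2}
m12 = dempster_combine(m1, m2)
```

After combination the mass concentrates on {a}, the hypothesis both sensors support, while the residual mass on larger sets keeps track of the remaining ignorance, something a single probability distribution cannot represent directly.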
The proposed tutorial aims at bridging the gap between researchers in the field and the wider AI and uncertainty theory community, with the longer-term goal of more fruitful collaboration and dissemination of ideas.

##### BIO

**Fabio Cuzzolin** is the Head of the Artificial Intelligence and Vision research group at Oxford Brookes University, Oxford, UK. Dr Cuzzolin is a world expert in uncertainty theory and belief functions theory. He has worked extensively on the mathematical foundations of belief calculus. His main contribution is his geometric approach to uncertainty measures, in which uncertainty measures are represented as points of a Cartesian space and analyzed there. His work in this field is in the process of appearing in two separate monographs, published by Springer-Verlag ("The geometry of uncertainty", to appear in 2015) and Lambert Academic Publishing ("Visions of a generalized probability theory", September 2014). Dr Cuzzolin was the General Chair of the 3rd International Conference on Belief Functions (BELIEF 2014) held in Oxford in September 2014, and a member of the Senior Program Committee of UAI. He is Associate Editor of IEEE Transactions on Fuzzy Systems and Guest Editor of the International Journal of Approximate Reasoning, and has served on the editorial boards of IEEE Systems, Man and Cybernetics C and Information Fusion. He is a member of the Board of Directors of the Belief Functions and Applications Society (BFAS). He is the author of some 90 publications, and his work has won several awards in recent years. Dr Cuzzolin has extensive teaching experience gained at Padua University, UCLA, Politecnico di Milano and Oxford Brookes in courses including Artificial Intelligence, (Advanced) Machine Vision, Mathematical Methods for Computer Vision and Machine Learning.

**Thierry Denoeux** is the President of the Belief Functions and Applications Society (BFAS), and a leading figure in the field of belief functions theory. He graduated in 1985 as an engineer from the École Nationale des Ponts et Chaussées in Paris, and earned a PhD from the same institution in 1989. He obtained the "Habilitation à diriger des recherches" from the Institut National Polytechnique de Lorraine in 1996. From 1989 to 1992, Prof Denoeux worked as a research scientist at the Lyonnaise des Eaux research center in Compiègne, where he was in charge of research projects concerning the application of neural networks to forecasting and process monitoring. He joined the Department of Information Processing Engineering at the Université de Technologie de Compiègne in 1992, where he is now a Full Professor (exceptional class since 2011). His research interests include statistical pattern recognition, uncertainty modeling and information fusion. Prof Denoeux has been Editor-in-Chief of the International Journal of Approximate Reasoning (Elsevier) since January 2005. He is also an Associate Editor for Fuzzy Sets and Systems and the IEEE Transactions on Fuzzy Systems, and has served on the editorial boards of various other journals, including the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems (IJUFKS). He has served on the program committees of more than 100 conferences, and is the author of 75 journal papers.

### Non-parametric Causal Models

Please find the slides here: part I and part II.

##### Abstract

Causality and the learning of statistical models from data are foundational ideas in statistics. In this tutorial we will give an overview of some modern non-parametric approaches to causal inference, using Bayesian networks with hidden variables. We will begin with a discussion of Pearl's do-calculus, Tian's algorithm for checking whether causal effects are identified, and estimation. The 'identification problem' solved by Tian's algorithm is intimately related to generalized conditional independence constraints implied by Bayesian networks with hidden variables. These *Verma constraints*, discovered independently by Robins and later by Verma and Pearl, inspired the *nested Markov model* for non-parametric causal modelling.
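A simple identified case may help fix ideas before the tutorial's more general machinery: when the hidden confounding is captured by an observed variable Z, the interventional distribution P(Y | do(X=x)) is identified by the familiar back-door adjustment, a special case of what the do-calculus derives. The distribution below is a hypothetical toy example, not taken from the tutorial:

```python
# Hypothetical discrete model with observed confounder Z: Z -> X, Z -> Y, X -> Y.
p_z = {0: 0.6, 1: 0.4}
p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.4,   # P(Y=1 | X=x, Z=z)
                 (1, 0): 0.5, (1, 1): 0.8}

def p_y1_do_x(x):
    """P(Y=1 | do(X=x)) by back-door adjustment: sum_z P(Y=1 | x, z) P(z).
    The observational P(Y=1 | x) would instead weight each z by P(z | x),
    which is biased whenever Z also influences X."""
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in (0, 1))

# Average causal effect of setting X=1 versus X=0.
effect = p_y1_do_x(1) - p_y1_do_x(0)
```

Tian's algorithm, and the fixing operation discussed below, determine when such identifying formulas exist even though the confounder itself is hidden, where no simple adjustment set is available.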

We will define the nested model via a 'fixing' operation; this corresponds to identifiable and causally interpretable reweightings in a DAG with hidden variables, and can be used to simplify the identification problem. The first half will finish by showing how constraints can be read from a graph using fixing.

In the second part the fixing operation is used to parameterize nested Markov models, and we present some algorithms for fitting these models to data. The discrete nested model is shown, in an algebraic sense, to be a very good approximation to the latent variable model under study. It is also much easier to analyse, and we use it to give some smoothness results for Bayesian networks with hidden variables. Applications to structure searches and model equivalence will be considered.

Finally, nested models fail to account for inequality constraints such as Bell's inequalities or Pearl's instrumental inequality. We will see that recent results give systematic methods for deriving inequality constraints from Bayesian networks with hidden variables, and show that every missing edge in a non-parametric causal graphical model corresponds to a testable constraint.
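As a concrete instance of such a testable inequality, Pearl's instrumental inequality for a discrete instrument Z, treatment X and outcome Y states (in one common form) that for every value x, the sum over y of max_z P(X=x, Y=y | Z=z) is at most 1. The sketch below checks it on an invented distribution chosen to violate it; both the helper and the numbers are illustrative, not from the tutorial:

```python
def instrumental_inequality_holds(p_xy_given_z, xs, ys, zs):
    """Check Pearl's instrumental inequality for discrete Z (instrument),
    X (treatment), Y (outcome): for every x, sum_y max_z P(x, y | z) <= 1.
    A violation refutes the instrumental model Z -> X -> Y with a hidden
    confounder of X and Y; the model places no other parametric assumptions."""
    return all(
        sum(max(p_xy_given_z[(x, y, z)] for z in zs) for y in ys) <= 1 + 1e-9
        for x in xs
    )

# A hypothetical distribution P(x, y | z): X is always 0 while Y tracks Z
# exactly. No instrumental model allows this, since Z may affect Y only
# through X, and X never changes.
p = {(0, 0, 0): 1.0, (0, 1, 0): 0.0, (1, 0, 0): 0.0, (1, 1, 0): 0.0,
     (0, 0, 1): 0.0, (0, 1, 1): 1.0, (1, 0, 1): 0.0, (1, 1, 1): 0.0}
violates = not instrumental_inequality_holds(p, (0, 1), (0, 1), (0, 1))
```

Equality constraints (conditional independences and Verma constraints) are captured by the nested model; inequality constraints like this one require the additional machinery covered in the final part of the tutorial.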

##### BIO

**Robin Evans** is an Associate Professor in Statistics at the University of Oxford. He has worked on mixed graphical models to represent causal systems in the presence of unmeasured confounding. This includes a first graphical method for deriving general inequality constraints, and a derivation of the maximal dimension of non-parametric hidden variable models.

**Thomas Richardson** is Professor and Chair of the Department of Statistics at the University of Washington. Amongst other things he has worked on causal models with cycles and feedback, ancestral graphs for selection and confounding, and acyclic directed mixed graphs. Richardson presented a tutorial at NIPS in December 2014 on unifying the graphical and counterfactual approaches to causal modelling via Single World Intervention Graphs (SWIGs). (There will be no overlap in content with the proposed UAI tutorial.) He has delivered short courses on graphical and causal modeling in various places including: University of Helsinki; University of Umeå; ETH Zurich; University of Washington, Seattle; Universidad Torcuato di Tella, Buenos Aires; NIPS 2014.