UAI 2022 - Tutorials


Tutorials will be held on August 1st before the main conference. For any questions, please contact the Tutorial Chairs. The list of accepted tutorials is below (times are in GMT+2).

Auditorium Room 4


Risk-Averse Reinforcement Learning: Algorithms and Meta-Algorithms

- Bo Liu, Bo An, Yangyang Xu

Abstract, Slides

9:00am - 12:00pm UTC +2 (w/ break 10:00am - 10:30am)

Double Machine Learning: Causal Inference based on ML

- Philipp Bach, Martin Spindler

Abstract, Slides

1:30pm - 3:30pm UTC +2

Causality and Deep Learning: Synergies, Challenges & Opportunities for Research

- Yoshua Bengio, Nan Rosemary Ke

Abstract, Slides

4:00pm - 6:00pm UTC +2

Auditorium Room 5


Verification Techniques for Probabilistic Systems and Programs

- Sebastian Junges, Joost-Pieter Katoen

Abstract, Slides

9:00am - 12:00pm UTC +2 (w/ break 10:00am - 10:30am)

Graphical Models Meet Temporal Point Processes

- Debarun Bhattacharjya, Abir De, Tian Gao, and Søren Wengel Mogensen

Abstract, Slides

1:30pm - 3:30pm UTC +2

Quantifying Predictive Uncertainty Without Distributional Assumptions Via Conformal Prediction

- Rina Foygel Barber

Slides

4:00pm - 6:00pm UTC +2

Abstracts and slides


Title

Risk-Averse Reinforcement Learning: Algorithms and Meta-Algorithms

Author

Bo Liu, Bo An, Yangyang Xu

Abstract and slides

Recently, a large body of research has emerged on single-agent and multi-agent autonomous decision-making. Major technology companies are building self-driving vehicles and medical robots, and the development of advanced autonomous decision-making systems is already a billion-dollar industry. These new technologies offer oversight, advanced automation, and autonomous instruments that adapt to changing situations, knowledge, and constraints. However, introducing new technologies into our technical and social infrastructures has profound implications and requires establishing confidence in their behavior to avoid potential risks and harm. The effectiveness and broader acceptability of autonomous decision-making systems therefore rely on their ability to make risk-averse decisions. The ability of artificial intelligence (AI) systems to be averse to risks is a critical requirement in human-robot interaction and essential for realizing the full spectrum of AI's societal and industrial benefits. This line of work has a wide range of practical applications where failures are costly, such as control, robotics, e-commerce, autonomous driving, and medical treatment.

This tutorial introduces state-of-the-art risk-averse methodologies for autonomous systems, centered around the following questions: (1) What exactly is risk, and what are the mathematical formulations of risk-averseness? (2) How do we design risk-averse methods? Do we need to start from scratch, or can we use simple tweaks to turn existing risk-oblivious algorithms into risk-averse ones?
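
For concreteness, one widely used formulation of risk-averseness in the reinforcement learning literature (given here as background, not necessarily the exact formulation used in the tutorial) replaces the expected-cost objective with the conditional value-at-risk (CVaR) of the cumulative cost:

\mathrm{CVaR}_\alpha(Z^\pi) \;=\; \min_{\nu \in \mathbb{R}} \Big\{ \nu + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(Z^\pi - \nu)_+\big] \Big\},
\qquad Z^\pi = \sum_{t=0}^{T} c(s_t, a_t), \quad a_t \sim \pi(\cdot \mid s_t),

so that a risk-averse policy minimizing \mathrm{CVaR}_\alpha(Z^\pi) minimizes the average cost over the worst (1-\alpha) fraction of trajectories rather than over all of them.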

This tutorial will introduce a wide variety of risk-averse techniques and algorithms that have been developed in recent years. Introductory material on reinforcement learning and mathematical programming (optimization) will be included, so no prerequisite knowledge is required of participants. After introducing the basic mathematical framework, we will describe novel optimization methods based on duality and block coordinate ascent, as well as information-theoretic lower bounds. In the end, we will highlight many opportunities for future work in this area, including exciting new domains and fundamental theoretical and algorithmic challenges.

Slides can be found here.


Title

Double Machine Learning: Causal Inference based on ML

Author

Philipp Bach, Martin Spindler

Abstract and slides

Machine learning is frequently used for predicting outcome variables. But in many cases, we are interested in causal questions: Why do customers churn? What is the effect of a price change on sales? How can we evaluate an A/B test?

This tutorial serves as an introduction to causal machine learning with a focus on the Double Machine Learning (DML) approach by Chernozhukov et al. (2018). In the first part of the tutorial, a general overview of causal machine learning is provided, together with a short introduction to causal inference. We will briefly outline the reasons that prevent an out-of-the-box use of common predictive ML methods for causal analyses and interactively illustrate the key ingredients of the DML approach in a simulated data example. In the second part of the tutorial, an introduction to the DoubleML package for Python (https://docs.doubleml.org) is provided. We will demonstrate the use of DoubleML in hands-on examples in the context of program evaluation, A/B tests, and demand estimation. We conclude the tutorial with a discussion of potential extensions of the DoubleML package and provide a space for open discussions and exchange between participants.
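
As a rough sketch of the kind of workflow covered in the hands-on part (the simulated-data helper and learner choices below are illustrative assumptions, not details taken from the tutorial), estimating a treatment effect with DoubleML in Python looks roughly as follows:

# Minimal, illustrative sketch; assumes the doubleml and scikit-learn packages are installed.
from doubleml import DoubleMLPLR
from doubleml.datasets import make_plr_CCDDHNR2018
from sklearn.ensemble import RandomForestRegressor

# Simulated data from a partially linear model with a known treatment effect (alpha = 0.5)
dml_data = make_plr_CCDDHNR2018(alpha=0.5, n_obs=500, dim_x=20)

# ML learners for the two nuisance regressions (outcome and treatment)
ml_l = RandomForestRegressor(n_estimators=100)
ml_m = RandomForestRegressor(n_estimators=100)

# Double/debiased ML estimator for the partially linear regression model
dml_plr = DoubleMLPLR(dml_data, ml_l, ml_m)
dml_plr.fit()
print(dml_plr.summary)  # point estimate, standard error, and confidence interval

The point is that the nuisance learners are ordinary predictive models, while the orthogonalized score and sample splitting used inside the estimator are what make the resulting effect estimate valid for inference.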

Slides can be found here.


Title

Causality and Deep Learning: Synergies, Challenges & Opportunities for Research

Author

Yoshua Bengio, Nan Rosemary Ke

Abstract and slides

Deep neural networks have achieved outstanding success in many tasks, ranging from computer vision to natural language processing and robotics. However, such models still fall short in their ability to understand the world around us and to generalize and adapt to new tasks or environments. One possible solution to this problem is models that comprehend causality: such models can reason about the connections between causal variables and the effect of intervening on them. However, existing causal algorithms are typically neither scalable nor applicable to highly nonlinear settings, and they also assume that the causal variables are meaningful and given. Recently, there has been increased interest and research activity at the intersection of causality and deep learning to tackle these challenges, using deep learning for the benefit of causal algorithms and vice versa. This tutorial aims to introduce the fundamental concepts of causality and deep learning for both audiences, provide an overview of recent works, and present synergies, challenges, and opportunities for research in both fields.

Slides can be found here.


Title

Verification Techniques for Probabilistic Systems and Programs

Author

Sebastian Junges, Joost-Pieter Katoen

Abstract and slides

The most basic verification question is: can a system reach an error state, and if so, how? Conceptually, system behavior can be described by paths through annotated and potentially infinite directed graphs. Finding a bug thus corresponds to a planning problem. The verification community traditionally has a particular interest in proving the absence of such unsafe plans and has developed a set of dedicated techniques for that purpose. Verifying systems with probabilistic uncertainty, often represented as Markov decision processes (MDPs), has received plenty of attention, with a wide variety of methods and mature software tool support. Verifying MDPs with millions or even billions of states can be done quite efficiently.
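
For intuition (this is standard background rather than material taken from the tutorial), the core quantitative reachability query on an MDP asks for the maximal probability of reaching a set of error states E, characterized as the least fixed point of the Bellman equations

p_{\max}(s) \;=\;
\begin{cases}
1 & \text{if } s \in E,\\
\max_{a \in A(s)} \sum_{s'} P(s' \mid s, a)\, p_{\max}(s') & \text{otherwise,}
\end{cases}

and proving safety amounts to showing that p_{\max} at the initial state stays below a given threshold; probabilistic model checkers compute this value, together with an optimal scheduler, for very large state spaces.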

The first part of this tutorial will survey key verification techniques for MDPs. We discuss the verification of a broad range of inference queries that may be posed about a probabilistic model. Besides presenting the prime algorithmic verification techniques, we will demonstrate how to apply them using the state-of-the-art probabilistic model checker Storm. The second part of this tutorial will focus on two exciting recent research efforts within the verification community: (1) how to reason symbolically about systems with probabilistic uncertainty, and (2) how to reason when probabilities in MDPs are no longer precisely known. The symbolic reasoning part will be directly linked to probabilistic programs, whereas the uncertain-probability part will be shown to provide new means and new results for controller synthesis in partially observable MDPs.

Slides can be found here.


Title

Graphical Models Meet Temporal Point Processes

Author

Debarun Bhattacharjya, Abir De, Tian Gao, and Søren Wengel Mogensen

Abstract and slides

Datasets involving interactions between asynchronously occurring events have become common in various domains, sparking an interest in methods for their modeling and analysis. For instance, one may be interested in modeling alarms in a complex engineered system where the onset of an alarm in one subsystem affects the rate of occurrence of other alarm types in the future. Other domains where event datasets are often available include neuroscience, social networks, manufacturing processes, retail, healthcare, politics, and finance. Event datasets can be modeled as multivariate temporal point processes (TPPs), which associate every type of event with a history-dependent conditional intensity rate. In this tutorial, we discuss topics at the intersection of graphical models and TPPs. Specifically, we present some underlying theory around graphical event models (GEMs) -- also known as local independence graphs -- as well as practical machine learning approaches for learning GEMs from event datasets. We also describe applications around TPP models involving network data, such as information diffusion and recommendation systems.
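
As a point of reference (the parametric form below is only one common example and not necessarily the model class emphasized in the tutorial), a multivariate TPP specifies, for each event type k, a conditional intensity given the history \mathcal{H}_t; the exponential-kernel multivariate Hawkes process is a classic instance:

\lambda_k(t \mid \mathcal{H}_t) \;=\; \mu_k \;+\; \sum_{j} \sum_{t_i^{(j)} < t} \alpha_{kj}\, e^{-\beta_{kj}\,(t - t_i^{(j)})},

where past events of type j raise the future rate of type k whenever \alpha_{kj} > 0; the pattern of such cross-type dependencies is what a graphical event model (local independence graph) represents as a graph over event types.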

Slides can be found here.


Title

Quantifying Predictive Uncertainty Without Distributional Assumptions Via Conformal Prediction

Author

Rina Foygel Barber

Abstract and slides

TBA

Slides can be found here.




