UAI 2022 Tutorials
Tutorials will be held on August 1st before the main conference. For any questions, please contact the Tutorial Chairs. The list of accepted tutorials is below (times are in GMT+2).
Auditorium Room 4
Risk-Averse Reinforcement Learning: Algorithms and Meta-Algorithms (Bo Liu, Bo An, Yangyang Xu)
9:00am–12:00pm UTC+2 (with break 10:00am–10:30am)

Double Machine Learning: Causal Inference based on ML (Philipp Bach, Martin Spindler)
1:30pm–3:30pm UTC+2

Causality and Deep Learning: Synergies, Challenges & Opportunities for Research (Yoshua Bengio, Nan Rosemary Ke)
4:00pm–6:00pm UTC+2
Auditorium Room 5
Verification Techniques for Probabilistic Systems and Programs (Sebastian Junges, Joost-Pieter Katoen)
9:00am–12:00pm UTC+2 (with break 10:00am–10:30am)

Graphical Models Meet Temporal Point Processes (Debarun Bhattacharjya, Abir De, Tian Gao, Søren Wengel Mogensen)
1:30pm–3:30pm UTC+2

Quantifying Predictive Uncertainty Without Distributional Assumptions Via Conformal Prediction (Rina Foygel Barber)
4:00pm–6:00pm UTC+2
Abstract and slides
Risk-Averse Reinforcement Learning: Algorithms and Meta-Algorithms
Bo Liu, Bo An, Yangyang Xu
Recently, many research works have emerged on single-agent and multi-agent autonomous decision-making. Technology companies are building self-driving vehicles and medical robots, and the development of advanced autonomous decision-making systems is already a billion-dollar industry. These new technologies offer oversight, advanced automation, and autonomous instruments that adapt to changing situations, knowledge, and constraints. However, introducing new technologies into our technical and social infrastructures has profound implications and requires establishing confidence in their behavior to avoid potential risks and harm. The effectiveness and broader acceptability of autonomous decision-making systems therefore rely on their ability to make decisions that are "risk-averse." The ability of artificial intelligence (AI) systems to be averse to risk is a critical requirement in human-robot interaction and essential for realizing the full spectrum of AI's societal and industrial benefits. This line of work has a wide range of practical applications where failures are costly, such as control, robotics, e-commerce, autonomous driving, and medical treatment.
This tutorial introduces state-of-the-art risk-averse methodologies for autonomous systems, centered around the following questions: (1) What exactly is risk, and what are the mathematical formulations of risk-averseness? (2) How do we design risk-averse methods? Do we need to start from scratch, or can simple tweaks turn existing risk-oblivious algorithms into risk-averse ones?
This tutorial will introduce a wide variety of risk-averse techniques and algorithms that have been developed in recent years. Introductory material on reinforcement learning and mathematical programming (optimization) will be included, so no prerequisite knowledge is required of participants. After introducing the basic mathematical framework, we will describe novel optimization methods, including duality-based approaches and block coordinate ascent, as well as information-theoretic lower bounds. We will conclude by highlighting opportunities for future work in this area, including exciting new domains and fundamental theoretical and algorithmic challenges.
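As background for question (1), one widely used mathematical formulation of risk-averseness is the conditional value-at-risk (CVaR): the average return over the worst alpha-fraction of outcomes. The sketch below is illustrative only and not taken from the tutorial; the two Gaussian "policies" and all parameter values are assumptions for the example.

```python
import random

def cvar(returns, alpha=0.1):
    """Conditional value-at-risk: mean of the worst alpha-fraction of returns."""
    k = max(1, int(len(returns) * alpha))
    worst = sorted(returns)[:k]  # the k lowest returns
    return sum(worst) / k

random.seed(0)
# Two hypothetical policies with equal mean return but different tails.
safe = [random.gauss(1.0, 0.1) for _ in range(10000)]
risky = [random.gauss(1.0, 1.0) for _ in range(10000)]

# A risk-averse criterion prefers the policy with the higher CVaR,
# even though both policies have the same expected return.
print(round(cvar(safe), 3), round(cvar(risky), 3))
```

A risk-oblivious (expected-return) criterion is indifferent between these two policies; the CVaR objective separates them by penalizing the heavy lower tail.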
Slides can be found here.
Double Machine Learning: Causal Inference based on ML
Philipp Bach, Martin Spindler
Machine learning is frequently used for predicting outcome variables. But in many cases, we are interested in causal questions: Why do customers churn? What is the effect of a price change on sales? How can we evaluate an A/B test?
This tutorial serves as an introduction to causal machine learning, with a focus on the Double Machine Learning (DML) approach of Chernozhukov et al. (2018). In the first part of the tutorial, we provide a general overview of causal machine learning together with a short introduction to causal inference. We will briefly outline the reasons that prevent an out-of-the-box use of common predictive ML methods for causal analyses, and interactively illustrate the key ingredients of the DML approach on a simulated data example. In the second part, we introduce the DoubleML package for Python (https://docs.doubleml.org). We will demonstrate the use of DoubleML in hands-on examples in the context of program evaluation, A/B tests, and demand estimation. We conclude with a discussion of potential extensions of the DoubleML package and provide space for open discussion and exchange between participants.
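The key ingredients of DML mentioned above can be sketched in a few lines: learn the nuisance regressions E[y|x] and E[d|x] with cross-fitting, then regress outcome residuals on treatment residuals (partialling-out). The pure-Python sketch below uses a linear simulated example of my own devising; real analyses should use the DoubleML package with flexible ML learners rather than this toy linear fit.

```python
import random

random.seed(1)
THETA = 0.5  # true treatment effect (assumed for this simulation)

# Simulate y = THETA*d + g(x) + e with a confounder x affecting both d and y.
n = 4000
x = [random.gauss(0, 1) for _ in range(n)]
d = [2 * xi + random.gauss(0, 1) for xi in x]
y = [THETA * di + 3 * xi + random.gauss(0, 1) for di, xi in zip(d, x)]

def fit_linear(xs, ys):
    """Least-squares intercept and slope for a single regressor."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    b = sum((a - mx) * (c - my) for a, c in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)
    return my - b * mx, b

# Cross-fitting: fit the nuisance regressions on one fold, residualize the other.
half = n // 2
folds = [(range(0, half), range(half, n)), (range(half, n), range(0, half))]
u, w = [], []  # outcome and treatment residuals
for train, test in folds:
    ay, by = fit_linear([x[i] for i in train], [y[i] for i in train])  # E[y|x]
    ad, bd = fit_linear([x[i] for i in train], [d[i] for i in train])  # E[d|x]
    u += [y[i] - (ay + by * x[i]) for i in test]
    w += [d[i] - (ad + bd * x[i]) for i in test]

# Final stage: residual-on-residual (partialling-out) estimate of theta.
theta_hat = sum(a * b for a, b in zip(u, w)) / sum(b * b for b in w)
print(round(theta_hat, 3))  # close to the true THETA
```

Note that a naive regression of y on d alone would be badly biased here (d and y share the confounder x); the residualization step is exactly what removes that bias.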
Slides can be found here.
Causality and Deep Learning: Synergies, Challenges & Opportunities for Research
Yoshua Bengio, Nan Rosemary Ke
Deep neural networks have achieved outstanding success in many tasks, ranging from computer vision to natural language processing and robotics. However, such models still fall short in their ability to understand the world around us and to generalize and adapt to new tasks or environments. One possible solution is models that comprehend causality: such models can reason about the connections between causal variables and the effects of intervening on them. However, existing causal algorithms are typically neither scalable nor applicable to highly nonlinear settings, and they assume that meaningful causal variables are given. Recently, there has been increased interest and research activity at the intersection of causality and deep learning to tackle these challenges, using deep learning for the benefit of causal algorithms and vice versa. This tutorial introduces the fundamental concepts of causality and deep learning for both audiences, provides an overview of recent work, and presents synergies, challenges, and opportunities for research in both fields.
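The distinction between observing and intervening that the abstract alludes to can be made concrete with a toy example. The two-variable linear structural causal model below is entirely hypothetical (not from the tutorial): observing a large effect y is evidence about its cause x, but intervening to set y cuts the incoming edge and tells us nothing about x.

```python
import random

random.seed(0)

def sample(do_y=None):
    """One draw from a toy linear SCM x -> y; `do_y` clamps y (an intervention)."""
    x = random.gauss(0, 1)
    y = 2 * x + random.gauss(0, 0.1) if do_y is None else do_y
    return x, y

n = 20000
# Observationally, seeing a large y makes a large x likely (x causes y)...
obs = [x for x, y in (sample() for _ in range(n)) if y > 2]
# ...but after the intervention do(y = 3), x is unaffected: its mean stays at 0.
intv = [x for x, _ in (sample(do_y=3.0) for _ in range(n))]

print(round(sum(obs) / len(obs), 2), round(sum(intv) / len(intv), 2))
```

This gap between conditioning and the do-operator is exactly what purely associational models miss, and why causal variables and interventions matter for generalization.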
Slides can be found here.
Verification Techniques for Probabilistic Systems and Programs
Sebastian Junges, Joost-Pieter Katoen
The most basic verification question is: can a system reach an error state, and if so, how? Conceptually, system behavior can be described by paths through annotated and potentially infinite directed graphs, so finding a bug corresponds to a planning problem. The verification community traditionally has a particular interest in proving the absence of such unsafe plans and has developed a set of dedicated techniques for that purpose. Verifying systems with probabilistic uncertainty, often represented as Markov decision processes (MDPs), has received plenty of attention, with a wide variety of methods and mature software tool support. MDPs with millions or even billions of states can be verified quite efficiently.
The first part of this tutorial surveys key verification techniques for MDPs. We discuss the verification of a broad range of inference queries that may be imposed on a probabilistic model. Besides presenting the prime algorithmic verification techniques, we will demonstrate how to apply them using the state-of-the-art probabilistic model checker Storm. The second part focuses on two exciting recent research efforts within the verification community: (1) how to reason symbolically about systems with probabilistic uncertainty, and (2) how to reason when the probabilities in an MDP are no longer precisely known. The symbolic reasoning part will be directly linked to probabilistic programs, whereas the uncertain-probability part will be shown to provide new means and new results for controller synthesis in partially observable MDPs.
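The prototypical MDP query, maximal reachability probability, can be sketched with plain value iteration. The four-state MDP below is a made-up example for illustration; tools like Storm solve the same fixed-point equations at scale with far more sophisticated algorithms.

```python
def max_reach_prob(mdp, target, iters=100):
    """Value iteration for the maximal probability of eventually reaching `target`."""
    v = {s: (1.0 if s in target else 0.0) for s in mdp}
    for _ in range(iters):
        # Bellman update: best action maximizes the expected value of successors.
        v = {s: (1.0 if s in target else
                 max((sum(p * v[t] for p, t in succ) for succ in mdp[s].values()),
                     default=0.0))
             for s in mdp}
    return v

# A hypothetical MDP: mdp[state][action] = list of (probability, successor) pairs.
mdp = {
    0: {"a": [(0.5, 1), (0.5, 3)], "b": [(0.9, 1), (0.1, 3)]},
    1: {"a": [(0.7, 2), (0.3, 3)]},
    2: {},  # target, absorbing
    3: {},  # sink, absorbing
}
v = max_reach_prob(mdp, {2})
print(v[0])  # the optimal scheduler picks action "b" in state 0: 0.9 * 0.7
```

The "unsafe plan" of the verification view is precisely a scheduler (a resolution of the action choices) under which the reachability probability exceeds a safety threshold.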
Slides can be found here.
Graphical Models Meet Temporal Point Processes
Debarun Bhattacharjya, Abir De, Tian Gao, and Søren Wengel Mogensen
Datasets involving interactions between asynchronously occurring events have become common in various domains, sparking interest in methods for their modeling and analysis. For instance, one may wish to model alarms in a complex engineered system, where the onset of an alarm in one subsystem affects the rate of occurrence of other alarm types in the future. Other domains where event datasets are often available include neuroscience, social networks, manufacturing processes, retail, healthcare, politics, and finance. Event datasets can be modeled as multivariate temporal point processes (TPPs), which associate every type of event with a history-dependent conditional intensity rate. In this tutorial, we discuss topics at the intersection of graphical models and TPPs. Specifically, we present some underlying theory around graphical event models (GEMs), also known as local independence graphs, as well as practical machine learning approaches for learning GEMs from event datasets. We also describe applications of TPP models involving network data, such as information diffusion and recommendation systems.
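The history-dependent conditional intensity mentioned above can be made concrete with a univariate Hawkes process, simulated by Ogata's thinning algorithm. This sketch and its parameter values are illustrative assumptions, not material from the tutorial; the multivariate, graph-structured case it discusses builds on the same idea.

```python
import math
import random

random.seed(0)

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)):
    every past event t_i temporarily raises the rate of future events."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0):
    """Ogata's thinning algorithm for a univariate Hawkes process."""
    events, t = [], 0.0
    while True:
        # Between events the intensity only decays, so its current value is a bound.
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += random.expovariate(lam_bar)
        if t >= horizon:
            return events
        # Accept the candidate time with probability lambda(t) / lam_bar.
        if random.random() <= intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)

ev = simulate_hawkes()
print(len(ev))  # self-excitation produces bursty clusters of events
```

In a multivariate TPP, each event type gets its own intensity whose history dependence is restricted to its parents in the graph; this is the structure a graphical event model encodes.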
Slides can be found here.
Quantifying Predictive Uncertainty Without Distributional Assumptions Via Conformal Prediction
Rina Foygel Barber
TBA
Slides can be found here.
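While the abstract is still to be announced, the technique named in the title is well established: split conformal prediction turns any point predictor into prediction intervals with finite-sample marginal coverage, with no distributional assumptions beyond exchangeability. The toy regression and all names below are illustrative assumptions, not the tutorial's content.

```python
import math
import random

random.seed(0)

def conformal_halfwidth(residuals, alpha=0.1):
    """Split conformal quantile: the ceil((n+1)(1-alpha))-th smallest |residual|."""
    scores = sorted(abs(r) for r in residuals)
    k = math.ceil((len(scores) + 1) * (1 - alpha))
    return scores[min(k, len(scores)) - 1]

def predict(x):
    # A pre-fitted point predictor (here simply the true regression function).
    return x

def draw():
    x = random.gauss(0, 1)
    return x, predict(x) + random.gauss(0, 0.5)  # y = f(x) + noise

# Calibrate on held-out data, then check marginal coverage on fresh test points.
cal = [y - predict(x) for x, y in (draw() for _ in range(1000))]
q = conformal_halfwidth(cal, alpha=0.1)
test = [abs(y - predict(x)) <= q for x, y in (draw() for _ in range(5000))]
cov = sum(test) / len(test)
print(round(q, 3), round(cov, 3))  # empirical coverage is close to 1 - alpha
```

The coverage guarantee holds for any predictor, however bad: a worse fit simply yields larger residuals and hence wider intervals, never under-coverage on average.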