UAI 2023 - Tutorials


Tutorials will be held on July 31st, before the main conference.

Towards Causal Foundations of Safe AI

James Fox, Tom Everitt

With great power comes great responsibility. Artificial intelligence (AI) is rapidly gaining new capabilities, and is increasingly trusted to make decisions impacting humans in significant ways (from self-driving cars to stock trading to hiring decisions). To ensure that AI behaves in ethical and robustly beneficial ways, we must identify potential pitfalls and develop effective mitigation strategies. In this tutorial, we will explain how (Pearlian) causality offers a useful formal framework for reasoning about AI risk and describe recent work on this topic. In particular, we’ll cover: causal models of agents and how to discover them; causal definitions of fairness, intent, harm, and incentives; and risks from AI such as misgeneralization and preference manipulation, as well as how mitigation techniques including impact measures, interpretability, and path-specific objectives can help address them. [video]

Online Optimization meets Federated Learning

Aadirupa Saha, Kumar Kshitij Patel

In this tutorial, we aim to cover state-of-the-art theoretical results in (1) online and bandit convex optimization, (2) federated/distributed optimization, and (3) emerging results at their intersection. The first part of the tutorial focuses on the online optimization setting (especially the adversarial model), the notion of regret, and different feedback models (first-order, zeroth-order, comparisons, etc.), and analyzes the performance guarantees of algorithms based on online gradient descent. The second part details the distributed/federated stochastic optimization model, discussing data heterogeneity assumptions, local-update algorithms, and min-max optimal algorithms; we will also underline the lack of results beyond the stochastic setting, i.e., in the presence of adaptive adversaries. The third and final part describes the emerging and very practical direction of distributed online optimization. Here we introduce a distributed notion of regret, followed by recent developments on first- and zeroth-order feedback for this problem. We will conclude with many open questions, especially for distributed online optimization, and underline the various applications this framework captures. [video]
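For readers who want a concrete feel for the first part, below is a minimal Python sketch of projected online gradient descent under first-order feedback, together with the regret it is measured by. The quadratic losses, the 1/sqrt(t) step sizes, and all helper names are our own illustrative assumptions, not material from the tutorial.

    import numpy as np

    def online_gradient_descent(grad_fns, dim, radius=1.0):
        """Projected online gradient descent (OGD).

        At each round t the learner plays x_t, then observes the gradient of an
        adversarially chosen convex loss at x_t (first-order feedback). With
        step sizes eta_t ~ 1/sqrt(t), OGD guarantees O(sqrt(T)) regret.
        """
        x = np.zeros(dim)
        plays = []
        for t, grad_fn in enumerate(grad_fns, start=1):
            plays.append(x.copy())
            g = grad_fn(x)                  # first-order feedback
            x = x - g / np.sqrt(t)          # gradient step with eta_t = 1/sqrt(t)
            norm = np.linalg.norm(x)
            if norm > radius:               # project back onto the feasible ball
                x = x * (radius / norm)
        return plays

    # Toy adversary: quadratic losses f_t(x) = ||x - z_t||^2 with shifting optima.
    rng = np.random.default_rng(0)
    targets = [rng.uniform(-1, 1, size=2) for _ in range(100)]
    grads = [lambda x, z=z: 2 * (x - z) for z in targets]
    plays = online_gradient_descent(grads, dim=2)

    # Regret = cumulative loss minus the loss of the best fixed point in
    # hindsight (for these losses, the mean of the targets).
    best = np.mean(targets, axis=0)
    regret = sum(np.sum((x - z) ** 2) - np.sum((best - z) ** 2)
                 for x, z in zip(plays, targets))
    print(f"regret over T=100 rounds: {regret:.3f}")

The distributed setting covered in the third part generalizes exactly this quantity: each of several learners plays its own sequence, and regret is measured against the best fixed comparator for the sum of all their losses.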

Structure Learning using Benchpress

Felix L. Rios, Giusi Moffa, Jack Kuipers

Describing the relationships between the variables in a study domain and modeling the data-generating mechanism is a fundamental problem in many empirical sciences. Probabilistic graphical models are one common approach to tackling it. Learning the graphical structure of such models (sometimes called causal discovery) is computationally challenging and an active area of current research, with a plethora of algorithms being developed. To facilitate access to these methods, we present Benchpress, a scalable and platform-independent Snakemake workflow to run, develop, and create reproducible benchmarks of structure learning algorithms for probabilistic graphical models. Benchpress is interfaced via a simple JSON file, which makes it accessible to all users, while the code is designed in a fully modular fashion to enable researchers to contribute additional methodologies. Benchpress provides an interface to a large number of state-of-the-art algorithms from libraries such as BDgraph, BiDAG, bnlearn, gCastle, GOBNILP, pcalg, scikit-learn, and TETRAD, as well as a variety of methods for data-generating models and performance evaluation. Alongside user-defined models and randomly generated datasets, the workflow also includes a number of standard datasets and graphical models from the literature. In this tutorial, attendees will be shown how to use Benchpress in practice. [video]
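To convey the flavour of the JSON-driven workflow, the Python sketch below writes a small configuration and hands it to Snakemake. The key names and module identifiers are hypothetical placeholders chosen to show the shape of a benchmark (data model, algorithms, evaluation); they are not Benchpress's actual schema, so consult the Benchpress documentation for the real field names.

    import json
    import subprocess

    # Hypothetical configuration: the keys below are placeholders conveying the
    # shape of a benchmark, NOT the actual Benchpress JSON schema.
    config = {
        "benchmark_id": "demo_run",
        "data": {"model": "random_dag", "nodes": 20, "samples": 500, "seed": 1},
        "algorithms": [
            {"id": "pc", "library": "pcalg", "alpha": 0.05},
            {"id": "hc", "library": "bnlearn", "score": "bic"},
        ],
        "evaluation": ["roc", "shd"],
    }

    with open("config.json", "w") as f:
        json.dump(config, f, indent=2)

    # Benchpress is a Snakemake workflow, so a run is launched through Snakemake
    # (the exact flags depend on your installation; see the Benchpress docs).
    subprocess.run(["snakemake", "--configfile", "config.json", "--cores", "all"])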

Data Compression with Machine Learning

Karen Ullrich, Yibo Yang, Stephan Mandt

The efficient communication of information is an application with enormous societal and environmental impact, and stands to benefit from the machine learning revolution seen in other fields. Through this tutorial, we hope to disseminate the ideas of information theory and compression to a broad audience, overview the core methodologies in learning-based compression (i.e., neural compression), and present the relevant technical challenges and open problems defining a new frontier for probabilistic machine learning. [video]
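As a tiny taste of the information-theoretic starting point, the snippet below computes the Shannon entropy of a toy source: the lower bound, in bits per symbol, on the expected code length of any lossless compressor, and the quantity neural compressors effectively learn to approach. The example distribution is our own illustration, not from the tutorial.

    import numpy as np

    # Toy source over four symbols with a skewed distribution.
    p = np.array([0.5, 0.25, 0.15, 0.10])

    # Shannon entropy H(p) = -sum_i p_i * log2(p_i): the minimum expected number
    # of bits per symbol achievable by any lossless code for this source.
    entropy = -np.sum(p * np.log2(p))

    # A fixed-length code spends log2(4) = 2 bits per symbol regardless of p;
    # the gap between the two is what a good (learned) code can reclaim.
    print(f"entropy:      {entropy:.3f} bits/symbol")
    print(f"fixed-length: {np.log2(len(p)):.3f} bits/symbol")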

Causal Representation Learning

Dhanya Sridhar, Jason Hartford

Causal Representation Learning (CRL) is an emerging area of research that seeks to address an important gap in the field of causality: how can we learn causal models and mechanisms without direct measurements of all the variables? To this end, CRL combines recent advances in machine learning with new assumptions that guarantee that causal variables can be identified, up to some indeterminacies, from low-level observations such as text, images, or biological measurements. In this tutorial, we will review the broad classes of assumptions driving CRL. We strive to build strong intuitions about the core technical problems underpinning CRL and to draw connections across different results. We will conclude the tutorial by discussing open questions for CRL, motivated by the kinds of methods we would need to extend causal models to scientific discovery. [video]
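To fix notation, here is a common way to state the CRL problem; this is our hedged paraphrase of the standard setup in the literature, not a formulation taken from the abstract.

    % Latent causal variables z generate the low-level observations x:
    \begin{align*}
      z = (z_1,\dots,z_n) \sim p_Z,
      \qquad z_i := f_i\bigl(\mathrm{pa}(z_i),\, \varepsilon_i\bigr),
      \qquad x = g(z),
    \end{align*}
    % where g is an unknown (typically injective) mixing function, e.g. a
    % renderer producing images. Given samples of x alone, plus assumptions
    % such as interventions, multiple views, or sparsity, the goal is to
    % recover z and its causal graph up to indeterminacies, e.g.
    %   \hat{z} = P\, h(z)
    % for a permutation P and an invertible element-wise map h.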
