Tutorials will be held on August 3rd, 2020 (for some timezones, on the morning of August 4th) via Zoom.

Markus Reichstein (Aug 3rd 16:00-18:45 UTC+2)

Tutorial: Machine learning for Earth System Science - Overview and case studies. Zoom link.

Abstract: This tutorial will give an overview of how machine learning can support Earth system science. First, I will present an overview of the key challenges in this field of science, which addresses the complex interplay between, e.g., the hydrosphere, biosphere, atmosphere and cryosphere, with emphasis on the carbon cycle and climate feedbacks. This will be complemented by four examples: 1) how to infer global carbon fluxes from sparse observations, 2) how to quantify uncertainties therein, including extrapolation, 3) how to model landscapes, i.e. the spatial arrangement of elements, and 4) how to address dynamic effects as expressed in time-series and spatio-temporal data.
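To give a flavor of the first example, inferring a quantity everywhere on a grid from a handful of site observations is an interpolation/regression problem. The toy sketch below (my own illustration, not from the tutorial) uses simple inverse-distance weighting in place of the machine-learning regressions used in practice; the site coordinates and flux values are made up:

```python
import math

# Hypothetical site observations: ((x, y) location, observed flux).
sites = [((0.0, 0.0), 1.0), ((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0)]

def predict(x, y, power=2.0):
    """Inverse-distance-weighted flux estimate at (x, y) from the site data."""
    num = den = 0.0
    for (sx, sy), flux in sites:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return flux                 # exact hit on a site: return its observation
        w = 1.0 / d ** power
        num += w * flux
        den += w
    return num / den

# "Upscale" the sparse observations to a regular 5x5 grid of predictions.
grid = [[predict(i / 4, j / 4) for i in range(5)] for j in range(5)]
```

Real upscaling products replace `predict` with models (e.g. tree ensembles or neural networks) trained on meteorological and remote-sensing covariates, but the sparse-sites-to-full-grid structure is the same.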

About Markus

Markus Reichstein is Director of the Biogeochemical Integration Department at the Max-Planck-Institute for Biogeochemistry. His main research interests revolve around the response and feedback of ecosystems (vegetation and soils) to climatic variability from an Earth system perspective, considering coupled carbon, water and nutrient cycles. Of specific interest is the interplay of climate extremes with ecosystem and societal resilience. These topics are addressed via a model-data integration approach, combining data-driven machine learning with systems modelling of experimental, ground- and satellite-based observations.

Since 2013, Markus Reichstein has been Professor for Global Geoecology at the FSU Jena and founding Director of the Michael-Stifel-Center Jena for Data-driven and Simulation Science. He has served as lead author of the IPCC Special Report on Climate Extremes (SREX), as a member of the German Committee Future Earth on Sustainability Research, and on the Thuringian Panel on Climate. Recent awards include the Piers J. Sellers Mid-Career Award of the American Geophysical Union (2018), an ERC Synergy Grant (2019) and the Gottfried Wilhelm Leibniz Prize (2020).

Doina Precup (Aug 3rd 20:00-22:45 UTC+2)

Tutorial: Off-policy reinforcement learning and its applications. Zoom link.

Abstract: Reinforcement learning agents aim to learn a policy, i.e. a way of behaving, from interaction with an environment. But in practice, there can be limits on the type of interaction allowed. For example, agents may not be able to gather data interactively due to safety constraints, or they may need to leverage batch data that has already been collected. Another important case is that of learning optimal control, in which an agent has to follow an exploratory policy but is interested in finding and evaluating an optimal policy. Finally, we might want to use a single stream of data in order to learn about many different things. Off-policy reinforcement learning represents a very broad class of methods that allow an agent to learn about a desired, target policy, based on data collected using a different way of behaving. In this tutorial, we will review the theoretical ideas underpinning off-policy learning and discuss state-of-the-art algorithms that rely on off-policy learning. We will highlight practical applications of off-policy learning algorithms, and also discuss limitations of current methods and open problems.
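The core off-policy idea, learning about a target policy from data generated by a different behavior policy, can be illustrated in a few lines. The sketch below (my own toy two-armed bandit, not from the tutorial) uses ordinary importance sampling, one of the classical off-policy estimators, to evaluate a target policy from behavior-policy data:

```python
import random

random.seed(0)

# Toy two-armed bandit with deterministic rewards, for illustration only.
REWARDS = {0: 1.0, 1: 0.0}
behavior = {0: 0.5, 1: 0.5}   # policy that collected the data
target = {0: 0.9, 1: 0.1}     # policy we want to evaluate

# Collect a batch of (action, reward) experience under the behavior policy.
batch = []
for _ in range(10_000):
    a = 0 if random.random() < behavior[0] else 1
    batch.append((a, REWARDS[a]))

# Ordinary importance sampling: reweight each sample by target(a) / behavior(a),
# correcting for the mismatch between the two policies.
estimate = sum(target[a] / behavior[a] * r for a, r in batch) / len(batch)

true_value = sum(target[a] * REWARDS[a] for a in target)  # 0.9
print(estimate)  # close to true_value
```

The same reweighting principle underlies many of the batch and off-policy evaluation methods the abstract alludes to, though practical algorithms must also control the variance that importance ratios introduce over long trajectories.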

About Doina

Doina Precup splits her time between McGill University/MILA, where she holds a Canada-CIFAR AI chair, and DeepMind Montreal, where she has led the research team since its formation in October 2017. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control, and other fields. Dr. Precup is also involved in activities supporting the organization of the Montreal, Quebec and Canadian AI ecosystems.

John Duchi (Aug 4th 01:00-03:45 UTC+2)

Tutorial: An Overview of Distributionally Robust Optimization. Zoom link. Slides.

In this tutorial, I will give a three-part overview of distributionally robust optimization, which considers optimization problems with uncertainty and ambiguity in the underlying data-generating mechanism, and attempt to connect it to concerns in machine learning and statistics. The first part will be an overview of the optimization perspective on robustness, which attempts to give certificates of performance and related guarantees. The second will look at work in statistical machine learning, where we will connect robustness concerns with model performance (essentially orthogonal to familiar robust statistics) and make connections to performance on rare sub-populations, uniformity, and protections against poor tail behavior. In the third, I will make some connections to validity in predictions for statistical models, with suggestions for what I believe are important future directions.
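To make the "worst case over an ambiguity set" idea concrete, here is a minimal sketch (my own toy example, not from the tutorial). When the ambiguity set consists of reweightings of the data whose likelihood ratio is bounded by 1/alpha, the worst-case expected loss equals the average of the worst alpha-fraction of losses (the CVaR), which directly penalizes poor tail behavior:

```python
def empirical_risk(losses):
    """Standard empirical risk: the plain average of the losses."""
    return sum(losses) / len(losses)

def dro_risk(losses, alpha=0.2):
    """Worst-case expected loss over reweightings with density ratio <= 1/alpha.

    For an empirical distribution this is the mean of the worst
    alpha-fraction of losses, i.e. the conditional value-at-risk.
    """
    k = max(1, int(alpha * len(losses)))
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / k

# Mostly small losses with a heavy tail, e.g. a rare sub-population.
losses = [0.1] * 8 + [2.0, 3.0]
print(empirical_risk(losses))   # about 0.58: the tail is averaged away
print(dro_risk(losses, 0.2))    # 2.5: mean of the two worst losses
```

Minimizing the DRO objective rather than the empirical risk forces a model to do well on the worst-off fraction of the data, which is the link to rare sub-populations mentioned in the abstract.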

About John

John Duchi is an assistant professor of Statistics and Electrical Engineering and (by courtesy) Computer Science at Stanford University. His work spans statistical learning, optimization, information theory, and computation, with a few driving goals. (1) To discover statistical learning procedures that optimally trade off real-world resources (computation, communication, privacy provided to study participants) while maintaining statistical efficiency. (2) To build efficient large-scale optimization methods that address the spectrum of optimization, machine learning, and data analysis problems we face, allowing us to move beyond bespoke solutions to methods that work robustly. (3) To develop tools to assess and guarantee the validity of, and the confidence we should have in, machine-learned systems.

He has won several awards and fellowships. His paper awards include the SIAM SIGEST award for "an outstanding paper of general interest," best paper awards at the Neural Information Processing Systems conference and the International Conference on Machine Learning, and an INFORMS Applied Probability Society Best Student Paper Award (as advisor). He has also received the Society for Industrial and Applied Mathematics (SIAM) Early Career Prize in Optimization, an Office of Naval Research (ONR) Young Investigator Award, an NSF CAREER award, a Sloan Fellowship in Mathematics, the Okawa Foundation Award, the Association for Computing Machinery (ACM) Doctoral Dissertation Award (honorable mention), and U.C. Berkeley's C.V. Ramamoorthy Distinguished Research Award.