UAI 2018 - Workshops

August 10th, 2018

Safety, Risk and Uncertainty in RL

Website: https://sites.google.com/view/rl-uai2018/
Organizers: Emma Brunskill (Stanford), Audrey Durand (McGill), Vincent François (McGill), Daniel (Zhaohan) Guo (CMU), Joelle Pineau (McGill), Guillaume Rabusseau (McGill)

In sequential decision-making tasks, maximizing an objective expressed as a cumulative reward function is, in some cases, not the only goal of a reinforcement learning agent. For example, it may also be important for agents to avoid uncertain outcomes in order to protect themselves and their environment. Likewise, in human-agent interaction, behaviour that is expected, and thus unsurprising to a human, is often desirable so that people feel comfortable and safe around the agent. The goal of this workshop is to discuss risk and safety perspectives in reinforcement learning. Topics include, but are not limited to, risk-awareness, safety, and robustness for:

  • exploration
  • model uncertainty (e.g. limited data)
  • environment uncertainty (e.g. noisy feedback)
  • hierarchical learning
  • transfer/meta learning
  • adversarial environments
  • human-machine interactions
  • multi-agent systems
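
As a toy illustration of the risk-awareness theme (the function, parameters, and simulated returns below are assumptions of this sketch, not material from the workshop), conditional value-at-risk (CVaR) scores a policy by the mean of its worst outcomes rather than by its average outcome:

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Conditional value-at-risk: the mean of the worst alpha-fraction
    of sampled returns (lower is riskier)."""
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

# Two hypothetical policies: one modest but stable, one with a higher
# mean but a heavy downside tail.
rng = np.random.default_rng(0)
safe = rng.normal(1.0, 0.5, 1000)
risky = rng.normal(1.2, 3.0, 1000)
```

A risk-neutral agent would prefer the `risky` returns for their higher mean; a CVaR-sensitive agent prefers `safe`, whose worst outcomes are far less severe.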

Causal Inference Workshop

Website: https://sites.google.com/view/causaluai2018/home
Organizers: Bryant Chen (IBM), Panos Toulis (University of Chicago), Alexander Volfovsky (Duke University)

In recent years, causal inference has seen important advances, especially through a dramatic expansion of its theoretical and practical domains. By assuming a central role in decision making, causal inference has attracted interest from computer science, statistics, and machine learning, with each field contributing a fresh and unique perspective. More specifically, computer science has focused on the algorithmic understanding of causality and on general conditions under which causal structures may be inferred. Machine learning methods have focused on high-dimensional models and non-parametric methods, whereas more classical causal inference has been guiding policy in complex domains spanning economics, the social and health sciences, and business. Through such advances, a powerful cross-pollination has emerged: a new set of methodologies promising more robust data analysis than any one field could deliver individually. Examples include recently introduced concepts such as doubly-robust methods, targeted learning, double machine learning, and causal trees.

This workshop aims to facilitate more interaction between researchers in machine learning, statistics, and computer science working on questions of causal inference. In particular, it is an opportunity to bring together highly technical individuals who are strongly motivated by the practical importance and real-world impact of their work. Cultivating such interactions will lead to the development of theory, methodology, and, most importantly, practical tools that better target causal questions across different domains.
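
One of the cross-over methodologies mentioned above, doubly-robust estimation, can be sketched in a few lines (the per-stratum nuisance models, the simulated data, and all names below are illustrative assumptions of this sketch, not material from the workshop):

```python
import numpy as np

def aipw_ate(y, t, x):
    """Doubly-robust (AIPW) estimate of the average treatment effect.
    Here the outcome models m1, m0 and the propensity score e are
    per-stratum means of a single binary covariate x; real analyses
    plug in flexible machine-learning estimators instead."""
    y, t, x = (np.asarray(a, dtype=float) for a in (y, t, x))
    m1, m0, e = (np.empty_like(y) for _ in range(3))
    for v in (0.0, 1.0):
        s = x == v
        m1[s] = y[s & (t == 1)].mean()   # outcome model under treatment
        m0[s] = y[s & (t == 0)].mean()   # outcome model under control
        e[s] = t[s].mean()               # propensity score in stratum
    return float(np.mean(
        m1 - m0 + t * (y - m1) / e - (1 - t) * (y - m0) / (1 - e)
    ))

# Simulated data with a true treatment effect of 2.0, where treatment
# assignment is confounded by x.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
t = rng.binomial(1, np.where(x == 1, 0.7, 0.3))
y = 2.0 * t + 1.5 * x + rng.normal(0.0, 1.0, 5000)
tau = aipw_ate(y, t, x)
```

The appeal of this construction is that the estimator remains consistent if either the outcome models or the propensity score is well specified, which is what "doubly robust" refers to.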

Confirmed Speakers:

  • Judea Pearl (UCLA)
  • Jared Murray/Carlos Carvalho (UT Austin)
  • Frederick Eberhardt (Caltech)
  • Stefan Wager (Stanford)

Uncertainty in Deep Learning

Website: https://sites.google.com/view/udl2018/
Organizers: Andrew Wilson (Cornell), Balaji Lakshminarayanan (Google DeepMind), Dustin Tran (Columbia, Google), Matt Hoffman (Google)

Deep neural networks (DNNs) trained on large datasets can make remarkably accurate predictions. But sometimes they cannot, for example because of limited training data, poor generalization to out-of-distribution inputs, or fundamentally noisy data. In many applications, particularly those where predictions drive decision-making, accurately representing this uncertainty is essential. The aim of this workshop is to foster discussion of, and research into, rigorous treatment of uncertainty in deep learning models.

Topics of interest include but are not limited to:

  • Calibration
  • Separation of forms of uncertainty
  • Stochastic neural networks, such as Bayesian neural networks and ensembles
  • Robustness to distribution shift
  • Inference in deep latent-variable models and generative models
  • Deep kernel learning and deep Gaussian processes
  • Active deep learning
  • Bayesian optimization
  • Applications of uncertainty-aware deep learning
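
As a small illustration of the first of these topics, the expected calibration error (ECE) measures how far a model's stated confidence is from its empirical accuracy; the binary setting, binning scheme, and names below are simplifying assumptions of this sketch:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binary ECE: bin predictions by confidence, then average each
    bin's gap between accuracy and mean confidence, weighted by the
    fraction of samples falling in the bin."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    conf = np.maximum(probs, 1.0 - probs)          # confidence of predicted class
    correct = (probs >= 0.5).astype(int) == labels  # was the prediction right?
    edges = np.linspace(0.5, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last bin is closed on the right so that conf == 1.0 is counted.
        in_bin = (conf >= lo) & (conf <= hi if hi == edges[-1] else conf < hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```

A perfectly calibrated model (e.g. 90% confident and right 90% of the time) scores zero; overconfidence or underconfidence inflates the score.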