UAI 2018 - Tutorials

Tackling Data Scarcity in Deep Learning

Anima Anandkumar (Caltech and Amazon AI), Zachary Lipton (Carnegie Mellon University)

Modern deep learning relies on large labeled datasets for training. However, such datasets are not readily available in every domain and can be expensive and difficult to collect. We will introduce approaches that integrate data collection and aggregation with model training through active learning, partial feedback, and crowdsourcing methods. Another facet is sample-efficient training through the use of synthetic data, generative models, and semi-supervised learning. A further thread covers tensor-algebraic algorithms that efficiently encode multiple modalities and higher-order dependencies. Together, these techniques can drastically reduce data requirements in a variety of domains. Slides available here (Part 1) and here (Part 2).
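As a concrete illustration of the active-learning component mentioned above, here is a minimal sketch of pool-based uncertainty sampling, assuming a scikit-learn-style classifier with `predict_proba`. The `label_points` oracle (a stand-in for the human or crowdsourced labeling step) is hypothetical, not a function from the tutorial.

```python
# Minimal sketch: pool-based active learning with uncertainty sampling.
# Assumes a scikit-learn-style classifier; `label_points` is a hypothetical
# oracle that returns labels for the queried points (e.g. a crowd worker).
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(model, X_pool, batch_size=10):
    """Pick the pool points whose top predicted class is least confident."""
    probs = model.predict_proba(X_pool)
    confidence = probs.max(axis=1)               # confidence of the top class
    return np.argsort(confidence)[:batch_size]   # least-confident first

def active_learning_loop(X_labeled, y_labeled, X_pool, label_points, rounds=5):
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        query_idx = uncertainty_sampling(model, X_pool)
        y_new = label_points(X_pool[query_idx])  # ask the oracle for labels
        X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
        y_labeled = np.concatenate([y_labeled, y_new])
        X_pool = np.delete(X_pool, query_idx, axis=0)
    return model
```

The key design choice is the query rule: replacing `uncertainty_sampling` with a diversity- or disagreement-based criterion yields other members of the same family.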

Recent Progress in the Theory of Deep Learning

Tengyu Ma (Facebook and Stanford University)

A deeper understanding of the principles of deep learning can consolidate and extend its already spectacular empirical success. In the first part of the tutorial, we will discuss core machine learning issues, such as optimization, generalization, and expressivity, and their rich interactions, in the context of supervised learning with (deep) non-linear models. The second part will cover recent theoretical progress on other topics such as representation learning (embeddings), generative adversarial networks, and, if time allows, deep reinforcement learning.
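To make the notion of generalization concrete: a central quantity in this line of theory is the gap between population risk and empirical risk. The formulation below is the standard textbook definition, not notation specific to this tutorial.

```latex
% Generalization gap of a hypothesis h trained on n i.i.d. samples from D:
% population risk minus empirical (training) risk.
\[
\mathrm{gap}(h)
  = \underbrace{\mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[\ell(h(x),y)\right]}_{\text{population risk } L(h)}
  \;-\;
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell(h(x_i),y_i)}_{\text{empirical risk } \hat{L}(h)}
\]
```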

Bayesian Approaches for Blackbox Optimization

Matt Hoffman (DeepMind)

This tutorial will give a high-level overview of Bayesian methods for black-box, global optimization. Bayesian optimization is a popular and successful family of techniques for optimizing expensive black-box functions. These techniques address the problem of finding the maximizer of a nonlinear function that is typically non-convex and multi-modal, and whose derivatives are often unavailable. Further, the objective can often only be evaluated via noisy observations, and each evaluation is often computationally or economically expensive. To address these challenges, Bayesian optimization devotes additional effort to modelling the unknown function during optimization in order to minimize the number of evaluations needed. Designing a Bayesian optimization method also involves a great number of choices that are often left implicit in the overall algorithm design. This talk will make these choices explicit, covering the selection of the acquisition function, kernel, and hyper-priors, as well as less-discussed components such as the recommendation and initialization strategies. Finally, the talk will give an overview of modern research in Bayesian optimization, including information-based strategies, automatic kernel design, and approaches to combat myopia, among other cutting-edge techniques and applications.
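To ground the design choices the talk names (kernel, acquisition function, initialization, recommendation), here is a minimal one-dimensional sketch with a Gaussian-process surrogate and the expected-improvement acquisition function. It assumes near-noiseless observations, an RBF kernel with a fixed lengthscale, and a dense candidate grid in place of a proper acquisition optimizer; all function names are illustrative.

```python
# Minimal sketch: Bayesian optimization with a GP surrogate and the
# expected-improvement (EI) acquisition function, for maximization in 1-D.
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.2):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, X_star, jitter=1e-6):
    """GP posterior mean and standard deviation at candidate points X_star."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))  # jitter for stability
    K_s = rbf_kernel(X, X_star)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y
    var = 1.0 - np.sum(K_s * (K_inv @ K_s), axis=0)  # prior variance is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best_y):
    """E[max(f(x) - best_y, 0)] under the Gaussian posterior."""
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=n_init)       # initialization: random design
    y = np.array([f(x) for x in X])
    candidates = np.linspace(*bounds, 500)      # grid stands in for an optimizer
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, candidates)
        x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()             # recommendation: best observed

# Example: maximize a multi-modal 1-D function.
x_best, y_best = bayes_opt(lambda x: np.sin(6 * x) * x)
```

Each named component corresponds to one of the choices above: swapping `expected_improvement` for an information-based criterion, or `rbf_kernel` for an automatically designed kernel, changes the algorithm's behavior while keeping the same loop.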

Machine Reading

Sebastian Riedel (UCL), Johannes Welbl (UCL), Dirk Weissenborn (German Research Center for Artificial Intelligence)

What does it take for a machine to comprehend natural language text? Contemporary models are approaching human-level Reading Comprehension (RC) ability on several standard tasks. But how do RC methods work, and how well do they perform in practice? In this tutorial, we introduce classical RC tasks, bring you up to date with current methods, and show where their limitations and shortcomings lie. We discuss current trends, showcase open problems, and offer an overview of the various datasets you can work with.
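For readers new to the task format: a span-extraction RC example pairs a context passage with a question whose answer is a span of that passage. The sketch below shows such an example (the passage and question are invented for illustration) together with the standard token-level F1 metric used to score predictions, simplified here to whitespace tokenization (official evaluation scripts also normalize case and punctuation).

```python
# Minimal sketch: a SQuAD-style span-extraction example and the token-level
# F1 metric used to score a predicted answer against the gold answer.
from collections import Counter

example = {
    "context": "The Eiffel Tower was completed in 1889 in Paris.",
    "question": "When was the Eiffel Tower completed?",
    "answer": "1889",
}

def token_f1(prediction, gold):
    """Harmonic mean of token precision and recall between two answer strings."""
    pred_toks, gold_toks = prediction.split(), gold.split()
    common = Counter(pred_toks) & Counter(gold_toks)  # per-token min counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(token_f1("in 1889", example["answer"]))  # 0.666...: partial credit
```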
