UAI 2019 - Invited Speakers
Anytime Probabilistic Reasoning
Reasoning over probabilistic graphical models typically involves answering inference queries, such as computing the most likely configuration (maximum a posteriori or MAP) or evaluating the marginals or the normalizing constant of a distribution (the partition function). A task called marginal MAP generalizes these two by maximizing over a subset of variables while marginalizing over the rest and is also highly instrumental for sequential decision making under uncertainty.
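The three query types can be illustrated by brute force on a tiny discrete model. The sketch below is purely illustrative (the factor graph and its values are invented, not taken from the talk):

```python
import itertools

# Toy joint distribution over three binary variables (X, Y, Z),
# given as a product of two invented pairwise factors.
def score(x, y, z):
    phi_xy = [[2.0, 1.0], [1.0, 3.0]]
    phi_yz = [[1.0, 2.0], [4.0, 1.0]]
    return phi_xy[x][y] * phi_yz[y][z]

states = list(itertools.product([0, 1], repeat=3))

# Partition function: sum of unnormalized scores over all configurations.
Z = sum(score(*s) for s in states)

# MAP: the single most likely full configuration.
map_state = max(states, key=lambda s: score(*s))

# Marginal of X: sum out Y and Z, then normalize by Z.
marg_x = [sum(score(x, y, z) for y in (0, 1) for z in (0, 1)) / Z
          for x in (0, 1)]

# Marginal MAP: maximize over X after summing out Y and Z.
mmap_x = max((0, 1), key=lambda x: marg_x[x])
```

Brute-force enumeration is exponential in the number of variables, which is exactly why the approximate schemes discussed in the talk are needed at scale.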
All such queries are known to be intractable in general, leading to the development of many approximate schemes, whose major categories are variational methods, search algorithms, and Monte Carlo sampling. The key is to leverage ideas and techniques from these three inference paradigms and to integrate them into hybrid solutions that inherit their respective strengths.
In this talk I will review the main algorithmic principles for probabilistic reasoning. The emerging solvers allow flexible trade-offs of memory for time and time for accuracy, and aim for anytime behavior that generates not only an approximation that improves with time, but also confidence bounds that become tighter as more time is allowed. Our hybrid schemes have produced solvers that won competitions; some are integrated into probabilistic languages (Figaro and Markov Logic) and into software applications such as Superlink-Online for linkage analysis.
Rina Dechter’s research centers on computational aspects of automated reasoning and knowledge representation, including search, constraint processing, and probabilistic reasoning. She is a Chancellor's Professor of Computer Science at the University of California, Irvine. She holds a Ph.D. from UCLA, an M.S. degree in applied mathematics from the Weizmann Institute, and a B.S. in mathematics and statistics from the Hebrew University of Jerusalem. She is the author of Constraint Processing, published by Morgan Kaufmann (2003), and of Reasoning with Probabilistic and Deterministic Graphical Models: Exact Algorithms, published by Morgan and Claypool Publishers (2013, second ed. 2019). She has co-authored close to 200 research papers and has served on the editorial boards of Artificial Intelligence, the Constraint Journal, the Journal of Artificial Intelligence Research (JAIR), and the Journal of Machine Learning Research (JMLR). She has been a Fellow of the American Association of Artificial Intelligence since 1994, was a Radcliffe Fellow during 2005–2006, received the 2007 Association for Constraint Programming (ACP) Research Excellence Award, and became an ACM Fellow in 2013. She served as a Co-Editor-in-Chief of Artificial Intelligence from 2011 to 2018 and is the conference chair-elect of IJCAI-2022.
Towards Efficient Effective Reinforcement Learning Algorithms That Interact With People
There is increasing excitement about reinforcement learning -- a subarea of machine learning for enabling an agent to learn to make good decisions. Yet numerous questions and challenges remain for reinforcement learning to help support progress in applications that involve interacting with people, like education, consumer marketing and healthcare. I will discuss our work on some of the technical challenges that arise in this pursuit, including minimax PAC and regret bounds for reinforcement learning in tabular environments, and counterfactual reasoning from prior data.
Emma Brunskill is an assistant professor in the Computer Science Department at Stanford University, where she leads the AI for Human Impact (@ai4hi) group. Her work focuses on reinforcement learning in high-stakes scenarios -- how can an agent learn from experience to make good decisions when experience is costly or risky, such as in educational software, healthcare decision making, robotics, or people-facing applications. She was previously on the faculty at Carnegie Mellon University. She is the recipient of multiple early-career faculty awards (National Science Foundation, Office of Naval Research, Microsoft Research), and her group has received several best research paper nominations (CHI, EDM x2) and awards (UAI, RLDM).
A Probabilistic Perspective on Meta and Reinforcement Learning
Probabilistic modelling provides a rich language to express many important concepts in learning and inference, including prior knowledge, structure, hierarchy, and uncertainty. In this talk, I will describe how the perspective afforded by probabilistic modelling, as well as the tools developed by the community over the years, can be very useful in tackling interesting challenges in data-efficiency in meta and reinforcement learning. I will illustrate these with recent examples from my research, including the idea of meta learning as probabilistic inference, data-driven surrogate models in sequential decision making, and the use of hierarchical Bayes for transfer in multi-task reinforcement learning.
This is joint work with other researchers at DeepMind, including Ali Eslami, Alex Galashov, Marta Garnelo, Leonard Hasenclever, Jan Humplik, Hyunjik Kim, Hyeonwoo Noh, Pedro Ortega, Razvan Pascanu, Danilo Rezende, Jonathan Schwarz, Dhruva Tirumala and Jane Wang, and particularly with Nicolas Heess.
Yee Whye Teh is a Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford and a Research Scientist at DeepMind working on AI research. He obtained his Ph.D. at the University of Toronto (under Prof. Geoffrey E. Hinton), and did postdoctoral work at the University of California at Berkeley (under Prof. Michael I. Jordan) and the National University of Singapore (as Lee Kuan Yew Postdoctoral Fellow). Before Oxford and DeepMind, he was a Lecturer and then a Reader at the Gatsby Computational Neuroscience Unit, UCL. He was programme co-chair (with Prof. Michael Titterington) of the International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, programme co-chair (with Prof. Doina Precup) of the International Conference on Machine Learning (ICML) 2017, and is or has been an associate editor for Bayesian Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, Machine Learning Journal, Statistical Sciences, Journal of the Royal Statistical Society Series B, and Journal of Machine Learning Research. He has been an area chair for NIPS, ICML and AISTATS on multiple occasions. His research interests span machine learning and computational statistics, including probabilistic methods, Bayesian nonparametrics, and deep learning. He develops novel models as well as efficient algorithms for inference and learning.
Negative Dependence and Machine Learning
Probability distributions with strong notions of negative dependence arise in various forms in machine learning. Examples include diversity-inducing probabilistic models, interpretability, exploration and active learning, and randomized algorithms. Some of the best known distributions with negative dependence are determinantal point processes, but they are not the only ones. Although negative dependence is, perhaps surprisingly, more delicate than its positive counterpart, it enjoys rich mathematical connections and properties that offer a promising toolbox for machine learning.
In this talk, I will survey parts of this rich mathematical toolbox and examples of its implications. We will see important recent notions of negative dependence, and connections to the geometry of polynomials, log-concavity, and submodularity. These have numerous algorithmic implications for machine learning, e.g., efficient sampling and approximation algorithms. Together, these results enable a variety of applications, of which the talk will summarize a selection.
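As a minimal illustration of a determinantal point process (an L-ensemble), the sketch below uses an invented 2x2 similarity kernel to compute subset probabilities P(S) proportional to det(L_S), and exhibits the characteristic negative dependence: similar items rarely co-occur.

```python
# Tiny L-ensemble DPP over the ground set {0, 1}. The kernel L is an
# assumption for this sketch: diagonal entries encode item quality,
# off-diagonal entries encode similarity between items.
L = [[1.0, 0.9],
     [0.9, 1.0]]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Unnormalized probability of each subset S is det(L_S);
# the determinant of the empty submatrix is 1 by convention.
weights = {
    (): 1.0,
    (0,): L[0][0],
    (1,): L[1][1],
    (0, 1): det2(L),
}
Z = sum(weights.values())  # for an L-ensemble this equals det(L + I)
probs = {S: w / Z for S, w in weights.items()}

# Negative dependence: because items 0 and 1 are highly similar, the
# pair {0, 1} is selected less often than independence would predict:
# P({0, 1}) * P({}) < P({0}) * P({1}).
```

The highly similar pair gets weight det(L_S) = 1 - 0.9^2 = 0.19, far below the product of the singleton weights, which is the diversity-inducing behavior the abstract refers to.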
Stefanie Jegelka is an X-Window Consortium Career Development Associate Professor in the Department of EECS at MIT. She is a member of the Computer Science and AI Lab (CSAIL), and an affiliate of IDSS and ORC at MIT. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award and a Best Paper Award at ICML. She has been an area chair for NeurIPS, ICML, UAI and AISTATS, and given multiple tutorials. Her research interests span multiple topics around the theory and practice of algorithmic machine learning, including discrete and continuous optimization, discrete probability, and modeling, theory and algorithms for learning with structured data.
Safety Challenges with Black-Box Predictors and Novel Learning Approaches for Failure Proofing
Abstract: TBD
Suchi Saria is the John C. Malone Assistant Professor at Johns Hopkins University where she directs the Machine Learning and Healthcare Lab. Her work with the lab enables new classes of diagnostic and treatment planning tools for healthcare—tools that use statistical machine learning techniques to tease out subtle information from “messy” observational datasets, and provide reliable inferences for individualizing care decisions.
Saria’s methodological work spans Bayesian and probabilistic approaches for addressing challenges associated with inference and prediction in complex, real-world temporal systems, with a focus in reliable ML, methods for counterfactual reasoning, and Bayesian nonparametrics for tackling sample heterogeneity and time-series data.
Her work has received recognition in numerous forms, including best paper awards at machine learning, informatics, and medical venues, a Rambus Fellowship (2004-2010), an NSF Computing Innovation Fellowship (2011), selection by IEEE Intelligent Systems for Artificial Intelligence’s “10 to Watch” (2015), the DARPA Young Faculty Award (2016), MIT Technology Review’s ‘35 Innovators under 35’ (2017), the Sloan Research Fellowship in CS (2018), the World Economic Forum Young Global Leader (2018), and the National Academy of Medicine (NAM) Emerging Leader in Health and Medicine (2018). In 2017, her work was among four research contributions presented by Dr. France Córdova, Director of the National Science Foundation, to Congress’s Commerce, Justice, Science Appropriations Committee. Saria received her PhD from Stanford University working with Prof. Daphne Koller.