UAI 2018 - Invited Speakers

Stuart Russell

UC Berkeley

Uncertainty in objectives

It is reasonable to expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? While some in the mainstream AI community dismiss the issue, I will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. I will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help. This uncertainty causes machine and human behavior to be inextricably (and game-theoretically) linked, while opening up many new avenues for research.

Biographical details

Stuart Russell is a Professor of Computer Science at Berkeley and an Honorary Fellow of Wadham College, Oxford. He is a fellow of AAAI, ACM, and AAAS, winner of the IJCAI Computers and Thought Award, holder of the Chaire Blaise Pascal in Paris from 2012 to 2014, and author (with Peter Norvig) of "Artificial Intelligence: A Modern Approach", the standard text in the field. His current research interests include first-order probabilistic languages, global seismic monitoring for the Comprehensive Nuclear-Test-Ban Treaty, and the long-term implications of AI for humanity.

Michael C. Frank

Stanford University

Bigger data about smaller people: Studying children’s language learning at scale

How do children acquire language? Decades of work have provided a roadmap of principles and mechanisms for early language learning as attested by small-scale laboratory tasks. But there is not yet a convincing empirical synthesis of this work that addresses both the systematicity and ubiquity of language learning and the variability of learning trajectories across children. In this talk I will describe some initial steps towards such a synthesis. This research integrates high-density data from individual children learning a single language and summary data from tens of thousands of children learning more than a dozen languages. Taken together, the data support a hybrid picture in which children slowly accumulate knowledge in rich social contexts but also show evidence for surprisingly fast grammatical abstractions. Further, this work illustrates our approach to creating interfaces that allow for easy interactive and programmatic access to large developmental datasets.

Biographical details

Michael C. Frank is an Associate Professor of Psychology at Stanford University. He earned his BS from Stanford University in Symbolic Systems in 2005 and his PhD from MIT in Brain and Cognitive Sciences in 2010. He studies both adults' language use and children's language learning, and how both of these interact with social cognition. His work uses behavioral experiments, computational tools, and novel measurement methods including large-scale web-based studies, eye-tracking, and head-mounted cameras. He has been recognized as a "rising star" by the Association for Psychological Science. His dissertation received the Glushko Prize from the Cognitive Science Society, and he is a recipient of the FABBS Early Career Impact award and the Klaus W. Jacobs Advanced Research Fellowship. He has served as Associate Editor for the journal Cognition, as a member and chair of the Governing Board of the Cognitive Science Society, and as a founding Executive Committee member of the Society for the Improvement of Psychological Science.

Joelle Pineau

McGill University and Facebook

Reproducibility, Reusability, and Robustness in Deep Reinforcement Learning

In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning. However, reproducing results for state-of-the-art deep RL methods is seldom straightforward. The high variance of some methods can make learning particularly difficult when environments or rewards are strongly stochastic. Furthermore, results can be brittle to even minor perturbations in the domain or experimental procedure. In this talk, I will discuss challenges that arise in experimental techniques and reporting procedures in deep RL, and will suggest methods and guidelines to make future results more reproducible, reusable and robust. I will also report on findings from the ICLR 2018 reproducibility challenge.

Biographical details

Joelle Pineau is an Associate Professor and William Dawson Scholar at McGill University where she co-directs the Reasoning and Learning Lab. She also leads the Facebook AI Research lab in Montreal, Canada. She holds a BASc in Engineering from the University of Waterloo, and an MSc and PhD in Robotics from Carnegie Mellon University. Dr. Pineau's research focuses on developing new models and algorithms for planning and learning in complex partially-observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a recipient of NSERC's E.W.R. Steacie Memorial Fellowship (2018), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Raquel Urtasun

University of Toronto and Uber

Title TBA
Biographical details

Raquel Urtasun is an Associate Professor at the University of Toronto. She holds a Canada Research Chair in Machine Learning and Computer Vision in the Department of Computer Science. Urtasun uses artificial intelligence, particularly deep learning, to make vehicles and other machines perceive the world more accurately and efficiently. In May 2017, Uber hired her to lead a Toronto-based research team for the company's self-driving car program.