UAI 2017 - Invited Speakers

Amir Globerson

School of Computer Science,
Tel Aviv University and Google.

Learning and inference with expectations

Probabilistic models can be characterized using expectations of functions of random variables. This can in turn be used to devise methods for learning representations, performing probabilistic inference, and discriminative learning. The talk will survey such methods, highlighting both theoretical results and their practical implications.

Biographical details

Prof. Globerson received his BSc in computer science and physics in 1997 from the Hebrew University, and his PhD in computational neuroscience from the Hebrew University in 2006. After his PhD, he was a postdoctoral fellow at the University of Toronto and a Rothschild postdoctoral fellow at MIT. He joined the Hebrew University School of Computer Science in 2008, and moved to the Tel Aviv University School of Computer Science in 2015. Prof. Globerson’s research interests include machine learning, probabilistic inference, convex optimization, neural computation and natural language processing. He is an associate editor for the Journal of Machine Learning Research, and the Associate Editor in Chief for the IEEE Transactions on Pattern Analysis and Machine Intelligence. His work has received several prizes, including five paper awards (two at NIPS, two at UAI, and one at ICML), as well as a best-paper runner-up at ICML. His research has been supported by several grants and awards from ISF, BSF, GIF, Intel, HP, and Google. In 2015 he was a visiting scientist at Google Mountain View, and since 2017 he has also been a Research Scientist at Google in Tel Aviv.

Katherine A. Heller

Department of Statistical Science,
Duke University.

Machine Learning for Healthcare Data

We will discuss multiple ways in which healthcare data is acquired and machine learning methods are currently being introduced into clinical settings. This will include: 1) modeling disease trends and other predictions, including joint predictions of multiple conditions, from electronic health record (EHR) data using Gaussian processes; 2) predicting surgical complications, and transfer learning methods for combining databases; 3) using mobile apps and integrated sensors to improve the granularity of recorded health data for chronic conditions; and 4) combining mobile app and social network information in order to predict the spread of contagious disease. Current work in these areas will be presented and the future of machine learning contributions to the field will be discussed.

Biographical details

Katherine is an Assistant Professor at Duke University, in the Department of Statistical Science and at the Center for Cognitive Neuroscience. Prior to joining Duke, she was an NSF Postdoctoral Fellow in the Computational Cognitive Science group at MIT, and an EPSRC Postdoctoral Fellow at the University of Cambridge. Her Ph.D. is from the Gatsby Unit, where her advisor was Zoubin Ghahramani. Katherine's research interests lie in the fields of machine learning and Bayesian statistics. Specifically, she develops new methods and models to discover latent structure in data, including cluster structure, using Bayesian nonparametrics, hierarchical Bayes, techniques for Bayesian model comparison, and other Bayesian statistical methods. She applies these methods to problems in the brain and cognitive sciences, where she strives to model human behavior, including human categorization and human social interactions.

Leslie Pack Kaelbling

Department of Electrical Engineering and Computer Science,
Massachusetts Institute of Technology.

Intelligent Robots in an Uncertain World

The fields of AI and robotics have made great improvements in many individual subfields, including in motion planning, symbolic planning, reasoning under uncertainty, perception, and learning. Our goal is to develop an integrated approach to solving very large problems that are hopelessly intractable to solve optimally. We make a number of approximations during planning, including serializing subtasks, factoring distributions, and determinizing stochastic dynamics, but regain robustness and effectiveness through a continuous state-estimation and replanning process. I will describe our application of these ideas to an end-to-end mobile manipulation system, as well as ideas for current and future work on improving correctness and efficiency through learning.

Biographical details

Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founding editor-in-chief of the Journal of Machine Learning Research. Her goal is to make intelligent robots, although she is not, herself, a robot.

Christopher Ré

Department of Computer Science,
Stanford University.

Snorkel: Beyond hand-labeled data

This talk describes Snorkel, whose goal is to make routine machine learning tasks dramatically easier. Snorkel focuses on a key bottleneck in the development of machine learning systems: the lack of large training datasets. In Snorkel, a user implicitly defines large training sets by writing simple programs that label data, instead of tediously hand-labeling individual data items. Snorkel then combines these programs using a generative model to produce probabilistic training labels. This talk will cover the underlying theory, including methods to learn both the parameters and structure of generative models without labeled data, and new convergence guarantees for Gibbs sampling and accelerated non-convex optimization. Additionally, we'll describe our preliminary evidence that the Snorkel approach may allow a broader set of users to train machine learning models more easily than previous approaches. Snorkel and its predecessor DeepDive are in daily use by scientists in areas including genomics and drug repurposing, by a number of companies involved in various forms of search, and by law enforcement in the fight against human trafficking. DeepDive and Snorkel are open source on GitHub and available from DeepDive.Stanford.Edu, and technical blog posts are also available online.
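The core idea — replacing hand labels with programs whose noisy votes are combined into probabilistic labels — can be illustrated with a minimal sketch. The labeling functions and examples below are hypothetical, and the combination step is a simple vote fraction for readability; actual Snorkel fits a generative model over the labeling functions' outputs to estimate their accuracies and correlations.

```python
# Hypothetical sketch of Snorkel-style labeling functions for a toy
# relation-extraction task ("does this sentence assert a causal link?").
# Real Snorkel replaces the majority vote below with a learned
# generative model over labeling-function outputs.

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def lf_keyword(sentence):
    # Vote POSITIVE when an explicit causal trigger word appears.
    return POSITIVE if "causes" in sentence else ABSTAIN

def lf_negation(sentence):
    # Vote NEGATIVE when the causal claim is explicitly negated.
    return NEGATIVE if "does not cause" in sentence else ABSTAIN

def lf_too_short(sentence):
    # Very short sentences rarely express a full relation.
    return NEGATIVE if len(sentence.split()) < 4 else ABSTAIN

LABELING_FUNCTIONS = [lf_keyword, lf_negation, lf_too_short]

def probabilistic_label(sentence):
    """Fraction of non-abstaining votes that are POSITIVE.

    Stands in for the marginal P(y = POSITIVE) that Snorkel's
    generative model would produce; 0.5 means no evidence.
    """
    votes = [lf(sentence) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return 0.5
    return votes.count(POSITIVE) / len(votes)

examples = [
    "smoking causes lung cancer",
    "this drug does not cause drowsiness in most patients",
    "an unrelated neutral sentence here",
]
soft_labels = [probabilistic_label(s) for s in examples]
```

The resulting soft labels would then be used as a probabilistic training set for a downstream discriminative model, which is the second stage of the Snorkel pipeline described in the talk.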

Biographical details

Christopher (Chris) Ré is an Assistant Professor in the Department of Computer Science at Stanford University in the InfoLab, affiliated with the Statistical Machine Learning Group, the Pervasive Parallelism Lab, and the Stanford AI Lab. The goal of his work is to enable users and developers to build applications that more deeply understand and exploit data. His contributions span database theory, database systems, and machine learning, and his work has won best paper at a premier venue in each area, respectively, at PODS 2012, SIGMOD 2014, and ICML 2016. In addition, work from his group has been incorporated into major scientific and humanitarian efforts, including the IceCube neutrino detector, PaleoDeepDive, and MEMEX in the fight against human trafficking, and into commercial products from major web and enterprise companies. He received a SIGMOD Dissertation Award in 2010, an NSF CAREER Award in 2011, an Alfred P. Sloan Fellowship in 2013, a Moore Data Driven Investigator Award in 2014, the VLDB Early Career Award in 2015, the MacArthur Foundation Fellowship in 2015, and an Okawa Research Grant in 2016.

Terry Speed

Bioinformatics Division,
Walter and Eliza Hall Institute of Medical Research.

Two current analysis challenges: single cell omics and nanopore long-read sequence data

Two of the most exciting recent developments in my field are single cell omics assays, where omics here currently encompasses genomics, transcriptomics and epigenomics, and the long-read DNA sequencing capability introduced by Oxford Nanopore Technologies. Each leads to a wealth of interesting data, permitting biomedical researchers to address questions previously beyond their reach. Each offers numerous opportunities and challenges to those who wish to, or must, dive into the data to help the researchers answer these new questions. And each seems likely to drive theoretical developments in areas of interest to participants in UAI 2017. I’ve been lucky enough to be associated with research projects in these two areas. Building on my experience with them, I’ll discuss some of the opportunities, challenges and areas for theoretical development that I see.

Biographical details

Terry Speed completed a BSc (Hons) in mathematics and statistics at the University of Melbourne and a PhD in mathematics and Dip Ed at Monash University. He has held appointments at the University of Sheffield, U.K., the University of Western Australia in Perth, and the University of California at Berkeley, and with the CSIRO in Canberra. In 1997 he took up an appointment with the Walter & Eliza Hall Institute of Medical Research, where he is now an Honorary Fellow and lab head in the Bioinformatics Division. His research interests lie in the application of statistics and bioinformatics to genetics and genomics, and related fields such as proteomics, metabolomics and epigenomics, with a focus on cancer and epigenetics.

Golden Sponsor

Bronze Sponsor

Training session Sponsorship

Startup Sponsor

Media Sponsor