09:00 - 10:30, Thursday, July 19
Brian Milch
MIT Computer Science and
Artificial Intelligence Laboratory
Inference on Relational Models Using Markov Chain Monte Carlo
Abstract: Many practical problems involve making inferences about
the set of related objects that underlie some observations, such as the
aircraft that generated a set of radar blips, or the papers and authors
referred to by a set of citations. These problems can be described by
relational probability models that represent uncertainty about what objects
exist and what relations hold among the objects and observations. However,
inference in such models can be extremely difficult. This tutorial will
cover approximate inference methods based on Markov chain Monte Carlo
(MCMC). We will discuss approaches that go beyond the well-known Gibbs
sampling algorithm, including split-merge proposal distributions and
domain-specific proposers that guide the sampling process toward
high-probability hypotheses. As a case study, we will describe an
application of these methods to building a bibliographic database from a
collection of citations.
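The acceptance mechanism behind such samplers can be sketched in a few lines. The example below is illustrative only, not taken from the tutorial: a Metropolis-Hastings chain over the number of underlying objects, with a simple symmetric birth/death proposal standing in for the split-merge and domain-specific proposers discussed above, and a Poisson-shaped target standing in for a real relational model's posterior.

```python
import math
import random

def unnorm_target(n):
    """Unnormalized posterior over the number of objects n >= 0.
    Purely illustrative: a Poisson(4)-shaped target standing in for
    a real relational model's posterior."""
    if n < 0:
        return 0.0
    return 4.0 ** n / math.factorial(n)

def mh_sample(steps, seed=0):
    """Metropolis-Hastings with a symmetric birth/death proposal:
    from n, propose n - 1 or n + 1 with equal probability."""
    rng = random.Random(seed)
    n = 0
    samples = []
    for _ in range(steps):
        proposal = n + rng.choice([-1, 1])
        # Symmetric proposal, so the acceptance probability is just the
        # ratio of (unnormalized) target probabilities, capped at 1.
        accept = min(1.0, unnorm_target(proposal) / unnorm_target(n))
        if rng.random() < accept:
            n = proposal
        samples.append(n)
    return samples

samples = mh_sample(20000)
mean = sum(samples[5000:]) / len(samples[5000:])  # burn-in discarded
```

The point of the domain-specific proposers covered in the tutorial is to replace the naive `rng.choice([-1, 1])` step with moves (such as splitting or merging hypothesized objects) that reach high-probability hypotheses far more quickly.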
Speaker Bio: Brian Milch is a postdoctoral researcher working with
Leslie Kaelbling in the Computer Science and Artificial Intelligence
Laboratory at MIT. He received his PhD in 2006 from UC Berkeley, where he
worked with Stuart Russell on relational probabilistic modeling and
inference. Prior to graduate school, he spent a year as a research
engineer at Google. He did his undergraduate work at Stanford University,
graduating in 2000 with a degree in Symbolic Systems. He is the recipient
of an NSF Graduate Research Fellowship and a Siebel Scholarship.
11:00 - 12:30, Thursday, July 19
Ramin Zabih
Department of Computer
Science, Cornell University
Graph Cut Algorithms for Bayesian Inference with Applications to
Vision and Imaging
Abstract: Many problems in a variety of fields can be naturally
formulated in terms of MAP estimation of a Markov Random Field. However,
the resulting optimization task is extremely difficult, as it requires
minimizing a highly non-convex function in a space with many thousands of
dimensions. In this tutorial we will describe some recent methods that
solve these problems using tools from discrete optimization. By computing
the minimum cut on an appropriately constructed graph, some of these
optimization problems can be solved exactly, while for others an
approximate solution with interesting guarantees can be obtained. In
practice these methods give very strong results, often coming within 1% of
the global optimum on a range of recent benchmarks. Example problems will
be drawn from computer vision (stereo), graphics (interactive photomontage)
and medical imaging (MRI reconstruction).
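The min-cut construction can be made concrete with a toy example (a sketch under simplifying assumptions, not the tutorial's code): for a binary MRF, the exact MAP labeling can be read off an s-t minimum cut, computed here with a self-contained Edmonds-Karp max-flow on a 1D denoising problem.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow. cap is a dict-of-dicts of residual
    capacities, modified in place; the min cut is recovered from the
    residual graph afterwards."""
    flow = 0.0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck, then push flow along the path.
        v, bottleneck = t, float("inf")
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def denoise_1d(noisy, unary_weight=1.0, smooth=1.5):
    """Exact MAP for a binary 1D MRF with energy
    E(x) = sum_i w*|x_i - y_i| + sum_i smooth*[x_i != x_{i+1}]."""
    n = len(noisy)
    s, t = "s", "t"
    cap = defaultdict(lambda: defaultdict(float))
    for i, y in enumerate(noisy):
        cap[s][i] += unary_weight * abs(1 - y)  # paid if x_i = 1
        cap[i][t] += unary_weight * abs(0 - y)  # paid if x_i = 0
        if i + 1 < n:
            # Potts smoothness term, cut when neighbors disagree.
            cap[i][i + 1] += smooth
            cap[i + 1][i] += smooth
    max_flow(cap, s, t)
    # Nodes still reachable from s in the residual graph take label 0.
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return [0 if i in reach else 1 for i in range(n)]
```

On the input `[1, 1, 0, 1, 1, 0, 0, 0]` the lone flipped bit is cheaper to correct than to keep, so the cut yields `[1, 1, 1, 1, 1, 0, 0, 0]`. The exactly-solvable cases and the approximation guarantees mentioned above hinge on which pairwise terms admit such a graph construction.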
Speaker Bio: Ramin Zabih is a Professor of Computer Science at
Cornell University, where he has taught since 1994. He received his
undergraduate degrees in Computer Science and Math from MIT, and a PhD in
Computer Science from Stanford. His research interests are in discrete
optimization techniques and their application to computer vision and
medical imaging. Two of his papers (co-authored with Vladimir Kolmogorov)
received Best Paper awards at the 2002 European Conference on Computer
Vision. He is program co-chair of the 2007 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR).
14:00 - 15:30, Thursday, July 19
Peter Flach
Department of Computer
Science, University of Bristol
ROC Analysis for Ranking and Probability Estimation
Abstract: Probabilistic classifiers need to exhibit three characteristics: the
probabilistic scores should result in good classification accuracy by
setting an appropriate threshold; sorting examples on decreasing scores
should result in good ranking performance, i.e., most positives should come
before most negatives; and the scores should have a meaningful
probabilistic interpretation, e.g. as estimates of the class
posterior. While these three characteristics are clearly related, they are
not perfectly correlated: for instance, it is well-known that naive Bayes
is generally a good ranker but a poor probability estimator. In this
tutorial I will show how Receiver Operating Characteristics (ROC) Analysis
can help to understand the relationship and differences between
classification, ranking and probability estimation. Originating from signal
detection theory as a model of how well a receiver is able to detect a
signal in the presence of noise, the key feature of ROC analysis is the
distinction between true positive rate and false positive rate as two
separate performance measures. I will show how to use ROC analysis to set
classification thresholds, to analyse and improve rankers, and to calibrate
probability estimates.
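The core ROC construction is simple enough to sketch directly (illustrative code, not part of the tutorial materials): sweep a threshold down through the scores, record the true and false positive rates at each step, and integrate by the trapezoidal rule for the area under the curve.

```python
def roc_points(scores, labels):
    """Sweep the decision threshold from high to low and record one
    (false positive rate, true positive rate) point per example.
    Assumes both classes are present; ties in scores are not grouped,
    which a production implementation would handle."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

Here `auc` equals the probability that a randomly drawn positive is ranked above a randomly drawn negative, which is why it measures ranking quality independently of any single classification threshold; choosing a threshold and calibrating the scores are then separate steps along the same curve.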
Speaker Bio: Peter Flach has been Professor of Artificial Intelligence in
the Computer Science Department of the University of Bristol since August
2003. His main research interests are in machine learning, particularly
learning from highly structured data, and the applications of ROC analysis
in machine learning. His recent published work includes papers on
first-order rule discovery, feature construction in inductive logic
programming, subgroup discovery, Bayesian classification of structured
data, kernel methods for structured data, and higher-order Bayesian
networks that combine higher-order logic and Bayesian inference. He was
associate editor of Machine Learning from 2001 to 2005, and is on the
editorial boards of Machine Learning, Journal of Machine Learning Research,
Journal of Artificial Intelligence Research, Artificial Intelligence
Communications, and Theory and Practice of Logic Programming. He is an
invited speaker at the 2007 European Conference on Machine Learning.
16:00 - 17:30, Thursday, July 19
Craig Boutilier
Department of Computer Science,
University of Toronto
Computational Approaches to Preference Elicitation and
Preference Assessment
Abstract: A critical component of any decision support methodology is the
means by which a decision maker's preference or utility function
is structured, elicited, represented, and reasoned with. Unlike
decision dynamics, which are often identical for different
users, preferences vary widely from person to person (or organization
to organization). Unfortunately, assessing user preferences can be a
difficult and time-consuming task. As automated decision support
software becomes increasingly commonplace, techniques for addressing the
"preference bottleneck" in a computationally and data efficient manner
have taken on tremendous importance in artificial intelligence, economics,
marketing, and operations research.
This tutorial will focus on recent computational approaches to the
problem of automated preference elicitation and assessment. Topics
to be addressed include: utility representations, models for utility
function uncertainty, decision criteria for decision making with
imprecise utility information, strategies for active preference elicitation
(or querying), and learning techniques for utility assessment given
passive observations. We will draw connections between recent AI (and other
computer science) techniques and methods from operations, economics, and
conjoint analysis.
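One of the decision criteria mentioned above, choosing under imprecise utility information, can be illustrated with minimax regret over simple interval bounds. This is a hypothetical sketch: the option names and the independent-interval model are assumptions for illustration, and real elicitation systems use richer utility representations.

```python
def minimax_regret(utility_bounds):
    """Choose the option with minimum worst-case regret, given only
    lower/upper bounds on each option's utility.
    utility_bounds: {option: (lower, upper)}, bounds assumed independent.
    The adversary sets the chosen option to its lower bound and the
    strongest rival to its upper bound."""
    def max_regret(a):
        lo_a, _ = utility_bounds[a]
        rivals = [hi_b - lo_a
                  for b, (_, hi_b) in utility_bounds.items() if b != a]
        return max(rivals + [0.0])
    return min(utility_bounds, key=max_regret)

# Hypothetical options with utility intervals reflecting what has been
# learned from queries so far.
choice = minimax_regret({"a": (2, 5), "b": (3, 4), "c": (0, 6)})
```

Here "b" wins despite its modest upper bound because its guaranteed utility limits how badly it can lose to any rival. An elicitation loop would then pose the query that most reduces this minimax regret, for example a bound query on the current choice or its strongest rival, and recompute.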
Speaker Bio: Craig Boutilier received his Ph.D. in Computer Science (1992) from the
University of Toronto, Canada. He is Professor and Chair of the
Department of Computer Science at the University of Toronto. He was
previously an Associate Professor at the University of British Columbia,
a consulting professor at Stanford University, and has served on the
Technical Advisory Board of CombineNet, Inc. since 2001.
Dr. Boutilier's research interests span a wide range of topics, with a
focus on decision making under uncertainty, including preference elicitation,
mechanism design, game theory, Markov decision processes, and reinforcement
learning. He is a Fellow of the American Association of Artificial
Intelligence (AAAI) and the recipient of the Isaac Walton Killam Research
Fellowship, an IBM Faculty Award and the Killam Teaching Award. He is
Program Chair for the International Joint Conference on AI (IJCAI-09).