As is traditional, UAI organizes a one-day tutorial session at a nominal
extra fee for conference registrants. The tutorial session can also be
attended by those not registered for the main conference at a slightly
higher fee. See the
on-line registration page for details and to register.
The tutorial session will occur on July 26th, and will feature the
following tutorials.
Computational Mechanism Design
Satinder Singh,
University of Michigan
Mechanism design (MD) turns game theory on its head. It aims to develop
protocols or rules of the game so that a desired outcome happens in a
distributed system of self-interested agents. Computational MD brings
together concepts of game-theoretic equilibria and truth-revelation from
economics with the concepts of computational and communication
tractability from computer science. In this tutorial, I will begin with
a brief review of relevant background concepts from the game theory and
auction literatures, and then spend the bulk of my time on
- foundational results on strategyproof mechanisms,
- the Vickrey-Clarke-Groves (VCG) mechanisms and their properties, and
- recent results in directions of interest to AI folks including
extending mechanism design to sequential problems as well as adaptive
mechanism design.
This tutorial is designed for a computer science audience and should be
accessible to those with little exposure to game theory or economics.
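The simplest member of the VCG family mentioned above is the single-item Vickrey (second-price) auction. As a minimal illustration (my own sketch, not part of the tutorial materials; the function name is made up), the highest bidder wins but pays the second-highest bid, which is why truthfully reporting one's value is a dominant strategy:

```python
def vickrey_auction(bids):
    """Single-item Vickrey (second-price) auction, the simplest VCG mechanism.

    bids: dict mapping bidder name -> reported value.
    Returns (winner, payment): the highest bidder wins and pays the
    second-highest bid -- the welfare the other agents lose by its presence.
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

winner, payment = vickrey_auction({"alice": 10, "bob": 7, "carol": 4})
# alice wins and pays 7; her payment does not depend on her own bid,
# so overstating or understating her value cannot help her.
```

Because the winner's payment is independent of the winner's own report, no bidder can gain by misreporting — the strategyproofness property the tutorial's foundational results generalize.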
Satinder Singh is an Associate Professor of Electrical
Engineering and Computer Science at the University of Michigan, Ann
Arbor. Prior to this he was a principal member of the technical staff in
the AI group at AT&T Labs, and prior to that he was an Assistant
Professor of Computer Science at the University of Colorado, Boulder. He
has published extensively in the field of reinforcement learning and
more recently has turned to computational game theory to understand
multiagent systems, and to mechanism design to understand the role of
incentives in designing multiagent systems.
Sensor Networks: Opportunities for UAI
Carlos Guestrin, Carnegie Mellon University
On-Line Tutorial Notes
Sensor networks consist of a collection of small low-cost devices that
can sense and actuate in their environment, and communicate with each
other through a wireless network. Recent advances in hardware and
low-level software have made it possible for several real-world
deployments to collect scientific and engineering data in unstructured
environments. The long-term goal of sensor networks is to design systems
that address significantly more complex applications, including fault
detection in large structures such as bridges, building automation,
smart environments, and the management of disaster
rescue operations. Most of these future applications require the
solution of complex data analysis, learning, and decision making tasks.
Recent advances in probabilistic AI algorithms and representations
seek to solve such complex tasks, but sensor networks pose unique new
challenges that prevent the direct application of existing methods:
Resources, such as computation, communication and power, are usually
scarce in low-cost wireless sensor networks. Furthermore, communication
is usually unreliable, and nodes are prone to failures. In this
tutorial, we will discuss these challenges, some current solutions, and
mostly opportunities for fundamentally new AI algorithms that address
the real issues arising in sensor networks.
The content is accessible to attendees with a working knowledge of
probabilistic methods in AI or robotics.
Carlos Guestrin is an assistant professor in the Center for
Automated Learning and Discovery and in the Computer Science Department
at Carnegie Mellon University. Previously, he was a senior researcher at
the Intel Research Lab in Berkeley. Carlos Guestrin received his MSc and
PhD in Computer Science from Stanford University in 2000 and 2003,
respectively, and a Mechatronics Engineer degree from the Polytechnic
School of the University of São Paulo, Brazil, in 1998. His current
research spans the areas of planning, reasoning and learning in
uncertain dynamic environments, focusing on applications in sensor
networks. Carlos Guestrin received the IPSN-2005, VLDB-2004 and
NIPS-2003 best paper awards, the Siebel Scholarship, the Stanford
Centennial Teaching Assistant Award, the Stanford School of Engineering
Fellowship, and the FAPESP and CAPES Research Initiation Fellowships.
Non-parametric Bayesian methods
Zoubin Ghahramani, Gatsby Computational Neuroscience Unit,
University College London
On-Line Tutorial Notes
Bayesian methods provide a sound statistical framework for modelling
and decision making. However, most simple parametric models are not
realistic for modelling real-world data. Non-parametric models are much
more flexible and therefore are much more likely to capture our beliefs
about the data. They also often result in much better predictive
performance.
I will give a survey/tutorial of the field of non-parametric Bayesian
statistics from the perspective of machine learning. Topics will
include:
- The need for non-parametric models
- Gaussian processes and their application to classification,
regression, and other prediction problems
- Chinese restaurant processes, different constructions, Pitman-Yor
processes
- Dirichlet processes, Dirichlet process mixtures, Hierarchical
Dirichlet processes and infinite HMMs
- Polya trees
- Dirichlet diffusion trees
- Time permitting, some new work on Indian buffet processes
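Of the topics above, the Chinese restaurant process is perhaps the easiest to make concrete. A small sampling sketch (my own illustration, not from the tutorial notes; the function name is made up): customer i sits at an existing table with probability proportional to its occupancy and starts a new table with probability proportional to the concentration parameter alpha, yielding a random partition whose number of tables grows with the data — the "non-parametric" flexibility the abstract refers to.

```python
import random

def chinese_restaurant_process(n, alpha, rng=None):
    """Sample a random partition of n customers via the CRP.

    Customer i joins an occupied table with probability proportional to
    its current size, or opens a new table with probability proportional
    to alpha.  Returns the list of table sizes (a partition of n).
    """
    rng = rng or random.Random(0)
    tables = []  # tables[k] = number of customers seated at table k
    for i in range(n):
        # i customers are already seated, so total weight is i + alpha
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[k] += 1
                break
        else:
            tables.append(1)  # open a new table
    return tables
```

Under a Dirichlet process mixture, each table corresponds to a mixture component, so the model's effective complexity adapts to the sample size rather than being fixed in advance.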
Zoubin Ghahramani is a Reader in Machine Learning at the
Gatsby Unit, University College London. He also has an appointment as
Associate Research Professor in the School of Computer Science at
Carnegie Mellon University, and holds honorary appointments in the
Department of Computer Science and in the Department of Psychology at
UCL. His
current research interests include Bayesian approaches to machine
learning, studying how biological systems control movement and integrate
information from different senses, artificial intelligence, statistics,
and bioinformatics.
Large-Margin Learning of Structured Prediction Models
Ben Taskar, Computer
Science, UC Berkeley
On-Line Tutorial Notes
The scope of discriminative learning methods has been expanding to
encompass prediction tasks with increasingly complex structure. Much of
this development is based on extending classification techniques using
graphical models to capture sequential, spatial, or relational
structure. However, structured prediction involves a broader range of
models, including recursive grammars and combinatorial models such as
weighted cuts and matchings in graphs used in computational linguistics,
biology and vision. For graphical models, two major approaches to
discriminative
estimation have been explored: (1) maximum conditional likelihood and
(2) maximum margin. I will compare the benefits and limitations of the
two approaches, and show the surprising fact that for a large class of
structured models, the likelihood-based approach is intractable, but the
margin-based approach yields tractable convex problems. I will discuss
the unique computational challenges that arise in extending large margin
methods to structured models and concentrate on novel solution
algorithms using tools from convex and combinatorial optimization.
This tutorial assumes basic familiarity with supervised learning,
graphical models and convex optimization.
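To make the margin-based objective concrete, here is a toy sketch (my own illustration with made-up names, not from the tutorial) of the structured hinge loss with Hamming loss for a short label sequence. The exhaustive maximization over all labelings stands in for the loss-augmented Viterbi decoding a real implementation would use:

```python
from itertools import product

def score(x, y, w):
    """Linear-chain score: emission weights plus transition weights.
    w = (emit, trans), where emit[label][obs] and trans[prev][label]
    are plain nested dicts (a toy stand-in for a feature map)."""
    emit, trans = w
    s = sum(emit[y[t]][x[t]] for t in range(len(x)))
    s += sum(trans[y[t - 1]][y[t]] for t in range(1, len(x)))
    return s

def structured_hinge(x, y_true, w, labels):
    """Structured hinge loss:
        max_y [ score(x, y, w) + Hamming(y, y_true) ] - score(x, y_true, w).
    The Hamming term demands a margin that grows with how wrong y is.
    Exhaustive max for clarity only."""
    best = max(
        score(x, y, w) + sum(a != b for a, b in zip(y, y_true))
        for y in product(labels, repeat=len(x))
    )
    return best - score(x, y_true, w)
```

The loss is zero exactly when the true labeling beats every alternative by at least its Hamming distance; minimizing it over a training set, subject to regularization, is the tractable convex problem the abstract contrasts with likelihood-based estimation.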
Ben Taskar received his Ph.D. in Computer Science from
Stanford University working with Daphne Koller. He is currently a
postdoctoral fellow with Michael Jordan at the Computer Science
Division, University of California at Berkeley. One of his interests is
structured model estimation in machine learning, especially in
computational linguistics, computer vision and computational biology.
Last year, he co-organized a NIPS workshop on this emerging topic. His
work on structured prediction has received best paper awards at NIPS and
EMNLP conferences.