The Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI-2000) will be held from June 30 - July 3, 2000, at Stanford University. On June 30 we will offer a full-day course on uncertainty, consisting of four tutorials on state-of-the-art methods for various aspects of uncertainty management:

- Possibility Theory: A Tool for Handling Incomplete Information and Preference, Didier Dubois, IRIT
- Fundamental Principles of Probabilistic Network Representation, Ross Shachter, Stanford University
- Learning Bayesian Networks From Data, David Heckerman, Microsoft Research
- An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, John Shawe-Taylor, Nello Cristianini, University of London (joint tutorial with COLT-2000)

Didier Dubois, IRIT

9:00AM-10:30AM

This talk is an introduction to possibility theory, a theory of uncertainty closely related to fuzzy set theory and comparable to probability theory, although it follows different operating rules. In contrast to a probability measure, a possibility measure is maxitive and not self-dual. It is devoted more to the explicit representation of incomplete information than to random phenomena. Like probability theory, it possesses its own notions of conditioning and independence.
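Maxitivity and the lack of self-duality can be illustrated concretely. The following is a minimal sketch, assuming a made-up possibility distribution over three weather states; all names and numbers are illustrative:

```python
# A toy possibility distribution: values in [0, 1], with max over the frame = 1.
pi = {"sunny": 1.0, "cloudy": 0.7, "rainy": 0.3}

def Pi(event):
    """Possibility of an event (a set of states): the max of pi over the event."""
    return max(pi[s] for s in event) if event else 0.0

def N(event):
    """Necessity of an event: 1 minus the possibility of its complement."""
    return 1.0 - Pi(set(pi) - set(event))

A, B = {"sunny"}, {"rainy"}

# Maxitivity: the possibility of a union is the max, not the sum.
assert Pi(A | B) == max(Pi(A), Pi(B))

# No self-duality: Pi(A) and Pi(not A) are not forced to sum to 1,
# so necessity N(A) = 1 - Pi(not A) is generally weaker than Pi(A).
assert Pi(A) + Pi(set(pi) - A) > 1.0
assert N(A) < Pi(A)
```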

The possibilistic representation makes sense either on a numerical or an ordinal scale. In the first (numerical) case, there are several clear connections between possibility theory and probability theory, in terms of upper probabilities, belief functions, confidence intervals, likelihood functions, and infinitesimal probabilities. In contrast, the ordinal representations are closely related to nonmonotonic reasoning about the normal course of things.

When it comes to decision-making, both utility and uncertainty can be
modelled by means of possibility distributions having strikingly different
semantics, yet being very similar mathematical objects. Possibilistic
decision-making leads to criteria for decision under uncertainty that
differ from expected utility. These criteria have been axiomatized in the
style of Savage, as capable of representing particular rankings of acts
that account for either pessimistic or optimistic behavior of an agent
faced with one-shot decisions.
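The pessimistic and optimistic criteria can be sketched concretely when uncertainty and utility are graded on a common [0, 1] scale, as in the qualitative min/max style of possibilistic decision theory. The acts, states, and numbers below are all made up for illustration:

```python
# Possibility of each state, and utility of each act in each state (toy values).
pi = {"s1": 1.0, "s2": 0.4}
u  = {"act_a": {"s1": 0.9, "s2": 0.2},
      "act_b": {"s1": 0.7, "s2": 0.7}}

def pessimistic(act):
    # min over states of max(1 - possibility, utility): a cautious criterion
    # that focuses on the worst plausible outcome.
    return min(max(1.0 - pi[s], u[act][s]) for s in pi)

def optimistic(act):
    # max over states of min(possibility, utility): an adventurous criterion
    # that focuses on the best plausible outcome.
    return max(min(pi[s], u[act][s]) for s in pi)

# The cautious agent prefers the safe act; the adventurous one gambles.
assert pessimistic("act_b") > pessimistic("act_a")
assert optimistic("act_a") > optimistic("act_b")
```

Note how the two criteria rank the same pair of acts in opposite orders, mirroring the pessimistic versus optimistic behaviors mentioned above.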

Ross Shachter, Stanford University

10:50AM-12:20PM

This talk will present some foundations of probabilistic models of
uncertainty and decision making, and an introduction to the representation
of those models with Bayesian networks and influence diagrams. The talk
will focus on the structural representation of irrelevance in simple belief
networks and influence diagrams. It will cover some of the basic
assumptions of probabilistic models and the context of decision analysis.
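The structural reading of irrelevance can be made concrete with a toy chain network. The sketch below, with made-up probabilities, verifies by enumeration that in a chain A -> B -> C, A is irrelevant to C once B is observed:

```python
import itertools

# Toy CPTs for a chain A -> B -> C over binary variables (illustrative numbers).
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.1, 1: 0.9}}   # p_c_given_b[b][c]

def joint(a, b, c):
    # The chain structure factorizes the joint as P(a) P(b|a) P(c|b).
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def cond_c(c, a=None, b=None):
    """P(C=c | A=a, B=b), with None meaning 'not conditioned on'."""
    states = [s for s in itertools.product([0, 1], repeat=3)
              if (a is None or s[0] == a) and (b is None or s[1] == b)]
    num = sum(joint(*s) for s in states if s[2] == c)
    den = sum(joint(*s) for s in states)
    return num / den

# Once B is fixed, further conditioning on A changes nothing about C.
for b in (0, 1):
    for c in (0, 1):
        assert abs(cond_c(c, a=0, b=b) - cond_c(c, a=1, b=b)) < 1e-12
```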

David Heckerman, Microsoft Research

1:40PM-3:10PM

For two decades, Bayesian networks have been used in intelligent
systems with a fair amount of success. With few exceptions, system
builders have constructed Bayesian networks by directly encoding the
knowledge of experts. Data sets have rarely been used in the
construction process. One drawback of this knowledge-based approach
is that knowledge elicitation can be expensive. More recently,
however, researchers have developed techniques for constructing
Bayesian networks (both parameters and structure) from a combination
of expert knowledge and data. These techniques can significantly
reduce the cost of building an intelligent system in domains where
data is readily available. In addition, these techniques can be used
to identify causal relationships from non-experimental data--an
important breakthrough for science. I will describe some of these
techniques, concentrating on methods borrowed from Bayesian
statistics. I will discuss methods for determining the goodness of a
model, search methods for identifying good models, and real-world
applications.
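One standard measure of model goodness in this literature is a penalized likelihood score such as BIC. The sketch below compares two candidate structures for a pair of binary variables; the counts are made up for illustration:

```python
import math

# counts[x][y]: how often (X=x, Y=y) was observed; X clearly influences Y here.
counts = [[40, 10],
          [10, 40]]
N = sum(sum(row) for row in counts)

def loglik_independent():
    # Structure with no edge: P(x, y) = P(x) P(y); 2 free parameters.
    ll = 0.0
    for x in (0, 1):
        for y in (0, 1):
            px = sum(counts[x]) / N
            py = (counts[0][y] + counts[1][y]) / N
            ll += counts[x][y] * math.log(px * py)
    return ll

def loglik_edge():
    # Structure X -> Y: P(x, y) = P(x) P(y | x); 3 free parameters.
    ll = 0.0
    for x in (0, 1):
        for y in (0, 1):
            px = sum(counts[x]) / N
            py_x = counts[x][y] / sum(counts[x])
            ll += counts[x][y] * math.log(px * py_x)
    return ll

# BIC: maximized log-likelihood minus (free parameters / 2) * log(sample size).
bic_indep = loglik_independent() - (2 / 2) * math.log(N)
bic_edge  = loglik_edge() - (3 / 2) * math.log(N)

# With strongly dependent counts, the edge structure wins despite its penalty.
assert bic_edge > bic_indep
```

A search procedure of the kind the tutorial discusses would apply such a score to many candidate structures, keeping the best-scoring ones.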

John Shawe-Taylor, Nello Cristianini, University of London

3:30PM-5:30PM

Support Vector Machines are a powerful class of learning systems based on applying linear classifiers in a kernel-defined, high-dimensional feature space. They demonstrate state-of-the-art performance on most benchmarks and applications. Their introduction has also led to an explosion of research into both generalisation analysis and kernel design.

The success of SVMs rests on two key features: first, they can be seen as a replacement for neural networks, but without the computational problems of local minima; and second, they can be shown to directly optimise a well-founded statistical bound on their generalisation performance.

The tutorial will give an introduction to the four critical ingredients of SVMs:

- dual representation linear learning,
- kernel-induced feature spaces,
- large margin capacity control,
- optimisation theory analysis and algorithms.

The tutorial will explain how SVMs make use of all these components to create a state-of-the-art learning system. Recent developments will be introduced in context with pointers to further reading and research.
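The first two ingredients, dual-representation learning and a kernel-induced feature space, can be sketched without a full SVM optimiser. The toy example below uses a kernel perceptron (not an SVM) with a Gaussian kernel on an XOR-style dataset that no linear classifier in input space can separate; all parameters are illustrative:

```python
import math

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel: an inner product in an implicit feature space.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

# XOR-style data: not linearly separable in the input space.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]

# Dual representation: the classifier is a weighted sum of kernel evaluations
# against training points, so only the coefficients alpha are learned.
alpha = [0.0] * len(X)

def predict(x):
    s = sum(a * yi * rbf(xi, x) for a, yi, xi in zip(alpha, y, X))
    return 1 if s > 0 else -1

for _ in range(20):                     # perceptron updates in the dual
    for i, (xi, yi) in enumerate(zip(X, y)):
        if predict(xi) != yi:
            alpha[i] += 1.0             # mistake-driven update

# The kernel-induced feature space makes the XOR data separable.
assert all(predict(xi) == yi for xi, yi in zip(X, y))
```

An SVM replaces the mistake-driven updates with an optimisation that maximises the margin, which is where the capacity-control and optimisation-theory ingredients enter.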

The tutorial will be accessible to researchers from all three conferences (COLT, ICML, and UAI).