Invited Speakers

 

Inferring 3D People from 2D Images

Michael J. Black, Brown University, Dept. of Computer Science 

The detection and tracking of people in video is challenging due to the variability of human appearance, the high dimensionality of articulated body models, self-occlusion, the loss of 3D information in the projection to 2D images, and the complexity of human motion. This talk will give an overview of the problem and will explore in detail one solution that is representative of the current state of the art.

To tackle the problem we pose it as one of Bayesian inference and exploit a variety of learning and inference techniques. First we model the articulated human body as a kinematic tree. Then, using training data, we learn the likelihood of observing various image measurements conditioned on the pose and motion of the body model. Due to the ambiguities inherent in such measurements and the high dimensionality of the articulated model, we wish to constrain the motions of the body to those that are valid. To that end we learn a prior probability distribution over possible human motions using 3D motion-capture data. This prior term exploits ideas from texture synthesis to construct implicit probabilistic models of human motion that replace the problem of representation with that of efficient search. Since the resulting posterior probability over human poses and motions is non-Gaussian and multi-modal, we exploit particle filtering for our Bayesian tracking. By combining multiple image cues, by using learned likelihood models, and by using learned prior models of motion, we demonstrate the tracking of people in monocular image sequences with cluttered scenes and a moving camera.
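
As a rough illustration of the inference machinery, the sketch below (in Python, with NumPy assumed) shows one generic particle-filter step of the kind such a Bayesian tracker is built on: resample, propagate through a motion prior, and re-weight by an image likelihood. The Gaussian random-walk prior and synthetic likelihood here are placeholders, not the learned models described above.

    import numpy as np

    def particle_filter_step(particles, weights, observation, motion_prior, likelihood):
        """One generic step of Bayesian tracking with a particle filter.

        particles : (N, D) array of pose hypotheses (D = pose dimension)
        weights   : (N,) normalized importance weights
        """
        N = len(particles)
        # 1. Resample hypotheses in proportion to their current weights.
        idx = np.random.choice(N, size=N, p=weights)
        particles = particles[idx]
        # 2. Propagate each hypothesis through the temporal motion prior.
        particles = motion_prior(particles)
        # 3. Re-weight by the (possibly multi-modal) image likelihood.
        weights = likelihood(particles, observation)
        weights = weights / weights.sum()
        return particles, weights

    # Placeholder models (illustrative only, not the learned models above).
    D = 6                                          # e.g. a few joint angles
    motion_prior = lambda p: p + 0.05 * np.random.randn(*p.shape)
    likelihood = lambda p, obs: np.exp(-0.5 * np.sum((p - obs) ** 2, axis=1))

    particles = np.random.randn(200, D)            # initial pose hypotheses
    weights = np.full(200, 1.0 / 200)
    observation = np.zeros(D)                      # dummy image measurement
    particles, weights = particle_filter_step(particles, weights, observation,
                                              motion_prior, likelihood)

In a full tracker of the kind described above, the pose vector would parameterize the kinematic tree and the likelihood would combine multiple learned image cues.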

Finally, this talk will examine the problems with this approach and formulations like it. I will provide a sketch of how new probabilistic inference techniques might get us to our goal of fully automatic person detection and tracking.

 

Strategic Reasoning and Graphical Models

Michael Kearns, University of Pennsylvania

The last several years have seen the development of graphical models for large-population game theory. In the same way that models such as Bayesian networks seek to compactly represent structure and restrictions on the probabilistic interactions between a large number of random variables, these recent models capture structure in the game-theoretic or economic interaction between a large number of individuals or organizations. Algorithms have been developed for a number of basic computations, including Nash and correlated equilibria. While differing in the details, these algorithms are in many cases inspired by analogous computations in Bayesian and Markov networks. This talk will survey these developments and discuss connections with related topics, such as social network theory and macroeconomics.
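
To make the compactness idea concrete, here is a minimal sketch (in Python, with hypothetical names and a toy coordination game) of a graphical game in which each player's payoff depends only on its graph neighbours, together with a brute-force check for pure-strategy Nash equilibria using local best-response tests. It illustrates only the representation; the equilibrium-computation algorithms surveyed in the talk are not reproduced here.

    import itertools

    # A graphical game: each player's payoff depends only on its neighbours.
    neighbours = {0: [1], 1: [0, 2], 2: [1]}       # a three-player chain
    actions = [0, 1]                               # binary actions

    def coordination_payoff(own, nbr_actions):
        """Toy payoff: +1 for every neighbour whose action you match."""
        return sum(1 for a in nbr_actions if a == own)

    payoff = {i: coordination_payoff for i in neighbours}

    def is_pure_nash(joint):
        """Check a joint action profile using only local best-response tests."""
        for i, nbrs in neighbours.items():
            nbr_actions = tuple(joint[j] for j in nbrs)
            current = payoff[i](joint[i], nbr_actions)
            best = max(payoff[i](a, nbr_actions) for a in actions)
            if current < best:                     # player i would deviate
                return False
        return True

    # Brute-force enumeration is fine for three players; the point of the
    # graphical representation is that each payoff table stays small.
    equilibria = [j for j in itertools.product(actions, repeat=3) if is_pure_nash(j)]
    print(equilibria)                              # [(0, 0, 0), (1, 1, 1)]

The payoff tables grow only with the size of a player's neighbourhood, not with the total number of players, which is the analogue of the conditional probability tables in a Bayesian network.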

 

What's New in Statistical Machine Translation

Kevin Knight, USC Information Sciences Institute

Automatic translation from one human language to another using computers, better known as machine translation (MT), is a long-standing goal of computer science. Accurate translation requires a great deal of knowledge about the usage and meaning of words, the structure of phrases, the meaning of sentences, and which real-life situations are plausible. For general-purpose translation, the amount of required knowledge is staggering, and it is not clear how to prioritize knowledge acquisition efforts. Recently, there has been a fair amount of research into extracting translation-relevant knowledge automatically from very large bilingual texts. For some language pairs, the size of these texts already reaches 200 million words. Over the past few years, several statistical MT projects have appeared in North America, Europe, and Asia, and the literature is growing substantially. This talk will cover the basic algorithms developed in this field, plus some of the latest empirical results.
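
As a taste of the basic machinery, the sketch below runs EM training of IBM Model 1 word translation probabilities on a three-sentence toy corpus; NULL alignments and the richer modelling used in practice are omitted. It is an illustrative reconstruction of one classical algorithm, not necessarily the specific material of the talk.

    from collections import defaultdict

    # Toy parallel corpus: (foreign sentence, English sentence) word lists.
    corpus = [
        (["das", "haus"], ["the", "house"]),
        (["das", "buch"], ["the", "book"]),
        (["ein", "buch"], ["a", "book"]),
    ]

    e_vocab = {e for _, es in corpus for e in es}
    t = defaultdict(lambda: 1.0 / len(e_vocab))    # t(e | f), uniform start

    for _ in range(10):                            # EM iterations
        count = defaultdict(float)                 # expected counts c(e, f)
        total = defaultdict(float)                 # expected counts c(f)
        for fs, es in corpus:
            for e in es:
                # E-step: spread each English word over the foreign words
                # of its sentence in proportion to the current t(e | f).
                norm = sum(t[(e, f)] for f in fs)
                for f in fs:
                    p = t[(e, f)] / norm
                    count[(e, f)] += p
                    total[f] += p
        # M-step: re-estimate the translation table from expected counts.
        for (e, f), c in count.items():
            t[(e, f)] = c / total[f]

    print(round(t[("house", "haus")], 3))          # approaches 1.0 as EM converges

Even on this tiny corpus the co-occurrence statistics pin "the" to "das" and "book" to "buch", which in turn sharpens the alignment of "house" to "haus"; the same bootstrapping effect drives learning from the very large bilingual texts mentioned above.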

 

Some Measures of Incoherence: How not to gamble if you must

Teddy Seidenfeld, Carnegie Mellon University

The degree of incoherence, when (one-sided) previsions are not made in accord with coherent (lower and upper) probabilities, is measured by a rate at which an incoherent bookie can be made into a sure loser. We consider each bet from three points of view: that of the gambler, that of the bookie, and a neutral viewpoint. We normalize each bet according to a point of view. The sure losses for incoherent previsions are standardized by a normalization, which leads to a rate of incoherence. Criteria for a normalization are offered, and we discuss the range in rates of incoherence that result. We give examples of the measurement of incoherence of some classical statistical procedures. Also, we illustrate how an incoherent bookie might reason about pending gambles from within her/his state of incoherence in order not to increase the rate of incoherence.
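
As a simplified illustration of the sure-loss idea behind these measures, the sketch below (Python, with NumPy and SciPy assumed) sets up a small linear program over a finite state space: the gambler picks non-negative stakes, normalized here simply to sum to one, so as to make the bookie's payoff negative in every state. The particular normalizations and rates of incoherence developed in the talk are not reproduced; this only shows how a sure loss can be detected and sized.

    import numpy as np
    from scipy.optimize import linprog

    # Gambles as payoff vectors over a finite state space {A, not-A}.  With a
    # lower prevision p_i, the bookie pays p_i to receive gamble X_i, so her
    # net payoff from stake lam_i >= 0 is lam_i * (X_i - p_i).
    X = np.array([[1.0, 0.0],                      # indicator of A
                  [0.0, 1.0]])                     # indicator of not-A
    p = np.array([0.6, 0.6])                       # incoherent: they sum to 1.2

    n_gambles, n_states = X.shape

    # Variables: stakes lam_1..lam_n and t, an upper bound on the bookie's
    # payoff in every state.  Minimize t subject to
    #     sum_i lam_i * (X_i(w) - p_i) <= t   for each state w,
    #     sum_i lam_i = 1                     (a simple stake normalization).
    # If the optimum t* is negative, the bookie is a sure loser of |t*| per
    # unit of normalized stake -- one crude rate of incoherence.
    c = np.zeros(n_gambles + 1)
    c[-1] = 1.0
    A_ub = np.hstack([(X - p[:, None]).T, -np.ones((n_states, 1))])
    b_ub = np.zeros(n_states)
    A_eq = np.hstack([np.ones((1, n_gambles)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_gambles + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x[:-1], res.fun)                     # stakes [0.5, 0.5], t* = -0.1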

 

A Bayesian history of the Royal Statistical Society

Adrian F. M. Smith, University of London

I will give a not-too-serious, highly prejudiced historical review of the way ideas about the treatment of uncertainty and its fields of application evolved in the UK.