Invited Speakers

Sander Greenland

Sander Greenland, University of California, Los Angeles

Marginal science: Facing the task of inference about an unobserved margin from an insufficient set of observed margins

Abstract: Inference and decision-making in the health sciences must cope with overwhelming detail of unknown provenance. Conventional statistics has emerged as a failed attempt to automate the process. One reason for this failure is clear: Conventional statistics operates only on models with full and precise identification of target parameters, apart from random error. In typical epidemiologic settings, however, all that is observed are fragmented margins of a high-dimensional distribution. These observations are usually far from sufficient to reconstruct or identify the target parameters, for the latter are functions of unobserved margins. Conventional statistics copes with this problem by forcing identifying constraints or assumptions, most with no justification, and thus leads to gross understatement of uncertainty. In response, direct expert judgment is routinely used to supplement the uncertainties estimated by conventional statistics. In some areas (e.g., nutritional epidemiology) this informal syncretism has had an unimpressive track record, to say the least.

Rejecting the conventional approach, how and what can we learn from the observed margins? The nonidentified-Bayesian approach can proceed using soft constraints that reflect contextual information and its uncertainties, but the specification requirements exceed the resources of most practitioners. Automation of the process might then seem warranted. The difficulty of automating prior specification is notorious, however, and in high dimensions any specification process can outstrip all resources. This talk will detail the issues, and pose questions of whether and how the experiences of the AI and automated-learning communities can be brought to bear on these problems.

Stuart M. Shieber

Stuart M. Shieber, Harvard University

Does the Turing test demonstrate intelligence or not?

Abstract: The Turing Test has served as an underlying inspiration for research in artificial intelligence since the field came into being in the 1950s immediately following Turing's pronouncements on the possibility of machine intelligence. But philosophers seem to agree that the Turing Test can't actually serve as a test for intelligence (though they disagree on why). After reviewing the long history of verbal indistinguishability tests for intelligence, I will argue that modern developments in theoretical computer science and cosmology can shed light on the vexed question of whether or not the Turing Test demonstrates intelligence.

About the speaker: Stuart M. Shieber is James O. Welch, Jr. and Virginia B. Welch Professor of Computer Science in the Division of Engineering and Applied Sciences at Harvard University. His primary research area is computational linguistics and natural-language processing, but he has worked in several other areas of computer science as well, including computer-human interaction, automated graphic design, combinatorial optimization, and "computational philosophy". He has been named a Presidential Faculty Fellow and a Fellow of AAAI. He is also the founding faculty director of Harvard's Center for Research on Computation and Society.

Matthew Stephens

Matthew Stephens, University of Washington

Statistical models for population genetic data

Abstract: With the increasing scale on which data on population genetic variation are being collected comes an increasing need for statistical models that are both computationally tractable for large data sets and able to capture the complex patterns of correlation that exist among multiple genetic variants. I will describe some statistical models that have been developed for this problem, show examples of their applications (including estimating recombination rates, estimating haplotypic phase, and estimating missing genotypes), and discuss their relative advantages and disadvantages. I will also outline how these models may help improve the effectiveness of genome-wide association studies, by allowing data from large publicly available databases on human genetic variation (e.g., data from the HapMap project) to be efficiently combined with study-specific data (genome scan data on a phenotyped study population).

Pascal Van Hentenryck

Pascal Van Hentenryck, Brown University

Anticipatory algorithms for online stochastic combinatorial optimization

Abstract: This talk considers online stochastic combinatorial optimization (OSCO) problems, in which online decisions select which requests to serve and how to serve them. OSCO problems arise in many practical applications in networking, reservation systems, and vehicle routing and dispatching. We present a class of anticipatory algorithms for OSCO applications, study their theoretical properties, and demonstrate their performance on a variety of complex problems.

About the speaker: Pascal Van Hentenryck is professor of computer science at Brown University. He has written five books, all published by the MIT Press, and developed a number of influential optimization systems, including CHIP, Numerica, OPL, and Comet, many of which are available commercially. Pascal received the 2002 INFORMS ICS prize for research excellence at the intersection of computer science and operations research, a 2004 IBM faculty award, and several best paper awards.