Location: JHN 175

Nonparametric Identified Methods to Handle Nonignorable Missing Data

Speaker: Mauricio Sadinle

There has recently been a lot of interest in developing approaches to handle missing data that go beyond the traditional assumptions that the missing data are missing at random and that the nonresponse mechanism is ignorable. Of particular interest are approaches that are nonparametric identified, because these approaches impose no parametric restrictions on the observed-data distribution (what we can estimate from the observed data) while still allowing estimation under a full-data distribution.
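
For context, the missing at random (MAR) assumption that these approaches move beyond can be written as follows (the notation here is ours, not the speaker's). Writing Y for the study variables, R for the vector of response indicators, and y_{(r)} for the components of y that are observed under pattern r, MAR requires

  P(R = r \mid Y = y) = P(R = r \mid y_{(r)}) \quad \text{for all } r, y,

that is, nonresponse may depend only on observed values. Nonignorable models drop this restriction and allow nonresponse to depend on the missing values themselves, which is what makes identification the central difficulty.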

Robust Estimation: Optimal Rates, Computation and Adaptation

Speaker: Chao Gao

I will discuss the problem of statistical estimation with contaminated data. In the first part of the talk, I will discuss depth-based approaches that achieve minimax rates in various problems. In general, the minimax rate of a given problem with contamination consists of two terms: the statistical complexity without contamination, and the contamination effect in the form of a modulus of continuity. In the second part of the talk, I will discuss the computational challenges posed by these depth-based estimators.
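
As one concrete instance of this two-term structure (our illustration, not taken from the abstract): for estimating the mean \theta of a p-dimensional Gaussian from n samples drawn from the Huber contamination model (1 - \varepsilon) N(\theta, I_p) + \varepsilon Q, the minimax rate under squared \ell_2 loss is known to be

  \inf_{\hat{\theta}} \sup_{\theta,\, Q} \mathbb{E}\,\|\hat{\theta} - \theta\|_2^2 \asymp \frac{p}{n} + \varepsilon^2,

where p/n is the statistical complexity without contamination and \varepsilon^2 is the contamination effect.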

Modified Multidimensional Scaling

Speaker: Qiang Sun

Classical multidimensional scaling is an important tool for data reduction in many applications. It takes a distance matrix as input and outputs low-dimensional embedded samples such that the pairwise distances between the original data points are approximately preserved, treating the data as deterministic points. In practice, however, data are often noisy. In that case, the quality of the embedding produced by classical multidimensional scaling breaks down as either the ambient dimensionality or the noise variance grows.
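
For reference, here is a minimal numpy sketch of the classical procedure the abstract starts from, namely double centering of the squared distance matrix followed by a truncated eigendecomposition; the function name and interface are our own, not the speaker's:

import numpy as np

def classical_mds(D, d):
    # D: (n, n) matrix of pairwise Euclidean distances; d: target dimension.
    # Returns an (n, d) matrix of embedded points.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                         # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)                # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:d]                 # keep the top-d eigenpairs
    scales = np.sqrt(np.clip(eigvals[idx], 0.0, None))  # clip tiny negative eigenvalues
    return eigvecs[:, idx] * scales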

Topological Data Analysis: Functional Summaries and Locating Cosmic Voids and Filament Loops

Speaker: Jessi Cisewski-Kehe

Data exhibiting complicated spatial structures are common in many areas of science (e.g. cosmology, biology), but can be difficult to analyze. Persistent homology is a popular approach within the area of Topological Data Analysis (TDA) that offers a way to represent, visualize, and interpret complex data by extracting topological features, which can be used to infer properties of the underlying structures. For example, TDA may be useful for analyzing the large-scale structure (LSS) of the Universe, which is an intricate and spatially complex web of matter.
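
To make the persistent homology step concrete, here is a small sketch using the open-source ripser package (one common TDA library; the sample-data setup is our own illustration, not the speaker's):

import numpy as np
from ripser import ripser  # pip install ripser

# Noisy points on a circle: a loop that persistent homology should
# detect as a single long-lived 1-dimensional feature.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

# Persistence diagrams for connected components (H0) and loops (H1).
dgms = ripser(X, maxdim=1)["dgms"]
print(dgms[1])  # each row is the (birth, death) pair of a 1-cycle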

Anchor Regression: Heterogeneous Data Meets Causality

Speaker: Dominik Rothenhäusler

Many traditional statistical prediction methods mainly deal with the problem of overfitting to the given data set. On the other hand, there is a vast literature on the estimation of causal parameters for prediction under interventions. However, both types of estimators can perform poorly when used for prediction on heterogeneous data. We show that the change in loss under certain perturbations (interventions) can be written as a convex penalty. This motivates anchor regression, a “causal” regularization scheme that encourages the estimator to generalize well to perturbed data.
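
For concreteness, here is a minimal numpy sketch of one published formulation of the anchor regression estimator; the function name and interface are ours. A holds the anchor (exogenous) variables, gamma controls the strength of the causal regularization, and gamma = 1 recovers ordinary least squares:

import numpy as np

def anchor_regression(X, Y, A, gamma):
    # X: (n, p) covariates, Y: (n,) response, A: (n, q) anchors, gamma >= 0.
    n = X.shape[0]
    P = A @ np.linalg.pinv(A)  # projection onto the column space of the anchors
    # Transformed data: keep the anchor-orthogonal part, rescale the
    # anchor-explained part by sqrt(gamma).
    W = np.eye(n) + (np.sqrt(gamma) - 1.0) * P
    # OLS on the transformed data minimizes the anchor regression objective
    #   ||(I - P)(Y - X b)||^2 + gamma * ||P (Y - X b)||^2.
    b, *_ = np.linalg.lstsq(W @ X, W @ Y, rcond=None)
    return b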

Graphical Characterizations of Adjustment Sets

Speaker: Emilija Perković

Scientific research is often concerned with questions of cause and effect. For example, does eating processed meat cause certain types of cancer? Ideally, such questions are answered by randomized controlled experiments. However, these experiments can be costly, time-consuming, unethical, or impossible to conduct. Hence, the only data available to answer causal questions are often observational.
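
For context (our addition; this is the standard covariate adjustment formula from the causal graphical models literature): a set of variables Z is an adjustment set for the effect of a treatment X on an outcome Y exactly when the post-intervention distribution can be computed from observational data as

  P(Y = y \mid do(X = x)) = \sum_{z} P(Y = y \mid X = x, Z = z)\, P(Z = z).

Graphical characterizations describe which sets Z in a given causal graph have this property.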

When Your Big Data Seems Too Small

Speaker: Gregory Valiant

We discuss several problems related to the challenge of making accurate inferences about a complex phenomenon, given relatively little data. We show that for several fundamental and practically relevant settings, including estimating the intrinsic dimensionality of a high-dimensional distribution, and learning a population of distributions given few data points from each distribution, it is possible to "denoise" the empirical distribution significantly.
