Building: JHN

Hierarchical Priors for Bayesian Model Selection in Informative and Non-Informative Settings

Speaker: Abel Rodriguez

Prior elicitation is a foundational problem in Bayesian statistics, particularly in the context of hypothesis testing and model selection.  On one end of the spectrum, it is well known that standard “non-informative” priors used for parameter estimation in contexts where little prior information is available can lead to ill-defined or inconsistent Bayes factors.  On the other end, ignoring structural information available in specific problems can lead to procedures with suboptimal (frequentist) properties.
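The first point can be seen in the classical Bartlett–Lindley phenomenon. The toy computation below (my illustration, not the speaker's) tests H0: mu = 0 against H1: mu ~ N(0, s^2) for a single observation y ~ N(mu, 1): as the "non-informative" prior scale s grows, the Bayes factor favors H0 without bound, whatever y is.

```python
from math import sqrt, exp, pi

def bayes_factor_h0(y, s):
    """Bayes factor for H0: mu = 0 vs H1: mu ~ N(0, s^2), given y ~ N(mu, 1)."""
    m0 = exp(-y ** 2 / 2) / sqrt(2 * pi)                # marginal likelihood under H0
    m1 = exp(-y ** 2 / (2 * (1 + s ** 2))) / sqrt(2 * pi * (1 + s ** 2))  # under H1
    return m0 / m1

for s in [1, 10, 100, 1000]:
    print(s, bayes_factor_h0(2.0, s))  # grows with s for this fixed y
```

The same vagueness that is harmless for estimation thus makes the test degenerate, which is one motivation for more carefully structured priors.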

Room: 175

Graphical Modeling of Local Independence in Dynamical Systems

Speaker: Søren Wengel Mogensen

Local independence is an asymmetric notion of independence which describes how a system of stochastic processes (e.g. point processes or diffusions) evolves over time. Let A, B, and C be three subsets of the coordinate processes of the stochastic system. Intuitively speaking, B is locally independent of A given C if at every point in time knowing the past of both A and C is not more informative about the present of B than knowing the past of C only.
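A discrete-time caricature of this definition (a hypothetical example, not from the talk): below, the present of B depends only on the past of C, even though C itself listens to A, so B is locally independent of A given C while still being marginally dependent on A.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20000
A = np.zeros(T); C = np.zeros(T); B = np.zeros(T)
for t in range(1, T):
    A[t] = 0.5 * A[t - 1] + rng.normal()
    C[t] = 0.5 * A[t - 1] + 0.3 * C[t - 1] + rng.normal()  # C listens to A's past
    B[t] = 0.8 * C[t - 1] + rng.normal()                   # B listens to C's past only

# Regress B_t on the pasts of A and C: the coefficient on A's past is near
# zero, reflecting that B is locally independent of A given C.
X = np.column_stack([A[:-1], C[:-1]])
coef, *_ = np.linalg.lstsq(X, B[1:], rcond=None)
print(coef)  # roughly [0.0, 0.8]
```

Note the asymmetry: a regression of C's present on the pasts of A and C would show a nonzero coefficient on A, so C is not locally independent of A.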

Room: 175

Center-Outward R-Estimation for Semiparametric VARMA Models

Speaker: Marc Hallin

We propose a new class of estimators for semiparametric VARMA models with unspecified innovation density. Our estimators are based on the measure transportation-based concepts of multivariate center-outward ranks and signs. Root-n consistency and asymptotic normality are obtained under a broad class of innovation densities including, e.g., multimodal mixtures of Gaussians.
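A rough sketch of how empirical center-outward ranks can be computed (an illustration under simplifying assumptions, not the authors' implementation): a sample of n = nR × nS points is optimally coupled, via `scipy.optimize.linear_sum_assignment`, to a grid of nR concentric circles times nS equispaced directions, and a point's rank is the radius index of its matched grid point.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
nR, nS = 10, 8                                 # 10 rank "shells", 8 sign directions
radii = np.arange(1, nR + 1) / (nR + 1)
angles = 2 * np.pi * np.arange(nS) / nS
grid = np.array([[r * np.cos(a), r * np.sin(a)] for r in radii for a in angles])

X = rng.normal(size=(nR * nS, 2))              # a sample of n = nR * nS points
cost = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)       # measure-transportation coupling

ranks = cols // nS + 1                         # radius index of the assigned grid point
print(ranks[:5])
```

By construction each of the nR rank values is taken by exactly nS sample points, a distribution-free property that mirrors the univariate ranks these concepts generalize.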

Room: 175

Adaptive Experimental Design for Multiple Testing and Best Identification

Speaker: Kevin G. Jamieson

Adaptive experimental design (AED), or active learning, leverages already-collected data to guide future measurements, in a closed loop, to collect the most informative data for the learning problem at hand. In both theory and practice, AED can extract considerably richer insights than any measurement plan fixed in advance, using the same statistical budget.
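As a concrete toy instance of AED (my sketch, not the speaker's code), successive elimination for best-arm identification pulls every surviving arm once per round and discards any arm whose empirical mean falls below the leader's by a confidence margin:

```python
import numpy as np

def successive_elimination(means, delta=0.05, max_rounds=5000, seed=0):
    """Identify the arm with the largest mean from Gaussian pulls (toy sketch)."""
    rng = np.random.default_rng(seed)
    k = len(means)
    alive = list(range(k))
    sums = np.zeros(k)
    t = 0
    while len(alive) > 1 and t < max_rounds:
        t += 1
        for i in alive:                       # one unit-variance Gaussian pull per arm
            sums[i] += rng.normal(means[i], 1.0)
        mu = sums[alive] / t
        radius = np.sqrt(2 * np.log(4 * k * t * t / delta) / t)
        best = mu.max()
        alive = [i for i, m in zip(alive, mu) if m > best - 2 * radius]
    return alive[0] if len(alive) == 1 else max(alive, key=lambda i: sums[i])

print(successive_elimination([0.2, 0.5, 0.9]))
```

The closed loop is visible here: measurements already collected (the running means) decide which arms receive further budget, so clearly suboptimal arms stop consuming samples early.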

Room: 175

Consistent Weighted Sampling, Min-Max Kernel, and Connections to Computing with Big Data

Speaker: Ping Li

In this talk, I will introduce the ideas of min-max similarity (which can be viewed as a type of non-linear kernel) and consistent weighted sampling (CWS). These topics might be relatively new to the statistics community. In a paper in 2015, I demonstrated the surprisingly superb performance of min-max similarity in the context of kernel classification, compared to the standard linear or Gaussian kernels.
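The min-max similarity itself is simple to state: for non-negative vectors, K(x, y) = sum_i min(x_i, y_i) / sum_i max(x_i, y_i), a weighted generalization of the Jaccard similarity. A minimal sketch:

```python
import numpy as np

def min_max_kernel(x, y):
    """Min-max similarity of two non-negative vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()

print(min_max_kernel([1, 0, 2], [1, 1, 1]))  # (1 + 0 + 1) / (1 + 1 + 2) = 0.5
```

Consistent weighted sampling then produces compact sketches whose collision probability equals this similarity, which is what makes the kernel usable at big-data scale.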

Room: 175

High-Dimensional Principal Component Analysis with Heterogeneous Missingness

Speaker: Richard Samworth

We study the problem of high-dimensional Principal Component Analysis (PCA) with missing observations. In simple, homogeneous missingness settings with a noise level of constant order, we show that an existing inverse-probability weighted (IPW) estimator of the leading principal components can (nearly) attain the minimax optimal rate of convergence.
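A stylized version of the IPW idea (my sketch, assuming each entry is observed independently with known probability p): every entry of the sample covariance is reweighted by the inverse probability that the relevant coordinates were both observed, and PCA is run on the reweighted matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, p = 2000, 20, 0.7
v = np.ones(d) / np.sqrt(d)                          # true leading direction
X = rng.normal(size=(n, 1)) * 3 * v + rng.normal(size=(n, d))
mask = rng.random((n, d)) < p                        # observe each entry w.p. p
Xobs = np.where(mask, X, 0.0)

S = (Xobs.T @ Xobs) / n
W = np.where(np.eye(d) == 1, 1 / p, 1 / p ** 2)      # inverse observation probabilities
S_ipw = S * W                                        # unbiased for the full covariance
_, vecs = np.linalg.eigh(S_ipw)
v_hat = vecs[:, -1]
print(abs(v_hat @ v))                                # close to 1
```

Heterogeneous missingness, where p varies by coordinate, is where this naive scheme starts to strain and where the talk's analysis becomes relevant.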

Room: 175

Diversity, Equity, and Inclusion Department Climate Study Results

Speaker: UW CERSE team

The UW Center for Evaluation and Research for STEM Equity (CERSE) conducted a climate survey and focus groups in the Statistics Department over the past calendar year that involved students, faculty, and staff. In this seminar, several of CERSE's Research Scientists will share the results of the climate study. The results fall under the following themes: Inclusion; Community; Communication; Career & Advising Support. CERSE will conclude with some recommendations moving forward to foster an increasingly equitable departmental climate. 

Room: 175

Robust Estimation: Optimal Rates, Computation and Adaptation

Speaker: Chao Gao

I will discuss the problem of statistical estimation with contaminated data. In the first part of the talk, I will discuss depth-based approaches that achieve minimax rates in various problems. In general, the minimax rate of a given problem with contamination consists of two terms: the statistical complexity without contamination, and the contamination effect in the form of modulus of continuity. In the second part of the talk, I will discuss computational challenges of these depth-based estimators.
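As a one-dimensional warm-up for the contamination model (a toy example, not from the talk): with a fraction eps of adversarial outliers, the sample mean moves by roughly eps times the outlier magnitude, while the median, a depth-flavored estimator, barely moves.

```python
import numpy as np

rng = np.random.default_rng(3)
n, eps = 1000, 0.05
x = rng.normal(0.0, 1.0, n)
x[: int(eps * n)] = 100.0          # adversarial contamination of an eps-fraction

print(abs(x.mean()))               # roughly eps * 100 = 5, far from the true mean 0
print(abs(np.median(x)))           # stays close to 0
```

In one dimension the median is the maximizer of Tukey depth, which is the sense in which the depth-based estimators of the talk generalize this robustness to harder problems.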

Room: 175

Modified Multidimensional Scaling

Speaker: Qiang Sun

Classical multidimensional scaling is an important tool for data reduction in many applications. It takes in a distance matrix and outputs low-dimensional embedded samples such that the pairwise distances between the original data points are preserved, when treating them as deterministic points. However, data are often noisy in practice. In such cases, the quality of embedded samples produced by classical multidimensional scaling starts to break down when either the ambient dimensionality or the noise variance gets larger.
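For reference, classical multidimensional scaling itself fits in a few lines (a standard textbook sketch): double-center the squared distance matrix and embed with the top eigenpairs.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an n x n distance matrix into k dimensions (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=2)
D_hat = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.max(np.abs(D - D_hat)))             # near zero: distances preserved exactly
```

With noiseless Euclidean distances this recovers the configuration up to rotation; the breakdown described in the abstract appears once noise is added to D.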

Room: 175

Topological Data Analysis: Functional Summaries and Locating Cosmic Voids and Filament Loops

Speaker: Jessi Cisewski Kehe

Data exhibiting complicated spatial structures are common in many areas of science (e.g. cosmology, biology), but can be difficult to analyze. Persistent homology is a popular approach within the area of Topological Data Analysis (TDA) that offers a way to represent, visualize, and interpret complex data by extracting topological features, which can be used to infer properties of the underlying structures. For example, TDA may be useful for analyzing the large-scale structure (LSS) of the Universe, which is an intricate and spatially complex web of matter.
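In the simplest, zero-dimensional case, persistence can be computed by hand (a toy sketch, not the speaker's pipeline): connected components in the distance filtration are born at scale 0 and die at single-linkage merge heights, which a union-find over sorted edges recovers.

```python
import numpy as np

def h0_deaths(X):
    """Death times of 0-dimensional homology classes of a point cloud."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    edges = sorted((D[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)     # one component dies at this filtration height
    return deaths                # n - 1 finite deaths; one class lives forever

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
print(h0_deaths(X))              # two tight pairs merge early; clusters merge late
```

Long-lived classes (the late merge at ~4.9 here) are read as genuine structure, the same logic used, in higher dimensions, to locate cosmic voids and filament loops.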

Room: 175

Anchor Regression: Heterogeneous Data Meets Causality

Speaker: Dominik Rothenhäusler

Many traditional statistical prediction methods mainly deal with the problem of overfitting to the given data set. On the other hand, there is a vast literature on the estimation of causal parameters for prediction under interventions. However, both types of estimators can perform poorly when used for prediction on heterogeneous data. We show that the change in loss under certain perturbations (interventions) can be written as a convex penalty. This motivates anchor regression, a “causal” regularization scheme that encourages the estimator to generalize well to perturbed data.
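Concretely, with anchor matrix A and projection P_A onto its column span, anchor regression solves argmin_b ||(I - P_A)(y - Xb)||^2 + gamma * ||P_A(y - Xb)||^2. A minimal sketch (data hypothetical) exploits the fact that this objective is a single weighted least-squares problem:

```python
import numpy as np

def anchor_regression(X, y, A, gamma):
    """Anchor regression via a weighted least-squares reformulation (sketch)."""
    P = A @ np.linalg.pinv(A)                       # projection onto span(A)
    # (I - P) and P are orthogonal, so the penalized objective equals ||W r||^2
    W = np.eye(len(y)) + (np.sqrt(gamma) - 1) * P
    return np.linalg.lstsq(W @ X, W @ y, rcond=None)[0]

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))
A = rng.normal(size=(200, 2))                       # anchor variables
y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=200)
print(anchor_regression(X, y, A, gamma=5.0))
```

At gamma = 1 this reduces exactly to OLS, and large gamma pushes toward IV-type estimation, which is the interpolation the abstract alludes to.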

Room: 175

Graphical Characterizations of Adjustment Sets

Speaker: Emilija Perkovic

Scientific research is often concerned with questions of cause and effect. For example, does eating processed meat cause certain types of cancer? Ideally, such questions are answered by randomized controlled experiments. However, these experiments can be costly, time-consuming, unethical, or impossible to conduct. Hence, often the only data available to answer causal questions are observational.
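With observational data, a valid adjustment set makes a causal effect estimable by regression. A toy simulation (my illustration, not from the talk) where a single confounder Z forms a valid adjustment set for the effect of treatment T on outcome Y:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100000
Z = rng.normal(size=n)                          # confounder
T = 2 * Z + rng.normal(size=n)                  # treatment influenced by Z
Y = 1.5 * T + 3 * Z + rng.normal(size=n)        # true causal effect of T is 1.5

unadj = np.polyfit(T, Y, 1)[0]                  # ignores Z: confounded
adj, *_ = np.linalg.lstsq(np.column_stack([T, Z, np.ones(n)]), Y, rcond=None)
print(unadj)    # biased: about 1.5 + 3 * cov(Z, T) / var(T) = 2.7
print(adj[0])   # about 1.5
```

Graphical criteria such as the ones studied in this line of work characterize exactly which covariate sets play the role of Z here, directly from the causal graph.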
 

Room: 175

When Your Big Data Seems Too Small

Speaker: Gregory Valiant

We discuss several problems related to the challenge of making accurate inferences about a complex phenomenon, given relatively little data.  We show that for several fundamental and practically relevant settings, including estimating the intrinsic dimensionality of a high-dimensional distribution, and learning a population of distributions given few data points from each distribution, it is possible to “denoise” the empirical distribution significantly.

Room: 175

Searching for Missing Heritability: A Closer Look at Methodological Issues

Speaker: Saonli Basu

Fundamental to the study of inheritance is the partitioning of the total phenotypic variation into genetic and environmental components. Using twin studies, the phenotypic variance-covariance matrix can be parameterized to include an additive genetic effect and shared and non-shared environmental effects. The ratio of the genetic variance component to the total phenotypic variance is the proportion of genetically controlled variation and is termed the ‘narrow-sense heritability’.
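Under the classical ACE twin model, these components can be read off from monozygotic and dizygotic twin correlations; Falconer's formula gives the narrow-sense heritability as h2 = 2(r_MZ - r_DZ). A minimal sketch:

```python
def falconer(r_mz, r_dz):
    """ACE variance shares from MZ and DZ twin correlations (Falconer's formula)."""
    h2 = 2 * (r_mz - r_dz)       # additive genetic share (narrow-sense heritability)
    c2 = 2 * r_dz - r_mz         # shared-environment share
    e2 = 1 - r_mz                # non-shared-environment share
    return h2, c2, e2

print(falconer(0.8, 0.5))        # h2 = 0.6, c2 = 0.2, e2 = 0.2
```

The "missing heritability" puzzle of the title arises when such twin-based estimates of h2 far exceed what identified genetic variants can jointly explain.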

Room: 175

On Functional Principal Component Regression

Speaker: Chongzhi Di

Functional data analysis has been increasingly used in biomedical studies, where the basic unit of measurement is a function, curve, or image. For example, in mobile health (mHealth) studies, wearable sensors collect high-resolution trajectories of physiological and behavioral signals over time. Functional linear regression models are useful tools for quantifying the association between functional covariates and scalar/functional responses, where a popular approach is via functional principal component analysis.
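A bare-bones sketch of functional principal component regression on synthetic curves (hypothetical data, not from the talk): extract the leading FPC scores via an SVD of the centered curves, then regress the scalar response on those scores.

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 300, 50
t = np.linspace(0, 1, T)
phi1, phi2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)    # true components
s1, s2 = rng.normal(0, 2, n), rng.normal(0, 1, n)            # true scores
curves = np.outer(s1, phi1) + np.outer(s2, phi2) + rng.normal(0, 0.1, (n, T))
y = 0.7 * s1 + rng.normal(0, 0.1, n)         # response driven by the first score

Xc = curves - curves.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # leading two FPC scores
Z = np.column_stack([scores, np.ones(n)])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
yhat = Z @ beta
print(np.corrcoef(y, yhat)[0, 1])            # high: scores capture the signal
```

The number of retained components is the key tuning parameter in practice; here two suffices because the curves were generated from two components.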

Room: 175

Bayes Shrinkage at GWAS Scale: A Scalable Algorithm for the Horseshoe Prior with Theoretical Guarantees

Speaker: James Johndrow

The horseshoe prior is frequently employed in Bayesian analysis of high-dimensional models, and has been shown to achieve minimax optimal risk properties when the truth is sparse. While optimization-based algorithms for the extremely popular Lasso and elastic net procedures can scale to dimension in the hundreds of thousands, algorithms for the horseshoe that use Markov chain Monte Carlo (MCMC) for computation are limited to problems an order of magnitude smaller. This is due to high computational cost per step and poor mixing of existing MCMC algorithms.
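For orientation, the horseshoe hierarchy itself is easy to simulate (this sketch only draws from the prior; it is not the scalable MCMC of the talk): beta_j | lambda_j, tau ~ N(0, lambda_j^2 tau^2) with half-Cauchy local scales lambda_j, producing both a spike of mass near zero and very heavy tails.

```python
import numpy as np

rng = np.random.default_rng(8)
tau = 0.1                                     # global shrinkage scale
lam = np.abs(rng.standard_cauchy(100000))     # half-Cauchy local scales
beta = rng.normal(0, lam * tau)               # horseshoe prior draws

# The two features that make the prior good for sparsity coexist:
print(np.mean(np.abs(beta) < 0.01))           # substantial mass very near zero
print(np.max(np.abs(beta)))                   # occasional very large draws
```

The heavy-tailed local scales are also exactly what makes naive MCMC mix poorly at scale, which motivates the algorithmic contribution of the talk.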

Room: 175

Markov Random Fields, Geostatistics, and Matrix-Free Computation

Speaker: Debashis Mondal

Since their introduction in statistics through the seminal works of Julian Besag, Gaussian Markov random fields have become central to spatial statistics, with applications in agriculture, epidemiology, geology, image analysis and other areas of environmental science. Specified by a set of conditional distributions, these Markov random fields provide a very rich and flexible class of spatial processes, and their adaptability to fast statistical calculations, including those based on Markov chain Monte Carlo computations, makes them very attractive to statisticians.
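The phrase "matrix-free" can be illustrated on a lattice (a toy sketch, not the speaker's methods): the action of a precision matrix kappa*I + L, with L the graph Laplacian of the 4-neighbour grid, is applied as a local stencil inside conjugate gradients, so the matrix itself is never formed.

```python
import numpy as np

m = 32                                        # m x m lattice

def apply_precision(x, kappa=0.1):
    """Apply (kappa * I + graph Laplacian of the 4-neighbour lattice) to x."""
    u = x.reshape(m, m)
    p = np.pad(u, 1)                          # zero boundary
    nbr_sum = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    ones = np.pad(np.ones((m, m)), 1)
    ndeg = ones[:-2, 1:-1] + ones[2:, 1:-1] + ones[1:-1, :-2] + ones[1:-1, 2:]
    return (kappa * u + ndeg * u - nbr_sum).ravel()

def conjugate_gradient(b, tol=1e-10, maxit=2000):
    """Solve Q x = b using only matrix-vector products with Q."""
    x = np.zeros_like(b)
    r = b - apply_precision(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_precision(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(9)
b = rng.normal(size=m * m)
x = conjugate_gradient(b)
print(np.linalg.norm(apply_precision(x) - b))  # small residual
```

Because each product costs O(m^2) memory and time, the same idea scales to lattices far too large for a dense or even sparse factorization of the precision matrix.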

Room: 175

Variational Analysis of Empirical Risk Minimization

Speaker: Andrew Nobel

This talk presents a variational framework for the asymptotic analysis of empirical risk minimization in general settings.  In its most general form the framework concerns a two-stage inference procedure.  In the first stage of the procedure, an average loss criterion is used to fit the trajectory of an observed dynamical system with a trajectory of a reference dynamical system.  In the second stage of the procedure, a parameter estimate is obtained from the optimal trajectory of the reference system.

Room: 175

Stability

Speaker: Bin Yu

Reproducibility is imperative for any scientific discovery. More often than not, modern scientific findings rely on statistical analysis of high-dimensional data. At a minimum, reproducibility manifests itself in stability of statistical results relative to "reasonable" perturbations to data and to the model used. Jackknife, bootstrap, and cross-validation are based on perturbations to data, while robust statistics methods deal with perturbations to models. In this talk, a case is made for the importance of stability in statistics.
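A minimal stability check in this spirit (my sketch): perturb the data by bootstrap resampling and measure how much a statistic of interest moves.

```python
import numpy as np

rng = np.random.default_rng(10)
x = rng.normal(size=500)

def stability(stat, x, B=500):
    """Spread of a statistic over B bootstrap perturbations of the data."""
    reps = [stat(x[rng.integers(0, len(x), len(x))]) for _ in range(B)]
    return np.std(reps)

print(stability(np.mean, x))     # about 1 / sqrt(500), roughly 0.045
print(stability(np.median, x))   # somewhat larger for the median
```

A result that swings wildly under such mild perturbations is, on this view, not a finding one should report, regardless of its nominal p-value.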

Room: 102