University of Michigan - Department of Statistics
Estimation of covariance matrices has a number of applications, including principal component analysis, classification by discriminant analysis, and inference of independence and conditional independence between variables. In high dimensions, however, the sample covariance matrix has many undesirable features unless it is regularized. Recent research has mostly focused on regularization in situations where the variables have a natural ordering; when no such ordering exists, regularization must be performed in a way that is invariant under variable permutations. This talk will discuss two new regularized sparse covariance estimators that are invariant to variable permutations: covariance thresholding and sparse estimation of the inverse covariance matrix via a penalized likelihood approach. We obtain convergence rates that make explicit the trade-offs between the dimension, the sample size, and the sparsity of the true model. We will also discuss a method for finding a "good" ordering of the variables when one is not provided, based on Isomap, a manifold learning algorithm.
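As a rough illustration of the thresholding idea, the sketch below hard-thresholds the off-diagonal entries of the sample covariance matrix. The threshold form t = c·sqrt(log p / n) and the constant c are illustrative assumptions chosen to match the rate-style trade-off between dimension and sample size mentioned above, not necessarily the talk's exact tuning.

```python
import numpy as np

def threshold_covariance(X, c=0.5):
    """Hard-threshold the sample covariance of an n-by-p data matrix X.

    Off-diagonal entries smaller in magnitude than
    t = c * sqrt(log p / n) are set to zero; the diagonal is kept.
    (Threshold form and constant c are illustrative assumptions.)
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)            # p x p sample covariance
    t = c * np.sqrt(np.log(p) / n)         # rate-driven threshold level
    T = np.where(np.abs(S) >= t, S, 0.0)   # hard thresholding
    np.fill_diagonal(T, np.diag(S))        # variances are not thresholded
    return T

# toy usage: independent variables, so most off-diagonal entries vanish
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
T = threshold_covariance(X)
print(np.count_nonzero(T))
```

Note that the estimator depends only on entrywise magnitudes of the sample covariance, so it is invariant under permutations of the variables, which is the property emphasized above.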
The talk includes joint work with Adam Rothman, Amy Wagaman, Ji Zhu (University of Michigan) and Peter Bickel (UC Berkeley).