Seminar Details

Oct 27, 3:30 pm

Sparse Bayes Estimation in Non-Gaussian Models via Data Augmentation

Nicholas G. Polson, Chicago Booth School of Business

In this paper we provide a data-augmentation scheme that unifies many common sparse Bayes estimators into a single class. This leads to simple iterative algorithms for estimating the posterior mode under arbitrary combinations of likelihoods and priors within the class. The class itself is quite large: for example, it includes quantile regression, support vector machines, and logistic and multinomial logistic regression, along with the usual ridge regression, lasso, bridge estimators, and regression with heavy-tailed errors. To arrive at this unified framework, we represent a wide class of objective functions as variance–mean mixtures of Gaussians involving both the likelihood and penalty functions. This generalizes existing theory based solely on variance mixtures for the penalty function, and allows the theory of conditionally normal linear models to be brought to bear on a much wider class of models. We focus on two possible choices of the mixing measures: the generalized inverse-Gaussian and Pólya distributions, leading to the hyperbolic and Z distributions, respectively. We exploit this conditional normality to find sparse, regularized estimates using tilted iteratively re-weighted least squares (TIRLS). Finally, we characterize the conditional moments of the latent variances for any model in our proposed class, and show the relationship between our method and two recent algorithms: LQA (local quadratic approximation) and LLA (local linear approximation).

Keywords: variance–mean Gaussian mixtures, sparse regression, classification, data augmentation, quantile regression, support vector machines
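As a toy illustration of the mixture idea (not taken from the talk itself): the classic special case is that a Laplace prior, which underlies the lasso penalty, can be written as a pure variance mixture of Gaussians with exponential mixing on the latent variance. The sketch below checks this identity by Monte Carlo, drawing a latent variance v from an Exponential distribution with mean 2 and then z | v ~ N(0, v); the marginal of z should then be Laplace with unit scale, so Var(z) = 2 and E|z| = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Mixing step: latent variances v with density (1/2) exp(-v/2),
# i.e. Exponential with mean 2.
v = rng.exponential(scale=2.0, size=n)

# Conditional draw: z | v ~ N(0, v).
z = rng.normal(loc=0.0, scale=np.sqrt(v))

# Marginally, z should be Laplace(0, 1) with density (1/2) exp(-|z|),
# so its variance is 2 and its mean absolute value is 1.
print(np.var(z), np.mean(np.abs(z)))
```

The paper's contribution goes further, using variance–mean (not just variance) mixtures so that asymmetric objectives such as the quantile-regression check loss and the SVM hinge loss also fit the conditionally Gaussian template.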