Bayes Shrinkage at GWAS Scale: A Scalable Algorithm for the Horseshoe Prior with Theoretical Guarantees

Speaker
James Johndrow

The horseshoe prior is frequently employed in Bayesian analysis of high-dimensional models and has been shown to achieve minimax-optimal risk properties when the truth is sparse. While optimization-based algorithms for the extremely popular Lasso and elastic net procedures can scale to dimensions in the hundreds of thousands, algorithms for the horseshoe that use Markov chain Monte Carlo (MCMC) for computation are limited to problems an order of magnitude smaller, owing to the high computational cost per step and the poor mixing of existing MCMC algorithms. We propose new MCMC algorithms for these models with improved performance. One of the algorithms also approximates an expensive matrix product, giving orders-of-magnitude speedups in high-dimensional applications. We prove that the exact algorithm is geometrically ergodic, and we give guarantees for the accuracy of the approximate algorithm using perturbation theory. Versions of the approximate algorithm that gradually decrease the approximation error as the chain extends are shown to be exact. The scalability of the algorithm is illustrated in simulations with problems as large as N = 5,000 observations and p = 50,000 predictors, and in an application to a genome-wide association study with N = 2,267 and p = 98,385. The empirical results also show that the new algorithm yields estimates with lower mean squared error and intervals with better coverage, and that it uncovers features of the posterior that previous algorithms often missed in high dimensions, including bimodality of posterior marginals indicating uncertainty about which covariates belong in the model.
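As background for readers less familiar with the model: the standard horseshoe formulation for a regression coefficient vector β (Carvalho, Polson, and Scott, 2010) places a half-Cauchy prior on a local scale for each coefficient and on a single global scale. The abstract does not restate the model, so the following is standard background rather than notation from the talk, and conventions for where the noise variance σ² enters vary across papers:

```latex
\begin{align*}
  y \mid \beta, \sigma^2 &\sim \mathcal{N}(X\beta,\, \sigma^2 I_N), \\
  \beta_j \mid \lambda_j, \tau &\sim \mathcal{N}(0,\, \lambda_j^2 \tau^2), \qquad j = 1, \dots, p, \\
  \lambda_j &\sim \mathrm{C}^{+}(0, 1), \qquad \tau \sim \mathrm{C}^{+}(0, 1),
\end{align*}
```

where C⁺(0, 1) denotes the standard half-Cauchy distribution.

On the computational side, conditionally Gaussian samplers for this model typically involve a matrix such as M = I_N + τ² X Λ Xᵀ with Λ = diag(λ₁², …, λ_p²), and forming X Λ Xᵀ costs O(N²p); this is the kind of expensive matrix product the abstract refers to. The abstract does not spell out the approximation, so the sketch below illustrates one natural scheme of this general kind: threshold the scaled local variances τ²λ_j² and keep only the active columns. The function names, the threshold rule, and the constants are assumptions for illustration, not the talk's exact method:

```python
import numpy as np

def xdx_exact(X, lam2, tau2):
    """Exact product X diag(tau2 * lam2) X^T; costs O(N^2 p)."""
    return (X * (tau2 * lam2)) @ X.T

def xdx_thresholded(X, lam2, tau2, delta):
    """Approximate the same product, keeping only columns whose
    scaled local variance tau2 * lam2[j] exceeds delta. Under
    horseshoe-type shrinkage most local scales are tiny, so the
    active set S is small and the cost falls to O(N^2 |S|).
    (Illustrative rule only; the talk's algorithm may differ.)"""
    S = tau2 * lam2 > delta
    Xs = X[:, S]
    return (Xs * (tau2 * lam2[S])) @ Xs.T

# Toy check on heavy-tailed local scales.
rng = np.random.default_rng(0)
N, p = 200, 5000
X = rng.standard_normal((N, p))
lam2 = rng.standard_cauchy(p) ** 2    # half-Cauchy local scale, squared
tau2 = 1e-4                           # small global scale
exact = xdx_exact(X, lam2, tau2)
approx = xdx_thresholded(X, lam2, tau2, delta=1e-3)
kept = int((tau2 * lam2 > 1e-3).sum())
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"kept {kept} of {p} columns; relative Frobenius error {rel_err:.1e}")
```

In this toy check only a small fraction of the columns survive the threshold, while the discarded columns each contribute at most δ in scaled variance, so the relative error shrinks as δ decreases. This is the intuition behind the abstract's perturbation-theory guarantees and behind running the chain with an approximation error that gradually decreases.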
