The primal-dual witness (PDW) technique is a now-standard proof strategy for establishing variable selection consistency in sparse high-dimensional estimation problems where the objective function and regularizer are convex. The method proceeds by optimizing the objective over the parameter space restricted to the true support of the unknown vector, then using a dual witness to certify that the resulting solution is also a global optimum of the unrestricted problem. We present a modified primal-dual witness framework that applies even to nonconvex penalized objective functions satisfying certain regularity conditions. Notably, our theory yields rigorous support recovery guarantees for local and global optima of regularized regression estimators with nonconvex penalties such as the SCAD and MCP, without the restrictive incoherence conditions required by Lasso-based theory. These optima may be obtained efficiently via projected gradient descent methods.
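As a rough illustration of the kind of estimator discussed above (not the authors' algorithm), the following sketch runs composite gradient descent on least squares with an MCP penalty, using the firm-thresholding proximal operator of the MCP. All function names and parameter values here are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def mcp_prox(z, lam, gamma, eta):
    # Proximal operator of the MCP penalty with stepsize eta
    # ("firm thresholding"); valid when gamma > eta.
    a = np.abs(z)
    return np.where(
        a <= eta * lam,
        0.0,  # small coefficients are zeroed out
        np.where(
            a <= gamma * lam,
            np.sign(z) * (a - eta * lam) / (1.0 - eta / gamma),  # shrink
            z,  # large coefficients are left unbiased
        ),
    )

def mcp_regression(X, y, lam=0.1, gamma=3.0, n_iter=500):
    # Composite gradient descent for (1/2n)||y - X beta||^2 + MCP penalty.
    n, p = X.shape
    L = np.linalg.eigvalsh(X.T @ X / n).max()  # Lipschitz constant of gradient
    eta = 1.0 / L
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = mcp_prox(beta - eta * grad, lam, gamma, eta)
    return beta
```

On a well-conditioned random design with a strongly sparse signal, iterates of this type typically recover the true support; the nonconvexity of the MCP is mild enough (for gamma above the stepsize) that each proximal step remains well defined.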
This is joint work with Martin Wainwright.
(Coffee, tea, and cookies will be served after the seminar in the Statistics Lounge, Padelford B-302)