On the sparse Bayesian learning of linear models
Communications in Statistics - Theory and Methods
This work is a re-examination of the sparse Bayesian learning (SBL) of linear regression models of Tipping (2001) in a high-dimensional setting with a sparse signal. We show that in general the SBL estimator does not recover the sparsity structure of the signal. To remedy this, we propose a hard-thresholded version of the SBL estimator that achieves, for orthogonal design matrices, the nonasymptotic estimation error rate of σ√(s log p)/√n, where n is the sample size, p the number of regressors, σ the regression model standard deviation, and s the number of non-zero regression coefficients. We also establish that, with high probability, the estimator recovers the sparsity structure of the signal. In our simulations we found that the performance of thresholded SBL depends on the strength of the signal: with a weak signal, thresholded SBL performs poorly compared to the lasso (Tibshirani (1996)), but it outperforms the lasso when the signal is strong.

2000 Mathematics Subject Classification. 60F15, 60G42.
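As a rough illustration of the hard-thresholding step described above, the sketch below (our own construction, not the paper's procedure) thresholds a coordinate-wise estimate at the universal noise level σ√(2 log p / n) under an orthogonal design, where each coordinate estimate is the true coefficient plus independent N(0, σ²/n) noise. The ordinary least-squares estimate stands in for the actual SBL posterior mean, and the threshold constant 2 is an assumption chosen for illustration.

```python
import numpy as np

def hard_threshold(beta_hat, sigma, n, p, c=2.0):
    """Zero out coordinates below the noise level sigma*sqrt(c*log(p)/n).

    The constant c is a tuning choice; c=2 gives the classical
    universal threshold for coordinate-wise Gaussian noise.
    """
    tau = sigma * np.sqrt(c * np.log(p) / n)
    return np.where(np.abs(beta_hat) > tau, beta_hat, 0.0)

rng = np.random.default_rng(0)
n, p, s, sigma = 400, 200, 5, 1.0

# Sparse signal: s strong non-zero coefficients, the rest exactly zero.
beta = np.zeros(p)
beta[:s] = 3.0

# Orthogonal design (X'X/n = I): the least-squares estimate is the
# true coefficient plus independent N(0, sigma^2/n) noise per coordinate.
beta_hat = beta + (sigma / np.sqrt(n)) * rng.standard_normal(p)

beta_thr = hard_threshold(beta_hat, sigma, n, p)
support = np.flatnonzero(beta_thr)
err = np.linalg.norm(beta_thr - beta)
print(support, err)
```

With a strong signal (here 3.0, far above the threshold of about 0.16), the true support survives thresholding and the estimation error is on the scale of σ√(s log p)/√n; with coefficients near the threshold, coordinates are zeroed out and the estimator degrades, matching the weak-signal behaviour reported in the simulations.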