111 Hits in 4.7 sec

Interplay of minimax estimation and minimax support recovery under sparsity [article]

Mohamed Ndaoud
2018 arXiv   pre-print
In this paper, we study a new notion of scaled minimaxity for sparse estimation in the high-dimensional linear regression model.  ...  Fixing the scale of the signal-to-noise ratio, we prove that the estimation error can be much smaller than the global minimax error.  ...  Acknowledgments We would like to thank Alexandre Tsybakov for valuable comments on early versions of this manuscript.  ... 
arXiv:1810.05478v1

Scaled minimax optimality in high-dimensional linear regression: A non-convex algorithmic regularization approach [article]

Mohamed Ndaoud
2020 arXiv   pre-print
Taking advantage of the interplay between estimation, support recovery and optimization, we achieve both optimal statistical accuracy and fast convergence.  ...  Moreover, we establish sharp optimal results for both estimation and support recovery.  ...  I would like to thank Alexandre Tsybakov and Pierre Bellec for valuable comments on early versions of this manuscript. This work was partially supported by a James H.  ... 
arXiv:2008.12236v1

Optimal False Discovery Control of Minimax Estimator [article]

Qifan Song, Guang Cheng
2022 arXiv   pre-print
Two major research tasks lie at the heart of high dimensional data analysis: accurate parameter estimation and correct support recovery.  ...  under near-linear and linear sparsity settings.  ...  More specifically, within the near-linear sparsity regime, the rate-minimax estimator at best achieves almost-full recovery of the sparsity structure.  ... 
arXiv:1812.10013v3

Bayesian estimation of sparse signals with a continuous spike-and-slab prior

Veronika Ročková
2018 Annals of Statistics  
We introduce a new framework for estimation of normal means, bridging the gap between popular frequentist strategies (LASSO) and popular Bayesian strategies (spike-and-slab).  ...  The quality of recovery here will be assessed relative to the benchmark minimax risk 2p_n log(n/p_n)(1 + o(1)) [12] and the "near-minimax" risk 2p_n log n (1 + o(1)).  ... 
doi:10.1214/17-aos1554
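
The benchmark risk quoted in this snippet is the one attained by thresholding at level sqrt(2 log(n/p_n)). As a hedged illustration (this is plain soft-thresholding in the normal-means model, not the paper's spike-and-slab procedure; the dimensions and signal strength are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_n = 10_000, 50                 # ambient dimension and sparsity (illustrative)
theta = np.zeros(n)
theta[:p_n] = 6.0                   # a few strong means, the rest exactly zero
y = theta + rng.standard_normal(n)  # normal-means observations y_i = theta_i + z_i

# Soft-threshold at sqrt(2 log(n/p_n)), the level behind the benchmark
# risk 2 p_n log(n/p_n)(1 + o(1)) quoted in the abstract.
t = np.sqrt(2 * np.log(n / p_n))
theta_hat = np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

risk = float(np.sum((theta_hat - theta) ** 2))
benchmark = 2 * p_n * np.log(n / p_n)
print(risk, benchmark)
```

On a draw like this the realized squared error lands within a small constant factor of the benchmark; the continuous spike-and-slab posterior modes studied in the paper can be viewed, roughly, as adaptive relatives of such thresholding rules.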

Linear Inverse Problems with Norm and Sparsity Constraints [article]

Volkan Cevher, Sina Jafarpour, Anastasios Kyrillidis
2015 arXiv   pre-print
The salient characteristic of these approaches is that they exploit the convex ℓ_1-ball and the non-convex ℓ_0-sparsity constraints jointly in sparse recovery.  ...  To establish the theoretical approximation guarantees of GAME and CLASH, we cover an interesting range of topics from game theory, convex and combinatorial optimization.  ...  We also discovered that the interplay of these two (seemingly related) priors could lead to not only strong theoretical recovery guarantees from weaker assumptions than commonly used in sparse recovery,  ... 
arXiv:1507.05370v1

The Cost of Privacy in Generalized Linear Models: Algorithms and Minimax Lower Bounds [article]

T. Tony Cai, Yichen Wang, Linjun Zhang
2020 arXiv   pre-print
We propose differentially private algorithms for parameter estimation in both low-dimensional and high-dimensional sparse generalized linear models (GLMs) by constructing private versions of projected  ...  We show that the proposed algorithms are nearly rate-optimal by characterizing their statistical performance and establishing privacy-constrained minimax lower bounds for GLMs.  ...  Cai was supported in part by the National Science Foundation grants DMS-1712735 and DMS-2015259 and the National Institutes of Health grants R01-GM129781 and R01-GM123056. The research of L.  ... 
arXiv:2011.03900v2

Mathematical and Computational Foundations of Learning Theory (Dagstuhl Seminar 15361)

Matthias Hein, Gabor Lugosi, Lorenzo Rosasco, Marc Herbstritt
2016 Dagstuhl Reports  
The main topics of this seminar were the interplay between optimization and learning, and the learning of data representations.  ...  The goal of the seminar was to bring together again experts from computer science, mathematics and statistics to discuss the state of the art in machine learning and to identify and formulate the key challenges  ...  We would like to thank Dagmar Glaser and the staff at Schloss Dagstuhl for their continuous support and great hospitality, which was the basis for the success of this seminar.  ... 
doi:10.4230/dagrep.5.8.54 dblp:journals/dagstuhl-reports/0001LR15

Covariate assisted screening and estimation

Zheng Tracy Ke, Jiashun Jin, Jianqing Fan
2014 Annals of Statistics  
We approach this problem by a new procedure called the covariate assisted screening and estimation (CASE).  ...  Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ∼ N(0, I_n). The vector β is unknown but is sparse in the sense that most of its coordinates are 0.  ...  The authors would like to thank Ning Hao, Philippe Rambour and David Siegmund for helpful pointers and comments.  ... 
doi:10.1214/14-aos1243 pmid:25541567 pmcid:PMC4274608

The Noise-Sensitivity Phase Transition in Compressed Sensing

David L. Donoho, Arian Maleki, Andrea Montanari
2011 IEEE Transactions on Information Theory  
The focus of the present paper is on computing the minimax formal MSE within the class of sparse signals x_0. (Here and below we write a ∼ b if a/b → 1 as both quantities tend to infinity.)  ...  Other papers by the authors detail expressions for the formal MSE of AMP and its close connection to ℓ_1-penalized reconstruction.  ...  Work partially supported by NSF DMS-0505303, NSF DMS-0806211, NSF CAREER CCF-0743978. Thanks to Iain Johnstone and Jared Tanner for helpful discussions.  ... 
doi:10.1109/tit.2011.2165823
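
For context on the AMP iteration behind the formal MSE mentioned in this snippet, here is a minimal sketch: soft-threshold denoising plus the Onsager correction term. It is not the authors' tuned minimax construction; the threshold rule (2× the residual standard deviation), the dimensions, and the noiseless setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, k = 250, 500, 20                        # measurements, signal length, sparsity (illustrative)
A = rng.standard_normal((n, N)) / np.sqrt(n)  # sensing matrix with roughly unit-norm columns
x0 = np.zeros(N)
x0[:k] = rng.choice([-1.0, 1.0], size=k)      # k-sparse signal
y = A @ x0                                    # noiseless measurements, for simplicity

def eta(u, t):
    """Soft threshold: the scalar denoiser whose minimax tuning the paper analyzes."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x, z = np.zeros(N), y.copy()
for _ in range(30):
    pseudo = x + A.T @ z                      # behaves like x0 plus Gaussian noise
    t = 2.0 * z.std()                         # crude threshold rule, not the tuned minimax one
    x_new = eta(pseudo, t)
    # Onsager correction: (1/delta) * mean(eta'(pseudo)) * z, with delta = n/N,
    # which here simplifies to (number of nonzeros of x_new) / n times z.
    z = y - A @ x_new + (np.count_nonzero(x_new) / n) * z
    x = x_new

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
print(rel_err)
```

At this undersampling ratio (δ = 0.5) and sparsity (k/n = 0.08) the iteration sits inside the recovery region, so the relative error shrinks geometrically across iterations.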

Tight conditions for consistency of variable selection in the context of high dimensionality

Laëtitia Comminges, Arnak S. Dalalyan
2012 Annals of Statistics  
We apply these results to derive minimax separation rates for the problem of variable selection.  ...  Without assuming any parametric form of the underlying regression function, we get tight conditions making it possible to consistently estimate the set of relevant variables.  ...  A simple consequence of inequalities (19) and (20) is that consistent recovery of the sparsity pattern is possible under the condition d*/log n → 0 and impossible for d*/log n → ∞ as n → ∞.  ... 
doi:10.1214/12-aos1046

An Optimal Reduction of TV-Denoising to Adaptive Online Learning [article]

Dheeraj Baby, Xuandong Zhao, Yu-Xiang Wang
2021 arXiv   pre-print
rate of Õ(n^{1/3} C_n^{2/3}) under squared error loss.  ...  We reveal a deep connection to the seemingly disparate problem of Strongly Adaptive online learning (Daniely et al., 2015) and provide an O(n log n) time algorithm that attains the near minimax optimal  ...  In summary, the delicate interplay between Strongly Adaptive regret bounds and properties of the partition we exhibit leads to the adaptively minimax optimal estimation rate for ALIGATOR.  ... 
arXiv:2101.09438v2

Recovering block-structured activations using compressive measurements

Sivaraman Balakrishnan, Mladen Kolar, Alessandro Rinaldo, Aarti Singh
2017 Electronic Journal of Statistics  
In compressed sensing, it has been shown that, at least in a minimax sense, for both detection and support recovery, adaptivity and contiguous structure only reduce signal strength requirements by logarithmic factors.  ...  We consider the problems of detection and support recovery of a contiguous block of weak activation in a large matrix, from a small number of noisy, possibly adaptive, compressive (linear) measurements.  ...  This research is supported in part by AFOSR under grant FA9550-10-1-0382, NSF under grant IIS-1116458 and NSF CAREER grant DMS 1149677.  ... 
doi:10.1214/17-ejs1267

Optimal detection of sparse principal components in high dimension

Quentin Berthet, Philippe Rigollet
2013 Annals of Statistics  
Our minimax optimal test is based on a sparse eigenvalue statistic.  ...  We perform a finite sample analysis of the detection levels for sparse principal components of a high-dimensional covariance matrix.  ...  Indeed, Amini and Wainwright (2009) prove optimal rates of support recovery when θ is known and large enough, and for v taking values only in {0, ±1/√k}.  ... 
doi:10.1214/13-aos1127
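
The sparse eigenvalue statistic mentioned in this snippet maximizes the top eigenvalue over k × k principal submatrices of the empirical covariance. A brute-force sketch, feasible only for tiny p (the paper's point is precisely that this optimization is hard in general); the spiked-model parameters below are invented for illustration:

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(2)
n, p, k = 200, 10, 3              # samples, dimension, sparsity: tiny so brute force works
theta = 1.5                       # spike strength (illustrative)
v = np.zeros(p)
v[:k] = 1 / np.sqrt(k)            # k-sparse leading direction
# Spiked model: covariance I + theta * v v^T
X = rng.standard_normal((n, p)) + np.sqrt(theta) * rng.standard_normal((n, 1)) * v
S = X.T @ X / n                   # empirical covariance

def sparse_eig(S, k):
    """Largest eigenvalue over all k x k principal submatrices of S."""
    p = S.shape[0]
    return max(np.linalg.eigvalsh(S[np.ix_(idx, idx)])[-1]
               for idx in combinations(range(p), k))

stat = sparse_eig(S, k)
print(stat)                       # concentrates near 1 + theta under the alternative
```

Under the null (no spike) the statistic stays near 1 plus a term of order sqrt(k log p / n), so a spike of strength theta well above that level separates the two hypotheses.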

On lower bounds for the bias-variance trade-off [article]

Alexis Derumigny, Johannes Schmidt-Hieber
2021 arXiv   pre-print
Although there is a non-trivial interplay between bias and variance, the rates of the squared bias and the variance do not have to be balanced in order to achieve the minimax estimation rate.  ...  It is a common phenomenon that for high-dimensional and nonparametric statistical models, rate-optimal estimators balance squared bias and variance.  ...  We are grateful to Ming Yuan for helpful discussions during an early stage of the project and to Zijian Guo for pointing us to the article [13].  ... 
arXiv:2006.00278v3

Statistical and computational trade-offs in estimation of sparse principal components

Tengyao Wang, Quentin Berthet, Richard J. Samworth
2016 Annals of Statistics  
We also study the theoretical performance of a (polynomial-time) variant of the well-known semidefinite relaxation estimator, revealing a subtle interplay between statistical and computational efficiency  ...  An impressive range of estimators has been proposed; some of these are fast to compute, while others are known to achieve the minimax optimal rate over certain Gaussian or sub-Gaussian classes.  ...  We thank the anonymous reviewers for helpful and constructive comments on an earlier draft.  ... 
doi:10.1214/15-aos1369