
High-Dimensional Robust Mean Estimation via Gradient Descent [article]

Yu Cheng, Ilias Diakonikolas, Rong Ge, Mahdi Soltanolkotabi
2020 arXiv   pre-print
We study the problem of high-dimensional robust mean estimation in the presence of a constant fraction of adversarial outliers.  ...  Our work establishes an intriguing connection between algorithmic high-dimensional robust statistics and non-convex optimization, which may have broader applications to other robust estimation tasks.  ...  In this paper, we show that this premise is true for the task of high-dimensional robust mean estimation.  ... 
arXiv:2005.01378v1 fatcat:djfkywaegbfetprnisg5nylxzm
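The non-convex formulation behind this result can be sketched compactly: put a weight on every sample, run projected (sub)gradient descent to shrink the top eigenvalue of the weighted covariance, and report the weighted mean. The code below is a simplified illustration in that spirit, not the paper's algorithm; the step-size rule, iteration count, and function names (`robust_mean_gd`, `project_capped_simplex`) are ad hoc choices for this sketch.

```python
import numpy as np

def project_capped_simplex(v, cap, iters=60):
    """Euclidean projection onto {w : 0 <= w_i <= cap, sum(w) = 1},
    via bisection on the shift lam in w_i = clip(v_i - lam, 0, cap)."""
    lo, hi = v.min() - cap, v.max()
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if np.clip(v - lam, 0.0, cap).sum() > 1.0:
            lo = lam
        else:
            hi = lam
    return np.clip(v - 0.5 * (lo + hi), 0.0, cap)

def robust_mean_gd(X, eps, steps=50):
    """Projected (sub)gradient descent on w -> lambda_max of the
    w-weighted covariance; the weighted mean under the final w is
    the robust mean estimate."""
    n, d = X.shape
    cap = 1.0 / ((1.0 - eps) * n)          # no point may carry more weight
    w = np.full(n, 1.0 / n)
    for _ in range(steps):
        mu = w @ X                          # weighted mean (sum(w) == 1)
        Z = X - mu
        cov = (Z * w[:, None]).T @ Z        # weighted covariance
        v = np.linalg.eigh(cov)[1][:, -1]   # top eigenvector
        g = (Z @ v) ** 2                    # subgradient of lambda_max wrt w
        w = project_capped_simplex(w - (cap / (2 * g.max() + 1e-12)) * g, cap)
    return w @ X

rng = np.random.default_rng(0)
clean = rng.normal(size=(360, 5))           # true mean is 0
outliers = np.full((40, 5), 10.0)           # 10% adversarial points
X = np.vstack([clean, outliers])
mu_robust = robust_mean_gd(X, eps=0.1)      # close to 0
mu_naive = X.mean(axis=0)                   # dragged toward 10
```

The subgradient of the top eigenvalue with respect to w_i is the squared projection of x_i - mu_w onto the top eigenvector, so the points that inflate the variance (the outliers) lose weight fastest, while the cap prevents any single point from absorbing the freed-up mass.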

Robustness and Tractability for Non-convex M-estimators

Ruizhi Zhang, Yajun Mei, Jianjun Shi, Huan Xu
2022 Statistica sinica  
Specifically, robustness means the statistical property that the estimator should always be close to the true underlying parameters regardless of the distribution of the outliers, and tractability indicates  ...  We further extend our analysis to the high-dimensional setting, where the number of parameters is greater than the number of samples (p ≫ n), and prove that when the proportion of outliers is small, the penalized  ...  Conclusions: In this paper, we investigate the robustness and computational tractability of general (non-convex) M-estimators in both the classical low-dimensional regime and the modern high-dimensional regime.  ... 
doi:10.5705/ss.202019.0324 fatcat:nc2gslz5srhkpcuavilyvzvvxq

Boosting Based Conditional Quantile Estimation for Regression and Binary Classification [chapter]

Songfeng Zheng
2010 Lecture Notes in Computer Science  
Quantile Boost Regression (QBR) performs gradient descent in functional space to minimize the objective function used by quantile regression (QReg).  ...  In the classification scenario, the class label is defined via a hidden variable, and the quantiles of the class label are estimated by fitting the corresponding quantiles of the hidden variable.  ...  Compared to the mean value, quantiles are more robust to outliers [12] .  ... 
doi:10.1007/978-3-642-16773-7_6 fatcat:bksyzcfxmvfuvagycxrl2l3d2i
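QBR's core objective, gradient descent on the quantile-regression loss, is easiest to see in a plain parametric form. The sketch below is an illustration, not the boosting algorithm of the paper: it minimizes the pinball (check) loss for a linear model by subgradient descent, and `fit_quantile_linear` with its step-size settings is a hypothetical helper invented for this example.

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Quantile ("pinball"/check) loss minimized by quantile regression."""
    r = y - pred
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

def fit_quantile_linear(X, y, tau, lr=0.05, steps=2000):
    """Fit a linear model of the conditional tau-quantile by
    (sub)gradient descent on the pinball loss."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        r = y - X @ w
        # subgradient of the pinball loss w.r.t. the predictions
        g = np.where(r > 0, -tau, 1.0 - tau)
        w -= lr * (X.T @ g) / n
    return w

rng = np.random.default_rng(0)
X = np.c_[np.ones(500), rng.normal(size=500)]
y = X @ np.array([1.0, 2.0]) + rng.normal(size=500)
w_med = fit_quantile_linear(X, y, tau=0.5)  # tau=0.5 fits the conditional median
```

With symmetric noise, the tau = 0.5 fit recovers the conditional median; other values of tau trace out the other conditional quantiles, which is what makes the loss more outlier-robust than squared error.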

High Dimensional Robust M-Estimation: Arbitrary Corruption and Heavy Tails [article]

Liu Liu, Tianyang Li, Constantine Caramanis
2019 arXiv   pre-print
We define a natural condition we call the Robust Descent Condition (RDC), and show that if a gradient estimator satisfies the RDC, then Robust Hard Thresholding (IHT using this gradient estimator) is  ...  Specifically, new results include: (a) For k-sparse high-dimensional linear- and logistic-regression with heavy tail (bounded 4-th moment) explanatory and response variables, a linear-time-computable median-of-means  ...  In the high dimensional setting, we would require a high dimensional robust mean estimator in each gradient descent step.  ... 
arXiv:1901.08237v2 fatcat:6wnfahlsi5e3zkaxgxfgwkbhwy
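The median-of-means idea mentioned in the abstract can be stated in a few lines. This is a generic coordinate-wise version for mean estimation, not the paper's specific gradient estimator; `median_of_means` and the block counts below are illustrative choices.

```python
import numpy as np

def median_of_means(samples, num_blocks, rng=None):
    """Coordinate-wise median-of-means estimate of the mean.

    Split the samples into blocks at random, average within each
    block, then take the coordinate-wise median of the block means.
    As long as a majority of blocks contain no outlier, the median
    ignores the corrupted blocks."""
    rng = np.random.default_rng(rng)
    samples = np.asarray(samples)
    blocks = np.array_split(samples[rng.permutation(len(samples))], num_blocks)
    block_means = np.stack([b.mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)

rng = np.random.default_rng(1)
clean = rng.normal(loc=1.0, size=(990, 3))
corrupted = np.vstack([clean, np.full((10, 3), 100.0)])  # 1% gross outliers
est = median_of_means(corrupted, num_blocks=50)  # close to (1, 1, 1)
naive = corrupted.mean(axis=0)                   # dragged toward 100
```

With 50 blocks and 10 outliers, at least 40 blocks are clean, so the median of the block means sits among the clean averages; the heavy-tailed (bounded-moment) case works the same way, with the tail mass playing the role of the outliers.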

Gradient Estimation with Simultaneous Perturbation and Compressive Sensing [article]

Vivek S. Borkar, Vikranth R. Dwaracherla, Neeraja Sahasrabudhe
2016 arXiv   pre-print
This paper aims at achieving a "good" estimator for the gradient of a function on a high-dimensional space.  ...  Applications to estimating the gradient outer product matrix as well as standard optimization problems are illustrated via simulations.  ...  high dimensional problems such as estimating EGOP in manifold learning where gradients are actually low-dimensional and gradient estimation is relevant.  ... 
arXiv:1511.08768v2 fatcat:r74o36jvqnagpc4igobmquzvuy

The landscape of empirical risk for nonconvex losses

Song Mei, Yu Bai, Andrea Montanari
2018 Annals of Statistics  
Robust regression: High-dimensional regime.  ...  Robust regression: Very high-dimensional regime.  ... 
doi:10.1214/17-aos1637 fatcat:646bhqjaovclbdqcsuof2esoam

Robust Structured Statistical Estimation via Conditional Gradient Type Methods [article]

Jiacheng Zhuo, Liu Liu, Constantine Caramanis
2020 arXiv   pre-print
Next, we consider high dimensional problems. Robust mean estimation based approaches may have an unacceptably high sample complexity.  ...  Combined with robust mean gradient estimation techniques, we can therefore guarantee robustness to a wide class of problems, but now in a projection-free algorithmic framework.  ...  The robust gradient descent methods [CSX17, YCRB18, PSBR18, DKK + 18] propose to use a robust mean estimator on the gradient samples, and then perform projected gradient descent.  ... 
arXiv:2007.03572v1 fatcat:rb7ovyxtonc2vpj6y55kc3rbpi
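Several results listed here share the template the last sentence describes: replace the empirical average of per-sample gradients with a robust mean, then run (projected) gradient descent. Below is a minimal unconstrained sketch for least squares using a coordinate-wise trimmed mean as the robust aggregator; this is an illustrative choice, not any listed paper's estimator, and `trimmed_mean`/`robust_gd` are hypothetical names.

```python
import numpy as np

def trimmed_mean(G, trim=0.1):
    """Coordinate-wise trimmed mean: drop the smallest and largest
    trim-fraction of values in each coordinate, average the rest."""
    G = np.sort(G, axis=0)
    k = int(len(G) * trim)
    return G[k:len(G) - k].mean(axis=0)

def robust_gd(X, y, trim=0.1, lr=0.1, steps=500):
    """Least-squares gradient descent with the average gradient
    replaced by a robust mean of the per-sample gradients."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        resid = X @ w - y
        grads = resid[:, None] * X       # n x d per-sample gradients
        w -= lr * trimmed_mean(grads, trim)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
w_star = np.array([1.0, 2.0, 3.0])
y = X @ w_star + 0.5 * rng.normal(size=2000)
y[:100] = 1000.0                              # 5% gross label corruption
w_hat = robust_gd(X, y)                       # close to w_star
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # pulled far off by outliers
```

Because the corrupted rows produce per-sample gradients of magnitude around 10^3, they overwhelmingly land in the trimmed tails of every coordinate and barely touch the update, while ordinary least squares absorbs them directly.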

Robust Registration-Based Tracking by Sparse Representation with Model Update [chapter]

Peihua Li, Qilong Wang
2013 Lecture Notes in Computer Science  
Despite great advances achieved thus far, robust registration-based tracking in challenging conditions remains unsolved.  ...  For adaptation to dynamical scenarios, the mean vector and basis vectors of the appearance subspace are updated online by incremental SVD.  ...  Compared to the above work, our method essentially belongs to the deterministic, gradient descent family; the affine transformation is estimated via the objective function optimization rather than via  ... 
doi:10.1007/978-3-642-37431-9_16 fatcat:kndx4nzjqbfsbilaffsmdgjda4

Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization [article]

Samuel B. Hopkins, Jerry Li, Fred Zhang
2021 arXiv   pre-print
This connection allows us to avoid the technical complications in previous works and improve upon the run-time analysis of a gradient-descent-based algorithm for robust mean estimation by Cheng, Diakonikolas  ...  In this paper, we provide a meta-problem and a duality theorem that lead to a new unified view on robust and heavy-tailed mean estimation in high dimensions.  ...  Corollary 6.5 (heavy-tailed mean estimation via gradient descent). Assume the setting of Corollary 6.4.  ... 
arXiv:2007.15839v2 fatcat:bo54uigopfa75ohqa7sxkibxzq

A benchmark study on the efficiency of unconstrained optimization algorithms in 2D-aerodynamic shape design

L. Vorspel, M. Schramm, B. Stoevesandt, L. Brunold, M. Bünner, Pandian Vasant
2017 Cogent Engineering  
In this work, we benchmark several unconstrained optimization algorithms (Nelder-Mead, Quasi-Newton, steepest descent) under variation of gradient estimation schemes (adjoint equations, finite differences  ...  For intermediate and large numbers of design variables, gradient-based algorithms with gradient estimation through the solution of adjoint equations are unbeaten.  ...  The first two cases do not require gradient estimation. In two cases, gradient estimation via finite differences is used.  ... 
doi:10.1080/23311916.2017.1354509 fatcat:i52i7frahjhr7pyhhb4dm5dysq

Robust Estimation via Robust Gradient Estimation [article]

Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, Pradeep Ravikumar
2018 arXiv   pre-print
Our workhorse is a novel robust variant of gradient descent, and we provide conditions under which our gradient descent variant provides accurate estimators in a general convex risk minimization problem  ...  These results provide some of the first computationally tractable and provably robust estimators for these canonical statistical models.  ...  We summarize the overall robust gradient descent algorithm via gradient estimation in Algorithm 1.  ... 
arXiv:1802.06485v2 fatcat:gup5dafae5cj5knrmgyjdrtrg4

A Novel Gradient Descent Least Squares (GDLS) Algorithm for Efficient SMV Gridless Line Spectrum Estimation with Applications in Tomographic SAR Imaging [article]

Ruizhe Shi, Zhe Zhang, Xiaolan Qiu, Chibiao Ding
2022 arXiv   pre-print
Our proposed GDLS method reformulates the line spectrum estimation problem as a least squares (LS) estimation problem and minimizes the corresponding objective function via a gradient descent algorithm in  ...  This paper presents a novel efficient method for the gridless line spectrum estimation problem with a single snapshot, namely the gradient descent least squares (GDLS) method.  ...  minimum norm problem with gradient descent, namely the gradient descent least squares (GDLS) algorithm.  ... 
arXiv:2203.08574v2 fatcat:6v7tsqcspbeppi3jnaww7kgzqe

The local low-dimensionality of natural images [article]

Olivier J. Hénaff, Johannes Ballé, Neil C. Rabinowitz, Eero P. Simoncelli
2015 arXiv   pre-print
We develop a new statistical model for photographic images, in which the local responses of a bank of linear filters are described as jointly Gaussian, with zero mean and a covariance that varies slowly  ...  We show that images can be reconstructed nearly perfectly from estimates of the local filter response covariances alone, and with minimal degradation (either visual or MSE) from low-rank approximations  ...  Via a descent method, we impose the covariance map estimated from an original image onto a second, noise image.  ... 
arXiv:1412.6626v4 fatcat:bal3xtkdx5b5vdjrwglh6kv23i

Efficient learning with robust gradient descent

Matthew J. Holland, Kazushi Ikeda
2019 Machine Learning  
Using high-probability bounds on the excess risk of this algorithm, we show that our update does not deviate far from the ideal gradient-based update.  ...  To achieve better performance under less stringent requirements, we introduce a procedure which constructs a robust approximation of the risk gradient for use in an iterative learning routine.  ...  Finally, the conceptually closest recent works to ours are those also analyzing novel "robust gradient descent" algorithms, namely steepest descent procedures which utilize a robust estimate of  ... 
doi:10.1007/s10994-019-05802-5 fatcat:c4332kpnrvbttji67atuk2jyey

Deep Signal Recovery with One-bit Quantization

Shahin Khobahi, Naveed Naimipour, Mojtaba Soltanalian, Yonina C. Eldar
2019 ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
The resulting network, which we refer to as DeepRec, can efficiently handle the recovery of high-dimensional signals from acquired one-bit noisy measurements.  ...  Interestingly, we fix the complexity budget of the inference framework (via fixing the number of layers), and apply the gradient descent method to yield the most accurate estimation of the parameter in  ...  Maximum Likelihood Estimator Derivation Given the knowledge of the sensing matrix H, noise covariance C, and the corresponding quantization thresholds τ , our goal is to recover the original (likely high-dimensional  ... 
doi:10.1109/icassp.2019.8683876 dblp:conf/icassp/KhobahiNSE19 fatcat:2h24c3dpwbbwvavtlddduzdczi
Showing results 1 — 15 out of 30,573 results