9,273 Hits in 7.3 sec

Robustness of Iteratively Pre-Conditioned Gradient-Descent Method: The Case of Distributed Linear Regression Problem

Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra
2020 IEEE Control Systems Letters  
In noise-free systems, the recently proposed distributed linear regression algorithm, named the Iteratively Pre-conditioned Gradient-descent (IPG) method, has been claimed to converge faster than related  ...  This paper considers the problem of multi-agent distributed linear regression in the presence of system noises.  ...  (APC) [1], the quasi-Newton BFGS method [12], and a recently proposed iteratively pre-conditioned gradient-descent (IPG) method [13].  ... 
doi:10.1109/lcsys.2020.3045533 fatcat:5bspz2k6mzaqle4ppolej3toty

Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem [article]

Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra
2021 arXiv   pre-print
We propose an iterative pre-conditioning technique that mitigates the deleterious effect of the conditioning of data points on the rate of convergence of the gradient-descent method.  ...  We rigorously show that the resulting pre-conditioned gradient-descent method, with the proposed iterative pre-conditioning, achieves superlinear convergence when the least-squares problem has a unique  ...  Acknowledgements This work is being carried out as a part of the Pipeline System Integrity Management Project, which is supported by the Petroleum Institute, Khalifa University of Science and Technology  ... 
arXiv:2008.02856v2 fatcat:qebraytmqvgkbo5lijvn5it2xe
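
As a rough illustration of the pre-conditioning idea described in this entry, the following Python sketch runs a centralized (single-machine) version of an iteratively pre-conditioned gradient step for the least-squares cost ||Ax - b||^2: the pre-conditioner K is refined toward the inverse of A^T A alongside the iterate x. The function name, step sizes, and the centralized setting are illustrative assumptions; the paper's actual algorithm is distributed over a server-agent network.

    import numpy as np

    def ipg_least_squares(A, b, iters=200):
        """Illustrative (non-distributed) iteratively pre-conditioned gradient
        descent for the least-squares cost f(x) = ||Ax - b||^2."""
        n = A.shape[1]
        H = A.T @ A                              # Gram matrix of the data points
        alpha = 1.0 / np.linalg.norm(H, 2)       # step size for the pre-conditioner update
        x = np.zeros(n)
        K = np.zeros((n, n))                     # pre-conditioner, refined every iteration
        for _ in range(iters):
            K = K - alpha * (H @ K - np.eye(n))  # drive K toward the inverse of H
            grad = A.T @ (A @ x - b)             # gradient of f up to a constant factor
            x = x - K @ grad                     # pre-conditioned gradient step
        return x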

Projected Gradient Method for Decentralized Optimization over Time-Varying Networks [article]

Alexander Rogozin, Alexander Gasnikov
2020 arXiv   pre-print
In this paper, we improve the rate of geometric convergence in the latter paper for the considered class of problems, using an original penalty method trick and robustness of projected gradient descent  ...  The first results about the possibility of geometric rates of convergence for strongly convex smooth optimization problems on such networks were obtained only two years ago (Nedic, 2017).  ...  Note that the proposed analysis of the external non-accelerated gradient descent method can be generalized to the case of an accelerated gradient method. We plan to do this in subsequent work.  ... 
arXiv:1911.08527v4 fatcat:i6vhn6egqffn3ftbgt25m7ouwe
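
For readers unfamiliar with the projected gradient step this entry builds on, here is a minimal single-machine sketch. The decentralized, time-varying-network machinery and the penalty-method trick of the paper are not reproduced; the quadratic objective and the unit-ball constraint in the usage example are made up for illustration.

    import numpy as np

    def projected_gradient_descent(grad, project, x0, step=0.1, iters=500):
        """Take a gradient step, then project back onto the feasible set."""
        x = x0
        for _ in range(iters):
            x = project(x - step * grad(x))
        return x

    # Toy usage: minimize ||x - c||^2 over the unit Euclidean ball.
    c = np.array([2.0, -1.0])
    grad = lambda x: 2.0 * (x - c)
    project = lambda x: x / max(1.0, np.linalg.norm(x))
    x_star = projected_gradient_descent(grad, project, np.zeros(2))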

Supervised Descent Method for Solving Nonlinear Least Squares Problems in Computer Vision [article]

Xuehan Xiong, Fernando De la Torre
2014 arXiv   pre-print
It is generally accepted that second order descent methods are the most robust, fast, and reliable approaches for nonlinear optimization of a general smooth function.  ...  Using generic descent maps, we derive a practical algorithm - Supervised Descent Method (SDM) - for minimizing Nonlinear Least Squares (NLS) problems.  ...  Fig. 8 shows the Cumulative Error Distribution (CED) curves of SDM, Belhumeur et al. [33], and our method trained with only one linear regression.  ... 
arXiv:1405.0601v1 fatcat:5xdp6w7mbfat3bzmmftsfazlie
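
The snippet mentions training SDM with linear regression; the toy sketch below shows the core idea of learning a cascade of generic descent maps by regressing the ideal parameter update onto features of the current estimate, instead of differentiating the nonlinear least-squares objective. The feature function phi, the ridge term, and the stage count are illustrative assumptions, not details from the paper.

    import numpy as np

    def train_sdm(phi, X_true, X_init, stages=4, ridge=1e-3):
        """Learn one linear descent map per stage from training pairs."""
        maps, X = [], X_init.copy()
        for _ in range(stages):
            F = np.stack([phi(x) for x in X])            # features at current estimates
            F1 = np.hstack([F, np.ones((len(F), 1))])    # append a bias column
            D = X_true - X                               # ideal updates toward ground truth
            R = np.linalg.solve(F1.T @ F1 + ridge * np.eye(F1.shape[1]), F1.T @ D)
            maps.append(R)
            X = X + F1 @ R                               # apply the learned descent map
        return maps

    def apply_sdm(maps, phi, x):
        """Run the learned cascade on a new initialization."""
        for R in maps:
            x = x + np.append(phi(x), 1.0) @ R
        return x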

Stochastic gradient descent methods for estimation with large data sets [article]

Dustin Tran, Panos Toulis, Edoardo M. Airoldi
2015 arXiv   pre-print
Our sgd package in R offers the most extensive and robust implementation of stochastic gradient descent methods.  ...  Our applications include the wide class of generalized linear models as well as M-estimation for robust regression.  ...  Generalized linear models: in the family of generalized linear models (GLMs), the outcome $y_n \in \mathbb{R}$ follows an exponential family distribution conditional on $x_n$, namely $y_n \mid x_n \sim \exp\left\{\tfrac{1}{\psi}\left(\eta_n y_n - b(\eta_n)\right)\right\}$  ... 
arXiv:1509.06459v1 fatcat:6mbwz7qi3beovn5bppoc3j4jaa
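
The sgd package itself is an R implementation; as a language-agnostic illustration of an explicit SGD update for one GLM family member (logistic regression), consider the Python sketch below. The decaying learning-rate schedule is a common generic choice, not necessarily the package's default.

    import numpy as np

    def sgd_logistic(X, y, lr0=0.5, epochs=5, seed=0):
        """Explicit SGD for logistic regression: one observation per update."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        theta, t = np.zeros(p), 0
        for _ in range(epochs):
            for i in rng.permutation(n):
                t += 1
                mu = 1.0 / (1.0 + np.exp(-X[i] @ theta))              # E[y_i | x_i] under the model
                theta += lr0 / (1.0 + 0.01 * t) * (y[i] - mu) * X[i]  # score-based update
        return theta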

L1 Norm Based Data Analysis and Related Methods

Bijan Bidabad
2019 Australian Finance & Banking Review  
This paper gives a rather general view on the L1 norm criterion in the area of data analysis and related topics.  ...  We tried to cover all aspects of mathematical properties, historical development, computational algorithms, simultaneous equations estimation, statistical modeling, and application of the L1 norm in  ...  The reduced gradient algorithm is a special case of the descent method, which possesses two important characteristics.  ... 
doi:10.46281/afbr.v3i1.317 fatcat:jpc6nzznazbt5ibqjh45g3od3m
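
Among the computational algorithms a survey like this covers, one of the simplest to sketch is least absolute deviations (L1-norm) regression via iteratively reweighted least squares. The version below is a generic illustration under that assumption, not an algorithm taken from the paper (which also discusses, for example, linear-programming formulations).

    import numpy as np

    def lad_irls(A, b, iters=50, eps=1e-6):
        """L1-norm regression: each pass solves a weighted L2 problem whose
        weights 1/|r_i| damp the influence of large residuals."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]               # ordinary L2 fit as a starting point
        for _ in range(iters):
            w = 1.0 / np.maximum(np.abs(A @ x - b), eps)       # guard against division by zero
            Aw = A * w[:, None]                                # rows scaled by their weights
            x = np.linalg.solve(A.T @ Aw, Aw.T @ b)            # weighted normal equations
        return x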

An Extragradient-Based Alternating Direction Method for Convex Minimization [article]

Tianyi Lin and Shiqian Ma and Shuzhong Zhang
2015 arXiv   pre-print
In this paper, we consider the problem of minimizing the sum of two convex functions subject to linear linking constraints.  ...  Under the assumption that the smooth function has a Lipschitz continuous gradient, we prove that the proposed method returns an ϵ-optimal solution within O(1/ϵ) iterations.  ...  We are also grateful to two anonymous referees for their constructive comments that have helped improve the presentation of this paper greatly.  ... 
arXiv:1301.6308v3 fatcat:losb3xld5jarjnc6if3w2fa2qa
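
The extragradient idea the method builds on is easy to state in isolation: take a predictor gradient step, then a corrector step using the gradient at the predicted point. The sketch below shows only that basic iteration for a smooth objective, not the paper's alternating direction scheme with linking constraints; the quadratic smoke test is made up for illustration.

    import numpy as np

    def extragradient(grad, x0, step=0.1, iters=1000):
        """Basic extragradient iteration for a smooth objective."""
        x = x0
        for _ in range(iters):
            x_pred = x - step * grad(x)       # predictor step
            x = x - step * grad(x_pred)       # corrector step at the predicted point
        return x

    # Smoke test on a simple quadratic.
    grad = lambda x: 2.0 * (x - np.array([1.0, -2.0]))
    print(extragradient(grad, np.zeros(2)))   # approaches [1, -2]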

An Extragradient-Based Alternating Direction Method for Convex Minimization

Tianyi Lin, Shiqian Ma, Shuzhong Zhang
2015 Foundations of Computational Mathematics  
In this paper, we consider the problem of minimizing the sum of two convex functions subject to linear linking constraints.  ...  Under the assumption that the smooth function has a Lipschitz continuous gradient, we prove that the proposed method returns an ϵ-optimal solution within O(1/ϵ) iterations.  ...  We are also grateful to two anonymous referees for their constructive comments that have helped improve the presentation of this paper greatly.  ... 
doi:10.1007/s10208-015-9282-8 fatcat:ufmyeqyxibam3daaxpepakecfu

Dimension Reduction Using Rule Ensemble Machine Learning Methods: A Numerical Study of Three Ensemble Methods [article]

Orianna DeMasi, Juan Meza, David H. Bailey
2011 arXiv   pre-print
We also compare the rule ensemble method on a set of multi-class problems with boosting and bagging, which are two well known ensemble techniques that use decision trees as base learners, but do not have  ...  An example of the rule ensemble method successfully ranking rules and selecting attributes is given with a dataset containing images of potential supernovas where the number of necessary features is reduced  ...  This method is similar to the method of regressing on residuals in multidimensional linear regression. Using pseudo residuals also provides another termination condition.  ... 
arXiv:1108.6094v1 fatcat:xoo6k5q4arefjinhwa4ijcnihm
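
The "regressing on pseudo residuals" remark refers to the gradient-boosting construction underlying rule ensembles: each new base learner is fit to the residuals of the current ensemble. A minimal squared-loss sketch with one-feature regression stumps (an illustrative base learner, not the rule-generating trees used by the method) looks like this.

    import numpy as np

    def fit_stump(x, r):
        """Best single-threshold regression stump on one feature."""
        best = (np.inf, x[0], r.mean(), r.mean())
        for s in np.unique(x)[:-1]:
            left, right = r[x <= s], r[x > s]
            err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if err < best[0]:
                best = (err, s, left.mean(), right.mean())
        return best[1:]

    def boost_stumps(x, y, rounds=50, lr=0.1):
        """Squared-loss gradient boosting: fit each stump to the pseudo residuals y - F(x)."""
        F = np.full_like(y, y.mean(), dtype=float)
        ensemble = []
        for _ in range(rounds):
            s, lv, rv = fit_stump(x, y - F)               # pseudo residuals for squared loss
            F = F + lr * np.where(x <= s, lv, rv)
            ensemble.append((s, lv, rv))
        return y.mean(), ensemble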

An Experimental Evaluation of Boosting Methods for Classification

R. Stollhoff, W. Sauerbrei, M. Schumacher
2010 Methods of Information in Medicine  
Methods: Using data from a clinical study on the diagnosis of breast tumors and by simulation, we will compare AdaBoost with gradient boosting ensembles of regression trees.  ...  Conclusions: In medical applications, the logistic regression model remains a method of choice or, at least, a serious competitor of more sophisticated techniques.  ...  Also, using more robust base classifiers [44] or stochastic gradient descent [45] can improve the performance of boosting algorithms.  ... 
doi:10.3414/me0543 pmid:20135078 fatcat:nw42mzdl5vaavlt7lwqqtnsoua

SpaGrOW—A Derivative-Free Optimization Scheme for Intermolecular Force Field Parameters Based on Sparse Grid Methods

Marco Hülsmann, Dirk Reith
2013 Entropy  
Conflicts of Interest: The authors declare no conflict of interest.  ...  Acknowledgements: We are grateful to Janina Hemmersbach for the detailed analysis and selection of appropriate smoothing procedures, as well as to Anton Schüller for fruitful discussions and advice.  ...  determining the descent direction and then searching for a reliable step length. • Robustness: SpaGrOW exhibits slightly lower robustness than the gradient-based methods.  ... 
doi:10.3390/e15093640 fatcat:2jqpdnp3tng4rjgjpnivqsl7ba

New insights and perspectives on the natural gradient method [article]

James Martens
2020 arXiv   pre-print
Natural gradient descent is an optimization method traditionally motivated from the perspective of information geometry, and works well for many applications as an alternative to stochastic gradient descent  ...  Additionally, we make a series of contributions to the understanding of natural gradient and 2nd-order methods, including: a thorough analysis of the convergence speed of stochastic natural gradient descent  ...  Acknowledgments We gratefully acknowledge support from Google, DeepMind, and the University of Toronto.  ... 
arXiv:1412.1193v11 fatcat:ptvfze3nxzc73izt3pbbfg7ofq
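
The core update behind natural gradient descent is an ordinary gradient pre-multiplied by the inverse Fisher information matrix; a damped version is sketched below. In practice the Fisher matrix is estimated from data and approximated (much of what the paper analyzes); here it is simply passed in, and the function name and damping value are illustrative.

    import numpy as np

    def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-4):
        """One natural gradient update: precondition the gradient with the damped inverse Fisher."""
        F = fisher + damping * np.eye(len(theta))   # damping keeps the linear solve well-posed
        return theta - lr * np.linalg.solve(F, grad)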

SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization [article]

Hoi-To Wai, Nikolaos M. Freris, Angelia Nedic, Anna Scaglione
2018 arXiv   pre-print
For strongly convex problems, we establish linear convergence for the SUCAG method.  ...  When the initialization point is sufficiently close to the optimal solution, the established convergence rate is only dependent on the condition number of the problem, making it strictly faster than the  ...  The authors would like to thank Prof. Maxim Raginsky for pointing out the reference [29].  ... 
arXiv:1803.08198v2 fatcat:gjgoc3dxnzdszeck4gqa65skvu

A Newton's Method for Benchmarking Time Series According to a Growth Rates Preservation Principle

Marco Marini, Tommaso Di Fonzo
2011 IMF Working Papers  
A procedure is developed which (i) transforms the original constrained problem into an unconstrained one, and (ii) applies a Newton's method exploiting the analytic Hessian of the GRP objective function  ...  We show that the proposed technique is easy to implement, computationally robust and efficient, all features which make it a plausible competitor of other benchmarking procedures (Denton, 1971; Dagum and  ...  and the non-linear conjugate gradient (CG) algorithms, respectively, to solve the above NLP problem.  ... 
doi:10.5089/9781462311293.001 fatcat:cxw2ixu4prh73cy4a6vgmufc5e
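
Step (ii) of the procedure is a standard Newton iteration on the unconstrained problem; a generic damped version is sketched below. The GRP objective, its analytic gradient and Hessian, and the constraint elimination of step (i) are specific to the paper and are not reproduced here.

    import numpy as np

    def newton_minimize(grad, hess, x0, iters=20, damping=1e-8):
        """Generic Newton iteration: solve H d = -g and step along d."""
        x = x0
        for _ in range(iters):
            g, H = grad(x), hess(x)
            d = np.linalg.solve(H + damping * np.eye(len(x)), -g)
            x = x + d
        return x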

A Novel Sequential Coreset Method for Gradient Descent Algorithms [article]

Jiawei Huang, Ruomin Huang, Wenjie Liu, Nikolaos M. Freris, Hu Ding
2021 arXiv   pre-print
A wide range of optimization problems arising in machine learning can be solved by gradient descent algorithms, and a central question in this area is how to efficiently compress a large-scale dataset  ...  However, most existing coreset methods are problem-dependent and cannot be used as a general tool for a broader range of applications.  ...  The method of steepest descent for non-linear minimization problems. Quart. Appl.
arXiv:2112.02504v1 fatcat:ca6ik4vfgfgqrmoqottujcbhwa
Showing results 1 — 15 out of 9,273 results