
Nonconvex Variance Reduced Optimization with Arbitrary Sampling [article]

Samuel Horváth, Peter Richtárik
2019 arXiv   pre-print
We provide the first importance sampling variants of variance-reduced algorithms for empirical risk minimization with non-convex loss functions.  ...  All the above results follow from a general analysis of the methods which works with arbitrary sampling, i.e., a fully general randomized strategy for the selection of subsets of examples to be sampled in  ...  Variance-reduced methods: A particularly important recent advance has to do with the design of variance-reduced (VR) stochastic gradient methods, such as SAG (Roux et al., 2012), SDCA (Shalev-Shwartz  ... 
arXiv:1809.04146v2 fatcat:vajcltes5nenzbhd23fjsswuem
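
The arbitrary-sampling idea in this entry is easiest to see in its simplest special case. Below is a minimal, hypothetical sketch (function name and setup are illustrative, not the paper's) of single-example importance sampling for a finite sum f(x) = (1/n) Σ_i f_i(x): sample index i with probability p_i proportional to a per-example smoothness estimate L_i and reweight by 1/(n p_i) so the stochastic gradient stays unbiased; the paper's analysis covers far more general subset samplings.

```python
import numpy as np

def importance_sampling_sgd(grads, L, x0, stepsize=0.1, iters=1000, seed=0):
    """grads: list of callables, grads[i](x) returns the gradient of f_i at x.
    L: array of per-example smoothness estimates (assumed known here)."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    p = L / L.sum()                    # importance sampling probabilities
    x = x0.copy()
    for _ in range(iters):
        i = rng.choice(n, p=p)
        g = grads[i](x) / (n * p[i])   # unbiased estimate of the full gradient
        x -= stepsize * g
    return x
```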

Momentum Schemes with Stochastic Variance Reduction for Nonconvex Composite Optimization [article]

Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh
2019 arXiv   pre-print
Two new stochastic variance-reduced algorithms named SARAH and SPIDER have been recently proposed, and SPIDER has been shown to achieve a near-optimal gradient oracle complexity for nonconvex optimization  ...  However, existing momentum schemes used in variance-reduced algorithms are designed specifically for convex optimization, and are not applicable to nonconvex scenarios.  ...  To the best of our knowledge, this is the first known theoretical guarantee for stochastic variance-reduced algorithms with momentum in nonconvex optimization.  ... 
arXiv:1902.02715v3 fatcat:arhpvqvorngv3pqlktcslgkbbu
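
For context, here is a minimal sketch of the SARAH/SPIDER-style recursive gradient estimator mentioned in the snippet, assuming oracle access to component gradients; the momentum scheme the paper adds on top, and its analysis, are not reproduced.

```python
import numpy as np

def spider(grad_batch, full_grad, x0, n, stepsize=0.05, q=32, batch=8,
           iters=1000, seed=0):
    """grad_batch(x, idx): average gradient of the components f_i, i in idx, at x.
    full_grad(x): gradient of the full objective at x (assumed computable)."""
    rng = np.random.default_rng(seed)
    x_prev = x0.copy()
    x = x0.copy()
    v = np.zeros_like(x0)
    for t in range(iters):
        if t % q == 0:
            v = full_grad(x)                                       # periodic full-gradient reset
        else:
            idx = rng.choice(n, size=batch, replace=False)
            v = grad_batch(x, idx) - grad_batch(x_prev, idx) + v   # recursive correction
        x_prev = x.copy()
        x = x - stepsize * v
    return x
```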

Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems

Junchi Yang, Negar Kiyavash, Niao He
2020 Neural Information Processing Systems  
We further develop a variance-reduced algorithm that attains a provably faster rate than AGDA when the problem has a finite-sum structure.  ...  Yet, it is known that these vanilla GDA algorithms with constant stepsize can potentially diverge even in the convex-concave setting.  ...  Variance-reduced algorithm.  ... 
dblp:conf/nips/YangKH20 fatcat:buszxqnfkjh3pbjwdvtathpx5m
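
A minimal sketch of plain alternating gradient descent-ascent (AGDA) for min_x max_y f(x, y), assuming access to both partial gradients; the paper's variance-reduced finite-sum variant is not shown.

```python
import numpy as np

def agda(grad_x, grad_y, x0, y0, lr_x=0.02, lr_y=0.05, iters=2000):
    """grad_x(x, y), grad_y(x, y): partial gradients of f(x, y)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = x - lr_x * grad_x(x, y)   # descent step on x using the current y
        y = y + lr_y * grad_y(x, y)   # ascent step on y using the freshly updated x
    return x, y

# Toy example with f(x, y) = 0.5*x**2 + x*y - 0.5*y**2; both iterates
# converge to the saddle point (0, 0).
x, y = agda(lambda x, y: x + y, lambda x, y: x - y,
            np.array([1.0]), np.array([1.0]))
```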

Zeroth-Order Stochastic Projected Gradient Descent for Nonconvex Optimization

Sijia Liu, Xingguo Li, Pin-Yu Chen, Jarvis Haupt, Lisa Amini
2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)  
Contributions: We propose and evaluate a novel ZO algorithm for nonconvex stochastic optimization, ZO-SVRG, which integrates SVRG with ZO gradient estimators.  ...  This paper addresses these challenges by presenting: a) a comprehensive theoretical analysis of variance-reduced zeroth-order (ZO) optimization, b) a novel variance-reduced ZO algorithm, called ZO-SVRG  ...  In this paper, we study the design and analysis of variance-reduced, faster-converging nonconvex ZO optimization methods.  ... 
doi:10.1109/globalsip.2018.8646618 dblp:conf/globalsip/0001LCHA18 fatcat:23chudlyinh7pobzqi6ycds6di
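
A minimal sketch of a two-point random-direction zeroth-order gradient estimator, the kind of function-value-only ingredient that ZO-SVRG combines with an SVRG-style update; the estimator below (names and parameters are illustrative) is not necessarily the exact estimator used in the paper.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, seed=0):
    """Average of (f(x + mu*u) - f(x)) / mu * u over random Gaussian directions u."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_dirs

# Example: the estimate is close to the true gradient 2*x of f(x) = ||x||^2.
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(lambda z: np.dot(z, z), x, num_dirs=5000))
```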

Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization [article]

Qunwei Li, Yi Zhou, Yingbin Liang, Pramod K. Varshney
2017 arXiv   pre-print
We further propose a stochastic variance-reduced APGnc (SVRG-APGnc), and establish its linear convergence under a special case of the property.  ...  Due to the intractability of nonconvexity, there is a rising need to develop efficient methods for solving general nonconvex problems with certain performance guarantees.  ...  Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization  ... 
arXiv:1705.04925v1 fatcat:zkcczurf7fb23fw6c5echbf4ce
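
For illustration, a minimal sketch of a proximal gradient step with a momentum (extrapolation) term on a composite objective f(x) + lam*||x||_1, assuming f's gradient is available; the paper's APGnc uses additional nonconvex safeguards, and SVRG-APGnc swaps in a variance-reduced gradient, neither of which is shown.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_momentum(grad_f, x0, lam=0.1, stepsize=0.1, beta=0.9, iters=500):
    """grad_f(x): gradient of the smooth part f; lam: weight of the l1 term."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)                                        # momentum extrapolation
        x_next = soft_threshold(y - stepsize * grad_f(y), stepsize * lam)  # proximal (l1) step
        x_prev, x = x, x_next
    return x
```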

Generalization Error Bounds for Optimization Algorithms via Stability

Qi Meng, Yue Wang, Wei Chen, Taifeng Wang, Zhi-Ming Ma, Tie-Yan Liu
2017 Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference  
and stochastic variance reduction (SVRG).  ...  In particular, we decompose the generalization error for R-ERM, and derive its upper bound for both convex and nonconvex cases.  ...  In order to reduce the variance in SGD, SVRG divides the optimization process into multiple stages and updates the model towards a direction of the gradient at a randomly sampled instance regularized by  ... 
doi:10.1609/aaai.v31i1.10919 fatcat:vx4wzrfskbcqjd6f77djirstoi
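
The snippet's description of SVRG corresponds to the standard recursion sketched below (a generic illustration, not the paper's stability analysis): each stage stores an anchor point and its full gradient, and inner steps correct the gradient at a random example with the anchor's gradients, keeping the estimate unbiased while shrinking its variance.

```python
import numpy as np

def svrg(grad_i, x0, n, stepsize=0.1, epochs=20, inner=100, seed=0):
    """grad_i(x, i): gradient of the i-th component f_i at x; n components in total."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        anchor = x.copy()
        full_grad = np.mean([grad_i(anchor, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # control variate keeps the estimate unbiased while shrinking its variance
            v = grad_i(x, i) - grad_i(anchor, i) + full_grad
            x = x - stepsize * v
    return x
```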

Distributed black-box optimization of nonconvex functions

Sergio Valcarcel Macua, Santiago Zazo, Javier Zazo
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
We combine model-based methods and distributed stochastic approximation to propose a fully distributed algorithm for nonconvex optimization, with good empirical performance and convergence guarantees.  ...  This is especially relevant when many samples are needed (e.g., for high-dimensional objectives) or when evaluating each sample is costly.  ...  Together with the robustness, reduced bias and variance, increased stability against node failure and intrinsic privacy (due to never exchanging local samples) of the diffusion-like distributed algorithms  ... 
doi:10.1109/icassp.2015.7178640 dblp:conf/icassp/MacuaZZ15 fatcat:vmzzvmrpvzbmdbcnnh7hwwnxf4
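
As background, a hedged sketch of generic "adapt-then-combine" diffusion: each node takes a local stochastic step and then averages its iterate with its neighbors through a doubly stochastic mixing matrix W. The paper combines a diffusion-style scheme with model-based (sampling-based) search; only this generic skeleton, with illustrative names, is shown.

```python
import numpy as np

def diffusion_sgd(local_grads, W, X0, stepsize=0.05, iters=500, seed=0):
    """local_grads[k](x, rng): stochastic gradient of node k's local objective at x.
    W: doubly stochastic mixing matrix (nonzero only between neighbors).
    X0: array of shape (num_nodes, dim) with each node's initial iterate."""
    rng = np.random.default_rng(seed)
    X = X0.copy()
    for _ in range(iters):
        # adapt: each node takes a local stochastic step
        X = X - stepsize * np.stack([g(X[k], rng) for k, g in enumerate(local_grads)])
        # combine: each node averages with its neighbors
        X = W @ X
    return X
```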

Sample Efficient Stochastic Variance-Reduced Cubic Regularization Method [article]

Dongruo Zhou, Pan Xu, Quanquan Gu
2018 arXiv   pre-print
We propose a sample-efficient stochastic variance-reduced cubic regularization (Lite-SVRC) algorithm for finding the local minimum efficiently in nonconvex optimization.  ...  Numerical experiments with different nonconvex optimization problems conducted on real datasets validate our theoretical results.  ...  Previous analyses of variance-reduced methods in the nonconvex setting for first-order methods, which focus only on the upper bound of the variance of e_v, give E‖e_v‖₂² an upper bound associated only with x_t^s  ... 
arXiv:1811.11989v1 fatcat:74p66hp2l5aylhowrk5srdrxlu
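
A hedged sketch of a single cubic-regularization step: build the local cubic model m(h) = g·h + 0.5·hᵀHh + (M/6)·||h||³ and minimize it approximately by gradient descent on h, then move to x + h. The sub-sampled, variance-reduced gradient and Hessian estimators that make Lite-SVRC sample-efficient are not reproduced.

```python
import numpy as np

def cubic_step(g, H, M, inner_lr=0.01, inner_iters=2000):
    """Approximately minimize m(h) = g.h + 0.5*h'Hh + (M/6)*||h||^3 by gradient descent."""
    h = np.zeros_like(g)
    for _ in range(inner_iters):
        grad_m = g + H @ h + 0.5 * M * np.linalg.norm(h) * h   # gradient of the cubic model
        h -= inner_lr * grad_m
    return h

def cubic_regularized_newton(grad, hess, x0, M=10.0, outer_iters=20):
    """grad(x), hess(x): gradient and Hessian of the objective (exact here)."""
    x = x0.copy()
    for _ in range(outer_iters):
        x = x + cubic_step(grad(x), hess(x), M)
    return x
```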

Generalization Error Bounds for Optimization Algorithms via Stability [article]

Qi Meng, Yue Wang, Wei Chen, Taifeng Wang, Zhi-Ming Ma, Tie-Yan Liu
2016 arXiv   pre-print
and stochastic variance reduction (SVRG).  ...  Our theorems indicate that 1) along with the training process, the generalization error will decrease for all the optimization algorithms under our investigation; 2) Comparatively speaking, SVRG has better  ...  In order to reduce the variance in SGD, SVRG divides the optimization process into multiple stages and updates the model towards a direction of the gradient at a randomly sampled instance regularized by  ... 
arXiv:1609.08397v1 fatcat:xk4edyvppfaytmqn34dhuufile

Global Optimization Methods for Extended Fisher Discriminant Analysis

Satoru Iwata, Yuji Nakatsukasa, Akiko Takeda
2014 International Conference on Artificial Intelligence and Statistics  
We speed up the algorithm by taking advantage of the matrix structure and proving that a globally optimal solution is a KKT point with the smallest Lagrange multiplier, which can be computed efficiently  ...  Numerical experiments illustrate the efficiency and effectiveness of the extended FDA model combined with our algorithm.  ...  of training samples with labels +1 and −1, respectively.  ... 
dblp:conf/aistats/IwataNT14 fatcat:3wv4gftanzaetcnzq5yie6i6ly
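
For orientation only: a sketch of classical two-class Fisher discriminant analysis, the baseline that the "extended FDA" model generalizes, with the closed-form direction w = S_w^{-1}(mu_+ - mu_-). The paper's extended, nonconvex formulation and its global optimization algorithm are not shown.

```python
import numpy as np

def fisher_direction(X_plus, X_minus, reg=1e-6):
    """X_plus, X_minus: rows are training samples with labels +1 and -1."""
    mu_p, mu_m = X_plus.mean(axis=0), X_minus.mean(axis=0)
    # within-class scatter matrix
    Sw = np.cov(X_plus, rowvar=False) * (len(X_plus) - 1) \
       + np.cov(X_minus, rowvar=False) * (len(X_minus) - 1)
    Sw += reg * np.eye(Sw.shape[0])          # small regularizer for numerical stability
    return np.linalg.solve(Sw, mu_p - mu_m)  # classical closed-form discriminant direction
```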

Client Selection in Nonconvex Federated Learning: Improved Convergence Analysis for Optimal Unbiased Sampling Strategy [article]

Lin Wang, YongXin Guo, Tao Lin, Xiaoying Tang
2022 arXiv   pre-print
FedSRC-D is provably the optimal unbiased sampling in non-convex settings for non-IID FL with respect to the given bounds.  ...  Moreover, based on our convergence analysis, we give a novel unbiased sampling strategy, i.e., FedSRC-D, whose sampling probability is proportional to the client's gradient diversity and local variance  ...  FedSRC-D reduces variance terms more efficiently since it can reduce all variance terms in Φ, while in Φ only the last term Var((1/(m p_i)) ĝ_i) can be optimized w.r.t. p_i.  ... 
arXiv:2205.13925v1 fatcat:m3tlh7oufjgzzan7galm2bqzoy
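
A hedged sketch of unbiased client sampling for federated aggregation: sample m clients with probabilities p_i and weight each sampled update by 1/(m p_i), which keeps the aggregate unbiased for any valid p. FedSRC-D chooses p_i from the client's gradient diversity and local variance; the generic `scores` array below is a hypothetical stand-in for that quantity.

```python
import numpy as np

def sample_and_aggregate(client_updates, scores, m, seed=0):
    """client_updates: (n_clients, dim) array of local updates.
    scores: positive per-client scores used to form sampling probabilities."""
    rng = np.random.default_rng(seed)
    n = len(client_updates)
    p = scores / scores.sum()
    picked = rng.choice(n, size=m, replace=True, p=p)
    # weighting each sampled update by 1/(m * p_i) makes the aggregate unbiased:
    # E[ sum_k update_{i_k} / (m * p_{i_k}) ] = sum_i update_i for any valid p
    return sum(client_updates[i] / (m * p[i]) for i in picked)
```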

Sparse Blind Deconvolution with Nonconvex Optimization for Ultrasonic NDT Application

Xuyang Gao, Yibing Shi, Kai Du, Qi Zhu, Wei Zhang
2020 Sensors  
This letter introduces the l1/l2 ratio regularization function to model the deconvolution as a nonconvex optimization problem.  ...  Compared with conventional blind deconvolution algorithms, the proposed methods demonstrate robustness and the capability of separating overlapping echoes in the context of synthetic experiments.  ...  The additive noise is modeled as white Gaussian noise with a given variance.  ... 
doi:10.3390/s20236946 pmid:33291739 pmcid:PMC7730569 fatcat:7agcr4t35fhqpp3x3x2zkone5y
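
A hedged sketch of the kind of objective the letter describes: a data-fit term for the convolution model y ≈ h * x plus the scale-invariant l1/l2 ratio that promotes sparsity of x. Only the objective is evaluated (with illustrative argument names); the paper's nonconvex solver is not reproduced.

```python
import numpy as np

def l1_over_l2_objective(x, y, h, lam=0.1, eps=1e-12):
    """x: candidate sparse reflectivity sequence, y: observed trace, h: wavelet estimate."""
    residual = y - np.convolve(h, x, mode="full")[:len(y)]   # convolution model y ≈ h * x
    data_fit = 0.5 * np.sum(residual ** 2)
    ratio = np.sum(np.abs(x)) / (np.linalg.norm(x) + eps)    # scale-invariant l1/l2 sparsity term
    return data_fit + lam * ratio
```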

Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization [article]

Pan Xu and Jinghui Chen and Difan Zou and Quanquan Gu
2020 arXiv   pre-print
Our theoretical analyses shed some light on using Langevin dynamics based algorithms for nonconvex optimization with provable guarantees.  ...  We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with n component functions.  ...  variance in SGLD, which can be reduced with a larger batch size B.  ... 
arXiv:1707.06618v3 fatcat:ron6imxljbgwrelspvctnkyuey
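
A minimal sketch of stochastic gradient Langevin dynamics (SGLD): a minibatch gradient step plus injected Gaussian noise. Increasing the batch size B reduces the minibatch-gradient variance, which is the point made in the last snippet; constants and scalings below are illustrative rather than the paper's.

```python
import numpy as np

def sgld(grad_batch, x0, n, stepsize=1e-3, inv_temp=1.0, batch=32, iters=5000, seed=0):
    """grad_batch(x, idx): average gradient of the components f_i, i in idx, at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        g = grad_batch(x, idx)              # minibatch gradient (variance shrinks with batch size)
        noise = rng.standard_normal(x.shape)
        x = x - stepsize * g + np.sqrt(2.0 * stepsize / inv_temp) * noise
    return x
```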

ADMM-based Adaptive Sampling Strategy for Nonholonomic Mobile Robotic Sensor Networks

Viet-Anh Le, Linh Nguyen, Truong X. Nghiem
2021 IEEE Sensors Journal  
In order to tackle the nonlinearity and nonconvexity of the objective function in the optimization problem, we first exploit the linearized alternating direction method of multipliers (L-ADMM) that  ...  The control, movement and nonholonomic dynamics constraints of the mobile sensors are also considered in the adaptive sampling optimization problem.  ...  The proposed approach can significantly reduce the computation time in solving the nonconvex optimization problem (10a) as compared with the L-ADMM algorithm.  ... 
doi:10.1109/jsen.2021.3072390 fatcat:z4ttj7bexnf35c6pck7qbn77ta
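
As a generic illustration of linearized ADMM (not the paper's nonconvex sampling-design formulation), the sketch below applies an L-ADMM-style iteration to lasso in consensus form, min 0.5*||Dx - y||^2 + lam*||z||_1 s.t. x = z, linearizing the smooth term so every update has a closed form.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_admm_lasso(D, y, lam=0.1, rho=1.0, tau=None, iters=500):
    """Solve min 0.5*||D x - y||^2 + lam*||z||_1  s.t.  x = z  with a linearized x-update."""
    n = D.shape[1]
    if tau is None:
        tau = 1.0 / (np.linalg.norm(D, 2) ** 2)      # step size for the linearized smooth term
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        grad = D.T @ (D @ x - y)
        x = (x / tau - grad + rho * (z - u)) / (1.0 / tau + rho)   # linearized x-update (closed form)
        z = soft_threshold(x + u, lam / rho)                        # prox of lam*||.||_1
        u = u + x - z                                               # scaled dual update
    return x
```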

Robust adaptive beamforming using worst-case performance optimization: a solution to the signal mismatch problem

S.A. Vorobyov, A.B. Gershman, Z.-Q. Luo
2003 IEEE Transactions on Signal Processing  
Computer simulations with several frequently encountered types of signal steering vector mismatches show better performance of our robust beamformer as compared with existing adaptive beamforming algorithms  ...  A similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small.  ...  Conclusions: A new adaptive beamformer with improved robustness against an arbitrary unknown signal steering vector mismatch has been proposed.  ... 
doi:10.1109/tsp.2002.806865 fatcat:kswftf4hnndp5mpmrhubmtrly4
Showing results 1 — 15 out of 4,409 results