
A Fast Algorithm for Separated Sparsity via Perturbed Lagrangians [article]

Aleksander Mądry, Slobodan Mitrović, Ludwig Schmidt
2017 arXiv   pre-print
While prior algorithms for computing a projection onto this constraint set required quadratic time, we provide a perturbed Lagrangian relaxation approach that computes provably exact projections in only  ...  Although the sparsity constraint is non-convex, our perturbed Lagrangian approach is still guaranteed to find a globally optimal solution.  ...  We thank Arturs Backurs for insightful discussions. L. Schmidt thanks Chinmay Hegde for providing a dataset for some of our experiments. A. Mądry was supported in part by an Alfred P.  ...  (A toy sketch of the relaxation idea follows this entry.)
arXiv:1712.08130v1 fatcat:gq2klxmtujd5dpq35j4uz7i3ie
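
The snippet describes projecting onto a separated-sparsity constraint (pick k entries whose indices are at least delta apart) via a Lagrangian relaxation. Below is a minimal toy sketch of that idea, not the paper's fast algorithm: relax the cardinality constraint with a multiplier lambda, solve the separation-only problem by dynamic programming, and bisect over lambda. The tiny random perturbation stands in for the tie-breaking role perturbation plays in the paper; all function and variable names are illustrative.

```python
import numpy as np

def separated_sparsity_support(x, k, delta, iters=60):
    """Toy Lagrangian-relaxation sketch (illustrative, not the paper's method):
    find a support of size k with selected indices >= delta apart,
    approximately maximizing the captured energy sum(x[i]**2)."""
    n = len(x)
    g = x**2 + 1e-12 * np.random.rand(n)   # perturbed gains: breaks ties so
                                           # each lambda yields a unique size
    def solve(lam):
        # DP over prefixes: either skip index i-1, or take it (gain g - lam)
        # and jump back delta positions to respect the separation constraint.
        best = np.zeros(n + 1)
        take = np.zeros(n + 1, dtype=bool)
        for i in range(1, n + 1):
            prev = best[i - delta] if i - delta >= 0 else 0.0
            if prev + g[i - 1] - lam > best[i - 1]:
                best[i], take[i] = prev + g[i - 1] - lam, True
            else:
                best[i] = best[i - 1]
        S, i = [], n                       # backtrack to recover the support
        while i > 0:
            if take[i]:
                S.append(i - 1); i -= delta
            else:
                i -= 1
        return sorted(S)

    lo, hi = 0.0, float(g.max())           # bisect the multiplier lambda
    S = solve(0.0)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        S = solve(lam)
        if len(S) > k:   lo = lam          # too many picks: penalize more
        elif len(S) < k: hi = lam          # too few picks: penalize less
        else: break
    return S
```

The projection then keeps x on the returned support and zeroes the rest; the perturbed gains are what let the bisection hit |S| = k exactly, which loosely mirrors the intuition the abstract alludes to.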

Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

Hao Zhu, Geert Leus, Georgios B. Giannakis
2011 IEEE Transactions on Signal Processing  
Near-optimum and reduced-complexity suboptimum sparse (S-)TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch  ...  However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients.  ...  The ℓ1-norm based formulation accounting for the said perturbations is known as basis pursuit (BP) [11], and the corresponding convex problem written in its Lagrangian form is min_x ‖y − Ax‖₂² + λ‖x‖₁, where λ is a sparsity-tuning  ...  (An ISTA sketch of this Lagrangian form follows this entry.)
doi:10.1109/tsp.2011.2109956 fatcat:4uwyrlh26bhqlduppj2roibaqi
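
For reference, the Lagrangian (basis-pursuit-denoising) form named in the snippet is commonly minimized with iterative soft-thresholding. A minimal sketch assuming a generic dense A; this is standard ISTA, not the paper's S-TLS algorithm:

```python
import numpy as np

def ista(A, y, lam, iters=200):
    """ISTA sketch for the Lagrangian form  min_x 0.5*||y - A x||^2 + lam*||x||_1.
    lam is the sparsity-tuning parameter the snippet refers to."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - g / L                      # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```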

A Complete Analysis of the l_1,p Group-Lasso [article]

Julia Vogt
2012 arXiv   pre-print
For all p-norms, a highly efficient projected gradient algorithm is presented.  ...  We characterize conditions for solutions of the l_1,p Group-Lasso for all p-norms with 1 ≤ p ≤ ∞, and we present a unified active set algorithm.  ...  Adapt Lagrangian multiplier µ via interval bisection.  ...  (A bisection sketch follows this entry.)
arXiv:1206.4632v1 fatcat:hd7ftrc7fvezxhg6g75adrvpq4
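
The "interval bisection on the Lagrangian multiplier µ" mentioned in the snippet is, in the simplest case, scalar root-finding. A minimal sketch for the plain ℓ1-ball projection, with illustrative names (the paper's unified active-set algorithm handles general 1 ≤ p ≤ ∞):

```python
import numpy as np

def project_l1_ball(v, radius, iters=50):
    """Project v onto the l1 ball of given radius by bisecting the Lagrange
    multiplier mu so that sum(max(|v_i| - mu, 0)) == radius."""
    a = np.abs(v)
    if a.sum() <= radius:                  # already feasible
        return v.copy()
    lo, hi = 0.0, float(a.max())
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(a - mu, 0.0).sum() > radius:
            lo = mu                        # shrink too weak: raise mu
        else:
            hi = mu                        # shrink too strong: lower mu
    return np.sign(v) * np.maximum(a - mu, 0.0)
```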

Fast and Stable Signal Deconvolution via Compressible State-Space Models

Abbas Kazemipour, Ji Liu, Krystyna Solarana, Daniel A. Nagode, Patrick O. Kanold, Min Wu, Behtash Babadi
2018 IEEE Transactions on Biomedical Engineering  
Under suitable sparsity assumptions on the dynamics, we prove optimal stability guarantees for the recovery of the states and present a method for the identification of the underlying discrete events with  ...  We consider a dynamic compressive sensing optimization problem and develop a fast solution, using two nested Expectation Maximization algorithms, to jointly estimate the states as well as their transition  ...  The Lagrangian formulation allows us to adaptively select the sparsity level in a data-driven fashion: by choosing the trade-off parameter λ via cross-validation, the sparsity level of the reconstructed  ...  (A cross-validation sketch follows this entry.)
doi:10.1109/tbme.2017.2694339 pmid:28422648 pmcid:PMC5683949 fatcat:jltmxhudyncfnfuzcelvqyj3pe
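
A minimal sketch of the data-driven λ selection the snippet describes, using scikit-learn's LassoCV on synthetic data; the paper's model is a state-space deconvolution, so this shows only the generic cross-validation idea:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic sparse regression problem (illustrative data, not the paper's).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50); x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(200)

# The trade-off parameter is chosen on held-out folds, so the sparsity
# level of the estimate is set by the data rather than by hand.
model = LassoCV(cv=5).fit(A, y)
print(model.alpha_, np.count_nonzero(model.coef_))
```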

Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets [article]

Thu Dinh, Bao Wang, Andrea L. Bertozzi, Stanley J. Osher
2020 arXiv   pre-print
In this paper, we focus on a co-design of efficient DNN compression algorithms and sparse neural architectures for robust and accurate deep learning.  ...  Such a co-design enables us to advance the goal of accommodating both sparsity and robustness.  ...  For all the experiments below, we run 200 epochs of the PGD (10 iterations of the iterative fast gradient sign method (IFGSM^10) with α = 2/255 and ε = 8/255, and an initial random perturbation of magnitude  ...  (A PGD sketch with these parameters follows this entry.)
arXiv:2003.00631v1 fatcat:fxqyasu6fbf6haseckqbdbgidm
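
A sketch of the attack loop the snippet parameterizes: random initialization in the ε-ball followed by 10 signed-gradient steps (IFGSM^10), i.e., PGD. The input-gradient oracle grad_fn is assumed, and pixels in [0, 1] is an assumption as well:

```python
import numpy as np

def pgd_attack(x, grad_fn, alpha=2/255, eps=8/255, steps=10):
    """PGD sketch: random start in the eps-ball, then `steps` signed-gradient
    ascent steps of size alpha, each projected back onto the ball.
    grad_fn(x_adv) returns the loss gradient w.r.t. the input (assumed)."""
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)  # random init
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))     # FGSM step
        x_adv = np.clip(x_adv, x - eps, x + eps)            # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                    # valid pixel range
    return x_adv
```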

FeaFiner

Jiayu Zhou, Zhaosong Lu, Jimeng Sun, Lei Yuan, Fei Wang, Jieping Ye
2013 Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '13  
We apply a recently developed augmented Lagrangian method to solve this formulation in which each subproblem is solved by a non-monotone spectral projected gradient method.  ...  Specifically, we formulate a double sparsity optimization problem that identifies groups in the low-level features, generalizes higher level features using the groups and performs feature selection.  ...  Specifically, we propose a formulation for learning a sparse group structure matrix and a sparse prediction model via ℓ1-regularization, and adopt an efficient block coordinate descent algorithm for solving  ...  (A spectral projected gradient sketch follows this entry.)
doi:10.1145/2487575.2487671 dblp:conf/kdd/ZhouLSYWY13 fatcat:hjg2toiyqnhmfldusltwqnwwuu
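
A hedged sketch of the non-monotone spectral projected gradient step the snippet cites for the inner subproblems: a Barzilai-Borwein step length, projection onto the feasible set, and an Armijo test against the maximum of recent objective values. All names are illustrative; this is not FeaFiner's exact solver.

```python
import numpy as np

def nmspg(f, grad_f, project, x0, iters=100, hist=10, alpha0=1.0):
    """Non-monotone spectral projected gradient sketch (illustrative names).
    project(.) maps onto the feasible set; the Armijo test compares against
    the max of the last `hist` objective values, allowing temporary ascent."""
    x = project(np.asarray(x0, dtype=float))
    fhist = [f(x)]
    alpha = alpha0
    for _ in range(iters):
        g = grad_f(x)
        d = project(x - alpha * g) - x          # projected-gradient direction
        if np.linalg.norm(d) < 1e-12:           # (near-)stationary: stop
            break
        fref = max(fhist[-hist:])               # non-monotone reference value
        t = 1.0
        while f(x + t * d) > fref + 1e-4 * t * (g @ d):
            t *= 0.5                            # backtracking line search
        x_new = x + t * d
        s, yv = x_new - x, grad_f(x_new) - g
        alpha = (s @ s) / (s @ yv) if (s @ yv) > 1e-12 else alpha0  # BB step
        x = x_new
        fhist.append(f(x))
    return x
```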

Background Subtraction via Fast Robust Matrix Completion

Behnaz Rezaei, Sarah Ostadabbas
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
In modeling the background, we benefited from the in-face extended Frank-Wolfe algorithm for solving a defined convex optimization problem.  ...  In this regard, our paper addresses the problem of background modeling in a computationally efficient way, which is important for the current eruption of "big data" processing coming from high-resolution multi-channel  ...  Inexact augmented Lagrangian multiplier (IALM) is a successful effort in this regard for solving the problem of RPCA based on the augmented Lagrangian multiplier (ALM) algorithm without taking the unnecessary  ...  (An IALM sketch for RPCA follows this entry.)
doi:10.1109/iccvw.2017.221 dblp:conf/iccvw/RezaeiO17 fatcat:srgo64qnurdwta7hvm6topemvu
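
For context, a compact IALM sketch for the RPCA problem min ‖L‖* + λ‖S‖₁ s.t. X = L + S that the snippet contrasts against; the defaults λ = 1/√max(m, n) and the growing penalty µ are conventional choices, not taken from this paper:

```python
import numpy as np

def shrink(M, tau):                             # entrywise soft-thresholding
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def ialm_rpca(X, iters=100):
    """IALM sketch for RPCA:  min ||L||_* + lam*||S||_1  s.t.  X = L + S."""
    m, n = X.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(X, 2)
    Y = X / max(norm2, np.abs(X).max() / lam)   # dual variable initialization
    mu, rho, mu_max = 1.25 / norm2, 1.5, 1e7
    L, S = np.zeros_like(X), np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt      # singular-value thresholding
        S = shrink(X - L + Y / mu, lam / mu)    # sparse (foreground) update
        Y = Y + mu * (X - L - S)                # multiplier update
        mu = min(mu * rho, mu_max)              # inexact ALM: grow penalty
    return L, S
```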

Background Subtraction via Fast Robust Matrix Completion [article]

Behnaz Rezaei, Sarah Ostadabbas
2017 arXiv   pre-print
In modeling the background, we benefited from the in-face extended Frank-Wolfe algorithm for solving a defined convex optimization problem.  ...  In this regard, our paper addresses the problem of background modeling in a computationally efficient way, which is important for the current eruption of "big data" processing coming from high-resolution multi-channel  ...  Inexact augmented Lagrangian multiplier (IALM) is a successful effort in this regard for solving the problem of RPCA based on the augmented Lagrangian multiplier (ALM) algorithm without taking the unnecessary  ... 
arXiv:1711.01218v1 fatcat:fh4rdps2yze67frmzphdzs6xnu

Group sparse Lasso for cognitive network sensing robust to model uncertainties and outliers

Emiliano Dall'Anese, Juan Andrés Bazerque, Georgios B. Giannakis
2012 Physical Communication  
The sensing scheme is based on a parsimonious model that accounts for two forms of sparsity: one due to the narrowband nature of transmit-PSDs compared to the large portion of spectrum that a CR can sense  ...  To account for variations in the frequency, time, and space dimensions, dynamic re-use of licensed bands under the cognitive radio (CR) paradigm calls for innovative network-level sensing algorithms for  ...  A centralized algorithm for solving GS-Lasso problems is developed in Section 3, whereas perturbations in the channel (regression) matrices are considered in Section 4.  ...  (A group-shrinkage sketch follows this entry.)
doi:10.1016/j.phycom.2011.07.005 fatcat:kjrgye7eirfobajjnt3bfj7bfa
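
The group-sparsity part of such models has a simple proximal building block: whole groups (here, narrowband PSD segments) are shrunk to zero together. A minimal sketch for non-overlapping groups, with illustrative names:

```python
import numpy as np

def group_soft_threshold(v, groups, tau):
    """Prox of tau * sum_g ||v_g||_2 for non-overlapping groups: each group's
    subvector is shrunk toward zero in l2 norm, zeroing whole groups at once."""
    out = np.zeros_like(v)
    for g in groups:                       # g: index array for one group
        ng = np.linalg.norm(v[g])
        if ng > tau:
            out[g] = (1.0 - tau / ng) * v[g]
    return out

# Example: two groups of four coefficients each.
# group_soft_threshold(v, [np.arange(0, 4), np.arange(4, 8)], tau=0.5)
```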

Fast L1-Minimization Algorithms For Robust Face Recognition [article]

Allen Y. Yang, Zihan Zhou, Arvind Ganesh, S. Shankar Sastry, Yi Ma
2012 arXiv   pre-print
Although the underlying numerical problem is a linear program, traditional algorithms are known to suffer poor scalability for large-scale applications.  ...  We investigate a new solution based on a classical convex optimization framework, known as Augmented Lagrangian Methods (ALM).  ...  The accuracy of the remaining five algorithms separates into three groups.  ...  (An ALM-style sketch follows this entry.)
arXiv:1007.3753v4 fatcat:wawm3rqhdjfgndgm7uuvqhk6le
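
A compact ADMM-style instance of the augmented Lagrangian framework the paper investigates, for the core problem min ‖x‖₁ s.t. Ax = b. It assumes A has full row rank; this is the textbook splitting, not necessarily the exact primal/dual ALM variants benchmarked in the paper:

```python
import numpy as np

def alm_basis_pursuit(A, b, mu=10.0, iters=300):
    """ADMM/ALM sketch for  min ||x||_1  s.t.  A x = b  (A: m x n, m <= n).
    Splitting: min ||z||_1 s.t. x = z, with x kept on the affine set."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)       # cached for the x-update
    x, z, y = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # x-update: project z - y/mu onto {x : A x = b}
        v = z - y / mu
        x = v - A.T @ (AAt_inv @ (A @ v - b))
        # z-update: soft-threshold (prox of ||.||_1 with parameter 1/mu)
        w = x + y / mu
        z = np.sign(w) * np.maximum(np.abs(w) - 1.0 / mu, 0.0)
        y = y + mu * (x - z)               # multiplier update
    return z
```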

Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset

Thierry Bouwmans, Andrews Sobral, Sajid Javed, Soon Ki Jung, El-Hadi Zahzah
2017 Computer Science Review  
...  matrices for testing and ranking existing algorithms for background/foreground separation.  ...  Furthermore, we investigate if incremental algorithms and real-time implementations can be achieved for background/foreground separation.  ...  thank the following researchers: Zhouchen Lin (Visual Computing Group, Microsoft Research Asia) who has kindly provided the solver LADMAP [192] and the ℓ1-filtering [196], Shiqian Ma (Institute for  ... 
doi:10.1016/j.cosrev.2016.11.001 fatcat:vdh7ic4n6zfkjlccnyiq74z5wu

A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems

Xudong Li, Defeng Sun, Kim-Chuan Toh
2018 SIAM Journal on Optimization  
We develop a fast and robust algorithm for solving large scale convex composite optimization models with an emphasis on the ℓ1-regularized least squares regression (Lasso) problems.  ...  By leveraging available error bound results to realize the asymptotic superlinear convergence property of the augmented Lagrangian algorithm, and by exploiting the second order sparsity of the problem  ...  Chao Ding at Chinese Academy of Sciences for numerous discussions on the error bound conditions and the metric subregularity.  ...  (A toy illustration of second order sparsity follows this entry.)
doi:10.1137/16m1097572 fatcat:is4qdjpvpngh3mebh3oq5cm424
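
The "second order sparsity" the abstract exploits can be illustrated in a few lines: the generalized Jacobian of the ℓ1 prox is a 0/1 diagonal, so the Newton systems inside a semismooth Newton augmented Lagrangian method collapse to the size of the active support. A toy illustration, not the SSNAL solver itself:

```python
import numpy as np

def prox_l1(v, tau):
    """Soft-thresholding: the proximal map of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_l1_jacobian_diag(v, tau):
    """A generalized Jacobian of prox_l1 at v is diagonal with 0/1 entries:
    only coordinates with |v_i| > tau are 'active'. Newton systems built from
    it therefore shrink to the support size -- the second order sparsity."""
    return (np.abs(v) > tau).astype(float)
```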

A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems [article]

Xudong Li, Defeng Sun, Kim-Chuan Toh
2017 arXiv   pre-print
We develop a fast and robust algorithm for solving large scale convex composite optimization models with an emphasis on the ℓ_1-regularized least squares regression (Lasso) problems.  ...  By leveraging available error bound results to realize the asymptotic superlinear convergence property of the augmented Lagrangian algorithm, and by exploiting the second order sparsity of the problem  ...  Acknowledgments The authors would like to thank the anonymous referees for carefully reading our work and for their helpful suggestions. The authors would also like to thank Dr.  ... 
arXiv:1607.05428v3 fatcat:5qiersdtarcjpg5jznjse6piau

Masked-RPCA: Sparse and Low-rank Decomposition Under Overlaying Model and Application to Moving Object Detection [article]

Amirhossein Khalilian-Gourtani, Shervin Minaee, Yao Wang
2019 arXiv   pre-print
We propose the representation via masked decomposition (i.e. an overlaying model) where each element either belongs to the low-rank or the sparse component, decided by a mask.  ...  We propose the Masked-RPCA algorithm to recover the mask and the low-rank components simultaneously, utilizing linearizing and alternating direction techniques.  ...  Imposing the sparsity of the desired W via the ℓ1-norm, we can formulate the problem as in (3):

    minimize_{L,W}  ‖L‖* + λ_w ‖W‖₁
    subject to      (1 − W) ∘ (X − L) = 0,  W ∈ [0, 1]^{mn×k}        (3)

The algorithm for solving the problem  ... 
arXiv:1909.08049v1 fatcat:r4ctf4dpvzgihmqzkdj7ut36xu

Fused analytical and iterative reconstruction (AIR) via modified proximal forward–backward splitting: a FDK-based iterative image reconstruction example for CBCT

Hao Gao
2016 Physics in Medicine and Biology  
Intuitively, since the eigenvalues of the AR-projection operator are close to unity, PFBS-based AIR enjoys fast convergence.  ...  Specifically, AIR is established based on the modified proximal forward-backward splitting (PFBS) algorithm, and its connection to the filtered data fidelity with sparsity regularization is discussed.  ...  Yang-Kyun Park from Massachusetts General Hospital for projection data and inspiring discussions.  ...  (A generic PFBS sketch follows this entry.)
doi:10.1088/0031-9155/61/19/7187 pmid:27649259 fatcat:rg7bd4t46zeqdgi7mk4rukxbli
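
A generic PFBS sketch matching the snippet's description: a forward (gradient) step on the data-fidelity term alternated with a backward (proximal) step on the sparsity regularizer. The concrete f, g, and step size below are illustrative, not the paper's CBCT operators:

```python
import numpy as np

def pfbs(grad_f, prox_g, x0, step, iters=100):
    """Proximal forward-backward splitting: x <- prox_g(x - step*grad_f(x)).
    grad_f is the gradient of the smooth data-fidelity term; prox_g is the
    proximal map of the (possibly nonsmooth) regularizer."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example instance: f(x) = 0.5*||A x - b||^2 with the l1 prox (soft-threshold).
A = np.random.randn(30, 100)
b = A @ (np.random.randn(100) * (np.random.rand(100) < 0.05))
lam = 0.1
x_hat = pfbs(grad_f=lambda x: A.T @ (A @ x - b),
             prox_g=lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0),
             x0=np.zeros(100), step=1.0 / np.linalg.norm(A, 2) ** 2)
```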