800 Hits in 6.7 sec

Baselining Network-Wide Traffic by Time-Frequency Constrained Stable Principal Component Pursuit [article]

Kai Hu, Zhe Wang, Baolin Yin
2015 arXiv   pre-print
We develop a novel baseline scheme, named Stable Principal Component Pursuit with Time-Frequency Constraints (SPCP-TFC), which extends the Stable Principal Component Pursuit (SPCP) by applying new time-frequency  ...  Then we design an efficient numerical algorithm for SPCP-TFC.  ...  The Stable Principal Component Pursuit (SPCP) method for the generalized RPCA problem solves this convex program [14]: minimize_{A,E,N} ||A||_* + λ||E||_1 s.t.  ... 
arXiv:1302.3422v4 fatcat:w4nfwdsd5vajferxv2pxsd4vkm
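The SPCP program quoted in the snippet (nuclear norm plus an ℓ_1 penalty, with a noise term) can be sketched numerically. The following is a minimal illustration only, not the paper's SPCP-TFC algorithm: it minimizes the unconstrained surrogate ||A||_* + λ||E||_1 + (μ/2)||D − A − E||_F² by alternating exact proximal updates; the names `spcp`, `svt`, `soft` and the penalty μ are assumptions of this sketch.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def spcp(D, lam=None, mu=1.0, iters=200):
    """Alternating exact minimization of ||A||_* + lam*||E||_1 + (mu/2)||D - A - E||_F^2."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # standard RPCA-style weight
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(iters):
        A = svt(D - E, 1.0 / mu)         # exact minimizer in A with E fixed
        E = soft(D - A, lam / mu)        # exact minimizer in E with A fixed
    return A, E

# demo: rank-1 background plus 10 large sparse spikes
rng = np.random.default_rng(0)
u = rng.standard_normal(30)
v = rng.standard_normal(30)
L0 = np.outer(u, v)
S0 = np.zeros((30, 30))
S0.flat[rng.choice(900, size=10, replace=False)] = 5.0
D = L0 + S0
A, E = spcp(D)
```

Each update is the exact minimizer of one block with the other held fixed, and the objective is jointly convex, so the alternating scheme decreases it monotonically.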

Value function approximation via low-rank models [article]

Hao Yi Ong
2015 arXiv   pre-print
Under minimal assumptions, this Robust PCA problem can be solved exactly via the Principal Component Pursuit convex optimization problem.  ...  The problem is to decompose a matrix that encodes the true value function into low-rank and sparse components, and we achieve this using Robust Principal Component Analysis (PCA).  ...  Principal Component Pursuit: we obtain the value function components through the PCP estimate minimize ||L||_* + λ||S||_1 subject to L + S = M, (2) which can be solved by tractable convex optimization.  ... 
arXiv:1509.00061v1 fatcat:igdaxpkg3zdelllh4wohyzuxqe
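The PCP program in the snippet (minimize ||L||_* + λ||S||_1 subject to L + S = M) is commonly solved with ADMM on the augmented Lagrangian. A hedged sketch, not this paper's solver; `pcp`, the helper names, and the fixed penalty ρ = 1 are assumptions here.

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * ||.||_* (shrink singular values)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Proximal operator of tau * ||.||_1 (entrywise shrinkage)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, lam=None, rho=1.0, iters=300):
    """ADMM for: minimize ||L||_* + lam*||S||_1 subject to L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # scaled-by-rho dual variable
    for _ in range(iters):
        L = svt(M - S + Y / rho, 1.0 / rho)  # L-update: nuclear-norm prox
        S = soft(M - L + Y / rho, lam / rho) # S-update: l1 prox
        Y = Y + rho * (M - L - S)            # dual ascent on the constraint
    return L, S

# demo: rank-1 matrix corrupted by 15 large sparse entries
rng = np.random.default_rng(5)
u = rng.standard_normal(40)
v = rng.standard_normal(40)
L0 = np.outer(u, v)
S0 = np.zeros((40, 40))
S0.flat[rng.choice(1600, size=15, replace=False)] = 6.0
M = L0 + S0
L, S = pcp(M)
```

The dual update drives the equality constraint L + S = M to hold at convergence, which is what distinguishes this exact formulation from the noise-tolerant SPCP variant.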

Learning efficient sparse and low rank models [article]

Pablo Sprechmann, Alex M. Bronstein, Guillermo Sapiro
2012 arXiv   pre-print
We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms.  ...  In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit  ...  In one of its formulations, the robust principal component analysis can be pursued by solving the convex program min_{L,O ∈ R^{m×n}} (1/2)||X − L − O||_F^2 + λ_*||L||_* + λ||O||_1. (7) In the same way that the ℓ_1 norm is the convex  ... 
arXiv:1212.3631v1 fatcat:52bj43dhtbd4xckxsos47boxlu

Adapting Regularized Low Rank Models for Parallel Architectures [article]

Derek Driggs, Stephen Becker, Aleksandr Aravkin
2017 arXiv   pre-print
We develop a computable certificate of convergence for this non-convex program, and use it to establish bounds on the suboptimality of any point.  ...  Using Burer-Monteiro splitting and marginalization, we develop a smooth, non-convex formulation of regularized low-rank recovery models that can be fit with first-order solvers.  ...  Setting ε > 0 in (2) is more suitable for most applications, since it allows for noise in the measurements; the problem is then called stable principal component pursuit (SPCP) [45, 49].  ... 
arXiv:1702.02241v2 fatcat:xtvtmkgoojgd5g57dpz2fy45mq
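The Burer-Monteiro splitting mentioned in this entry replaces the matrix variable with an explicit factorization L = UVᵀ, trading convexity for smoothness; it exploits the variational identity ||L||_* = min over UVᵀ = L of (||U||_F² + ||V||_F²)/2. A hedged sketch under assumed names (`bm_lowrank`), fitted here by alternating exact ridge solves rather than the authors' first-order solver.

```python
import numpy as np

def bm_lowrank(D, r, lam=1e-3, iters=20, seed=0):
    """Fit L = U @ V.T of rank r to D by minimizing
    (1/2)||D - U V^T||_F^2 + (lam/2)(||U||_F^2 + ||V||_F^2),
    alternating exact ridge-regression solves in U and in V."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    V = rng.standard_normal((n, r))
    U = np.zeros((m, r))
    for _ in range(iters):
        # each subproblem is an r x r ridge system, solved exactly
        U = D @ V @ np.linalg.inv(V.T @ V + lam * np.eye(r))
        V = D.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(r))
    return U @ V.T

# demo: recover an exactly rank-2 matrix
rng = np.random.default_rng(4)
P = rng.standard_normal((30, 2))
Q = rng.standard_normal((30, 2))
D = P @ Q.T
L = bm_lowrank(D, r=2)
```

The non-convexity the paper certifies against is visible here: the factors (U, V) are determined only up to an invertible transform, yet the product UVᵀ is what the suboptimality bounds concern.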

Iterative hard thresholding for model selection in genome-wide association studies

Kevin L. Keys, Gary K. Chen, Kenneth Lange
2017 Genetic Epidemiology  
Acknowledgements The authors are grateful to Aditya Ramdas for his guidance on IHT and to Dennis Sun for discussions about general model selection.  ...  Finally, we thank the Stanford University Statistics Department for hosting us as sabbatical guests during the 2014-2015 academic year.  ...  Other non-convex penalties exist, but MCP is probably the simplest to implement. MCP also has provable convergence guarantees.  ... 
doi:10.1002/gepi.22068 pmid:28875524 fatcat:6e6id62khve4dgadzqsqv2wny4
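Iterative hard thresholding alternates a gradient step on the least-squares loss with projection onto the set of k-sparse vectors. A minimal sketch for sparse regression, not the article's GWAS implementation; `iht`, `hard_threshold`, and the spectral-norm step-size rule are assumptions of this sketch.

```python
import numpy as np

def hard_threshold(b, k):
    """Keep the k largest-magnitude entries of b, zero out the rest."""
    out = np.zeros_like(b)
    idx = np.argsort(np.abs(b))[-k:]
    out[idx] = b[idx]
    return out

def iht(X, y, k, step=None, iters=500):
    """Iterative hard thresholding for min ||y - X b||^2 s.t. ||b||_0 <= k."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2  # guarantees a non-expansive step
    b = np.zeros(p)
    for _ in range(iters):
        b = hard_threshold(b + step * X.T @ (y - X @ b), k)
    return b

# demo: noiseless recovery of a 3-sparse coefficient vector
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))
b_true = np.zeros(50)
b_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
y = X @ b_true
b_hat = iht(X, y, k=3)
```

Unlike the convex ℓ_1 relaxation, the projection here is onto a non-convex set, which is why IHT needs conditions on X (and the step size) for its convergence guarantees.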

Sparse Generalized Eigenvalue Problem Via Smooth Optimization

Junxiao Song, Prabhu Babu, Daniel P. Palomar
2015 IEEE Transactions on Signal Processing  
In this paper, we consider an ℓ_0-norm penalized formulation of the generalized eigenvalue problem (GEP), aimed at extracting the leading sparse generalized eigenvector of a matrix pair.  ...  A preconditioned steepest ascent algorithm for finding the leading generalized eigenvector is provided.  ...  The simple thresholding scheme first computes the regular principal component and then keeps a required number of entries with the largest absolute values.  ... 
doi:10.1109/tsp.2015.2394443 fatcat:35x6gxlvwjhjhg6s26jninbrje
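The "simple thresholding" baseline from the snippet is easy to state: compute the ordinary leading eigenvector, keep its k largest-magnitude entries, and renormalize. A sketch for the ordinary case B = I of the matrix pair (the function name is an assumption; the paper addresses the full generalized problem):

```python
import numpy as np

def sparse_leading_eigvec(A, k):
    """Leading eigenvector of symmetric A, truncated to its k
    largest-magnitude entries and renormalized."""
    w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
    v = V[:, -1]                      # eigenvector of the largest eigenvalue
    keep = np.argsort(np.abs(v))[-k:]
    x = np.zeros_like(v)
    x[keep] = v[keep]
    return x / np.linalg.norm(x)

# demo: planted 4-sparse leading eigenvector plus small symmetric noise
rng = np.random.default_rng(2)
p = 20
v0 = np.zeros(p)
v0[[1, 5, 9, 13]] = 0.5               # unit-norm, 4-sparse
N = 0.01 * rng.standard_normal((p, p))
A = 10.0 * np.outer(v0, v0) + (N + N.T) / 2.0
x = sparse_leading_eigvec(A, k=4)
```

Truncation after the fact ignores the interaction between sparsity and the eigenproblem, which is exactly the gap the ℓ_0-penalized formulation in the paper is designed to close.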

Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset

Thierry Bouwmans, Andrews Sobral, Sajid Javed, Soon Ki Jung, El-Hadi Zahzah
2017 Computer Science Review  
The most representative problem formulation is the Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit (PCP), which decomposes a data matrix into a low-rank matrix and a sparse  ...  However, similar robust implicit or explicit decompositions can be made in the following problem formulations: Robust Non-negative Matrix Factorization (RNMF), Robust Matrix Completion (RMC), Robust Subspace  ...  thank the following researchers: Zhouchen Lin (Visual Computing Group, Microsoft Research Asia) who has kindly provided the solver LADMAP [192] and the ℓ_1-filtering [196], Shiqian Ma (Institute for  ... 
doi:10.1016/j.cosrev.2016.11.001 fatcat:vdh7ic4n6zfkjlccnyiq74z5wu

Sparse Generalized Principal Component Analysis for Large-scale Applications beyond Gaussianity [article]

Qiaoya Zhang, Yiyuan She
2016 arXiv   pre-print
Principal Component Analysis (PCA) is a dimension reduction technique.  ...  Meanwhile we propose a family of iterative sparse generalized PCA (SG-PCA) algorithms such that despite the non-convexity and non-smoothness of the optimization task, the loss function decreases in every  ...  The principal components (PCs) {z_1, ..., z_r} are then given by Z = XV, and the v_i's are called the principal loading vectors.  ... 
arXiv:1512.03883v2 fatcat:6zir6e23sjdzrlnr4qxus2lyo4

Sparse generalized principal component analysis for large-scale applications beyond Gaussianity

Qiaoya Zhang, Yiyuan She
2016 Statistics and its Interface  
Principal Component Analysis (PCA) is a dimension reduction technique.  ...  Meanwhile, we propose a family of iterative sparse generalized PCA (SG-PCA) algorithms such that despite the non-convexity and non-smoothness of the optimization task, the loss function decreases in every  ...  ACKNOWLEDGEMENT The authors would like to thank the editor and referee for their careful comments and useful suggestions that improve the quality of the paper.  ... 
doi:10.4310/sii.2016.v9.n4.a11 fatcat:uyd7gmxn6rd77dkodzzifygsqa

An inertial alternating direction method of multipliers for solving a two-block separable convex minimization problem [article]

Yang Yang, Yuchao Tang
2020 arXiv   pre-print
Furthermore, we apply the proposed algorithm on the robust principal component pursuit problem and also compare it with other state-of-the-art algorithms.  ...  In this paper, we propose an inertial ADMM for solving a two-block separable convex minimization problem with linear equality constraints.  ...  Robust principal component pursuit (RPCP) problem The robust principal component analysis (RPCA) problem was first introduced by Candès et al.  ... 
arXiv:2002.12670v3 fatcat:cvpnjmdsgrhc3k7b5hxrqzlwau
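Plain two-block ADMM (without the inertial extrapolation this paper proposes) is easy to illustrate on a separable problem such as the lasso, min (1/2)||Ax − b||² + λ||z||_1 subject to x − z = 0. A hedged sketch; `admm_lasso` and the fixed penalty ρ = 1 are assumptions of this illustration.

```python
import numpy as np

def soft(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=500):
    """Two-block ADMM for min (1/2)||Ax - b||^2 + lam*||z||_1 s.t. x - z = 0
    (scaled dual variable u)."""
    n, p = A.shape
    x = np.zeros(p)
    z = np.zeros(p)
    u = np.zeros(p)
    AtA = A.T @ A + rho * np.eye(p)   # x-update system, fixed across iterations
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # smooth block
        z = soft(x + u, lam / rho)                      # nonsmooth block
        u = u + x - z                                   # dual update
    return z

# demo: 2-sparse signal, mild noise
rng = np.random.default_rng(3)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
lam = 5.0
z = admm_lasso(A, b, lam)
g = A.T @ (A @ z - b)   # lasso optimality: g_i = -lam*sign(z_i) on the support, |g_i| <= lam off it
```

The inertial variant studied in the paper adds an extrapolation of the iterates before each update; the two-block splitting structure above is otherwise unchanged.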

Efficient k-Support Matrix Pursuit [chapter]

Hanjiang Lai, Yan Pan, Canyi Lu, Yong Tang, Shuicheng Yan
2014 Lecture Notes in Computer Science  
In this paper, we study the k-support norm regularized matrix pursuit problem, which is regarded as the core formulation for several popular computer vision tasks.  ...  Second, we present an efficient procedure for k-support norm optimization, in which the computation of the key proximity operator is substantially accelerated by binary search.  ...  In the first experiment, we use the images in the first 5 classes, and then project the images onto 30 dimensions by principal component analysis (PCA).  ... 
doi:10.1007/978-3-319-10605-2_40 fatcat:xoekdj2k3bd2tl4bahrx6clhcm

Optimization with Sparsity-Inducing Penalties [article]

Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski
2011 arXiv   pre-print
We cover proximal methods, block-coordinate descent, reweighted ℓ_2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set  ...  It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms.  ...  The idea is then similar, iteratively linearizing for each group g the functions ζ around w_g and minimizing the resulting convex surrogate (see an application to structured sparse principal component  ... 
arXiv:1108.0775v2 fatcat:ojhawgf3pnadfdiqk747gi25ry

Structured sparsity through convex optimization [article]

Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski
2012 arXiv   pre-print
We present applications to unsupervised learning, for structured sparse principal component analysis and hierarchical dictionary learning, and to supervised learning in the context of non-linear variable  ...  While naturally cast as a combinatorial optimization problem, variable or feature selection admits a convex relaxation through the regularization by the ℓ_1-norm.  ...  It is however not jointly convex in the pair (A, D), but alternating optimization schemes generally lead to good performance in practice.  ... 
arXiv:1109.2397v2 fatcat:wyszj4ba3bhixfnv7mfxxmzqty
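For the structured-sparsity norms surveyed in these two entries, proximal methods reduce to a closed-form group soft-thresholding when the groups do not overlap: each group's subvector is shrunk toward zero by its Euclidean norm. A minimal sketch (`group_soft_threshold` is an assumed name; overlapping groups require the more elaborate machinery developed in the paper):

```python
import numpy as np

def group_soft_threshold(v, groups, tau):
    """Proximal operator of tau * sum_g ||v_g||_2 for non-overlapping
    index groups: block-wise shrinkage of each subvector v_g."""
    out = np.zeros_like(v)
    for g in groups:
        ng = np.linalg.norm(v[g])
        if ng > tau:                       # group survives, shrunk radially
            out[g] = (1.0 - tau / ng) * v[g]
        # else: the whole group is set exactly to zero
    return out

# demo: first group survives (norm 5 > tau), second is zeroed (norm ~0.22 < tau)
v = np.array([3.0, 4.0, 0.1, 0.2])
w = group_soft_threshold(v, [[0, 1], [2, 3]], tau=1.0)
```

Zeroing whole groups at once is what gives the group-lasso penalty its variable-selection-by-blocks behavior, in contrast to the entrywise zeros produced by the plain ℓ_1 prox.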

Decentralized Sparsity-Regularized Rank Minimization: Algorithms and Applications

Morteza Mardani, Gonzalo Mateos, Georgios B. Giannakis
2013 IEEE Transactions on Signal Processing  
a fundamental task subsuming compressed sensing, matrix completion, and principal components pursuit.  ...  Interestingly, upon convergence the distributed (non-convex) estimator provably attains the global optimum of its centralized counterpart, regardless of initialization.  ... 
doi:10.1109/tsp.2013.2279080 fatcat:zmfe26ab5refnhf3ieff56qbuq

Load Curve Data Cleansing and Imputation Via Sparsity and Low Rank

Gonzalo Mateos, Georgios B. Giannakis
2013 IEEE Transactions on Smart Grid  
A robust estimator based on principal components pursuit (PCP) is adopted, which effects a twofold sparsity-promoting regularization through an ℓ_1-norm of the outliers, and the nuclear norm of the nominal  ...  In this context, a novel load cleansing and imputation scheme is developed leveraging the low intrinsic-dimensionality of spatiotemporal load profiles and the sparse nature of "bad data."  ...  Vladimir Cherkassky (Dept. of ECE, University of Minnesota) for providing the data analyzed in Section V-B.  ... 
doi:10.1109/tsg.2013.2259853 fatcat:hqxbjgu6jfcjvdvje5bqwrbjyy
Showing results 1 — 15 out of 800 results