On the Worst-Case Approximability of Sparse PCA
[article]
2015
arXiv
pre-print
It is well known that Sparse PCA (Sparse Principal Component Analysis) is NP-hard to solve exactly on worst-case instances. What is the complexity of solving Sparse PCA approximately? ...
Our contributions include: 1) a simple and efficient algorithm that achieves an n^{-1/3}-approximation; 2) NP-hardness of approximation to within (1-ε), for some small constant ε > 0; 3) SSE-hardness of ...
Yet, there are remarkably few worst-case approximability bounds, and many questions remain open. Does sparse PCA admit a nontrivial worst-case approximation ratio? ...
arXiv:1507.05950v1
fatcat:6z6nyjieczce3edrfnwlkjelem
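For context, sparse PCA asks for a unit vector x with at most k nonzero entries maximizing the explained variance x^T Σ x. Below is a minimal NumPy sketch of one classical baseline (diagonal thresholding: keep the k highest-variance coordinates, then take the leading eigenvector of the induced submatrix); this is an illustrative heuristic, not the n^{-1/3}-approximation algorithm of the paper above.

    import numpy as np

    def sparse_pca_diag_threshold(cov, k):
        # Keep the k coordinates with the largest variance (diagonal entries),
        # then return the leading eigenvector of the induced k x k principal
        # submatrix, embedded back into R^n. Baseline sketch, not the paper's method.
        n = cov.shape[0]
        support = np.argsort(np.diag(cov))[-k:]
        sub = cov[np.ix_(support, support)]
        w, v = np.linalg.eigh(sub)            # eigenvalues in ascending order
        x = np.zeros(n)
        x[support] = v[:, -1]                 # leading eigenvector of the submatrix
        return x, w[-1]                       # k-sparse unit vector, variance captured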
Parameterized Complexity of PCA (Invited Talk)
2020
Scandinavian Workshop on Algorithm Theory
We discuss some recent progress in the study of Principal Component Analysis (PCA) from the perspective of Parameterized Complexity. ...
In the case when M is a "slightly" perturbed version of L, PCA performed on M provides a reasonable approximation of L. ...
One popular approach to robust PCA is to model the outliers as an additive sparse matrix. ...
doi:10.4230/lipics.swat.2020.1
dblp:conf/swat/FominGS20
fatcat:xpthkgfjwnhwvchic4phtdzvdm
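The "additive sparse matrix" model mentioned in the snippet is the usual robust-PCA decomposition M = L + S, with L low-rank and S sparse (the outliers). A crude alternating heuristic is sketched below, assuming a known target rank r and threshold tau; principled solvers use the convex principal component pursuit formulation instead.

    import numpy as np

    def rpca_alternating(M, r, tau, iters=50):
        # Alternate a rank-r SVD fit for the low-rank part L with hard
        # thresholding of the residual for the sparse outlier part S.
        # Illustrative heuristic only, not a convergence-guaranteed solver.
        S = np.zeros_like(M)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U[:, :r] * s[:r]) @ Vt[:r]            # best rank-r fit
            R = M - L
            S = np.where(np.abs(R) > tau, R, 0.0)      # keep large residuals as outliers
        return L, S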
Sparse Principal Component Analysis via Rotation and Truncation
[article]
2014
arXiv
pre-print
Sparse principal component analysis (sparse PCA) aims at finding a sparse basis to improve interpretability over the dense basis of PCA, while the sparse basis should still cover the data subspace as ...
SPCArt aims to find a rotation matrix and a sparse basis such that the sparse basis approximates the loadings of PCA after the rotation. ...
arXiv:1403.1430v2
fatcat:4l7bfmd7s5eu5mfeprexqrw4pq
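The rotate-then-truncate idea can be prototyped by alternating two steps: hard-threshold the rotated PCA loadings to get a sparse basis X, then refit the rotation R with an orthogonal Procrustes step. The sketch below only mirrors the high-level description in the snippets (threshold tau and iteration count are assumed knobs); the actual SPCArt truncation rules are specified in the paper.

    import numpy as np

    def rotate_truncate(V, tau, iters=30):
        # V: n x m matrix of PCA loadings with orthonormal columns.
        # Alternate: X = hard-threshold(V @ R) with column renormalization,
        # then R = argmin over rotations of ||V @ R - X||_F (Procrustes).
        R = np.eye(V.shape[1])
        for _ in range(iters):
            Y = V @ R
            X = np.where(np.abs(Y) > tau, Y, 0.0)      # truncation step
            norms = np.linalg.norm(X, axis=0)
            X = X / np.where(norms > 0, norms, 1.0)    # keep columns unit length
            U, _, Wt = np.linalg.svd(V.T @ X)          # Procrustes rotation update
            R = U @ Wt
        return X, R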
Abnormal Subspace Sparse PCA for Anomaly Detection and Interpretation
[article]
2016
arXiv
pre-print
The main shortcoming of principal component analysis (PCA) based anomaly detection models is their lack of interpretability. ...
Our experiments on a synthetic dataset and two real-world datasets showed that the proposed ASPCA models achieved detection accuracies comparable to the PCA model, and can provide interpretations for individual ...
Another recent work [11] proposed the joint sparse PCA (JSPCA) model to identify a low-dimensional approximation of the abnormal subspace, so that all anomalies can be localized onto a small subset of ...
arXiv:1605.04644v1
fatcat:f7b3mqwz4zfdrj2jkvpywog6ou
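For reference, PCA-based detectors typically score a point by its reconstruction error in the residual (abnormal) subspace; sparse variants such as ASPCA and JSPCA aim to make the directions spanning that subspace interpretable by restricting them to few coordinates. A generic residual-score sketch, not the ASPCA model itself:

    import numpy as np

    def pca_residual_scores(X, r):
        # Score each row of X by its squared distance to the top-r principal
        # subspace; large scores flag anomalies. Interpretability comes from
        # sparsifying the residual directions, which this sketch omits.
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        P = Vt[:r].T @ Vt[:r]                 # projector onto the top-r subspace
        resid = Xc - Xc @ P                   # component in the residual subspace
        return np.sum(resid**2, axis=1)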
On Polyhedral and Second-Order Cone Decompositions of Semidefinite Optimization Problems
[article]
2019
arXiv
pre-print
We invoke the method to provide bound gaps of 0.5-6.5% for sparse PCA problems with 1000s of covariates, and solve nuclear norm problems over 500x500 matrices. ...
By relating the method's rate of convergence to an initial outer approximation's diameter, we argue that the method performs well when initialized with a second-order-cone approximation, instead of a linear ...
Furthermore, we establish that the worst-case rate of convergence depends explicitly on the diameter of the initial feasible region. • In Section 4, we present numerical results demonstrating that the ...
arXiv:1910.03143v2
fatcat:a3yqtsj4brb47nq4bwaj4rarme
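The cutting-plane idea behind such decompositions: replace the constraint X ⪰ 0 by a polyhedral or second-order-cone outer approximation, solve the relaxation, and whenever the solution has a negative eigenvalue, add the violated linear cut v^T X v ≥ 0. A sketch of just the separation step (the surrounding LP/SOCP solve is assumed to live elsewhere):

    import numpy as np

    def psd_cut(X, tol=1e-8):
        # Separation oracle for the PSD cone: if X has an eigenvalue below
        # -tol, return an eigenvector v with v^T X v < 0, so the linear cut
        # v^T X v >= 0 can be added to the outer approximation. None if PSD.
        w, V = np.linalg.eigh((X + X.T) / 2)  # symmetrize for numerical safety
        if w[0] < -tol:
            return V[:, 0]                    # most violated direction
        return None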
Optimal Sparse Linear Auto-Encoders and Sparse PCA
[article]
2015
arXiv
pre-print
Two natural questions in such a setting are: i) Given a level of sparsity, what is the best approximation to PCA that can be achieved? ...
We study the problem of constructing optimal sparse linear auto-encoders. ...
We thank Dimitris Papailiopoulos for pointing out the connection between MAX-CLIQUE and sparse PCA. ...
arXiv:1502.06626v1
fatcat:jzfgz5yab5dhxd3snkpd6xpiie
PCA with Gaussian perturbations
[article]
2015
arXiv
pre-print
There is a core trade-off between the running time and the generalization performance, here measured by the regret of the on-line algorithm (total gain of the best off-line predictor minus the total gain of the on-line algorithm). ...
Acknowledgments Wojciech Kotłowski was supported by the Polish National Science Centre grant 2013/11/D/ST6/03050. ...
arXiv:1506.04855v2
fatcat:hkdumhsl5zfu5khxs5ifycp5pq
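Written out, the regret these snippets refer to (notation assumed for illustration): if the algorithm plays subspace P_t on instance x_t and earns gain g(P_t, x_t), then

    R_T = \max_P \sum_{t=1}^{T} g(P, x_t) - \sum_{t=1}^{T} g(P_t, x_t),

the total gain of the best fixed off-line predictor minus the total gain of the on-line algorithm.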
Analysis of PCA Algorithms in Distributed Environments
[article]
2015
arXiv
pre-print
We consider the worst-case scenarios for both metrics, and we identify the software libraries that implement each method. ...
Such algorithms were designed to work with small data that is assumed to fit in the memory of one machine. ...
The worst-case total size of the intermediate data is considered as the communication complexity. ...
arXiv:1503.05214v2
fatcat:5irvd6qdmvae5lsmi7eluqf5da
On-line PCA with Optimal Regrets
[article]
2014
arXiv
pre-print
This different behavior of EG for PCA is mainly related to the non-negativity of the loss in this case, which makes the PCA setting qualitatively different from other settings studied in the literature ...
We carefully investigate the on-line version of PCA, where in each trial a learning algorithm plays a k-dimensional subspace, and suffers the compression loss on the next instance when projected into the ...
For T ≥ k and k ≤ n/2, in the T-trial online PCA problem with sparse instances, any online algorithm suffers worst-case regret at least Ω(√(kT)). Proof. ...
arXiv:1306.3895v2
fatcat:iwazjggazvbt3makmien6fyj4q
Online PCA with Optimal Regrets
[chapter]
2013
Lecture Notes in Computer Science
We show that both algorithms are essentially optimal in the worst-case when the regret is expressed as a function of the number of trials. ...
This different behavior of MEG for PCA is mainly related to the non-negativity of the loss in this case, which makes the PCA setting qualitatively different from other settings studied in the literature ...
For T ≥ k and k ≤ n/2, in the T-trial online PCA problem with sparse instances, any online algorithm suffers worst-case regret at least Ω(√(kT)). Proof. ...
doi:10.1007/978-3-642-40935-6_8
fatcat:xewcxpzudfawhlh66stcpdulmq
Fast Pixel/Part Selection with Sparse Eigenvectors
2007
2007 IEEE 11th International Conference on Computer Vision
We extend the "Sparse LDA" algorithm of [7] with new sparsity bounds on 2-class separability and efficient partitioned matrix inverse techniques leading to 1000-fold speed-ups. ...
Our sparse models also show a better fit to data in terms of the "evidence" or marginal likelihood. ...
Acknowledgments Research conducted at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. ...
doi:10.1109/iccv.2007.4409093
dblp:conf/iccv/MoghaddamWA07
fatcat:kagyxjpyzzbzfosji3qjjrfoye
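The "partitioned matrix inverse techniques" referenced here rest on the block-inversion identity: if inv(A) is already known, the inverse of A grown by one row/column can be updated in O(n^2) via the Schur complement rather than recomputed in O(n^3), which is where such large speed-ups in greedy feature selection typically come from. A minimal sketch of the grow-by-one update, assuming symmetric positive-definite input as with covariance matrices:

    import numpy as np

    def grow_inverse(A_inv, b, d):
        # Given A_inv = inv(A) for symmetric positive-definite A, return
        # inv([[A, b], [b^T, d]]) via the Schur complement s = d - b^T A^{-1} b.
        u = A_inv @ b
        s = d - b @ u                          # scalar Schur complement
        n = A_inv.shape[0]
        out = np.empty((n + 1, n + 1))
        out[:n, :n] = A_inv + np.outer(u, u) / s
        out[:n, n] = -u / s
        out[n, :n] = -u / s
        out[n, n] = 1.0 / s
        return out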
Sparse eigen methods by D.C. programming
2007
Proceedings of the 24th international conference on Machine learning - ICML '07
Using an ℓ1-norm approximation to the cardinality constraint, previous methods have proposed both convex and non-convex solutions to the sparse PCA problem. ...
In this paper, we consider a cardinality-constrained variational formulation of the generalized eigenvalue problem, with sparse principal component analysis (PCA) as a special case. ...
We wish to acknowledge support from the Fair Isaac Corporation and the University of California MICRO program. ...
doi:10.1145/1273496.1273601
dblp:conf/icml/SriperumbudurTL07
fatcat:ggjnftjdcbgmdpqgiygc6ykdni
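For reference, the cardinality-constrained generalized eigenvalue problem targeted here, with its common ℓ1 surrogate (notation assumed):

    \max_x \; x^\top A x \quad \text{s.t.} \quad x^\top B x = 1, \; \|x\|_0 \le k,

where the ℓ1-based methods replace the cardinality constraint \|x\|_0 \le k with a constraint or penalty on \|x\|_1; sparse PCA is the special case B = I with A the sample covariance.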
Sparse PCA: Convex Relaxations, Algorithms and Applications
[article]
2010
arXiv
pre-print
Finally, we illustrate sparse PCA in several applications, ranging from senate voting and finance to news data. ...
Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this ...
Acknowledgments The authors gratefully acknowledge partial support from NSF grants SES-0835550 (CDI), CMMI-0844795 (CAREER), CMMI-0968842, a Peek junior faculty fellowship, a Howard B. ...
arXiv:1011.3781v2
fatcat:6gcol5kdg5g37ablprnjpyyspq
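The problem stated in the snippet, written out, together with the semidefinite relaxation this line of work is associated with (lifting x to X = x x^\top; a sketch of the standard formulation, not a quotation from the paper):

    \max_x \; x^\top \Sigma x \quad \text{s.t.} \quad \|x\|_2 = 1, \; \|x\|_0 \le k

    \max_{X \succeq 0} \; \mathrm{Tr}(\Sigma X) \quad \text{s.t.} \quad \mathrm{Tr}(X) = 1, \; \mathbf{1}^\top |X| \mathbf{1} \le k,

where the last constraint is the convex surrogate for cardinality: \|x\|_0 \le k with \|x\|_2 = 1 implies \|x\|_1 \le \sqrt{k}, hence the absolute entries of X = x x^\top sum to at most k.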
Scatterbrain: Unifying Sparse and Low-rank Attention Approximation
[article]
2021
arXiv
pre-print
On a pre-trained T2T Vision transformer, even without fine-tuning, Scatterbrain can reduce attention memory by 98% at the cost of only a 1% drop in accuracy. ...
Inspired by the classical robust-PCA algorithm for sparse and low-rank decomposition, we propose Scatterbrain, a novel way to unify sparse (via locality sensitive hashing) and low-rank (via kernel feature ...
Acknowledgments We thank Xun Huang, Sarah Hooper, Albert Gu, Ananya Kumar, Sen Wu, Trenton Chang, Megan Leszczynski, and Karan Goel for their helpful discussions and feedback on early drafts of the paper ...
arXiv:2110.15343v1
fatcat:ycfcx3fujzebng2zl4vfk3xr5e
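In matrix terms, the unification is an approximation M ≈ L + S with L low-rank and S sparse. The toy sketch below uses a truncated SVD as a stand-in for the kernel-feature low-rank part and keeps the largest-magnitude residual entries as a stand-in for the LSH-selected sparse part; it illustrates the split only, not Scatterbrain itself.

    import numpy as np

    def sparse_plus_lowrank(M, r, nnz):
        # L = best rank-r approximation of M; S = the nnz largest-magnitude
        # entries of the residual M - L. Demonstrates the sparse + low-rank
        # split that robust PCA and Scatterbrain both build on.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        R = M - L
        idx = np.argsort(np.abs(R), axis=None)[-nnz:]  # largest residual entries
        S = np.zeros_like(M)
        S.flat[idx] = R.flat[idx]
        return L, S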
Matching pursuit-based compressive sensing in a wearable biomedical accelerometer fall diagnosis device
2017
Biomedical Signal Processing and Control
This article presents an evaluation of compressive sensing techniques in an accelerometer-based intelligent fall detection system modelled on a wearable Shimmer biomedical embedded computing device with ...
The presented fall detection system utilises a database of fall and activities of daily living signals evaluated with discrete wavelet transforms and principal component analysis to obtain binary tree ...
Applying the same percentage improvement obtained in the worst-case scenario provides an estimate of the compressive sensing fall detection techniques achieving 69-hour operation on a single charge of a ...
doi:10.1016/j.bspc.2016.10.016
fatcat:mdu2zc33pngwff2zdbgwp4j3gu
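Matching pursuit reconstructs a sparse signal x from compressive measurements y = Φx by greedily choosing the dictionary atom most correlated with the current residual. A minimal orthogonal-matching-pursuit sketch (generic, not the paper's Shimmer-specific pipeline):

    import numpy as np

    def omp(Phi, y, k):
        # Orthogonal matching pursuit: pick the column of Phi most correlated
        # with the residual, least-squares refit on the selected support,
        # repeat k times. Returns a k-sparse estimate x with y ~= Phi @ x.
        support = []
        resid = y.copy()
        x = np.zeros(Phi.shape[1])
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ resid))))
            sub = Phi[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            resid = y - sub @ coef
        x[support] = coef
        return x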
Showing results 1–15 out of 6,085 results