A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
The Fast Convergence of Incremental PCA
2013
Neural Information Processing Systems
We consider a situation in which we see samples X_n ∈ ℝ^d drawn i.i.d. from some distribution with mean zero and unknown covariance A. We wish to compute the top eigenvector of A in an incremental fashion: with an algorithm that maintains an estimate of the top eigenvector in O(d) space and incrementally adjusts the estimate with each new data point that arrives. Two classical such schemes are due to Krasulina (1969) and Oja (1983). We give finite-sample convergence rates for both.
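As a rough illustration of the two incremental updates named in the abstract, the following minimal NumPy sketch maintains a top-eigenvector estimate in O(d) space over a stream of samples. The step-size schedule c/(n+1), the random initialization, and the synthetic covariance below are assumptions made for illustration, not details taken from the paper.

# Minimal sketch of the Oja (1983) and Krasulina (1969) incremental updates.
# Step sizes, initialization, and test data are illustrative assumptions.
import numpy as np

def oja_step(v, x, gamma):
    """One Oja update: step toward x (x^T v), then renormalize to unit length."""
    v = v + gamma * x * (x @ v)
    return v / np.linalg.norm(v)

def krasulina_step(v, x, gamma):
    """One Krasulina update: stochastic gradient step on the Rayleigh quotient."""
    xv = x @ v
    v = v + gamma * (x * xv - (xv ** 2) / (v @ v) * v)
    return v

def incremental_top_eigenvector(samples, step=oja_step, c=1.0, seed=0):
    """Process samples one at a time, keeping only an O(d) estimate of the top eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(samples.shape[1])
    v /= np.linalg.norm(v)
    for n, x in enumerate(samples):
        v = step(v, x, gamma=c / (n + 1))  # assumed O(1/n) step-size schedule
    return v / np.linalg.norm(v)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = np.diag([5.0, 1.0, 0.5, 0.1])                    # covariance with a clear top eigenvalue
    X = rng.multivariate_normal(np.zeros(4), A, size=20000)
    v_hat = incremental_top_eigenvector(X, step=oja_step, c=2.0)
    print("alignment with e_1:", abs(v_hat[0]))           # should approach 1

Both updates use a single pass over the data and touch only the current sample and the d-dimensional estimate, which is the O(d)-space setting described in the abstract.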
dblp:conf/nips/BalsubramaniDF13
fatcat:pzvtlljqozd2rgim4s2itdgc6q