The Fast Convergence of Incremental PCA

Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund
2013 Neural Information Processing Systems  
We consider a situation in which we see samples X_n ∈ R^d drawn i.i.d. from some distribution with mean zero and unknown covariance A. We wish to compute the top eigenvector of A in an incremental fashion, with an algorithm that maintains an estimate of the top eigenvector in O(d) space and incrementally adjusts the estimate with each new data point that arrives. Two classical such schemes are due to Krasulina (1969) and Oja (1983). We give finite-sample convergence rates for both.
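As a rough illustration (a sketch based on the classical forms of the two update rules, not code from the paper), both schemes can be written as O(d)-per-step NumPy updates. The step-size schedule gamma = 1/t and the synthetic diagonal covariance below are placeholder choices for the example.

```python
import numpy as np

def oja_update(v, x, gamma):
    """One step of Oja's (1983) rule: step toward x * (x . v), then renormalize.
    v: current unit-norm estimate of the top eigenvector, shape (d,)
    x: new sample, shape (d,); gamma: step size."""
    w = v + gamma * x * (x @ v)          # v + gamma * x x^T v
    return w / np.linalg.norm(w)         # project back onto the unit sphere

def krasulina_update(v, x, gamma):
    """One step of Krasulina's (1969) rule: a stochastic-gradient-style update
    on the Rayleigh quotient, with no explicit renormalization."""
    xv = x @ v
    return v + gamma * (x * xv - (xv ** 2) / (v @ v) * v)

# Illustrative usage on synthetic zero-mean data (assumed setup):
rng = np.random.default_rng(0)
d, n = 20, 10_000
A = np.diag(np.linspace(1.0, 2.0, d))                # "true" covariance
X = rng.multivariate_normal(np.zeros(d), A, size=n)  # i.i.d. samples
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
for t, x in enumerate(X, start=1):
    v = oja_update(v, x, gamma=1.0 / t)              # O(d) work and space per step
```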