Covariance, subspace, and intrinsic Cramér-Rao bounds

S.T. Smith
2005 IEEE Transactions on Signal Processing  
Cramér-Rao bounds on estimation accuracy are established for estimation problems on arbitrary manifolds in which no set of intrinsic coordinates exists. The frequently encountered examples of estimating either an unknown subspace or a covariance matrix are examined in detail. The set of subspaces, called the Grassmann manifold, and the set of covariance (positive-definite Hermitian) matrices have no fixed coordinate system associated with them and do not possess a vector space structure, both of which are required for deriving classical Cramér-Rao bounds. Intrinsic versions of the Cramér-Rao bound on manifolds utilizing an arbitrary affine connection with arbitrary geodesics are derived for both biased and unbiased estimators. In the example of covariance matrix estimation, closed-form expressions for both the intrinsic and flat bounds are derived and compared with the root-mean-square error (RMSE) of the sample covariance matrix (SCM) estimator for varying sample support K. The accuracy bound on unbiased covariance matrix estimators is shown to be about (10/log 10) n/K^{1/2} decibels, where n is the matrix order. Remarkably, it is shown that from an intrinsic perspective the SCM is a biased and inefficient estimator and that the bias term reveals the dependency of estimation accuracy on sample support observed in theory and practice. The RMSE of the standard method of estimating subspaces using the singular value decomposition (SVD) is compared with the intrinsic subspace Cramér-Rao bound, derived in closed form, by varying both the signal-to-noise ratio (SNR) of the unknown p-dimensional subspace and the sample support. In the simplest case, the Cramér-Rao bound on subspace estimation accuracy is shown to be about (p(n − p))^{1/2} K^{−1/2} SNR^{−1/2} radians for p-dimensional subspaces. It is seen that the SVD-based method yields accuracies very close to the Cramér-Rao bound, establishing that the principal invariant subspace of a random sample provides an excellent estimator of an unknown subspace. The analysis approach developed is directly applicable to many other estimation problems on manifolds encountered in signal processing and elsewhere, such as estimating rotation matrices in computer vision and estimating subspace basis vectors in blind source separation.

Index Terms: Estimation bounds, sample covariance matrix, singular value decomposition, intrinsic geometry, estimator bias, Fisher information, efficiency, Riemannian manifold, reductive homogeneous space, natural invariant metric, positive-definite matrices, Hermitian matrix, symmetric matrix, Grassmann manifold, differential geometry, exponential map, natural gradient, sectional and Riemannian curvature.

Steven Thomas Smith was born in La Jolla, CA, in 1963. He received the B.A.Sc. degree in electrical engineering and mathematics from the University of British Columbia, Vancouver, BC, in 1986 and the Ph.D. degree in applied mathematics from Harvard University, Cambridge, MA, in 1993. From 1986 to 1988 he was a research engineer at ERIM, Ann Arbor, MI, where he developed morphological image processing algorithms. He is currently a senior member of the technical staff at MIT Lincoln Laboratory, which he joined in 1993. His research involves algorithms for adaptive signal processing, detection, and tracking to enhance radar and sonar system performance. He has taught signal processing courses at Harvard and for the IEEE. His most recent work addresses intrinsic estimation and superresolution bounds, mean and variance CFAR, advanced tracking methods, and space-time adaptive processing algorithms. He was an associate editor of the IEEE Transactions on Signal Processing (2000-2002) and received the SIAM outstanding paper award in 2001.
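As a rough numerical illustration of the two closed-form accuracy figures quoted in the abstract, the following minimal sketch (not from the paper) uses NumPy Monte Carlo runs to estimate the intrinsic RMSE of the SCM and of the SVD principal-subspace estimate, and prints them alongside the quoted bounds. The choice of the affine-invariant distance ||log R̂||_F on positive-definite matrices, the principal-angle distance on the Grassmann manifold, and the parameter values n, p, K, and SNR are all assumptions made here for illustration, not the paper's simulation setup.

```python
# Hypothetical sketch: Monte Carlo check of the bounds quoted in the abstract.
# Assumptions (mine): complex circular Gaussian data, true covariance R = I_n,
# and a planted p-dimensional subspace with per-dimension SNR.
import numpy as np

rng = np.random.default_rng(0)

def sample_cov(n, K):
    """Sample covariance matrix of K i.i.d. CN(0, I_n) snapshots."""
    X = (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2)
    return X @ X.conj().T / K

# --- Covariance example: SCM RMSE vs. the quoted ~ (10/log 10) n K^{-1/2} dB bound ---
n, K, trials = 4, 100, 200
errs = []
for _ in range(trials):
    R_hat = sample_cov(n, K)
    w = np.linalg.eigvalsh(R_hat)                  # eigenvalues of the SCM
    d_nats = np.sqrt(np.sum(np.log(w) ** 2))       # natural-metric distance to R = I
    errs.append((10.0 / np.log(10.0)) * d_nats)    # express in decibels
rmse_db = np.sqrt(np.mean(np.square(errs)))
bound_db = (10.0 / np.log(10.0)) * n / np.sqrt(K)
print(f"SCM RMSE ~ {rmse_db:.2f} dB, quoted bound ~ {bound_db:.2f} dB")

# --- Subspace example: SVD error vs. the quoted ~ (p(n-p))^{1/2} K^{-1/2} SNR^{-1/2} rad bound ---
n, p, K, snr, trials = 8, 2, 200, 10.0, 200
U_true, _ = np.linalg.qr(rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p)))
errs = []
for _ in range(trials):
    S = (rng.standard_normal((p, K)) + 1j * rng.standard_normal((p, K))) * np.sqrt(snr / 2)
    N = (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2)
    X = U_true @ S + N
    U_hat = np.linalg.svd(X, full_matrices=False)[0][:, :p]   # principal invariant subspace
    # Subspace error: root-sum-square of principal angles between U_true and U_hat.
    s = np.clip(np.linalg.svd(U_true.conj().T @ U_hat, compute_uv=False), 0.0, 1.0)
    errs.append(np.sqrt(np.sum(np.arccos(s) ** 2)))
rmse_rad = np.sqrt(np.mean(np.square(errs)))
bound_rad = np.sqrt(p * (n - p)) / np.sqrt(K * snr)
print(f"SVD subspace RMSE ~ {rmse_rad:.3f} rad, quoted bound ~ {bound_rad:.3f} rad")
```

Under these (arbitrary) settings one would expect the empirical SCM error to sit somewhat above the unbiased-estimator bound, consistent with the abstract's observation that the SCM is intrinsically biased and inefficient, while the SVD subspace error should land close to its bound.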
doi:10.1109/tsp.2005.845428