Vectorial approximations of infinite-dimensional covariance descriptors for image classification
Computational Visual Media
The class of symmetric positive definite (SPD) matrices, especially in the form of covariance descriptors (CovDs), has been receiving increased interest for many computer vision tasks. Covariance descriptors offer a compact way of fusing different types of features while remaining robust to measurement variations. CovDs have been successfully applied to various classification problems, including object recognition, face recognition, human tracking, texture categorization, and visual surveillance.
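To make the construction concrete, the following sketch (not taken from the paper) computes a covariance descriptor for a grayscale image. The per-pixel feature vector used here, pixel coordinates, intensity, and absolute gradients, is one common choice in the CovD literature and is assumed for illustration only; the technique works with any per-pixel feature map.

```python
import numpy as np

def covariance_descriptor(image, eps=1e-6):
    """Covariance descriptor (CovD) of a grayscale image.

    Each pixel contributes a 5-dimensional feature vector
    [x, y, I(x, y), |Ix|, |Iy|]; the descriptor is the 5x5
    covariance matrix of these features over all pixels.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    gy, gx = np.gradient(image.astype(float))
    # Stack features as rows: one row per feature, one column per pixel.
    F = np.stack([xs.ravel().astype(float),
                  ys.ravel().astype(float),
                  image.ravel().astype(float),
                  np.abs(gx).ravel(),
                  np.abs(gy).ravel()], axis=0)
    C = np.cov(F)
    # Small ridge term guarantees strict positive definiteness,
    # so the descriptor lies on the SPD manifold.
    return C + eps * np.eye(C.shape[0])
```

The resulting 5x5 matrix is independent of the image size, which is what makes CovDs a compact representation for fusing heterogeneous features.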
As a novel data descriptor, CovDs encode the second-order statistics of features extracted from a finite number of observation points (e.g., the pixels of an image), capturing the pairwise correlations of these features as a means of representation. In general, CovDs are SPD matrices, and it is well known that the space of SPD matrices (denoted by Sym + ) is not a subspace of Euclidean space but a Riemannian manifold with nonpositive curvature. As a consequence, conventional learning methods based on Euclidean geometry are not the optimal choice for CovDs, as shown in several prior studies. To better cope with the Riemannian structure of CovDs, many methods based on non-Euclidean metrics (e.g., affine-invariant metrics, log-Euclidean metrics, Bregman divergences, and Stein metrics) have been proposed over the last few years. In particular, the log-Euclidean metric possesses several desirable properties which are beneficial for classification: (i) it is fast to compute; (ii) it defines a true geodesic on Sym + ; and (iii) it comes up with
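The log-Euclidean metric mentioned above measures the distance between two SPD matrices as the Frobenius norm of the difference of their matrix logarithms. A minimal sketch, using an eigendecomposition-based matrix logarithm (valid for SPD inputs), might look as follows; the function names are illustrative, not from the paper:

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition.

    For SPD S = V diag(w) V^T with w > 0, logm(S) = V diag(log w) V^T.
    """
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance: d(A, B) = ||logm(A) - logm(B)||_F."""
    return np.linalg.norm(spd_logm(A) - spd_logm(B), 'fro')
```

Because the logarithm maps Sym + onto the flat vector space of symmetric matrices, the log-maps of all descriptors can be precomputed once, after which distances reduce to ordinary Euclidean norms; this is the source of the metric's computational efficiency.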