Semi-supervised subspace learning with L2graph

Xi Peng, Miaolong Yuan, Zhiding Yu, Wei Yun Yau, Lei Zhang
2016 Neurocomputing  
Subspace learning aims to learn a projection matrix from a given training set so that a low-dimensional representation of the raw data can be obtained. In practice, the labels of some training samples are available and can be used to improve the discrimination of the low-dimensional representation. In this paper, we propose a semi-supervised learning method inspired by the biological observation that similar inputs have similar codes (SISC), i.e., the same collection of cortical columns in the mammalian visual cortex is always activated by similar stimuli. More specifically, we propose a mathematical formulation of SISC which minimizes the distance among data points with the same label while maximizing the separability between different subjects in the projection space. The proposed method, named semi-supervised L2graph (SeL2graph), has two advantages: 1) unlike classical dimension reduction methods such as principal component analysis, SeL2graph can automatically determine the dimension of the feature space, which considerably reduces the effort of finding an optimal feature dimension for good performance; and 2) it fully exploits the prior knowledge carried by the labeled samples, so the obtained features have higher discrimination and compactness. Extensive experiments show that the proposed method outperforms 7 subspace learning algorithms on 15 data sets with respect to classification accuracy, computational efficiency, and robustness.
doi:10.1016/j.neucom.2015.11.112
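
The abstract does not reproduce the SeL2graph formulation itself. As a rough illustration of the stated goal (pulling same-label samples together while pushing different classes apart in the projection space), the sketch below uses the classical Fisher scatter criterion on the labeled portion of the data. The function name, the small regularization term, and the hand-set n_components are assumptions for illustration; they are not the authors' method, which additionally builds an L2-graph over the data and determines the feature dimension automatically.

```python
import numpy as np
from scipy.linalg import eigh

def fisher_style_projection(X_labeled, y, n_components):
    """Illustrative sketch only: learn a projection that minimizes
    within-class scatter and maximizes between-class scatter,
    mirroring the SISC idea described in the abstract."""
    classes = np.unique(y)
    mean_all = X_labeled.mean(axis=0)
    d = X_labeled.shape[1]
    S_w = np.zeros((d, d))  # within-class scatter (same-label compactness)
    S_b = np.zeros((d, d))  # between-class scatter (subject separability)
    for c in classes:
        Xc = X_labeled[y == c]
        mean_c = Xc.mean(axis=0)
        S_w += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all).reshape(-1, 1)
        S_b += len(Xc) * (diff @ diff.T)
    # Generalized eigenproblem S_b w = lambda * S_w w; keep leading eigenvectors.
    # The ridge term is an assumed regularizer to keep S_w invertible.
    eigvals, eigvecs = eigh(S_b, S_w + 1e-6 * np.eye(d))
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]  # projection matrix P; embed data as X @ P
```

Unlike this sketch, which requires choosing n_components by hand (the very burden the paper attributes to PCA-style methods), SeL2graph is reported to determine the feature dimension automatically and to exploit both labeled and unlabeled samples.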