Folded-concave penalization approaches to tensor completion

Wenfei Cao, Yao Wang, Can Yang, Xiangyu Chang, Zhi Han, Zongben Xu
2015 Neurocomputing  
Existing studies of matrix and tensor completion commonly work within the nuclear norm penalization framework because the resulting convex optimization problem can be solved efficiently. Folded-concave penalization methods, by contrast, have shown remarkable progress in sparse learning problems owing to their attractive practical and theoretical properties. To bring these advantages to tensor data, we propose a new tensor completion model via folded-concave penalization for estimating missing values. Two typical folded-concave penalties, the minimax concave plus (MCP) penalty and the smoothly clipped absolute deviation (SCAD) penalty, are employed in the new model. To solve the resulting nonconvex optimization problem, we develop a local linear approximation augmented Lagrange multiplier (LLA-ALM) algorithm, which combines a two-step LLA strategy to search for a local optimum of the proposed model efficiently. Finally, we provide numerical experiments with phase transitions, synthetic data sets, and real image and video data sets to exhibit the superiority of the proposed model over the nuclear norm penalization method in terms of accuracy and robustness.

The Frobenius norm of a matrix $X$ is defined by $\|X\|_F = (\sum_{i,j} |x_{i,j}|^2)^{1/2}$, and the nuclear norm by $\|X\|_* = \sum_i \sigma_i(X)$, where $\sigma_i(X)$ is the $i$-th singular value of $X$. The inner product on the matrix space is $\langle X, Y \rangle = \sum_{i,j} X_{i,j} Y_{i,j}$. An $N$-order tensor to be recovered is $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, with elements $x_{i_1,\ldots,i_N}$, where $1 \le i_k \le I_k$ for $1 \le k \le N$; the observed $N$-order tensor is denoted $\mathcal{T}$. The "unfold" operation along the $k$-th mode of a tensor $\mathcal{X}$ is defined by $\mathrm{unfold}_k(\mathcal{X}) := X_{(k)} \in \mathbb{R}^{I_k \times (I_1 \cdots I_{k-1} I_{k+1} \cdots I_N)}$, and the inverse operation "fold" by $\mathrm{fold}_k(X_{(k)}) =: \mathcal{X}$. We also denote by $\|\mathcal{X}\|_F = (\sum_{i_1,\ldots,i_N} |x_{i_1,\ldots,i_N}|^2)^{1/2}$ the Frobenius norm of a tensor $\mathcal{X}$, and by $r_i$ the rank of $X_{(i)}$. For more details on tensors, see the review [5].

Tensor completion via the global-relationship approach assumes that the tensor $\mathcal{X}$ is sparse in the sense that each unfolding matrix $X_{(k)}$ is low rank. Mathematically, tensor completion can be formulated as the following optimization problem:
$$\min_{\mathcal{X}} \; \sum_{k=1}^{N} \alpha_k \|X_{(k)}\|_* \quad \text{s.t.} \quad \mathcal{X}_{\Omega} = \mathcal{T}_{\Omega},$$
where the weights satisfy $\alpha_k \ge 0$, $\sum_k \alpha_k = 1$, and $\Omega$ indexes the observed entries.
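The mode-$k$ unfold/fold operations defined above can be sketched with NumPy. This is a minimal illustration under common conventions; the function names are ours, and the exact column ordering of the unfolding varies across the literature without affecting the low-rank structure.

```python
import numpy as np

def unfold(X, k):
    """Mode-k unfolding: arrange the mode-k fibers of X as columns,
    yielding an I_k x (product of the remaining dimensions) matrix."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold: rebuild a tensor of the given shape from its
    mode-k unfolding."""
    # Permuted shape with mode k first, matching unfold's moveaxis.
    permuted = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(permuted), 0, k)
```

For a tensor `X` of shape `(2, 3, 4)`, `unfold(X, 1)` is a `3 x 8` matrix, and `fold(unfold(X, k), k, X.shape)` recovers `X` for every mode `k`.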
doi:10.1016/j.neucom.2014.10.069
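The two folded-concave penalties employed in the model can be sketched as follows. This is a minimal NumPy illustration using the standard MCP and SCAD formulas from the sparse-learning literature; the function names and default parameters (`gamma=2.0`, `a=3.7`) are our choices, not necessarily those of the paper, and applying the penalty to singular values mirrors how a folded-concave surrogate replaces the nuclear norm.

```python
import numpy as np

def mcp(t, lam, gamma=2.0):
    """Minimax concave plus (MCP) penalty, applied elementwise:
    P(t) = lam*|t| - t^2/(2*gamma)  if |t| <= gamma*lam,
           gamma*lam^2/2            otherwise."""
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(t <= gamma * lam,
                    lam * t - t**2 / (2.0 * gamma),
                    0.5 * gamma * lam**2)

def scad(t, lam, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty, elementwise."""
    t = np.abs(np.asarray(t, dtype=float))
    p1 = lam * t                                        # |t| <= lam
    p2 = (2*a*lam*t - t**2 - lam**2) / (2.0 * (a - 1))  # lam < |t| <= a*lam
    p3 = 0.5 * lam**2 * (a + 1)                         # |t| > a*lam
    return np.where(t <= lam, p1, np.where(t <= a * lam, p2, p3))

def folded_concave_spectral(X, lam, penalty=mcp):
    """Folded-concave surrogate of the nuclear norm: the penalty summed
    over the singular values of a matrix (e.g. an unfolding X_(k))."""
    s = np.linalg.svd(X, compute_uv=False)
    return penalty(s, lam).sum()
```

Unlike the nuclear norm, both penalties flatten out for large singular values, so large (signal-bearing) singular values are penalized less, which is the source of the reduced estimation bias that motivates the model.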