Multi-Layer Progressive Face Alignment by Integrating Global Match and Local Refinement
Robust and accurate face alignment remains a challenging task, especially when local noise, illumination variations, and partial occlusions are present in images. Existing local-search and global-match methods often misalign: local search falls into local optima in the absence of global constraints, while global match offers only a limited local representation of the global appearance. To solve these problems, we propose a new multi-layer progressive face alignment method that combines a global match over the whole face with local refinement of a given region,
... where the errors caused by local optima are restricted by the globally matched appearance, and the local misalignments of the global method are avoided by supplementing the representation with local details. Our method consists of the following steps. First, an input image is encoded as a multi-mode Local Binary Pattern (LBP) image, from which the face shape parameters are regressed. Second, local multi-mode histogram of oriented gradients (HOG) features are applied to update each landmark position. Third, the two resulting shapes are combined by a weighted average to produce the final alignment. The contributions of this paper are as follows: (1) shape initialization by applying an affine transformation to the mean shape; (2) face representation by integrating multi-mode information over the whole face or a face region; (3) face alignment by combining handcrafted features with convolutional neural networks (CNNs). Extensive experiments on public datasets show that our method outperforms several state-of-the-art methods that use single-scale features or a single CNN in real environments. On the challenging HELEN dataset, 81.1% of samples achieve a mean error below 8.
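The three stages above can be sketched in code. This is a minimal illustration, not the paper's method: the 8-neighbour LBP is the basic variant rather than the multi-mode encoding, the affine initialization matrix is arbitrary, and the fusion weight `alpha` is a hypothetical fixed value rather than a learned parameter.

```python
# Minimal sketch of the three-stage pipeline: affine shape initialization,
# LBP encoding, and weighted fusion of the global and local shape estimates.

def init_shape(mean_shape, A, t):
    """Contribution (1): initialize the shape by applying the affine
    transform A*p + t to each mean-shape point p = (x, y)."""
    return [(A[0][0] * x + A[0][1] * y + t[0],
             A[1][0] * x + A[1][1] * y + t[1]) for x, y in mean_shape]

def lbp_code(img, y, x):
    """Basic 8-neighbour Local Binary Pattern code for pixel (y, x):
    each neighbour >= the centre contributes one bit to an 8-bit code."""
    c = img[y][x]
    neighbours = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                  img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                  img[y + 1][x - 1], img[y][x - 1]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= c)

def fuse_shapes(global_shape, local_shape, alpha=0.5):
    """Step 3: weighted average of the globally regressed shape and the
    locally refined shape (same landmark ordering assumed)."""
    return [(alpha * gx + (1 - alpha) * lx, alpha * gy + (1 - alpha) * ly)
            for (gx, gy), (lx, ly) in zip(global_shape, local_shape)]

# Toy examples: scale the mean shape by 2 and translate by (3, 4);
# encode the centre pixel of a 3x3 patch; fuse two one-landmark shapes.
start = init_shape([(0.0, 0.0), (1.0, 0.0)], [[2, 0], [0, 2]], (3, 4))
patch = [[6, 4, 6],
         [4, 5, 4],
         [6, 4, 6]]
code = lbp_code(patch, 1, 1)
fused = fuse_shapes([(10.0, 20.0)], [(12.0, 22.0)])
```

In the paper the global and local estimates come from the LBP-based regression and the HOG-based landmark updates respectively; here they are stand-in coordinate lists.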