RePFormer: Refinement Pyramid Transformer for Robust Facial Landmark Detection [article]

Jinpeng Li, Haibo Jin, Shengcai Liao, Ling Shao, Pheng-Ann Heng
2022, arXiv pre-print
This paper presents a Refinement Pyramid Transformer (RePFormer) for robust facial landmark detection. Most facial landmark detectors focus on learning representative image features. However, these CNN-based feature representations are not robust enough to handle complex real-world scenarios, because they ignore the internal structure of landmarks as well as the relations between landmarks and context. In this work, we formulate facial landmark detection as refining landmark queries along pyramid memories. Specifically, a pyramid transformer head (PTH) is introduced to build both homologous relations among landmarks and heterologous relations between landmarks and cross-scale contexts. In addition, a dynamic landmark refinement (DLR) module is designed to decompose landmark regression into an end-to-end refinement procedure, where dynamically aggregated queries are transformed into residual coordinate predictions. Extensive experiments on four facial landmark detection benchmarks and their various subsets demonstrate the superior performance and high robustness of our framework.
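To make the coarse-to-fine idea in the abstract concrete, below is a minimal sketch (in PyTorch, assumed here) of landmark queries attending to multi-scale pyramid memories, with each stage predicting residual coordinate offsets. Module names, dimensions, and the stage structure (`PyramidDecoderStage`, `RefinementPyramidHead`, `num_landmarks=68`, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of query refinement over pyramid memories; not the
# official RePFormer code.
import torch
import torch.nn as nn


class PyramidDecoderStage(nn.Module):
    """One refinement stage: self-attention among landmark queries (homologous
    relations) and cross-attention to one pyramid level (heterologous
    relations), followed by a residual coordinate head."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.offset_head = nn.Linear(dim, 2)  # residual (dx, dy) per landmark

    def forward(self, queries, memory, coords):
        q = self.norm1(queries + self.self_attn(queries, queries, queries)[0])
        q = self.norm2(q + self.cross_attn(q, memory, memory)[0])
        q = self.norm3(q + self.ffn(q))
        coords = coords + self.offset_head(q)  # refine current coordinate estimate
        return q, coords


class RefinementPyramidHead(nn.Module):
    """Refines landmark queries stage by stage, from coarse to fine pyramid levels."""
    def __init__(self, num_landmarks=68, dim=256, num_levels=3):
        super().__init__()
        self.queries = nn.Embedding(num_landmarks, dim)
        self.init_coords = nn.Parameter(torch.rand(num_landmarks, 2))
        self.stages = nn.ModuleList(PyramidDecoderStage(dim) for _ in range(num_levels))

    def forward(self, pyramid):
        # pyramid: list of flattened feature maps (B, H_l * W_l, dim), coarse -> fine
        B = pyramid[0].size(0)
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)
        coords = self.init_coords.unsqueeze(0).expand(B, -1, -1)
        for stage, memory in zip(self.stages, pyramid):
            q, coords = stage(q, memory, coords)
        return coords  # (B, num_landmarks, 2) normalized landmark coordinates


# Usage with dummy multi-scale features (coarse to fine):
feats = [torch.randn(2, s * s, 256) for s in (8, 16, 32)]
print(RefinementPyramidHead()(feats).shape)  # torch.Size([2, 68, 2])
```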
arXiv:2207.03917v1