Deep Speaker Embedding with Long Short Term Centroid Learning for Text-Independent Speaker Verification

Junyi Peng, Rongzhi Gu, Yuexian Zou
Interspeech 2020
Recently, speaker verification systems using deep neural networks have shown their effectiveness on large-scale datasets. The widely used pairwise loss functions only consider discrimination within a mini-batch of data (short-term), while neither the speaker identity information nor the whole training dataset is fully exploited. Thus, these pairwise comparisons may suffer from interference and variance introduced by speaker-unrelated factors. To tackle this problem, we introduce the speaker identity information to form long-term speaker embedding centroids, which are determined by all the speakers in the training set. During training, each centroid dynamically accumulates the statistics of all samples belonging to a specific speaker. Since the long-term speaker embedding centroids are associated with a wide range of training samples, they have the potential to be more robust and discriminative. Finally, these centroids are employed to construct a loss function, named the long short term speaker loss (LSTSL). The proposed LSTSL constrains the distances between samples and the centroid of the same speaker to be compact, while those to the centroids of different speakers are dispersed. Experiments are conducted on VoxCeleb1 and VoxCeleb2. Results on the VoxCeleb1 dataset demonstrate the effectiveness of the proposed LSTSL.
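The mechanism the abstract describes (per-speaker centroids accumulated across mini-batches, with a pull-toward-own-centroid / push-from-other-centroids objective) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the class name, the exponential-moving-average update, and the margin-based inter-speaker term are all assumptions made for the sketch.

```python
import numpy as np

class LongShortTermCentroids:
    """Illustrative sketch of long-term speaker embedding centroids.

    Each centroid accumulates statistics of all samples of one speaker
    across mini-batches (here via an assumed EMA update), then serves as
    an anchor in a center-pull / margin-push loss in the spirit of LSTSL.
    """

    def __init__(self, num_speakers, dim, momentum=0.9):
        self.centroids = np.zeros((num_speakers, dim))
        self.momentum = momentum

    def update(self, embeddings, labels):
        # Dynamically accumulate per-speaker statistics from this mini-batch.
        for spk in np.unique(labels):
            batch_mean = embeddings[labels == spk].mean(axis=0)
            self.centroids[spk] = (self.momentum * self.centroids[spk]
                                   + (1.0 - self.momentum) * batch_mean)

    def loss(self, embeddings, labels, margin=1.0):
        # Distances from every embedding to every centroid: (batch, speakers).
        d = np.linalg.norm(embeddings[:, None, :] - self.centroids[None, :, :],
                           axis=2)
        n = len(labels)
        # Pull: distance to the sample's own speaker centroid (compactness).
        intra = d[np.arange(n), labels]
        # Push: hinge on distances to all other centroids (dispersion).
        mask = np.ones_like(d, dtype=bool)
        mask[np.arange(n), labels] = False
        inter = np.maximum(0.0, margin - d[mask].reshape(n, -1)).mean(axis=1)
        return float((intra + inter).mean())
```

In practice such a loss would typically be combined with a standard classification loss and computed on embeddings from the network's penultimate layer; the EMA momentum trades how quickly a centroid adapts against how much long-term history it retains.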
doi:10.21437/interspeech.2020-2470 dblp:conf/interspeech/PengGZ20