A copy of this work is preserved in the Wayback Machine (PDF, captured 2020); the original URL is also available.
Dynamic Margin Softmax Loss for Speaker Verification
Interspeech 2020
We propose a dynamic-margin softmax loss for training deep speaker embedding neural networks. Our proposal is inspired by the previously reported additive-margin softmax (AM-Softmax) loss. In AM-Softmax, a single constant margin is applied to all training samples. However, the angle between a feature vector and its ground-truth class center is rarely the same across samples, and it also changes during training. It is therefore more reasonable to set a dynamic margin for each
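As a rough illustration of the idea in the abstract, the sketch below contrasts standard AM-Softmax (constant margin m for every sample) with a per-sample dynamic margin. The dynamic schedule used here (margin grows with the target cosine) is a hypothetical placeholder; the paper's actual margin rule is not given in the abstract.

```python
import numpy as np

def am_softmax_loss(cos, labels, s=30.0, m=0.35):
    """AM-Softmax: subtract a constant margin m from the target-class
    cosine similarity before scaling by s, then apply cross-entropy."""
    idx = np.arange(len(labels))
    logits = s * cos.copy()
    logits[idx, labels] = s * (cos[idx, labels] - m)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[idx, labels].mean()

def dynamic_margin_softmax_loss(cos, labels, s=30.0, m_base=0.35, lam=0.1):
    """Dynamic-margin variant: each sample gets its own margin m_i.
    Hypothetical schedule: m_i = m_base + lam * max(cos_target, 0),
    i.e. well-classified samples receive a larger margin."""
    idx = np.arange(len(labels))
    m_i = m_base + lam * np.clip(cos[idx, labels], 0.0, 1.0)
    logits = s * cos.copy()
    logits[idx, labels] = s * (cos[idx, labels] - m_i)
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[idx, labels].mean()
```

Because m_i >= m_base under this schedule, the dynamic-margin loss is at least as large as the constant-margin loss on the same batch, penalizing confident samples more strongly.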
doi:10.21437/interspeech.2020-1106
dblp:conf/interspeech/ZhouWLWLDW20
fatcat:iaqjbpklbjgnln7scxamd3ju7m