Exploring the Use of an Unsupervised Autoregressive Model as a Shared Encoder for Text-Dependent Speaker Verification
Vijay Ravi, Ruchao Fan, Amber Afshan, Huanhua Lu, Abeer Alwan
2020
Interspeech 2020
In this paper, we propose a novel way of addressing text-dependent automatic speaker verification (TD-ASV) by using a shared encoder with task-specific decoders. An autoregressive predictive coding (APC) encoder is pre-trained in an unsupervised manner using both out-of-domain (LibriSpeech, VoxCeleb) and in-domain (DeepMine) unlabeled datasets to learn a generic, high-level feature representation that encapsulates speaker and phonetic content. Two task-specific decoders were trained using labeled datasets to classify speakers (SID) and phrases (PID). Speaker embeddings extracted from the SID decoder were scored using PLDA, and the SID and PID systems were fused at the score level. Our system yields a 51.9% relative improvement in minDCF over the fully supervised x-vector baseline on the cross-lingual DeepMine dataset, although the i-vector/HMM method outperformed the proposed APC encoder-decoder system. Fusing the x-vector/PLDA baseline with the SID/PLDA scores prior to PID fusion further improved performance by 15%, indicating that the proposed approach is complementary to the x-vector system. We show that the proposed approach can leverage large, unlabeled, data-rich domains and learn speech patterns independent of downstream tasks. Such a system can provide competitive performance in domain-mismatched scenarios where the test data come from data-scarce domains.

Index Terms: speaker verification, unsupervised learning, feature representation, shared encoder, domain adaptation.

Previously, the i-vector/PLDA (probabilistic linear discriminant analysis) method [5, 6] and some of its extensions [7, 8] showed promising results on the TD-ASV task. Zeinali et al. introduced the HMM-based i-vector approach [9, 10], using a set of phone-specific HMMs to collect the statistics for i-vector extraction. In [11], Variani et al. replaced conventional i-vectors with deep neural networks (DNNs) that learn speaker-discriminative features (d-vectors). A phonetically aware TD-ASV system was developed to extract i-vectors using (a) output posteriors [12] and (b) bottleneck features [13] as frame alignments, both generated from a DNN trained for automatic speech recognition (ASR). To tackle the short-utterance problem, convolutional neural networks [14] and DNNs [15] were used to map i-vectors extracted from short utterances to the corresponding long-utterance i-vectors. Although these systems
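The core of the unsupervised pre-training step is APC's objective: predict an acoustic frame n steps in the future from the past, so no speaker or phrase labels are needed. Below is a minimal sketch of that objective using a toy linear predictor and random stand-in features (the paper uses a neural encoder on real acoustic features; the array shapes, shift `n`, and the least-squares fit here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, n = 200, 40, 3                 # frames, feature dim, prediction shift (toy values)
X = rng.standard_normal((T, d))      # stand-in for a sequence of acoustic feature frames

past, future = X[:-n], X[n:]         # APC objective: predict x_{t+n} from x_t

# Toy linear "encoder": least-squares fit of W so that past @ W ≈ future.
W, *_ = np.linalg.lstsq(past, future, rcond=None)

# APC trains with an L1 regression loss on the predicted future frames.
l1_loss = np.abs(past @ W - future).mean()

# After pre-training, the encoder's representations (here just the linear
# projection) would be shared by the downstream SID and PID decoders.
features = X @ W
```

The key property this illustrates is that the training signal comes entirely from the signal itself, which is why large unlabeled corpora (LibriSpeech, VoxCeleb, unlabeled DeepMine) can be used for pre-training.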
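The final decision combines the SID/PLDA and PID outputs at the score level. A hedged sketch of such a fusion, assuming simple z-normalization of each system's trial scores and an equal-weight linear combination (the paper does not specify the normalization or weights used; these toy scores are illustrative):

```python
import numpy as np

def z_norm(scores):
    """Zero-mean, unit-variance normalization of a batch of trial scores."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def fuse(sid_scores, pid_scores, w=0.5):
    """Linear score-level fusion of two normalized scoring systems."""
    return w * z_norm(sid_scores) + (1 - w) * z_norm(pid_scores)

sid = [2.1, -0.3, 1.5, -1.2]   # toy PLDA speaker-verification scores per trial
pid = [0.9,  0.1, 0.8, -0.5]   # toy phrase-identification scores per trial
fused = fuse(sid, pid)
```

The same mechanism extends to fusing the x-vector/PLDA baseline with the SID/PLDA scores before PID fusion, which is how the reported additional 15% improvement was obtained.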
doi:10.21437/interspeech.2020-2957
dblp:conf/interspeech/RaviFALA20
fatcat:qbunf4hytrd6vpb3f4hiwhm6bi