A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit <a rel="external noopener" href="https://fjfsdata01prod.blob.core.windows.net/articles/files/623237/pubmed-zip/.versions/1/.package-entries/fpsyg-11-623237/fpsyg-11-623237.pdf?sv=2018-03-28&sr=b&sig=lS9iJqXc3F8JbNa7neAGNMbhNSaXc4M5p%2FGZvj0LCS8%3D&se=2021-04-28T19%3A42%3A46Z&sp=r&rscd=attachment%3B%20filename%2A%3DUTF-8%27%27fpsyg-11-623237.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
Towards Computer-Based Automated Screening of Dementia Through Spontaneous Speech
<span title="2021-02-12">2021</span>
<i title="Frontiers Media SA">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/5r5ojcju2repjbmmjeu5oyawti" style="color: black;">Frontiers in Psychology</a>
</i>
Dementia, a prevalent disorder of the brain, has negative effects on individuals and society. This paper uses the Alzheimer's Dementia Recognition through Spontaneous Speech (ADReSS) Challenge of INTERSPEECH 2020 to classify Alzheimer's dementia. We used (1) VGGish, a deep, pretrained TensorFlow model, as an audio feature extractor, and Scikit-learn classifiers to detect signs of dementia in speech. Three classifiers (LinearSVM, Perceptron, 1NN) were 59.1% accurate, which was 3% above the best-performing baseline models trained
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3389/fpsyg.2020.623237">doi:10.3389/fpsyg.2020.623237</a>
<a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/33643116">pmid:33643116</a>
<a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC7907518/">pmcid:PMC7907518</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/msp5aw5vyzevhgod7veznmd56u">fatcat:msp5aw5vyzevhgod7veznmd56u</a>
</span>
on the acoustic features used in the challenge. We also proposed (2) DemCNN, a new PyTorch raw waveform-based convolutional neural network model that was 63.6% accurate, 7% more accurate than the best-performing baseline linear discriminant analysis model. We discovered that audio transfer learning with a pretrained VGGish feature extractor performs better than the baseline approach using automatically extracted acoustic features. Our DemCNN exhibits good generalization capabilities. Both methods presented in this paper offer progress toward new, innovative, and more effective computer-based screening of dementia through spontaneous speech.
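The first approach can be sketched roughly as follows. This is an illustrative outline, not the paper's actual pipeline: the VGGish embeddings are assumed to have been precomputed elsewhere (VGGish emits one 128-dimensional vector per audio frame), so random vectors stand in for them here, and the data shapes and classifier settings are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import Perceptron
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for precomputed VGGish embeddings:
# one 128-d feature vector per speech recording (shapes are illustrative).
X = rng.normal(size=(40, 128))
# Binary labels: 1 = Alzheimer's dementia, 0 = control.
y = rng.integers(0, 2, size=40)

# The three Scikit-learn classifiers named in the abstract.
classifiers = {
    "LinearSVM": LinearSVC(),
    "Perceptron": Perceptron(),
    "1NN": KNeighborsClassifier(n_neighbors=1),
}

# Fit each classifier on the embeddings and record training accuracy.
scores = {name: clf.fit(X, y).score(X, y) for name, clf in classifiers.items()}
```

In a real run the reported accuracies would come from the ADReSS test split rather than training data; this sketch only shows how pretrained-embedding features plug into off-the-shelf Scikit-learn classifiers.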
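The second approach, DemCNN, operates directly on raw waveforms. The paper does not give the architecture here, so the following is a minimal hedged sketch of what a raw-waveform 1-D convolutional classifier in PyTorch can look like: layer counts, kernel sizes, and the 16 kHz input length are all assumptions, not the published model.

```python
import torch
import torch.nn as nn

class RawWaveformCNN(nn.Module):
    """Illustrative raw-waveform CNN; hyperparameters are assumptions."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Wide first kernel with a large stride, a common choice
            # for learning filterbank-like responses from raw audio.
            nn.Conv1d(1, 16, kernel_size=64, stride=8),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=32, stride=4),
            nn.ReLU(),
            # Pool over time so any input length maps to one vector.
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).squeeze(-1)  # (batch, 32)
        return self.classifier(h)         # (batch, n_classes)

model = RawWaveformCNN()
wave = torch.randn(2, 1, 16000)  # batch of two one-second 16 kHz clips
logits = model(wave)
```

The adaptive pooling layer is what lets a model like this accept spontaneous-speech recordings of varying duration while still ending in a fixed-size binary (dementia vs. control) classification head.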
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210428194217/https://fjfsdata01prod.blob.core.windows.net/articles/files/623237/pubmed-zip/.versions/1/.package-entries/fpsyg-11-623237/fpsyg-11-623237.pdf?sv=2018-03-28&sr=b&sig=lS9iJqXc3F8JbNa7neAGNMbhNSaXc4M5p%2FGZvj0LCS8%3D&se=2021-04-28T19%3A42%3A46Z&sp=r&rscd=attachment%3B%20filename%2A%3DUTF-8%27%27fpsyg-11-623237.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/23/30/23304e74752b604a2b27d5772ee8be2520e506f9.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3389/fpsyg.2020.623237">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="unlock alternate icon" style="background-color: #fb971f;"></i>
frontiersin.org
</button>
</a>
<a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7907518" title="pubmed link">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
pubmed.gov
</button>
</a>