Audio-Based Deep Learning Frameworks for Detecting COVID-19
This paper evaluates a wide range of audio-based deep learning frameworks applied to breathing, cough, and speech sounds for detecting COVID-19. In general, the audio recording inputs are transformed into low-level spectrogram features, which are then fed into pre-trained deep learning models to extract high-level embedding features. Next, the dimension of these high-level embedding features is reduced before fine-tuning, using a Light Gradient Boosting Machine (LightGBM) as a back-end classifier.

doi:10.31219/osf.io/w4prf
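The pipeline described above (raw audio → low-level spectrogram → high-level embeddings → dimensionality reduction → back-end classifier) can be sketched minimally in NumPy. This is an illustrative assumption, not the paper's implementation: the short-time FFT stands in for the spectrogram front-end, an identity mapping stands in for the pre-trained embedding model, PCA via SVD stands in for the reduction step, and the LightGBM fit is omitted.

```python
import numpy as np

def spectrogram(audio, n_fft=256, hop=128):
    # Low-level feature: magnitude spectrogram from windowed short-time FFT.
    frames = [audio[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(audio) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

def reduce_dim(embeddings, k=8):
    # Dimensionality reduction via PCA (SVD on mean-centered features),
    # standing in for the reduction applied before the LightGBM back-end.
    X = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)   # 1 s of synthetic audio at 16 kHz
spec = spectrogram(audio)            # shape: (frames, frequency bins)
embeddings = spec                    # placeholder for pre-trained model embeddings
reduced = reduce_dim(embeddings, k=8)
# `reduced` would then be passed to lightgbm.LGBMClassifier for fine-tuning.
```

In the paper's actual setup, the embedding step would use a pre-trained deep model rather than the raw spectrogram, and the reduced features would train a LightGBM classifier.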