A copy of this work was available on the public web and has been preserved in the Wayback Machine (captured 2017).
File type: application/pdf
Sparse coding of auditory features for machine hearing in interference
2011
2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
A key problem in using the output of an auditory model as the input to a machine-learning system in a machine-hearing application is to find a good feature-extraction layer. For systems such as PAMIR (passive-aggressive model for image retrieval) that work well with a large sparse feature vector, a conversion from auditory images to sparse features is needed. For audio-file ranking and retrieval from text queries, based on stabilized auditory images, we took a multi-scale approach, using vector
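The abstract describes converting dense auditory-image features into a large sparse vector via multi-scale vector quantization, suitable for a learner like PAMIR. A minimal sketch of that general idea (not the authors' code; the codebook sizes, dimensions, and the nearest-centroid assignment are illustrative assumptions) is a bag-of-codewords encoding: assign each dense feature patch to its nearest codeword and count occurrences, yielding a high-dimensional vector with few nonzeros per file.

```python
# Hedged sketch: dense feature patches -> sparse bag-of-codewords vector
# via vector quantization. All sizes below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(patches, codebook):
    """Assign each patch to its nearest codeword; return a count histogram.

    patches:  (n_patches, dim) dense features from one audio file
    codebook: (n_codewords, dim) VQ centroids (e.g. learned offline by k-means)
    """
    # Squared Euclidean distance from every patch to every codeword.
    d2 = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                 # index of closest codeword
    hist = np.bincount(nearest, minlength=len(codebook)).astype(float)
    return hist  # sparse: most codewords are unused by any single file

codebook = rng.standard_normal((256, 16))   # hypothetical 256-word codebook
patches = rng.standard_normal((50, 16))     # 50 dense 16-dim feature patches
code = sparse_code(patches, codebook)
```

With 50 patches quantized against 256 codewords, at most 50 of the 256 histogram entries can be nonzero, which is what makes the representation sparse; a multi-scale variant would concatenate such histograms computed at several patch scales.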
doi:10.1109/icassp.2011.5947698
dblp:conf/icassp/LyonPC11
fatcat:tmzhx26olfh6lkgyych67yzwlu