A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Conditional-Computation-Based Recurrent Neural Networks for Computationally Efficient Acoustic Modelling
2018
Interspeech 2018
The first step in Automatic Speech Recognition (ASR) is a fixed-rate segmentation of the acoustic signal into overlapping windows of fixed length. Although this procedure makes it possible to achieve excellent recognition accuracy, it is far from computationally efficient, in that it may produce a highly redundant signal (i.e., almost identical spectral vectors may span many observation windows), which translates into computational overhead. Reducing this overhead can be very beneficial for
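As an illustration of the fixed-rate segmentation the abstract refers to, the following is a minimal sketch of framing an audio signal into overlapping windows. The sample rate, window length, and hop size used here (16 kHz, 25 ms, 10 ms) are common defaults chosen for illustration, not values stated in the paper.

```python
import numpy as np

def frame_signal(signal, sample_rate=16000, win_ms=25.0, hop_ms=10.0):
    """Fixed-rate segmentation into overlapping windows of fixed length."""
    win_len = int(sample_rate * win_ms / 1000)   # samples per window
    hop_len = int(sample_rate * hop_ms / 1000)   # samples between window starts
    n_frames = 1 + max(0, (len(signal) - win_len) // hop_len)
    frames = np.stack([signal[i * hop_len : i * hop_len + win_len]
                       for i in range(n_frames)])
    # Adjacent frames share most of their samples, so neighbouring spectral
    # vectors are often nearly identical -- the redundancy the paper targets.
    return frames  # shape: (n_frames, win_len)

# Example: one second of audio yields 98 overlapping 400-sample frames.
audio = np.random.randn(16000)
print(frame_signal(audio).shape)  # (98, 400)
```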
doi:10.21437/interspeech.2018-2195
dblp:conf/interspeech/TavaroneB18
fatcat:5q3nbjg3anbsheh222yfc4pk4a