End-to-end Sound Source Separation Conditioned on Instrument Labels

Olga Slizovskaia, Leo Kim, Gloria Haro, Emilia Gomez
ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Can we perform end-to-end sound source separation (SSS) with a variable number of sources using a deep learning model? This paper presents an extension of the Wave-U-Net [1] model that allows end-to-end monaural source separation with a non-fixed number of sources. Furthermore, we propose multiplicative conditioning with instrument labels at the bottleneck of the Wave-U-Net and show its effect on the separation results. This approach can be further extended to other types of conditioning, such as audio-visual SSS and score-informed SSS.
doi:10.1109/icassp.2019.8683800 dblp:conf/icassp/SlizovskaiaKHG19 fatcat:ampbcvbyt5hvdm64bugonz2g3i
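
The abstract describes multiplicative conditioning of the Wave-U-Net bottleneck features with instrument labels. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation: the class name `MultiplicativeConditioning`, the projection layer, and all dimensions are assumptions chosen for the example, and the label vector is assumed to be a multi-hot encoding of the instruments present in the mixture.

```python
# A minimal sketch (not the authors' code) of multiplicative conditioning
# at a network bottleneck. Assumes `bottleneck` holds features of shape
# (batch, channels, time) and `labels` is a multi-hot instrument vector
# of shape (batch, num_instruments).
import torch
import torch.nn as nn


class MultiplicativeConditioning(nn.Module):
    def __init__(self, num_instruments: int, bottleneck_channels: int):
        super().__init__()
        # Project the instrument label vector to one scale per feature channel.
        self.to_scale = nn.Linear(num_instruments, bottleneck_channels)

    def forward(self, bottleneck: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # (batch, channels) -> (batch, channels, 1) so it broadcasts over time.
        scale = self.to_scale(labels).unsqueeze(-1)
        # Element-wise multiplication conditions the features on the labels.
        return bottleneck * scale


# Usage with made-up dimensions:
cond = MultiplicativeConditioning(num_instruments=13, bottleneck_channels=512)
features = torch.randn(2, 512, 64)   # hypothetical bottleneck activations
labels = torch.zeros(2, 13)
labels[:, 3] = 1.0                   # e.g. one instrument marked as present
conditioned = cond(features, labels) # same shape as `features`
```

In this reading, the conditioning simply rescales each bottleneck channel according to the requested instruments, which the decoder can then use to emphasize or suppress source-specific features; the exact projection and label format used in the paper may differ.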