1,054 Hits in 5.2 sec

Musical Instrument Recognition In Polyphonic Audio Using Source-Filter Model For Sound Separation

Toni Heittola, Anssi Klapuri, Tuomas Virtanen
2009 Zenodo  
CONCLUSIONS In this paper, we proposed a source-filter model for sound separation and used it as a preprocessing step for musical instrument recognition in polyphonic music.  ...  In this paper, we present a novel approach to sound separation using a source-filter model in the context of musical instrument recognition.  ... 
doi:10.5281/zenodo.1417376 fatcat:3thyzhcdobeethfnctkpoffnpu
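The source-filter separation approach in this entry is typically built on nonnegative matrix factorization (NMF) of the magnitude spectrogram. As background only, here is a minimal Euclidean-distance NMF sketch with multiplicative updates; the paper's actual model additionally splits each basis into an excitation (source) and a filter part, which is omitted here, and the function name is illustrative:

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Basic NMF with multiplicative updates (Lee & Seung),
    minimizing the Euclidean distance ||V - W H||_F.
    V: nonnegative magnitude spectrogram (freq x frames)."""
    rng = np.random.default_rng(seed)
    n_freq, n_frames = V.shape
    W = rng.random((n_freq, rank)) + eps    # spectral bases
    H = rng.random((rank, n_frames)) + eps  # time-varying gains
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form keeps both factors nonnegative throughout, which is what lets the columns of W be read as per-source spectra.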

Separation of Singing Voice from Music Background

Harshada Burute, P.B. Mane
2015 International Journal of Computer Applications  
Songs are representations of audio signals and musical instruments. An audio signal separation system should be able to identify different audio signals such as speech, background noise and music.  ...  An automatic singing voice separation system is used for attenuating or removing the music accompaniment.  ...  [3] "Adaptation of Bayesian models for single channel source separation and its application to voice/music separation in popular songs" introduces a general formalism for source model adaptation which  ... 
doi:10.5120/ijca2015906806 fatcat:bhx32kytazdqjcllmlt32pcraq

An Overview on Perceptually Motivated Audio Indexing and Classification

Gael Richard, Shiva Sundaram, Shrikanth Narayanan
2013 Proceedings of the IEEE  
In particular, we discuss several different strategies to integrate human perception including 1) the use of generic audition models, 2) the use of perceptually relevant features for the analysis stage  ...  that are perceptually justified either as a component of a hearing model or as being correlated with a perceptual dimension of sound similarity, and 3) the involvement of the user in the audio indexing  ...  voice extraction, [78], [79] for music instrument recognition, [31] for monophonic source detection in complex audio streams, [80], [81] for drum extraction, or [82] for sound event detection  ... 
doi:10.1109/jproc.2013.2251591 fatcat:myywr5bztzeezi7mity4gwnpha

Attention-Based Predominant Instruments Recognition in Polyphonic Music

Lekshmi Reghunath, Rajeev Rajan
2021 Zenodo  
Predominant instrument recognition in polyphonic music is addressed using the score-level fusion of two visual representations, namely, Mel-spectrogram and modgdgram.  ...  We train the network using fixed-length single-labeled audio excerpts and estimate the predominant instruments from variable-length audio recordings.  ...  Acknowledgments The first author would like to acknowledge the CERD of APJ Abdul Kalam Technological University, Trivandrum, Kerala, India for providing a Ph.D. fellowship.  ... 
doi:10.5281/zenodo.5043841 fatcat:ko7bji5dpjavre3vmwqsv2pqnu
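The Mel-spectrogram input used in this entry is produced by projecting an FFT magnitude spectrum through a bank of triangular filters spaced evenly on the Mel scale. A minimal sketch of such a filterbank in NumPy (HTK-style Mel formula; the parameter defaults are illustrative and not taken from the paper):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=22050, n_fft=1024, n_mels=40):
    """Triangular Mel filterbank mapping an FFT magnitude
    spectrum (n_fft//2 + 1 bins) to n_mels Mel bands."""
    n_bins = n_fft // 2 + 1
    # Band edges equally spaced on the Mel scale, 0 Hz .. Nyquist
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bin_pts = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_bins))
    for m in range(1, n_mels + 1):
        lo, ctr, hi = bin_pts[m - 1], bin_pts[m], bin_pts[m + 1]
        for k in range(lo, ctr):           # rising slope
            fb[m - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):           # falling slope
            fb[m - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fb
```

Multiplying this matrix with an STFT magnitude frame (and taking a log) yields one column of the Mel-spectrogram fed to the network.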

Synergies between Musical Source Separation and Instrument Recognition

Juan José Bosch, Jordi Janer
2011 Zenodo  
In the second task, source separation is used to divide the polyphonic audio signal into several streams, given as input to the instrument recognition models.  ...  In the first task, instrument recognition is used to detect the presence of the target instrument in order to apply or bypass the separation algorithms.  ...  In order to deal with this shortcoming, Virtanen and Klapuri [44] proposed the use of a source filter model with NMF for the analysis of polyphonic audio.  ... 
doi:10.5281/zenodo.3753890 fatcat:sbmvvntdwnhp7czpnbpa3fa3p4

Signal Processing for Music Analysis

Meinard Muller, Daniel P. W. Ellis, Anssi Klapuri, Gaël Richard
2011 IEEE Journal on Selected Topics in Signal Processing  
Our goal is to demonstrate that, to be successful, music audio signal processing techniques must be informed by a deep and thorough insight into the nature of music itself.  ...  Music signal processing may appear to be the junior relation of the large and mature field of speech signal processing, not least because many techniques and representations originally developed for speech  ...  Instrument Recognition in Polyphonic Mixtures Instrument recognition in polyphonic music is closely related to sound source separation: recognizing instruments in a mixture allows one to generate time-frequency  ... 
doi:10.1109/jstsp.2011.2112333 fatcat:qvrgekkhzfdkljxn4xbrahg6hu

Jazz Solo Instrument Classification with Convolutional Neural Networks, Source Separation, and Transfer Learning

Juan S. Gómez, Jakob Abeßer, Estefanía Cano
2018 Zenodo  
For the particular use-case of solo instrument recognition in jazz ensemble recordings, we further apply transfer learning techniques to fine-tune a previously trained instrument recognition model for  ...  Our results indicate that both source separation as a pre-processing step and transfer learning clearly improve recognition performance, especially for smaller subsets of highly similar instruments  ...  DATA SETS IRMAS The IRMAS data set (Instrument Recognition in Music Audio Signals) for predominant instrument recognition was first introduced by Bosch et al. in [2].  ... 
doi:10.5281/zenodo.1492481 fatcat:d5yj7hqb6rhrjpbvwodi2xc62u

Introduction to the Special Issue on Music Signal Processing

Meinard Muller, Daniel P. W. Ellis, Anssi Klapuri, Gaël Richard, Shigeki Sagayama
2011 IEEE Journal on Selected Topics in Signal Processing  
Carabias et al. explicitly represent an instrument's timbral characteristics using a source-filter model, where parameters are tied across different pitches.  ...  His Laboratory for Recognition and Organization of Speech and Audio (LabROSA) is concerned with all aspects of extracting high-level information from audio, including speech recognition, music description  ... 
doi:10.1109/jstsp.2011.2165109 fatcat:5cuafjwprnflhgdne332hdyevy

Deep Convolutional Neural Networks for Predominant Instrument Recognition in Polyphonic Music

Yoonchang Han, Jaehun Kim, Kyogu Lee
2017 IEEE/ACM Transactions on Audio Speech and Language Processing  
In this paper, we present a convolutional neural network framework for predominant instrument recognition in real-world polyphonic music.  ...  Using a dataset of 10k audio excerpts from 11 instruments for evaluation, we found that convolutional neural networks are more robust than conventional methods that exploit spectral features and source  ...  His research focuses on signal processing and machine learning techniques applied to music and audio. Lee received a PhD in computer-based music theory and acoustics from Stanford University.  ... 
doi:10.1109/taslp.2016.2632307 fatcat:ykvi5j4cxzbybjjy6acm4oo5du
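The convolutional networks in this entry repeatedly apply 2D convolutions over time-frequency representations. As an illustration of that core operation only (not the authors' architecture), here is a "valid"-mode 2D cross-correlation in plain NumPy:

```python
import numpy as np

def conv2d_valid(x, kernel):
    """'Valid' 2D cross-correlation of a single-channel input
    (e.g. a spectrogram patch) with one kernel, the basic
    building block of a convolutional layer."""
    kh, kw = kernel.shape
    oh = x.shape[0] - kh + 1
    ow = x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out
```

Real frameworks vectorize this and add channels, bias, and nonlinearity, but the sliding window over frequency and time is the same.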

Training Deep Convolutional Networks With Unlimited Synthesis Of Musical Examples For Multiple Instrument Recognition

Rameel Sethi, Noah Weninger, Abram Hindle, Vadim Bulitko, Michael Frishkopf
2018 Proceedings of the SMC Conferences  
Some equipment (GPUs) used in this research were donated by the NVIDIA Corporation. Vadim Bulitko and Abram Hindle are supported by NSERC Discovery Grants.  ...  Polyphonic instrument recognition is an instance of the more general problem of sound-source separation, where the task is to separate individual sources of audio from a given mixture.  ...  More recently, convolutional neural networks have been used for instrument recognition in polyphonic music [10] .  ... 
doi:10.5281/zenodo.1422586 fatcat:fbucn2tetzemlbfnkvjiw6rwk4

Instrument Activity Detection in Polyphonic Music using Deep Neural Networks

Siddharth Gururani, Cameron Summers, Alexander Lerch
2018 Zenodo  
Although instrument recognition has been thoroughly researched, recognition in polyphonic music still faces challenges.  ...  While most research in polyphonic instrument recognition focuses on predicting the predominant instruments in a given audio recording, instrument activity detection represents a generalized problem of  ...  We thank them for their generous support. We would also like to thank Nvidia for supporting us with a Titan Xp awarded as part of the GPU grant program.  ... 
doi:10.5281/zenodo.1492479 fatcat:slictqg3xjeydoziwdzc6vqvky

A novel cepstral representation for timbre modeling of sound sources in polyphonic mixtures

Zhiyao Duan, Bryan Pardo, Laurent Daudet
2014 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
We derive the mathematical relations between these cepstral representations, and compare their timbre modeling performances in the task of instrument recognition in polyphonic audio mixtures.  ...  We propose a novel cepstral representation called the uniform discrete cepstrum (UDC) to represent the timbre of sound sources in a sound mixture.  ...  A good timbre representation would be useful in speaker identification and instrument recognition. It would also be useful for sound source tracking and separation.  ... 
doi:10.1109/icassp.2014.6855057 dblp:conf/icassp/DuanPD14 fatcat:k32cf5wkcjephkynju6vjecirm
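For context on cepstral timbre features: the standard real cepstrum is the inverse DFT of the log-magnitude spectrum, whose low-quefrency coefficients summarize the spectral envelope (timbre). A minimal sketch of that baseline representation; the paper's uniform discrete cepstrum differs in that it is estimated from sparse, source-specific frequency points, which this sketch does not implement:

```python
import numpy as np

def real_cepstrum(x, n_fft=512, eps=1e-10):
    """Real cepstrum of a signal frame: inverse DFT of the
    log-magnitude spectrum. Low-quefrency bins capture the
    spectral envelope; higher ones reflect the excitation."""
    spec = np.fft.rfft(x, n_fft)
    log_mag = np.log(np.abs(spec) + eps)  # eps guards log(0)
    return np.fft.irfft(log_mag, n_fft)
```

Truncating the result to its first few coefficients gives a compact envelope descriptor usable as a timbre feature.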

Drum Sound Recognition for Polyphonic Audio Signals by Adaptation and Matching of Spectrogram Templates With Harmonic Structure Suppression

Kazuyoshi Yoshii, Masataka Goto, Hiroshi G. Okuno
2007 IEEE Transactions on Audio, Speech, and Language Processing  
Index Terms: Drum sound recognition, harmonic structure suppression, polyphonic audio signal, spectrogram template, template adaptation, template matching.  ...  This paper describes a system that detects onsets of the bass drum, snare drum, and hi-hat cymbals in polyphonic audio signals of popular songs.  ...  drum sound used in the polyphonic audio signal of a target musical piece.  ... 
doi:10.1109/tasl.2006.876754 fatcat:tszw5ugnsrcntblq6g5j6mvmzy
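The template matching in this entry can be illustrated by sliding a fixed-size spectrogram template over the mixture spectrogram and scoring each time position by Euclidean distance; local minima then indicate candidate drum onsets. This is a much-simplified stand-in for the paper's template adaptation and harmonic-structure suppression, and the function name is illustrative:

```python
import numpy as np

def match_template(spec, template):
    """Slide a (freq x time) spectrogram template across a
    spectrogram and return the distance at each time offset;
    smaller distances mean better matches."""
    n_f, t_len = template.shape
    assert spec.shape[0] == n_f
    n_pos = spec.shape[1] - t_len + 1
    dists = np.empty(n_pos)
    for i in range(n_pos):
        dists[i] = np.linalg.norm(spec[:, i:i + t_len] - template)
    return dists
```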

Automatic Music Transcription and Audio Source Separation

M. D. Plumbley, S. A. Abdallah, J. P. Bello, M. E. Davies, G. Monti, M. B. Sandler
2002 Cybernetics and systems  
For polyphonic music transcription, with several notes at any time, other approaches can be used, such as a blackboard model or a multiple-cause/sparse coding method.  ...  In particular, we consider the problems of automatic music transcription and audio source separation, which are of particular interest to our group.  ...  Discussion Analysis and separation of musical audio is still in relative infancy at present, compared with e.g. automatic speech recognition.  ... 
doi:10.1080/01969720290040777 fatcat:qgxhssx2lrertblwb3l2ixak4i

Transformer-based ensemble method for multiple predominant instruments recognition in polyphonic music

Lekshmi Chandrika Reghunath, Rajeev Rajan
2022 EURASIP Journal on Audio, Speech, and Music Processing  
Abstract: Multiple predominant instrument recognition in polyphonic music is addressed using decision-level fusion of three transformer-based architectures on an ensemble of visual representations.  ...  The architectural choice of transformers with ensemble voting on Mel-spectro-/modgd-/tempogram has merit in recognizing the predominant instruments in polyphonic music.  ...  Bosch, Ferdinand Fuhrmann, and Perfecto Herrera (Music Technology Group, Universitat Pompeu Fabra) for developing the IRMAS dataset and making it publicly available.  ... 
doi:10.1186/s13636-022-00245-8 fatcat:nvlrbz2y3new7mwquafljfxyie