Detection of a Specific Musical Instrument Note Playing in Polyphonic Mixtures by Extreme Learning Machine and Particle Swarm Optimization
2012
International Journal of Information and Electronics Engineering
In this work, we present a system for detecting a specific musical instrument note in polyphonic mixtures based on a machine learning method. ...
Furthermore, we show that our system can be used as a musical instrument source tracking system by creating a trajectory of its fundamental frequency using the information from the outputs of the system ...
This can be used for detecting and tracking the instrument signal even when polyphonic accompaniments exist in the background. ...
doi:10.7763/ijiee.2012.v2.198
fatcat:7bpislme7rczhhxykqumesgt6a
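The entry above combines an Extreme Learning Machine classifier with Particle Swarm Optimization; the snippet does not give the features or the PSO objective, so the following is only a minimal numpy sketch of the ELM part (random hidden layer, closed-form output weights) on hypothetical frame-level features.

import numpy as np

# Minimal Extreme Learning Machine sketch (assumed setup, not the paper's exact model):
# a random hidden layer followed by a least-squares fit of the output weights.

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100, reg=1e-3):
    """Fit an ELM binary classifier on features X (n_samples, n_features)."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    # Ridge-regularized least squares for the output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta, threshold=0.5):
    """Return 1 where the target instrument note is judged present."""
    return (np.tanh(X @ W + b) @ beta > threshold).astype(int)

# Toy usage: frame-level spectral features labeled 1 when the target note is active.
X_train = rng.normal(size=(200, 40))
y_train = rng.integers(0, 2, size=200).astype(float)
W, b, beta = elm_train(X_train, y_train)
print(elm_predict(X_train[:5], W, b, beta))

In the paper, PSO reportedly tunes the system; here the hidden size and regularization are simply fixed by hand for brevity.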
Genetic Algorithm Approach to Polyphonic Music Transcription
2007
2007 IEEE International Symposium on Intelligent Signal Processing
This task restricted the problem of Multiple F0 Estimation and Tracking to three cases: i) Estimate active fundamental frequencies on a frame-by-frame basis; ii) Track note contours on a continuous time basis (as in audio-to-MIDI); iii) Track timbre on a continuous time basis. ...
and the maximum value of dynamics of other notes existing during the note duration. ...
doi:10.1109/wisp.2007.4447608
fatcat:wyiupwfi4rcdta3rdasvliv65e
Refined Spectral Template Models For Score Following
2013
Proceedings of the SMC Conferences
Acknowledgments This research is supported by the Austrian Science Fund (FWF) under project number Z159, and by the European Union Seventh Framework Programme FP7 (2007-2013), through the project PHENICX ...
Systems for tracking monophonic instruments [3], especially singing voice [4][5][6][7], and finally polyphonic instruments [8][9][10][11][12] have emerged. ...
However, polyphonic scores in general no longer resemble linear sequences of notes. ...
doi:10.5281/zenodo.850366
fatcat:u3iuq7xltzbqjkq22jnfpkrncu
Dynamic Spectral Envelope Modeling for Timbre Analysis of Musical Instrument Sounds
2010
IEEE Transactions on Audio, Speech, and Language Processing
He also holds a degree in piano and music theory from the Madrid Conservatory of Music. ...
His research interests include audio content analysis, music information retrieval, source separation, and sound synthesis. ...
MATRIX FOR THE MAXIMUM ACCURACY OBTAINED WITH LINEAR EI (D = 20) ...
TABLE IV: POLYPHONIC INSTRUMENT RECOGNITION ACCURACY (%) ...
doi:10.1109/tasl.2009.2036300
fatcat:6p4y6txe5zh3xauchrwc4lylpe
A Survey on Query by Singing/Humming
2015
International Journal of Computer Applications
This paper is focused on providing a brief overview of query by singing/humming systems and methods available in the literature. ...
Query by humming systems return a structured list of songs ranked by the similarity between the melody hummed by the user and the intended song. ...
The second module extracts features and builds a database of the polyphonic music; this can be done by using the harmonic structure of the vocals and musical instruments. ...
doi:10.5120/19608-1484
fatcat:amaokrlkabaard5jaik6y2pwym
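The survey above describes a matching module that ranks database songs by melodic similarity to a hummed query. A common, minimal realization of that step is Parsons-code contour matching with edit distance; the database entries and pitch sequences below are hypothetical.

# Hedged sketch of query-by-humming matching via Parsons code (up/down/same contour
# symbols) and edit distance. Real systems also use pitch and rhythm features; the
# database entries here are placeholders.

def parsons(pitches):
    """Reduce a note-pitch sequence (e.g. MIDI numbers) to a U/D/S contour string."""
    return "".join(
        "U" if b > a else "D" if b < a else "S"
        for a, b in zip(pitches, pitches[1:])
    )

def edit_distance(s, t):
    """Classic Levenshtein distance between two contour strings."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

# Hypothetical melody database: song id -> note sequence extracted beforehand.
database = {
    "song_a": [60, 62, 64, 62, 60, 67],
    "song_b": [55, 55, 57, 59, 60, 59],
}

def query(hummed_pitches, db):
    """Return songs ranked by contour similarity to the hummed query."""
    q = parsons(hummed_pitches)
    return sorted(db, key=lambda sid: edit_distance(q, parsons(db[sid])))

print(query([60, 63, 65, 63, 60, 68], database))  # 'song_a' should rank first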
Context-Aware Features for Singing Voice Detection in Polyphonic Music
[chapter]
2013
Lecture Notes in Computer Science
Timbral descriptors traditionally used to discriminate singing voice from accompanying instruments are complemented by new features representing the temporal dynamics of source pitch and timbre. ...
In the present work, observed differences in singing style and instrumentation across genres are used to adapt acoustic features for the singing voice detection task. ...
Conclusions In this paper we have investigated the use of a combination of static and dynamic features for effective detection of lead vocal segments within polyphonic music in a cross-cultural context ...
doi:10.1007/978-3-642-37425-8_4
fatcat:3c6yqg2amrdcdbmumgrn4czc4i
Automatic Singer Identification For Improvisational Styles Based On Vibrato, Timbre And Statistical Performance Descriptors
2014
Proceedings of the SMC Conferences
Acknowledgments This research has been partly funded by the Agencia de Gestió d'Ajuts Universitaris i de Recerca (AGAUR) / Generalitat de Catalunya / 2013-AAD-01874 and the Spanish Ministry of Economy ...
and Competitiveness (SIGMUS project, Subprograma de Proyectos de Investigación Fundamental no Orientada, TIN2012-36650). ...
Parameters are empirically adjusted to: complexity c=15, tolerance t=0.001 and ε=10^-15, using feature set normalization and a linear kernel. ...
doi:10.5281/zenodo.850798
fatcat:a2hwj5oulnby7nxyio7a6c575i
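The parameter line in the entry above (complexity c=15, tolerance t=0.001, feature normalization, linear kernel) reads like an SMO-style SVM configuration. An approximate scikit-learn equivalent, on placeholder descriptor data, might look like the sketch below; the ε round-off setting has no direct counterpart here.

# Approximate re-creation of the reported SVM setup (linear kernel, C=15, tol=0.001,
# normalized features) using scikit-learn; the vibrato/timbre descriptors are random
# placeholders, not the paper's features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))          # 120 excerpts x 30 performance descriptors
y = rng.integers(0, 4, size=120)        # 4 hypothetical singer classes

model = make_pipeline(
    StandardScaler(),                   # the "feature set normalization" step
    SVC(kernel="linear", C=15, tol=1e-3),
)
model.fit(X, y)
print(model.score(X, y))                # training accuracy on the toy data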
EEG-Based Decoding of Auditory Attention to a Target Instrument in Polyphonic Music
2019
2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)
The task we consider here is quite complex as the stimuli used are polyphonic, including duets and trios, and are reproduced using loudspeakers instead of headphones. ...
In this work, we address the problem of EEG-based decoding of auditory attention to a target instrument in realistic polyphonic music. ...
activity tracks dynamic changes in the speech stimulus and can be successfully used to decode selective attention in a multispeaker environment. ...
doi:10.1109/waspaa.2019.8937219
dblp:conf/waspaa/CantisaniER19
fatcat:s7t5oqqnyffcxpcdz74fqdwxia
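The EEG entry above builds on the finding that cortical activity tracks the stimulus and that a linear backward model can decode selective attention. Below is a minimal sketch of that stimulus-reconstruction idea (ridge-regularized decoder over time-lagged EEG, attention assigned to the instrument whose envelope correlates best with the reconstruction); all signals are synthetic and the paper's preprocessing is not reproduced.

import numpy as np

# Sketch of linear stimulus reconstruction for attention decoding: learn a decoder
# that maps time-lagged EEG to the attended instrument's amplitude envelope, then
# pick the instrument whose envelope correlates best with the reconstruction.

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 2000, 16, 10

def lagged(eeg, n_lags):
    """Stack time-lagged copies of the EEG into a design matrix."""
    return np.hstack([np.roll(eeg, k, axis=0) for k in range(n_lags)])

def train_decoder(eeg, attended_env, reg=1.0):
    """Ridge-regularized least squares from lagged EEG to the attended envelope."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ attended_env)

def decode_attention(eeg, envelopes, w):
    """Return the index of the instrument whose envelope best matches the reconstruction."""
    rec = lagged(eeg, n_lags) @ w
    return int(np.argmax([np.corrcoef(rec, env)[0, 1] for env in envelopes]))

# Synthetic example: EEG weakly driven by instrument 0's envelope.
envs = [rng.normal(size=n_samples) for _ in range(3)]
eeg = 0.1 * envs[0][:, None] + rng.normal(size=(n_samples, n_channels))
w = train_decoder(eeg, envs[0])
print(decode_attention(eeg, envs, w))   # expected: 0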
Learning a Latent Space of Multitrack Measures
[article]
2018
arXiv
pre-print
Discovering and exploring the underlying structure of multi-instrumental music using learning-based approaches remains an open problem. ...
We demonstrate that our latent space model makes it possible to intuitively control and generate musical sequences with rich instrumentation (see https://goo.gl/s2N7dV for generated audio). ...
A novel event-based track representation that handles polyphony, micro-timing, dynamics, and instrument selection. ...
arXiv:1806.00195v1
fatcat:aayszaxv2fdipckez27o4qclse
Automatic Characterization of Flamenco Singing by Analyzing Audio Recordings
2013
Zenodo
Similarity among such performances is modeled by applying dynamic time-warping to align automatic transcriptions and extracting performance-related descriptors. ...
Flamenco singing is a highly expressive improvisational art form characterized by its deviation from the Western tonal system, freedom in rhythmic interpretation, and a high amount of melodic ornamentation ...
SVMs ([Cristianini and Shawe-Taylor, 2000]) take advantage of a non-linear attribute mapping that allows them to predict non-linear models (though they remain linear in a higher dimension ...
doi:10.5281/zenodo.3754293
fatcat:eu24vltd4vatlorfncpvblpgku
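The flamenco entry above models similarity between performances by applying dynamic time-warping to automatic transcriptions. A minimal numpy DTW over two toy pitch sequences sketches that alignment step.

import numpy as np

# Minimal dynamic time warping between two transcribed pitch sequences; the cost of
# the optimal path can serve as a performance-similarity score. Toy sequences stand
# in for real transcriptions.

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]  # total alignment cost (lower = more similar)

perf_1 = [64, 64, 65, 67, 65, 64, 62]       # MIDI pitches of one performance
perf_2 = [64, 65, 65, 67, 67, 64, 62, 62]   # the same melody, sung with different timing
print(dtw(perf_1, perf_2))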
Automatic music transcription: challenges and future directions
2013
Journal of Intelligent Information Systems
One way to overcome the limited performance of transcription systems is to tailor algorithms to specific use-cases. ...
However, the performance of transcription systems is still significantly below that of a human expert, and accuracies reported in recent years seem to have reached a limit, although the field is still ...
pieces using RWC MIDI and RWC musical instrument samples to create the polyphonic mixtures used for the multiple-instrument transcription note tracking task of the Music Information Retrieval Evaluation eXchange ...
doi:10.1007/s10844-013-0258-3
fatcat:olfkl6jj7jhl3nxcy6td6qgdj4
Automatic Music Transcription and Audio Source Separation
2002
Cybernetics and systems
For polyphonic music transcription, with several notes at any time, other approaches can be used, such as a blackboard model or a multiple-cause/sparse coding method. ...
Here, the sound sources are one or more instruments playing a piece of music, and we wish to analyze this to identify the instruments that are playing, and when and for how long each note is played. ...
JPB and GM are supported by the OMRAS project, jointly funded by JISC (UK) and NSF (USA). ...
doi:10.1080/01969720290040777
fatcat:qgxhssx2lrertblwb3l2ixak4i
Signal Processing for Music Analysis
2011
IEEE Journal on Selected Topics in Signal Processing
We will examine how particular characteristics of music signals impact and determine these techniques, and we highlight a number of novel music analysis and retrieval tasks that such processing makes possible ...
Music signal processing may appear to be the junior relation of the large and mature field of speech signal processing, not least because many techniques and representations originally developed for speech ...
[128] who proposed a "note-estimation-free" instrument recognition system for polyphonic music. ...
doi:10.1109/jstsp.2011.2112333
fatcat:qvrgekkhzfdkljxn4xbrahg6hu
MIDI-VAE: Modeling Dynamics and Instrumentation of Music with Applications to Style Transfer
[article]
2018
arXiv
pre-print
We introduce MIDI-VAE, a neural network model based on Variational Autoencoders that is capable of handling polyphonic music with multiple instrument tracks, as well as modeling the dynamics of music by incorporating note durations and velocities. ...
Our model is capable of producing harmonic polyphonic music with multiple instruments. It also learns the dynamics of music by incorporating note durations and velocities.
RELATED WORK Gatys et al. ...
arXiv:1809.07600v1
fatcat:r24jyiwgojfsbaitmruy4f7ifm
MIDI-VAE: Modeling Dynamics and Instrumentation of Music with Applications to Style Transfer
2018
Zenodo
We introduce MIDI-VAE, a neural network model based on Variational Autoencoders that is capable of handling polyphonic music with multiple instrument tracks, as well as modeling the dynamics of music by incorporating note durations and velocities. ...
Our model is capable of producing harmonic polyphonic music with multiple instruments. It also learns the dynamics of music by incorporating note durations and velocities.
RELATED WORK Gatys et al. ...
doi:10.5281/zenodo.1492524
fatcat:difejmqzungl3bfwbgcvcgihpi
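The two MIDI-VAE entries above describe a Variational Autoencoder over multi-track MIDI that also models note durations and velocities; the published model uses recurrent encoders/decoders and additional instrument/style heads. The PyTorch sketch below only shows a reparameterized bottleneck over a flattened piano-roll with velocities, as an illustration rather than the authors' architecture.

import torch
import torch.nn as nn

# Bare-bones VAE over a flattened piano-roll (pitch x time, values = note velocities).
# MIDI-VAE itself is more elaborate; this sketch only demonstrates the latent bottleneck.

class TinyPianoRollVAE(nn.Module):
    def __init__(self, n_in=128 * 16, n_latent=32):
        super().__init__()
        self.enc = nn.Linear(n_in, 256)
        self.mu = nn.Linear(256, n_latent)
        self.logvar = nn.Linear(256, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                 nn.Linear(256, n_in), nn.Sigmoid())

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")        # velocity reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return rec + kld

# Toy batch: 8 measures of 128 pitches x 16 time steps, velocities scaled to [0, 1].
x = torch.rand(8, 128 * 16)
model = TinyPianoRollVAE()
recon, mu, logvar = model(x)
print(vae_loss(x, recon, mu, logvar).item())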
Showing results 1 — 15 out of 1,736 results