
Timbre replacement of harmonic and drum components for music audio signals

Tomohiko Nakamura, Hirokazu Kameoka, Kazuyoshi Yoshii, Masataka Goto
2014 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
the timbres of drum sounds with those of another audio signal of polyphonic music (reference).  ...  This paper presents a system that allows users to customize an audio signal of polyphonic music (input), without using musical scores, by replacing the frequency characteristics of harmonic sounds and  ...  CONCLUSION We have described a system that can replace the drum timbres and frequency characteristics of harmonic components in polyphonic audio signals without using musical scores.  ... 
doi:10.1109/icassp.2014.6855052 dblp:conf/icassp/NakamuraKYG14 fatcat:fftzrjbiujernemlmoei3wmtvm
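
Since the entry above hinges on splitting a song into harmonic and drum components and then altering the harmonic frequency characteristics, here is a minimal, hedged sketch of that idea using librosa's generic HPSS plus a crude envelope-transfer step; the file names, the 32-bin smoothing window and the envelope-ratio gain are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np
import scipy.ndimage
import librosa

# Load the input song and a reference whose harmonic "colour" we want to borrow.
y_in, sr = librosa.load("input.wav", sr=None)       # placeholder file names
y_ref, _ = librosa.load("reference.wav", sr=sr)

# Split each signal into harmonic and percussive (drum) components.
S_in = librosa.stft(y_in)
H_in, P_in = librosa.decompose.hpss(S_in)
H_ref, _ = librosa.decompose.hpss(librosa.stft(y_ref))

# Very rough spectral-envelope transfer: smooth the average magnitude spectra
# along frequency and rescale the input's harmonic part toward the reference envelope.
env_in = scipy.ndimage.uniform_filter1d(np.abs(H_in).mean(axis=1), size=32)
env_ref = scipy.ndimage.uniform_filter1d(np.abs(H_ref).mean(axis=1), size=32)
gain = (env_ref / (env_in + 1e-8))[:, None]
H_mod = H_in * gain

# Resynthesise: modified harmonic part plus the original drum part.
y_out = librosa.istft(H_mod + P_in, length=len(y_in))
```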

Drumix: An Audio Player with Real-time Drum-part Rearrangement Functions for Active Music Listening

Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno
2007 IPSJ Digital Courier  
This paper presents a highly functional audio player, called Drumix, that allows a listener to control the volume, timbre, and rhythmic patterns (drum patterns) of bass and snare drums within existing  ...  , a timbre change function that allows them to replace the original timbre of each drum with another selected from a drop-down list, and a drum-pattern editing function that enables them to edit repetitive  ...  To implement Drumix, it is necessary to automatically analyze musical contents of polyphonic audio signals, i.e., (1) estimate the frequency components of bass and snare drum sounds used in a target musical  ... 
doi:10.2197/ipsjdc.3.134 fatcat:nwashpturvcdrlszy5uhyab4xi
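
Drumix's most basic user-facing control is the drum volume; below is a minimal sketch of that idea using a generic harmonic/percussive split as a stand-in for Drumix's dedicated bass/snare drum extraction (file names and the gain value are placeholders).

```python
import librosa
import soundfile as sf

y, sr = librosa.load("song.wav", sr=None)   # placeholder file name

# Stand-in for Drumix's drum-sound extraction: generic harmonic/percussive split.
y_harm, y_perc = librosa.effects.hpss(y)

# "Drum volume" slider: scale the percussive component before remixing.
drum_gain = 0.5                             # 0.0 mutes the drums, >1.0 emphasises them
sf.write("song_drums_scaled.wav", y_harm + drum_gain * y_perc, sr)
```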

Applying Source Separation to Music [chapter]

Bryan Pardo, Antoine Liutkus, Zhiyao Duan, Gaël Richard
2018 Audio Source Separation and Speech Enhancement  
HPSS: Harmonic-Percussive Source Separation As an introductory example for non-parametric audio modeling, consider again the scenario where we want to separate the drums section of a music piece that was  ...  Taking advantage of the harmonic structure of music Harmonic sound sources (e.g., strings, woodwind, brass and vocals) are widely present in music signals and movie sound tracks.  ...  ., and Pardo, B. (2014a)  ... 
doi:10.1002/9781119279860.ch16 fatcat:vqsvomj4kzbttikrum4wgkfwoq
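
The chapter uses HPSS as its introductory non-parametric example; the classic median-filtering formulation (horizontal spectrogram ridges are harmonic, vertical spikes are percussive) is available directly in librosa, as in this short sketch (file name and kernel size are placeholders).

```python
import librosa

y, sr = librosa.load("mix.wav", sr=None)    # placeholder file name
S = librosa.stft(y)

# Median-filtering-based HPSS on the spectrogram.
H, P = librosa.decompose.hpss(S, kernel_size=31)

y_harmonic = librosa.istft(H, length=len(y))
y_drums = librosa.istft(P, length=len(y))
```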

Drum Sound Recognition for Polyphonic Audio Signals by Adaptation and Matching of Spectrogram Templates With Harmonic Structure Suppression

Kazuyoshi Yoshii, Masataka Goto, Hiroshi G. Okuno
2007 IEEE Transactions on Audio, Speech, and Language Processing  
This paper describes a system that detects onsets of the bass drum, snare drum, and hi-hat cymbals in polyphonic audio signals of popular songs.  ...  To make our system robust to the overlapping of harmonic sounds with drum sounds, the latter method suppresses harmonic components in the song spectrogram before the adaptation and matching.  ...  To suppress harmonic components in a musical audio signal, we sequentially perform three operations for each spectrogram segment: estimating F0 of harmonic structure, verifying harmonic components, and  ... 
doi:10.1109/tasl.2006.876754 fatcat:tszw5ugnsrcntblq6g5j6mvmzy
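
The system suppresses harmonic components before adapting and matching drum spectrogram templates; the toy sketch below approximates harmonic suppression with an HPSS percussive mask and reduces matching to a sliding distance score, so the template file, the percentile threshold and the distance measure are all assumptions rather than the paper's adaptation-and-matching procedure.

```python
import numpy as np
import librosa

y, sr = librosa.load("song.wav", sr=None)            # placeholder file name
S = np.abs(librosa.stft(y, hop_length=512))

# Approximate harmonic suppression with the percussive part of an HPSS split.
_, P = librosa.decompose.hpss(S)

# Hypothetical drum template (the paper instead adapts seed templates to the song).
template = np.load("snare_template.npy")             # assumed shape (n_bins, n_frames)
n = template.shape[1]

# Slide the template over the harmonic-suppressed spectrogram and score each position.
scores = [
    -np.linalg.norm(P[:, t:t + n] - template)
    for t in range(P.shape[1] - n)
]
onset_frames = [t for t, s in enumerate(scores) if s > np.percentile(scores, 99)]
onset_times = librosa.frames_to_time(onset_frames, sr=sr, hop_length=512)
```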

Instrument Equalizer For Query-By-Example Retrieval: Improving Sound Source Separation Based On Integrated Harmonic And Inharmonic Models

Katsutoshi Itoyama, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno
2008 Zenodo  
ACKNOWLEDGEMENTS This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research of Priority Areas, Primordial Knowledge Model Core  ...  of Global COE program and CrestMuse Project.  ...  (e.g., boost or cut for bass and treble). Although remixing of stereo audio signals [8] had been reported previously, it tackled control of only harmonic instrument sounds.  ... 
doi:10.5281/zenodo.1417407 fatcat:enbsgjk4vjeepaeehnaigxvrti
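
Once the integrated harmonic and inharmonic models have produced per-instrument stems, the equalizer's boost/cut reduces to per-source gains at remix time; the sketch below covers only that remixing step (the separation model itself is the paper's contribution and is not reproduced; stem names and gains are placeholders).

```python
import numpy as np
import soundfile as sf

# Hypothetical per-instrument stems from a source-separation front end,
# assumed to share length and sample rate; values are linear gains (boost or cut).
stems = {"vocals.wav": 1.0, "piano.wav": 0.8, "drums.wav": 1.5}

mix, sr = None, None
for path, gain in stems.items():
    y, sr = sf.read(path)
    mix = gain * y if mix is None else mix + gain * y

sf.write("remix.wav", mix / np.max(np.abs(mix)), sr)  # normalise to avoid clipping
```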

Signal Processing for Music Analysis

Meinard Muller, Daniel P. W. Ellis, Anssi Klapuri, Gaël Richard
2011 IEEE Journal on Selected Topics in Signal Processing  
Music signal processing may appear to be the junior relation of the large and mature field of speech signal processing, not least because many techniques and representations originally developed for speech  ...  Our goal is to demonstrate that, to be successful, music audio signal processing techniques must be informed by a deep and thorough insight into the nature of music itself.  ...  A total of 144 templates are then learned: 128 for the background music and 16 for the drum component.  ... 
doi:10.1109/jstsp.2011.2112333 fatcat:qvrgekkhzfdkljxn4xbrahg6hu
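
The excerpt mentions learning 128 background-music templates plus 16 drum templates; non-negative matrix factorisation is a common way to learn such spectral templates, sketched below as an assumption about the approach rather than a reproduction of it.

```python
import numpy as np
import librosa

y, sr = librosa.load("song.wav", sr=None)   # placeholder file name
S = np.abs(librosa.stft(y))

# Learn spectral templates (columns of `components`) and their activations via NMF.
# The 128 + 16 split mirrors the background/drum template counts quoted in the excerpt;
# plain NMF does not by itself decide which templates model drums vs. background.
components, activations = librosa.decompose.decompose(S, n_components=128 + 16)
```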

Content-preserving reconstruction of Electronic Music Sessions using freely available musical building-blocks

Pablo Novillo Villegas
2015 Zenodo  
The system analyses every audio track of the audio session to extract features that will help preserve the melodic/tonal, rhythm, and timbre content of the "Seed Song".  ...  We present a system for creating new versions of a given song ("Seed Song").  ...  will be replaced with different audio clips.  ... 
doi:10.5281/zenodo.3732241 fatcat:ymebshwxejccdmmw6spcvrch3y
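
The system extracts per-track features covering melodic/tonal, rhythmic and timbral content; the excerpt does not name the exact features, so chroma, tempogram and MFCC summaries are used below purely as stand-ins.

```python
import numpy as np
import librosa

def track_descriptors(path):
    """Chroma (tonal), tempogram (rhythm) and MFCC (timbre) summaries for one track."""
    y, sr = librosa.load(path, sr=None)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)
    tempogram = librosa.feature.tempogram(y=y, sr=sr).mean(axis=1)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([chroma, tempogram, mfcc])
```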

Pattern Induction and Matching in Music Signals [chapter]

Anssi Klapuri
2011 Lecture Notes in Computer Science  
Methods are described for extracting such patterns from musical audio signals (pattern induction) and computationally feasible methods for retrieving similar patterns from a large database of songs (pattern matching).  ...  This paper discusses techniques for pattern induction and matching in musical audio.  ...  Thanks to Christian Dittmar for the idea of using repeated patterns to improve the accuracy of source separation and analysis.  ... 
doi:10.1007/978-3-642-23126-1_13 fatcat:au4slplpzzbujb6xru2uzpg6tq
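
Pattern matching across a song database needs a computationally feasible distance between short feature sequences; dynamic time warping over chroma, as sketched below, is a generic choice and not necessarily the paper's own matching method.

```python
import librosa

def pattern_distance(y_a, y_b, sr):
    """DTW alignment cost between the chromagrams of two audio patterns."""
    A = librosa.feature.chroma_cqt(y=y_a, sr=sr)
    B = librosa.feature.chroma_cqt(y=y_b, sr=sr)
    D, _ = librosa.sequence.dtw(X=A, Y=B, metric="cosine")
    return D[-1, -1] / (A.shape[1] + B.shape[1])   # length-normalised cost
```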

Low-Latency Instrument Separation in Polyphonic Audio Using Timbre Models [chapter]

Ricard Marxer, Jordi Janer, Jordi Bonada
2012 Lecture Notes in Computer Science  
For the evaluation we used a dataset of multi-track versions of professional audio recordings.  ...  It is based on time-frequency binary masks resulting from the combination of azimuth, phase difference and absolute frequency spectral bin classification and harmonic-derived masks.  ...  Fig. 1: The timbre features c are a variant of the Mel-Frequency Cepstrum Coefficients (MFCC), where the input spectrum is replaced by an interpolated harmonic spectral envelope e_h(f).  ... 
doi:10.1007/978-3-642-28551-6_39 fatcat:s7bgnh4vefgwpfikiermbv5nuu
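
The timbre features are described as MFCC-like coefficients computed on an interpolated harmonic spectral envelope rather than on the raw spectrum; the single-frame sketch below follows that recipe, with f0 estimation via librosa.yin and linear interpolation filled in as assumptions about details the excerpt omits.

```python
import numpy as np
import scipy.fft
import librosa

def harmonic_envelope_mfcc(y, sr, n_fft=2048, n_mels=40, n_coeffs=13):
    """MFCC-like timbre coefficients from an interpolated harmonic envelope (one frame)."""
    f0 = np.nanmedian(librosa.yin(y, fmin=80, fmax=1000, sr=sr))
    spec = np.abs(librosa.stft(y, n_fft=n_fft)).mean(axis=1)
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)

    # Sample the spectrum at harmonics of f0, then interpolate back onto the FFT grid.
    harmonics = np.arange(1, int((sr / 2) // f0) + 1) * f0
    amps = np.interp(harmonics, freqs, spec)
    envelope = np.interp(freqs, harmonics, amps)

    # Mel filterbank + log + DCT, as in ordinary MFCC computation.
    mel = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels) @ envelope
    return scipy.fft.dct(np.log(mel + 1e-8), norm="ortho")[:n_coeffs]
```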

Deep Embeddings and Section Fusion Improve Music Segmentation

Justin Salamon, Oriol Nieto, Nicholas J. Bryan
2021 Zenodo  
Music segmentation algorithms identify the structure of a music recording by automatically dividing it into sections and determining which sections repeat and when.  ...  Through a series of experiments we show that replacing handcrafted features with deep embeddings can lead to significant improvements in multi-level music segmentation performance, and that section fusion  ...  We would like to thank the authors of our baselines, Brian McFee, Daniel P.W. Ellis, and Christopher Tralie for making their code publicly available and reproducible.  ... 
doi:10.5281/zenodo.5624371 fatcat:hcrog7dehzbqjhr4ymmyueaoom
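
The paper swaps handcrafted features for deep embeddings inside a segmentation pipeline; the sketch below shows only the surrounding scaffolding with a hypothetical embed callable, since the embedding model and the section-fusion step are the paper's contributions and are not reproduced.

```python
import librosa

def segment(path, n_sections=8, embed=None):
    """Frame features -> (optional deep) embedding -> agglomerative boundary detection."""
    y, sr = librosa.load(path, sr=None)
    feats = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # handcrafted baseline features
    if embed is not None:                                  # hypothetical embedding model
        feats = embed(feats)
    bounds = librosa.segment.agglomerative(feats, n_sections)
    return librosa.frames_to_time(bounds, sr=sr)
```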

Four-way Classification of Tabla Strokes with Models Adapted from Automatic Drum Transcription

Rohit M A, Amitrajit Bhattacharjee, Preeti Rao
2021 Zenodo  
We start by exploring the use of transfer learning on a state-of-the-art pre-trained multiclass CNN drums model. This is compared with 1-way models trained separately for each tabla stroke class.  ...  We find that the 1-way models provide the best mean f-score while the drums pre-trained and tabla-adapted 3-way models generalize better for the most scarce target class.  ...  Our implementation involves first applying harmonic-percussive separation (HPS) [29] to the audio, which leaves all the attacks in the percussive component and resonant decay portions in the harmonic  ... 
doi:10.5281/zenodo.5624489 fatcat:c5kaz4tasvbhvmipvm466xvhgi
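
The preprocessing described in the excerpt, harmonic-percussive separation so that stroke attacks land in the percussive component, can be sketched as follows (onset detection on the percussive part is added for illustration; the CNN classifiers are not shown, and the file name is a placeholder).

```python
import librosa

y, sr = librosa.load("tabla.wav", sr=None)   # placeholder file name

# Harmonic-percussive separation: attacks go to the percussive part,
# resonant decay portions stay in the harmonic part.
y_harm, y_perc = librosa.effects.hpss(y)

# Detect stroke onsets on the percussive component only.
onsets = librosa.onset.onset_detect(y=y_perc, sr=sr, units="time")
```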

Jazz Solo Instrument Classification with Convolutional Neural Networks, Source Separation, and Transfer Learning

Juan S. Gómez, Jakob Abeßer, Estefanía Cano
2018 Zenodo  
In this paper, we build upon a recently proposed instrument recognition algorithm based on a hybrid deep neural network: a combination of convolutional and fully connected layers for learning characteristic  ...  We systematically evaluate harmonic/percussive and solo/accompaniment source separation algorithms as pre-processing steps to reduce the overlap among multiple instruments prior to the instrument recognition  ...  Mel-spectrograms of the original audio track, the harmonic/percussive components, and the solo/accompaniment components for a jazz excerpt of a saxophone solo played by John Coltrane.  ... 
doi:10.5281/zenodo.1492481 fatcat:d5yj7hqb6rhrjpbvwodi2xc62u
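
Mel-spectrograms of the original track and of its harmonic/percussive components serve as network inputs; a short sketch of that input preparation (the file name is a placeholder and the CNN itself is omitted).

```python
import librosa

y, sr = librosa.load("solo_excerpt.wav", sr=None)   # placeholder file name
y_harm, y_perc = librosa.effects.hpss(y)

def log_mel(sig):
    """Log-scaled mel-spectrogram of one signal."""
    S = librosa.feature.melspectrogram(y=sig, sr=sr, n_mels=128)
    return librosa.power_to_db(S, ref=S.max())

inputs = [log_mel(y), log_mel(y_harm), log_mel(y_perc)]  # candidate CNN input channels
```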

Rhythmic Concatenative Synthesis for Electronic Music: Techniques, Implementation, and Evaluation

Cárthach Ó Nuanáin, Perfecto Herrera, Sergi Jordá
2017 Computer Music Journal  
Our system, RhythmCAT, is proposed as a user-friendly system for generating rhythmic loops that model the timbre and rhythm of an initial target loop.  ...  In this article, we summarize recent research examining concatenative synthesis and its application and relevance in the composition and production of styles of electronic dance music.  ...  The value intuitively implies a discrimination between noisy or inharmonic signals and signals that are harmonic or more tonal.  ... 
doi:10.1162/comj_a_00412 fatcat:dwajoujvlrd2pdfmg726ajiyyy
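
The value discriminating noisy or inharmonic material from tonal material is not named in the excerpt; spectral flatness is one standard descriptor with exactly that behaviour, so the sketch below uses it as an assumption (file name is a placeholder).

```python
import numpy as np
import librosa

y, sr = librosa.load("loop.wav", sr=None)    # placeholder file name

# Spectral flatness: close to 1 for noise-like frames, close to 0 for tonal/harmonic frames.
flatness = librosa.feature.spectral_flatness(y=y)[0]
print("mean flatness:", float(np.mean(flatness)))
```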

Automatic Music Transcription as We Know it Today

Anssi P. Klapuri
2004 Journal of New Music Research  
The aim of this overview is to describe methods for the automatic transcription of Western polyphonic music.  ...  Only pitched musical instruments are considered: recognizing the sounds of drum instruments is not discussed.  ...  From the survey's comparison table: Goto & Muraoka (1995, 1997), audio input, meter analysis, onset-component extraction with multiple tracking agents and an IOI histogram, evaluated on 85 pieces of pop music with and without drums in 4/4 time; Scheirer (1998), audio  ... 
doi:10.1080/0929821042000317840 fatcat:fwg3vtwerfcv5g5f43win2zvvy
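
The comparison-table row above mentions IOI (inter-onset-interval) histograms used for meter analysis; a small sketch of how such a histogram can be computed from detected onsets (file name and bin width are placeholders).

```python
import numpy as np
import librosa

y, sr = librosa.load("piece.wav", sr=None)   # placeholder file name
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

# Inter-onset intervals and their histogram; peaks hint at the beat/tatum period.
iois = np.diff(onsets)
hist, edges = np.histogram(iois, bins=np.arange(0.0, 2.0, 0.02))
```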

QuiKo: A Quantum Beat Generation Application [article]

Scott Oshiro
2022 arXiv   pre-print
It combines existing quantum algorithms with data encoding methods from quantum machine learning to build drum and audio sample patterns from a database of audio tracks.  ...  Measurements of the quantum circuit are then taken providing results in the form of probability distributions for external music applications to use to build the new drum patterns.  ...  We will gather a collection of audio samples (i.e. single drum hits and long melodic and harmonic patterns and progressions).  ... 
arXiv:2204.04370v2 fatcat:jd4ru3dbovbcvemm2mz3i3juqu
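
QuiKo reads measurement statistics of a quantum circuit back as probability distributions that drive drum-pattern construction; the Qiskit sketch below only illustrates that measurement-to-distribution step with a placeholder feature encoding, not QuiKo's actual circuits or encoding scheme.

```python
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Placeholder "audio features" mapped to rotation angles (QuiKo's encoding differs).
features = np.array([0.2, 0.7, 0.5])
qc = QuantumCircuit(3)
for q, x in enumerate(features):
    qc.ry(np.pi * x, q)
qc.cx(0, 1)
qc.cx(1, 2)
qc.measure_all()

# Measurement counts become a probability distribution over 3-bit patterns,
# which a downstream music application could map to drum-hit placements.
sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
probs = {state: n / 1024 for state, n in counts.items()}
```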
Showing results 1-15 of 799 results