Evaluating Low-Level Features for Beat Classification and Tracking
2007
2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07
In this paper, we address the question of which low-level acoustical features are the most adequate for identifying music beats computationally. ...
We compare two ways of evaluating features: their accuracy in a song-specific classification task (classifying beats vs nonbeats) and their performance as a front-end to a beat tracking system. ...
Acknowledgments This research was partly funded by the projects S2S² and Interfaces2Music. Thanks to Anssi Klapuri, Stephen Hainsworth, Giorgos Emmanouil, Matthew Davies and Juan Bello. ...
doi:10.1109/icassp.2007.367318
dblp:conf/icassp/GouyonDW07
fatcat:ewklyj4dwjf2tk54fsxoirntue
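The entry above scores low-level features by how well they separate beat frames from non-beat frames within a single song. A minimal sketch of that evaluation idea, assuming reference beat annotations and using librosa with scikit-learn; the feature, tolerance window, and classifier are illustrative choices, not the authors' setup:

```python
# Sketch: score a low-level feature by how well it separates beat frames
# from non-beat frames within one song (hypothetical setup).
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def beat_vs_nonbeat_accuracy(audio_path, beat_times, hop_length=512):
    y, sr = librosa.load(audio_path)
    # Candidate low-level feature: spectral-flux-style onset strength.
    feat = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop_length)
    frame_times = librosa.frames_to_time(np.arange(len(feat)), sr=sr,
                                         hop_length=hop_length)
    # Label frames within 70 ms of an annotated beat as "beat" (assumed tolerance).
    labels = np.zeros(len(feat), dtype=int)
    for t in beat_times:
        labels[np.abs(frame_times - t) < 0.07] = 1
    # Song-specific classifier: cross-validated accuracy is the feature score.
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, feat.reshape(-1, 1), labels, cv=5).mean()
```

The resulting accuracy can then be compared across candidate features, or against the same feature's performance as a front end to a beat tracker.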
Beat Critic: Beat Tracking Octave Error Identification By Metrical Profile Analysis
2010
Zenodo
Thanks are due to Geoffroy Peeters for provision of the beat-tracker and onset detection code. ...
EVALUATION Two evaluation strategies for octave errors are possible: 1) evaluation of beat tracking, where the phase of the beat tracking is correct, but the beat frequency is twice the true rate and 2 ...
CONCLUSIONS A method for the detection of octave errors in beat tracking has been proposed and evaluated. ...
doi:10.5281/zenodo.1417891
fatcat:euzbxjuc3be6dng2de6jlv36fu
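The octave error described above has the correct phase but double (or half) the true beat frequency. A hedged sketch of a simple ratio check against reference beats; this is not the metrical-profile analysis the paper proposes, and the tolerance is an assumption:

```python
# Sketch: flag a tempo octave error by comparing median inter-beat intervals
# of estimated and reference beat sequences (times in seconds).
import numpy as np

def octave_error(estimated_beats, reference_beats, tol=0.1):
    """Return +1 if tracking at double the true rate, -1 if at half,
    0 otherwise (within relative tolerance `tol`)."""
    est_period = np.median(np.diff(estimated_beats))
    ref_period = np.median(np.diff(reference_beats))
    ratio = ref_period / est_period
    if abs(ratio - 2.0) < 2.0 * tol:
        return +1   # estimated beats twice as fast as the reference
    if abs(ratio - 0.5) < 0.5 * tol:
        return -1   # estimated beats half as fast as the reference
    return 0
```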
Enhancing downbeat detection when facing different music styles
2014
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
We estimate the time signature by examining the similarity of frames at the beat level. The features are selected through a linear SVM model or a weighted sum. ...
The whole system is evaluated on five different datasets of various musical styles and shows improvement over the state of the art. ...
CONCLUSION Evaluation results show that using complementary high level musically inspired features is efficient for downbeat detection when facing different music styles. ...
doi:10.1109/icassp.2014.6854177
dblp:conf/icassp/DurandDR14
fatcat:tym5inplcbbcffzn5kj4wwgbia
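The snippet above mentions estimating the time signature from the similarity of frames at the beat level. A rough sketch of that general idea, deciding between 3/4 and 4/4 by comparing beat-synchronous self-similarity at lags of three and four beats; the chroma feature and cosine similarity are stand-ins, not the paper's musically inspired features or learned weighting:

```python
# Sketch: pick 3/4 vs 4/4 by checking at which beat lag a beat-synchronous
# feature sequence is most self-similar. Illustrative only.
import numpy as np
import librosa

def estimate_meter(y, sr, beat_frames):
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    # Average the chromagram between consecutive beats.
    beat_chroma = librosa.util.sync(chroma, beat_frames, aggregate=np.mean)

    def lag_similarity(lag):
        a, b = beat_chroma[:, :-lag], beat_chroma[:, lag:]
        num = np.sum(a * b, axis=0)
        den = np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0) + 1e-9
        return np.mean(num / den)   # mean cosine similarity at this lag

    return 3 if lag_similarity(3) > lag_similarity(4) else 4
```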
Audio Signal Representations for Indexing in the Transform Domain
2010
IEEE Transactions on Audio, Speech, and Language Processing
We show that this new audio codec allows efficient transform-domain audio indexing for 3 different applications, namely beat tracking, chord recognition and musical genre classification. ...
MDCT for AAC, or hybrid PQF/MDCT for MP3) have a sufficient time resolution for some rhythmic features, but a poor frequency resolution, which prevents their use in tonality-related applications. ...
ACKNOWLEDGMENT The authors would like to thank Matthew Davies from QMUL and Juan P. Bello from NYU for providing their Matlab code. ...
doi:10.1109/tasl.2009.2025099
fatcat:56n3nzdzjzaqjjfisnckmzrw3y
Capturing the Temporal Domain in Echonest Features for Improved Classification Effectiveness
[chapter]
2014
Lecture Notes in Computer Science
, and can be effectively used for large scale music genre classification. ...
We evaluate the performance on four traditional music genre classification test collections and compare them to state of the art audio descriptors. ...
From these low-level features some mid- and high-level audio descriptors are derived (e.g. tempo, key, time signature, etc.). ...
doi:10.1007/978-3-319-12093-5_13
fatcat:wycyqeczwvgdxm3vy22bxuh4au
Multi-Task Learning of Tempo and Beat: Learning One to Improve the Other
2019
Zenodo
In this paper, we propose a multi-task learning approach for simultaneous tempo estimation and beat tracking of musical audio. ...
The multi-task learning is achieved by globally aggregating the skip connections of a beat tracking system built around temporal convolutional networks, and feeding them into a tempo classification layer ...
for beat [4] and joint beat and downbeat tracking [6]. ...
doi:10.5281/zenodo.3527849
fatcat:w7a3sjlvord3zacq2w3qbqumrq
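The entry above describes aggregating the skip connections of a TCN beat tracker and feeding them into a tempo classification layer. A minimal PyTorch sketch of that wiring; the layer sizes, kernel width, and number of tempo classes are assumptions rather than the published configuration:

```python
# Sketch: per-frame beat activations from a dilated-convolution stack, plus a
# tempo class predicted from globally pooled skip connections.
import torch
import torch.nn as nn

class BeatTempoTCN(nn.Module):
    def __init__(self, n_features=81, channels=16, n_layers=8, n_tempi=300):
        super().__init__()
        self.blocks = nn.ModuleList()
        in_ch = n_features
        for i in range(n_layers):
            self.blocks.append(nn.Sequential(
                nn.Conv1d(in_ch, channels, kernel_size=5,
                          dilation=2 ** i, padding=2 * 2 ** i),
                nn.ELU()))
            in_ch = channels
        self.beat_head = nn.Conv1d(channels, 1, kernel_size=1)
        self.tempo_head = nn.Linear(channels, n_tempi)

    def forward(self, x):                      # x: (batch, n_features, frames)
        skips = []
        for block in self.blocks:
            x = block(x)
            skips.append(x)
        skip_sum = torch.stack(skips).sum(dim=0)              # aggregate skips
        beat = torch.sigmoid(self.beat_head(x)).squeeze(1)    # per-frame beats
        tempo = self.tempo_head(skip_sum.mean(dim=-1))        # global pooling
        return beat, tempo
```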
Using voice suppression algorithms to improve beat tracking in the presence of highly predominant vocals
2013
2013 IEEE International Conference on Acoustics, Speech and Signal Processing
Finally, we evaluate all the pairwise combinations between beat tracking and voice suppression methods. ...
Then, we use seven state-of-the-art audio voice suppression techniques and a simple low pass filter to improve beat tracking estimations in the latter case. ...
and allow a better mid-level representation for beat tracking. ...
doi:10.1109/icassp.2013.6637607
dblp:conf/icassp/ZapataG13
fatcat:5mxeqofbhndwvpwwxum6ajx3mu
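Among the front ends compared in the entry above, the simplest is a low-pass filter applied before beat tracking. A small sketch of that baseline with scipy and librosa; the cutoff frequency is an assumed value for illustration, not the paper's setting:

```python
# Sketch: low-pass the signal to attenuate predominant vocals, then track beats.
import librosa
from scipy.signal import butter, sosfiltfilt

def lowpass_then_track(audio_path, cutoff_hz=400.0):
    y, sr = librosa.load(audio_path)
    # 4th-order Butterworth low-pass (zero-phase via forward-backward filtering).
    sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
    y_lp = sosfiltfilt(sos, y)
    tempo, beat_frames = librosa.beat.beat_track(y=y_lp, sr=sr)
    return tempo, librosa.frames_to_time(beat_frames, sr=sr)
```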
Content-Based Music Information Retrieval: Current Directions and Future Challenges
2008
Proceedings of the IEEE
Some of the music collections available are approaching the scale of ten million tracks and this has posed a major challenge for searching, retrieving, and organizing music content. ...
retrieval) and other cues (e.g., music notation and symbolic representation), and identifies some of the major challenges for the coming years. ...
the other low-level features. ...
doi:10.1109/jproc.2008.916370
fatcat:ynpw7lyf6fchfdzzl5jll22cee
Cover song detection: From high scores to general classification
2010
2010 IEEE International Conference on Acoustics, Speech and Signal Processing
The input to the system is a reference track and test track, and the output from the system is a binary classification of whether the reference/test pair is either from a reference/cover or reference/non-cover ...
The system differs from state-of-the-art detectors by calculating multiple input features, performing a novel type of test song normalization in order to combat "impostor" tracks, and performing ...
We also experimented with mixing tempo levels (i.e. using 240 beats/minute for the reference track and 120 beats/minute for the test track), but including these cross-tempos resulted in no performance ...
doi:10.1109/icassp.2010.5496214
dblp:conf/icassp/RavuriE10
fatcat:tyfmzodxmnfrripf5hapch4h64
Improving Rhythmic Similarity Computation By Beat Histogram Transformations
2009
Zenodo
An important pre-requisite for these search methods is the semantic classification, which requires suitable low- and mid-level features. ...
These techniques require specialized classifiers and the beat histogram cannot be used as feature in conjunction with other low-level features. ...
doi:10.5281/zenodo.1417635
fatcat:kpiuo7dzqvbxxhsefw35fdvx6e
Joint Beat And Downbeat Tracking With Recurrent Neural Networks
2016
Zenodo
We use the recently published beat and downbeat annotations for the GTZAN dataset, the Klapuri set, and the SMC set (built specifically to comprise hard-to-track musical pieces) for evaluation. ...
Results & Discussion Since our system jointly tracks beats and downbeats, we compare with both downbeat and beat tracking algorithms. First of all, we evaluate on completely unseen data. ...
doi:10.5281/zenodo.1415836
fatcat:5u3avicz3bdtlgnw7umg7p55dm
Ibt: A Real-Time Tempo And Beat Tracking System
2010
Zenodo
SYSTEM DESCRIPTION
Audio Feature Extraction According to recent comparative studies evaluating alternative onset detection functions [5] and the accuracy of several low-level features applied to beat ...
Although this paper does not provide experiments with respect to the usefulness of diverse low-level features as input to beat tracking [9], [2], it should be noted that a particularity of the proposed ...
doi:10.5281/zenodo.1416469
fatcat:i4m3seayyjck3fiolidzvjbdku
Audio Features Dedicated to the Detection of Four Basic Emotions
[chapter]
2015
Lecture Notes in Computer Science
We examined the effect of low-level, rhythm and tonal features on the accuracy of the constructed classifiers. ...
We selected features and found sets of features that were the most useful for detecting individual emotions. ...
In the selected features, we have a representative of low-level (L), rhythm (R) and tonal (T) features. ...
doi:10.1007/978-3-319-24369-6_49
fatcat:rp7ozrfmhbdg7a3gg22xdhs3m4
In Search of Automatic Rhythm Analysis Methods for Turkish and Indian Art Music
2014
Journal of New Music Research
We define and describe three relevant rhythm annotation tasks for these cultures: beat tracking, meter estimation, and downbeat detection. ...
We then evaluate several methodologies from the state of the art in Music Information Retrieval (MIR) for these tasks, using manually annotated datasets of Turkish and Indian music. ...
We evaluate beat tracking on the Turkish low-level-annotated dataset and the Carnatic low-level-annotated dataset introduced in Section 3.2, using KLA and ELL beat tracking algorithms. ...
doi:10.1080/09298215.2013.879902
fatcat:ehr432lz7bdvdpnuywtk542pe4
Audio-Based Music Classification With A Pretrained Convolutional Network
2011
Zenodo
We then trained and evaluated the network as an MLP with backpropagation, for each of the classification tasks. ...
Network Layout The input of the network consists of beat-aligned chroma and timbre features for a given track, so there are 24 input dimensions in total. ...
doi:10.5281/zenodo.1415187
fatcat:txgwoju3lbfbpc6eanolqdqmmi
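The snippet above specifies a 24-dimensional input of beat-aligned chroma and timbre features. A sketch of assembling such a representation with librosa, using 12 MFCCs as a stand-in for the Echo Nest timbre dimensions the paper works with:

```python
# Sketch: build a (24 x n_beats) matrix of beat-aligned chroma (12 dims) plus
# a 12-dimensional timbre stand-in (MFCCs here, not Echo Nest timbre vectors).
import numpy as np
import librosa

def beat_aligned_features(audio_path):
    y, sr = librosa.load(audio_path)
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)          # (12, frames)
    timbre = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12)     # (12, frames)
    # Trim to a common frame count, then median-aggregate between beats.
    n = min(chroma.shape[1], timbre.shape[1])
    chroma_sync = librosa.util.sync(chroma[:, :n], beat_frames,
                                    aggregate=np.median)
    timbre_sync = librosa.util.sync(timbre[:, :n], beat_frames,
                                    aggregate=np.median)
    return np.vstack([chroma_sync, timbre_sync])              # (24, n_beats)
```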