Segmentation, Clustering, And Display In A Personal Audio Database For Musicians
2011
Zenodo
We wish to thank Bhiksha Raj for suggestions and comments on this work, and the Chinese Music Institute of Peking University for providing recordings of rehearsal for analysis. ...
CONCLUSIONS We have presented a system for automated management of a personal audio database for practicing musicians. ...
Section 3 describes how to organize the segments. Section 4 describes a two-way interface to the audio. Figure 1. System diagram for a musician's personal audio database. ...
doi:10.5281/zenodo.1418184
fatcat:y7fik2pes5e2betuzjhz56zv7u
Omaxist Dialectics: Capturing, Visualizing And Expanding Improvisations
2012
Zenodo
OMax is improvisation software based on a graph representation that encodes the pattern repetitions and structures of a sequence, built incrementally and in real time from a live MIDI or audio source. ...
A novel visualization is proposed, which displays the current state of the learnt knowledge and allows one to notice, both on the fly and a posteriori, points of musical interest and higher-level structures ...
In the case of a monophonic MIDI input, segmentation is trivial: one unit per note. ...
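As a concrete illustration of that trivial case, here is a minimal sketch of one-unit-per-note segmentation, assuming the `mido` library and a monophonic track (neither is specified in the excerpt):

```python
# Minimal sketch of the trivial monophonic-MIDI segmentation the
# excerpt mentions: one unit per note. Assumes the `mido` library
# and a monophonic track (no overlapping notes).
import mido

def note_units(path):
    """Yield (pitch, start_time, end_time) units, one per note."""
    units, t, start, pitch = [], 0.0, None, None
    for msg in mido.MidiFile(path):        # iteration yields delta seconds
        t += msg.time
        if msg.type == "note_on" and msg.velocity > 0:
            start, pitch = t, msg.note
        elif msg.type in ("note_off", "note_on") and pitch is not None:
            units.append((pitch, start, t))  # close the current note unit
            start = pitch = None
    return units
```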
doi:10.5281/zenodo.1178326
fatcat:4qsyrzcpdber7phaox25w3tlci
Dunya: A System To Browse Audio Music Collections Exploiting Cultural Context
2013
Zenodo
Koduri, and Sankalp Gulati for contributing to the descriptions of their respective research that is being used in the CompMusic project. ...
Pere Esteve assisted with the design and development of the graphical interface of the Dunya Browser. ...
Our audio processing is performed on a cluster of servers. Each server in the cluster has access to a shared NFS disk that contains the entire audio corpus. ...
doi:10.5281/zenodo.1417354
fatcat:vxiruwoqovcy7gmq66ynwpgd3m
Querying Improvised Music: Do You Sound Like Yourself?
2010
Zenodo
Viewed from this perspective, systems based on audio features for classifying and clustering musical tracks [10, 18] or segments [1] are Query-by-Example systems, just as are more modern implementations ...
matches); in principle, given a 5-second audio snippet as a query, we consider all similarly sized segments in the database (up to some reasonable granularity) as potential matches. ...
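A minimal sketch of that segment-level query-by-example idea, assuming MFCC features, mean pooling, cosine similarity, and a fixed hop between candidate windows (none of these choices are stated in the excerpt):

```python
# Minimal sketch of segment-level query-by-example retrieval.
# Assumptions (not from the paper): MFCC features, mean pooling,
# cosine similarity, and a fixed hop between candidate segments.
import numpy as np
import librosa

def mfcc_features(path, sr=22050, n_mfcc=20):
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)

def query_by_example(query_path, db_paths, hop=10):
    q = mfcc_features(query_path)
    q_vec = q.mean(axis=1)          # mean-pool query frames to one vector
    q_len = q.shape[1]              # query length in frames (~5 s of audio)
    matches = []
    for path in db_paths:
        feats = mfcc_features(path)
        # Slide a query-sized window over the track: every similarly sized
        # segment is a candidate match, up to the chosen hop granularity.
        for start in range(0, max(1, feats.shape[1] - q_len), hop):
            seg_vec = feats[:, start:start + q_len].mean(axis=1)
            cos = np.dot(q_vec, seg_vec) / (
                np.linalg.norm(q_vec) * np.linalg.norm(seg_vec) + 1e-9)
            matches.append((path, start, cos))
    return sorted(matches, key=lambda m: -m[2])  # best matches first
```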
doi:10.5281/zenodo.1417226
fatcat:lvnfbpgqkje55bh5skxw77ctz4
A Survey on Autonomous Techniques for Music Classification based on Human Emotions Recognition
2020
International Journal of Computing and Digital Systems
It helps psychologists in the treatment of patients. It also helps musicians and artists to work on specific types of music and to classify them. ...
In this article, the basic steps involved in ATMC, such as database collection, preprocessing, database analysis, feature extraction, classification, and evaluation parameters, are explained, and comprehensive ...
Emotion is the energy that sets a person in motion, and music is the energy that induces emotions in humans. ...
doi:10.12785/ijcds/090308
fatcat:juy4jdgzyndhzaap7o4dzrwkti
Tunepal - Disseminating A Music Information Retrieval System To The Traditional Irish Music Community
2010
Zenodo
ACKNOWLEDGEMENTS We are grateful for the support of the School of Computing at the Dublin Institute of Technology, which funds this work. ...
Musicians playing traditional music have a personal repertoire of up to a thousand tunes. ...
Firstly, the use of digital audio formats and digital downloading of music has meant that personal music collections do not contain this biographical data, and many musicians are unfamiliar with the history ...
doi:10.5281/zenodo.1416311
fatcat:z2oadm5wtne2dkshudmivjtnpa
A Survey on Visualizations for Musical Data
2020
Computer graphics forum (Print)
Khulusi, R.; Kusnick, J.; Meinecke, C.; Gillmann, C.; Focht, J.; Jänicke, S. ...
Further, we thank those who helped us proofread the survey and made themselves available for discussions about the content of the paper. ...
Last but not least, we are grateful to the 25 people who gave us permission to use pictures of their work in this survey. ...
doi:10.1111/cgf.13905
fatcat:nrkzgtb6ajfqnnuru37fflgm6q
Visualization of Deep Audio Embeddings for Music Exploration and Rediscovery
2022
Zenodo
In this paper, we present a novel web interface to visualize music collections using the audio embeddings extracted from music tracks. ...
We conduct a user study to analyze the effectiveness of different visualization strategies on the participants' personal music collections, particularly for playlist creation and music library navigation ...
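A minimal sketch of the projection step such an interface relies on, assuming precomputed per-track embeddings and t-SNE as one possible reducer (the paper's actual embedding model and projection are not given in the excerpt):

```python
# Minimal sketch: project per-track audio embeddings to 2-D map
# coordinates for a browsable music-collection view.
# Assumption: `embeddings` is an (n_tracks, dim) array computed
# elsewhere (e.g., by a pretrained audio model).
import numpy as np
from sklearn.manifold import TSNE

def embedding_map(embeddings: np.ndarray) -> np.ndarray:
    """Return (n_tracks, 2) coordinates for plotting a track map."""
    tsne = TSNE(n_components=2, perplexity=15.0, init="pca")
    return tsne.fit_transform(embeddings)

# Usage: coords = embedding_map(np.random.rand(200, 512).astype("float32"))
# Nearby points on the map correspond to tracks with similar embeddings.
```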
Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068. ...
doi:10.5281/zenodo.6573533
fatcat:u7ilk6npjzc7vij5edad3366ie
Visualization of Deep Audio Embeddings for Music Exploration and Rediscovery
2022
Zenodo
In this paper, we present a novel web interface to visualize music collections using the audio embeddings extracted from music tracks. ...
We conduct a user study to analyze the effectiveness of different visualization strategies on the participants' personal music collections, particularly for playlist creation and music library navigation ...
Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068. ...
doi:10.5281/zenodo.6798268
fatcat:5mvu5ixve5eazocc3drcsy4t5e
A General Framework for Visualization of Sound Collections in Musical Interfaces
2021
Applied Sciences
The proposed framework allows for a modular combination of different techniques for sound segmentation, analysis, and dimensionality reduction, using the reduced feature space for interactive applications ...
While audio data play an increasingly central role in computer-based music production, interaction with large sound collections in most available music creation and production environments is very often ...
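A minimal sketch of such a modular pipeline, with onset-based segmentation, MFCC analysis, and PCA reduction standing in as interchangeable example stages (the framework itself leaves each stage open):

```python
# Minimal sketch of a modular sound-collection pipeline. The stages
# below (onset segmentation, MFCC analysis, PCA reduction) are
# illustrative stand-ins; the framework allows swapping each one.
import numpy as np
import librosa
from sklearn.decomposition import PCA

def segment(y, sr):
    """Segmentation stage: split audio at detected onsets."""
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    bounds = np.concatenate([[0], onsets, [len(y)]])
    return [y[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b > a]

def analyze(segments, sr):
    """Analysis stage: one mean-pooled MFCC vector per segment."""
    return np.stack([
        librosa.feature.mfcc(y=s, sr=sr, n_mfcc=20).mean(axis=1)
        for s in segments])

def reduce_dims(features, dims=2):
    """Reduction stage: low-dimensional space for interaction."""
    return PCA(n_components=dims).fit_transform(features)
```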
Acknowledgments: We would like to thank the reviewers for their time and thoughtful comments. Also, thanks to Claudine Levasseur for proofreading. ...
doi:10.3390/app112411926
fatcat:rxeyqnk7ujg75nphsz62wvnobq
Genre cataloging and instrument classification in large databases using music mining approach
2016
International Journal of Latest Trends in Engineering and Technology
A challenge in music information management is how to design information indexing systems around the various aspects of the information itself. ...
In the pre-processing stage, fundamental acoustic qualities are extracted and stored as the original feature set. CMC is created with a modified wavelet transformation (for processing acoustic signals). ...
The work presented in this paper is an attempt to design, implement, and evaluate a genre and instrument classification framework for a music database. ...
doi:10.21172/1.72.577
fatcat:puklz62i4vcbrndomvrxhpeuc4
Audiovisual Analysis of Music Performances: Overview of an Emerging Field
2019
IEEE Signal Processing Magazine
Télécom ParisTech, where he is now a professor in audio signal processing and the head of the Image, Data, and Signal Department. ...
separation, machine-learning methods for audio/music signals, music information retrieval, and multimodal audio processing. ...
As highlighted in [31] , it is possible to learn an efficient audio feature representation for an audio-only application, specifically audio event recognition, by using a generic audiovisual database. ...
doi:10.1109/msp.2018.2875511
fatcat:fdrryzbojvgp7bkaqwmmun4zhu
Examining Emotion Perception Agreement in Live Music Performance
2021
IEEE Transactions on Affective Computing
We suggest that accounting for such listener-informed music features can benefit MER in helping to address variability in emotion perception by providing reasons for listener similarities and idiosyncrasies ...
First, in a live music concert setting, fifteen audience members annotated perceived emotion in valence-arousal space over time using a mobile application. ...
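One simple way to quantify agreement over such time-varying annotations is mean pairwise correlation of the annotators' traces; the sketch below assumes an (n_annotators, n_timesteps) valence array and is not the paper's actual measure:

```python
# Minimal sketch of one way to quantify perception agreement across
# annotators: mean pairwise Pearson correlation of valence traces.
# Assumption: `traces` is (n_annotators, n_timesteps); the paper's
# actual agreement measures are not given in the excerpt.
import numpy as np
from itertools import combinations

def mean_pairwise_corr(traces: np.ndarray) -> float:
    rs = [np.corrcoef(traces[i], traces[j])[0, 1]
          for i, j in combinations(range(len(traces)), 2)]
    return float(np.mean(rs))
```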
The authors would like to thank the reviewers and editors for their consideration and feedback, which have improved this manuscript. ...
doi:10.1109/taffc.2021.3093787
fatcat:6xjiz5coqjghvcef2gglxkmuli
Performance Following: Real-Time Prediction of Musical Sequences Without a Score
2012
IEEE Transactions on Audio, Speech, and Language Processing
This paper introduces a technique for predicting harmonic sequences in a musical performance for which no score is available, using real-time audio signals. ...
This allows the implementation of real-time performance following in live performance situations. We conduct an objective evaluation on a database of rock, pop and folk music. ...
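For illustration only, a generic chroma front end with simple chord templates, a common starting point for harmony tracking; the excerpt does not describe the paper's actual features or prediction model:

```python
# Illustrative sketch only: a chroma front end with simple chord
# templates. This is NOT the paper's method, just a common baseline
# step for harmony tracking from audio.
import numpy as np
import librosa

TEMPLATES = {  # hypothetical binary triad templates (pitch classes C..B)
    "C:maj": np.array([1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0], float),
    "A:min": np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0], float),
}

def chord_per_frame(path):
    """Label each chroma frame with its best-matching template."""
    y, sr = librosa.load(path)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)   # (12, frames)
    labels = []
    for frame in chroma.T:
        scores = {n: np.dot(frame, t) / (np.linalg.norm(frame)
                  * np.linalg.norm(t) + 1e-9) for n, t in TEMPLATES.items()}
        labels.append(max(scores, key=scores.get))
    return labels
```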
ACKNOWLEDGMENT The authors would like to warmly thank the following individuals for contributing compositions for use in the evaluation: Andrew Robertson, Claire Robbin, Jamie Dale, Peter Greliak, Philip ...
doi:10.1109/tasl.2011.2159593
fatcat:3uloikpiyrbuzgeadgcrhhsxzm
Loopjam: Turning The Dance Floor Into A Collaborative Instrumental Map
2012
Zenodo
We presented and tested an early version of the installation at three exhibitions in Belgium, Italy, and France. The reactions among participants ranged between curiosity and amusement. ...
The sound map results from similarity-based clustering of sounds. The playback of these sounds is controlled by the positions or gestures of participants tracked with a Kinect depth-sensing camera. ...
We used a Microsoft Kinect depth-sensing camera and the OpenNI library to segment users in a 3D scene. ...
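A minimal sketch of the map-to-playback step, assuming sounds already have 2-D coordinates from the similarity clustering and that the tracker reports a normalized (x, y) position per participant:

```python
# Minimal sketch of mapping a tracked dancer position to the nearest
# sound on a 2-D similarity map. Assumptions (not from the paper):
# sounds already have 2-D map coordinates from clustering, and the
# tracker reports a normalized (x, y) position per user.
import numpy as np

def nearest_sound(position, sound_coords):
    """Return the index of the map sound closest to a user position."""
    d = np.linalg.norm(sound_coords - np.asarray(position), axis=1)
    return int(np.argmin(d))

# Usage: trigger playback of the sound a participant stands on.
# coords = np.random.rand(50, 2)          # hypothetical sound map
# idx = nearest_sound((0.4, 0.7), coords)
```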
doi:10.5281/zenodo.1178254
fatcat:sz2uwy4s5vae3fsv2rv4awksqm
Showing results 1 — 15 out of 990 results