Cochlea-based Features for Music Emotion Classification
2017
Proceedings of the 14th International Joint Conference on e-Business and Telecommunications
Based on the idea that humans do not detect emotions from the pure audio signal, but from a signal that has previously been processed by the cochlea, in this work we propose new cochlea-based features for music emotion classification. ...
doi:10.5220/0006466900640068
dblp:conf/sigmap/KraljevicRMS17
fatcat:ihhpvcoylzco5k7rnbv2xb3vgm
An Overview on Perceptually Motivated Audio Indexing and Classification
2013
Proceedings of the IEEE
In particular, we discuss several different strategies to integrate human perception including 1) the use of generic audition models, 2) the use of perceptually-relevant features for the analysis stage ...
Since the resulting audio classification and indexing is meant for direct human consumption, it is highly desirable that it produces perceptually relevant results. ...
Nevertheless, numerous features have been proposed for analysing polyphonic music, especially for music genre or music emotion analysis (see for example [61] - [63] ). ...
doi:10.1109/jproc.2013.2251591
fatcat:myywr5bztzeezi7mity4gwnpha
An Empathy Evaluation System Using Spectrogram Image Features of Audio
2021
Sensors
The purpose of this paper is to analyze the music features in an advertising video and extract the music features that make people empathize. ...
The music in videos has a sensitive influence on human emotions, perception, and imaginations, which can make people feel relaxed or sad, and so on. ...
Humans perceive audio based on the frequencies detected by the cochlea; hence, these frequencies are used as features. However, the cochlea has special properties. ...
doi:10.3390/s21217111
pmid:34770419
pmcid:PMC8587789
fatcat:svopmxuw5vhw3fjgv6bvffiw6e
Music perception with current signal processing strategies for cochlear implants
2011
Proceedings of the 4th International Symposium on Applied Sciences in Biomedical and Communication Technologies - ISABEL '11
Music can modulate emotions and stimulate the brain in different ways than speech; for this reason, music can impact the quality of life of cochlear implant users. ...
Finally, a review of music evaluation methods will be presented. ...
Electrodes located towards the base of the cochlea produce higher pitch sensations than electrodes located towards the apex of the cochlea. ...
doi:10.1145/2093698.2093881
dblp:conf/isabel/NogueiraHHS11
fatcat:5qbkbd3yi5bwdnvh3vtwhyzpnq
A Supramodal Vibrissa Tactile and Auditory Model for Texture Recognition
[chapter]
2010
Lecture Notes in Computer Science
Two gammatone based resonant filterbanks are used for cochlea and whiskers array modeling. ...
Each filterbank is then linked to a feature extraction algorithm, inspired by data recorded in the rats barrel cortex, and finally to a multilayer perceptron. ...
Acknowledgment This work has been funded by the EC Integrated Project ICEA (Integrating Cognition, Emotion and Autonomy), IST-027819-IP. ...
doi:10.1007/978-3-642-15193-4_18
fatcat:d7nwpxig2ffpdijkfqfrqmye3m
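The entry above uses gammatone-based resonant filterbanks for cochlea modeling. As a hedged illustration only (not the paper's exact model), the sketch below builds ERB-spaced center frequencies and a gammatone impulse response using the standard Glasberg–Moore ERB formulas; the function names and default parameters are assumptions for the example.

```python
import math

def hz_to_erb_rate(f):
    # Glasberg & Moore (1990) ERB-rate scale
    return 21.4 * math.log10(0.00437 * f + 1.0)

def erb_rate_to_hz(e):
    # Inverse of the ERB-rate mapping above
    return (10.0 ** (e / 21.4) - 1.0) / 0.00437

def gammatone_center_freqs(n_channels, f_min=100.0, f_max=8000.0):
    """Center frequencies (Hz) spaced uniformly on the ERB-rate scale."""
    lo, hi = hz_to_erb_rate(f_min), hz_to_erb_rate(f_max)
    step = (hi - lo) / (n_channels - 1)
    return [erb_rate_to_hz(lo + i * step) for i in range(n_channels)]

def gammatone_ir(fc, fs=16000, n=4, duration=0.025):
    """Impulse response of an n-th order gammatone filter centred at fc (Hz)."""
    b = 1.019 * 24.7 * (4.37 * fc / 1000.0 + 1.0)  # bandwidth from the ERB at fc
    ir = []
    for k in range(int(duration * fs)):
        t = k / fs
        ir.append(t ** (n - 1) * math.exp(-2.0 * math.pi * b * t)
                  * math.cos(2.0 * math.pi * fc * t))
    return ir
```

Filtering a signal through such a bank (one `gammatone_ir` per channel, convolved with the input) yields the cochleagram-style representation that these feature extractors start from.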
A transfer learning framework for predicting the emotional content of generalized sound events
2017
Journal of the Acoustical Society of America
To this end the following are proposed: (a) the usage of temporal modulation features, (b) a transfer learning module based on an echo state network, and (c) a k-medoids clustering algorithm predicting ...
The effectiveness of the proposed solution is demonstrated after a thoroughly designed experimental phase employing both sound and music data. ...
The joint emotional space is formed by general sounds and music signals for improving the prediction of emotions evoked by sound events. ...
doi:10.1121/1.4977749
pmid:28372068
fatcat:vwwyutbidngvtmr4mf4zzuqvha
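One component named in the entry above is a k-medoids clustering algorithm. The sketch below is a generic, minimal k-medoids (PAM-style alternation of assignment and medoid update), not the authors' implementation; the function signature and the fixed seed are assumptions for the example.

```python
import random

def k_medoids(points, k, dist, n_iter=100, seed=0):
    """Minimal k-medoids: alternate nearest-medoid assignment and medoid update."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    clusters = {}
    for _ in range(n_iter):
        # Assign each point to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        # Update: each cluster's new medoid minimizes total intra-cluster distance.
        new_medoids = []
        for members in clusters.values():
            best = min(members,
                       key=lambda c: sum(dist(points[c], points[j]) for j in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break  # converged
        medoids = new_medoids
    return medoids, clusters
```

Because medoids are actual data points, the same scheme works with any pairwise distance over feature vectors (e.g. temporal modulation features), with no need for a mean in feature space.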
Music Feature Extraction and Classification Algorithm Based on Deep Learning
2021
Scientific Programming
... and classification model based on a convolutional neural network, which can extract more relevant sound spectrum characteristics of the music category. ...
Traditional music classification approaches use a large number of artificially designed acoustic features. The design of features requires knowledge and in-depth understanding in the domain of music. ...
At present, music classification mainly includes text classification and classification based on music content. ...
doi:10.1155/2021/1651560
fatcat:7v6euhjzhjdexfi5w4jhkgmpiq
Playing Music for a Smarter Ear: Cognitive, Perceptual and Neurobiological Evidence
2011
Music Perception
... to academic progress, emotional health, and vocational success. ...
In this review we argue not only that common neural mechanisms for speech and music exist, but that experience in music leads to enhancements in sensory and cognitive contributors to speech processing. ...
How does musical expertise shape speech perception? Visual evidence from Auditory Classification Images. ...
doi:10.1525/mp.2011.29.2.133
pmid:22993456
pmcid:PMC3444167
fatcat:kvtzpoejbvcfxh7bnkjjlxza44
Music Composition and Emotion Recognition Using Big Data Technology and Neural Network Algorithm
2021
Computational Intelligence and Neuroscience
Finally, emotion feature recognition and extraction for music composition content are realized. ...
To implement a mature music composition model for Chinese users, this paper analyzes the music composition and emotion recognition of composition content through big data technology and Neural Network ...
Figure 5: Emotion features of music.
Figure 10 shows the validity test of music emotion feature extraction and classification algorithms. ...
doi:10.1155/2021/5398922
pmid:34956348
pmcid:PMC8702338
fatcat:yy5cta22xzhtbmfmkqpuelmcbq
Seven problems that keep MIR from attracting the interest of cognition and neuroscience
2013
Journal of Intelligent Information Systems
Acknowledgements We wish to credit Gert Lanckriet (UCSD), Juan Bello (NYU) and Geoffroy Peeters (IRCAM) for an animated discussion at ISMIR 2012 leading to the idea of a moratorium on all non-essential ...
... classification cannot be based on these features. ...
The argument was based on a large-scale study of more than 800 musical categories, each documented for a dataset of 10,000 songs, which we subjected to MIR classification with hundreds of signal processing ...
doi:10.1007/s10844-013-0251-x
fatcat:mobxjallj5hh3lxza5wb7bow2i
From Music to Emotions and Tinnitus Treatment, Initial Study
[chapter]
2012
Lecture Notes in Computer Science
The patient visits are separated and used for mining and action rule discovery based on all features and treatment success indicators, including several new features tied to emotions (based on a mapping ... from TFI to the Emotion Indexing Questionnaire (EIQ) [14]; the EIQ questionnaire is used by our team to build personalized classifiers for automatic indexing of music by emotions). ...
This way, features related to emotions are used to build emotion-type bridge between tinnitus and music. ...
doi:10.1007/978-3-642-34624-8_29
fatcat:xnf7rfldlrcjtm2cw7pfftqvpi
Biologically inspired emotion recognition from speech
2011
EURASIP Journal on Advances in Signal Processing
Emotion recognition has become a fundamental task in human-computer interaction systems. In this article, we propose an emotion recognition approach based on biologically inspired methods. ...
Specifically, emotion classification is performed using a long short-term memory (LSTM) recurrent neural network which is able to recognize long-range dependencies between successive temporal patterns. ...
In order to perform recognition of speech emotion, two issues are of fundamental importance: the role of speech features on the classification performance, and the classification system employed for recognition ...
doi:10.1186/1687-6180-2011-24
fatcat:6s3io7drmrfr5gchwaptjzbldy
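The entry above performs emotion classification with an LSTM recurrent network, whose gating lets it recognize long-range dependencies between successive temporal patterns. As a hedged illustration of the gating mechanism only, the sketch below implements a single-unit LSTM step with scalar state; the parameter names (`wi`, `ui`, `bi`, ...) are assumptions for the example, not any library's API.

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """One step of a single-unit LSTM cell (scalar state, for illustration)."""
    i = _sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])   # input gate
    f = _sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])   # forget gate
    o = _sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])   # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate value
    c = f * c_prev + i * g   # cell state: the forget gate carries long-range context
    h = o * math.tanh(c)     # hidden output exposed to the classifier
    return h, c
```

With the forget gate saturated near 1 and the input gate near 0, the cell state is carried across many steps almost unchanged, which is what lets the network span long temporal gaps between emotionally salient speech events.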
Automatic Classification of Cat Vocalizations Emitted in Different Contexts
2019
Animals
... mel-frequency cepstral coefficients and temporal modulation features. Subsequently, these are modeled using a classification scheme based on a directed acyclic graph dividing the problem space. ...
Cats employ vocalizations for communicating information, thus their sounds can carry a wide range of meanings. ...
Acknowledgments: We would like to thank Elisabetta Canali, Marco Colombo, Stanislaw Jaworski, and Eugenio Heinzl for their useful comments in the initial setup of the experiment. ...
doi:10.3390/ani9080543
pmid:31405018
pmcid:PMC6719916
fatcat:zwyhdhi46vdf3ojemrzv5xhenu
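The entry above relies on mel-frequency cepstral coefficients. As a hedged sketch of the mel warping those features are built on (using the common HTK-style formula, not necessarily the exact variant the authors used), the snippet below converts between Hz and mels and places triangular-filter center frequencies evenly on the mel scale; the function names and defaults are assumptions for the example.

```python
import math

def hz_to_mel(f):
    # HTK-style mel scale
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(n_filters, f_min=0.0, f_max=8000.0):
    """Center frequencies (Hz) of triangular mel filters, evenly spaced in mels."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    # n_filters centers strictly between the band edges
    return [mel_to_hz(lo + (hi - lo) * (i + 1) / (n_filters + 1))
            for i in range(n_filters)]
```

The centers crowd together at low frequencies and spread out at high frequencies, mirroring cochlear frequency resolution; MFCCs are then the discrete cosine transform of the log energies collected in these bands.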
Music-induced emotions can be predicted from a combination of brain activity and acoustic features
2015
Brain and Cognition
Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p < 0.01). ...
It has been used as a feature in a range of problems, for example in music genre classification (Martin Mckinney, 2003) . ...
doi:10.1016/j.bandc.2015.08.003
pmid:26544602
fatcat:i4qme24zvrha3iuotwtp5bqso4
A Cortically-Inspired Model for Bioacoustics Recognition
[chapter]
2015
Lecture Notes in Computer Science
We demonstrate the improved performance of wavelets for feature detection and the potential viability of using HTM for bioacoustic recognition. ...
Our classification accuracy of 99.5% in detecting insect sounds and 96.3% in detecting frog calls are significant improvements on results previously published for the same datasets. ...
Higher within the brainstem, the superior olivary complex engages in binaural processing, while other regions of the brainstem handle reflexive and emotional responses to sound. The cochlea and the neocortex ...
doi:10.1007/978-3-319-26561-2_42
fatcat:7ziv5kf3pvgi7kungkv2bly4ni
Showing results 1 — 15 out of 468 results