
MediaEval 2019: Emotion and Theme Recognition in Music Using Jamendo

Dmitry Bogdanov, Alastair Porter, Philip Tovstogan, Minz Won
2019 MediaEval Benchmarking Initiative for Multimedia Evaluation  
This paper provides an overview of the Emotion and Theme Recognition in Music task organized as part of the MediaEval 2019 Benchmarking Initiative for Multimedia Evaluation.  ...  The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording by means of audio analysis.  ...  ACKNOWLEDGMENTS We are thankful to Jamendo for providing us with the data and labels.  ...
dblp:conf/mediaeval/BogdanovPTW19 fatcat:fhdtkmxrojei7oodp4xhdlky5a

MediaEval 2020: Emotion and Theme Recognition in Music Using Jamendo

Dmitry Bogdanov, Alastair Porter, Philip Tovstogan, Minz Won
2020 MediaEval Benchmarking Initiative for Multimedia Evaluation  
This paper provides an overview of the Emotions and Themes in Music task organized as part of the MediaEval 2020 Benchmarking Initiative for Multimedia Evaluation.  ...  The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording by means of audio analysis.  ...  ACKNOWLEDGMENTS We are thankful to Jamendo for providing us with the data and labels.  ...
dblp:conf/mediaeval/BogdanovPTW20 fatcat:f7ag4wm7mrd6zgeueunm3z36ou

MediaEval 2019 Emotion and Theme Recognition task: A VQ-VAE Based Approach

Hsiao-Tzu Hung, Yu-Hua Chen, Maximilian Mayerl, Michael Vötter, Eva Zangerle, Yi-Hsuan Yang
2019 MediaEval Benchmarking Initiative for Multimedia Evaluation  
In this paper, we, the Taiinn (Taiwan) team, use a pre-trained VQ-VAE as a feature extractor and compare two types of classifiers for audio-based emotion and theme recognition.  ...  As for MTAT, we use it as the second test set (in addition to Jamendo) for testing the VQ-VAE, and hence we split it into training, validation, and test sets.  ...  INTRODUCTION This paper describes our submission to the MediaEval 2019 Emotion and Theme Recognition task [2].  ...
dblp:conf/mediaeval/HungCMVZY19 fatcat:coqckztzonaspl2dekzgvflvgq

Emotion and Theme Recognition of Music Using Convolutional Neural Networks

Shengzhou Yi, Xueting Wang, Toshihiko Yamasaki
2019 MediaEval Benchmarking Initiative for Multimedia Evaluation  
Our team, "YL-UTokyo", participated in the task: Emotion and Theme Recognition in Music Using Jamendo. The goal of this task is to recognize moods and themes conveyed by the audio tracks.  ...  INTRODUCTION We participated in one of the tasks in MediaEval 2019: Emotion and Theme Recognition in Music Using Jamendo [2].  ...  The link to our source code: https://github.com/YiShengzhou12330379/Emotionand-Theme-Recognition-in-Music-Using-Jamendo.  ...
dblp:conf/mediaeval/YiWY19 fatcat:6uclbox3cbhefi2hfpphe23jlu

Recognizing Song Mood and Theme Using Convolutional Recurrent Neural Networks

Maximilian Mayerl, Michael Vötter, Hsiao-Tzu Hung, Bo-Yu Chen, Yi-Hsuan Yang, Eva Zangerle
2019 MediaEval Benchmarking Initiative for Multimedia Evaluation  
In this year's MediaEval task, Emotion and Theme Recognition in Music Using Jamendo, the goal is to assign emotion and theme tags to songs.  ...  We use a neural network model consisting of both convolutional and recurrent layers and utilize spectral, high-level, as well as rhythm features.  ...  SUMMARY AND OUTLOOK In this paper, we described our approach to the Emotion and Theme Recognition in Music Using Jamendo task at MediaEval 2019.  ...
dblp:conf/mediaeval/MayerlVHCYZ19 fatcat:lvhidvpznncfnbxaviwvfls75i

Recognizing Music Mood and Theme Using Convolutional Neural Networks and Attention

Alish Dipani, Gaurav Iyer, Veeky Baths
2020 MediaEval Benchmarking Initiative for Multimedia Evaluation  
We present the UAI-CNRL submission to MediaEval 2020 task on Emotion and Theme Recognition in Music.  ...  We make use of the ResNet34 architecture, coupled with a self-attention module to detect moods/themes in music tracks.  ...  Emotions and Themes in Music MediaEval'20, December 14-15 2020, Online  ... 
dblp:conf/mediaeval/DipaniIB20 fatcat:g22qnc5zvzbdvb3ycyotccczfu

MediaEval 2020 Emotion and Theme Recognition in Music Task: Loss Function Approaches for Multi-label Music Tagging

Dillon Knox, Timothy Greer, Benjamin Ma, Emily Kuo, Krishna Somandepalli, Shrikanth Narayanan
2020 MediaEval Benchmarking Initiative for Multimedia Evaluation  
We present USC SAIL's submission to the 2020 Emotions and Themes in Music challenge: an ensemble-based convolutional neural network (CNN) model trained using various loss functions.  ...  In this work, we investigate the effect of different loss functions and resampling strategies on prediction performance, finding that using focal loss improves overall performance on the provided imbalanced  ...  from Google and the U.S.  ... 
dblp:conf/mediaeval/KnoxGMKSN20 fatcat:bzrq6qpmm5c5bggujeknypjt2q

Emotion and Themes Recognition in Music Utilising Convolutional and Recurrent Neural Networks

Shahin Amiriparian, Maurice Gerczuk, Eduardo Coutinho, Alice Baird, Sandra Ottl, Manuel Milling, Björn W. Schuller
2019 MediaEval Benchmarking Initiative for Multimedia Evaluation  
Our best performing model (team name: AugLi) achieves 74.2% ROC-AUC on the test partition, which is 1.6 percentage points above the baseline system of the MediaEval 2019 Emotion & Themes in Music task.  ...  In this study, we present our fusion system of end-to-end convolutional recurrent neural networks (CRNN) and pre-trained convolutional feature extractors for music emotion and theme recognition.  ...  In this paper, we introduce our end-to-end architecture for the task of emotion and theme recognition in music at MediaEval 2019 [7].  ...  models capture both shift-invariant, high-level features (convolutional  ...
dblp:conf/mediaeval/AmiriparianGCBO19 fatcat:mlxgyc24rzh75mtxwc5kmus75i

Music theme recognition using CNN and self-attention [article]

Manoj Sukhavasi, Sainath Adapa
2019 arXiv   pre-print
Our model (team name: AMLAG) achieves 4th place on the PR-AUC-macro leaderboard in MediaEval 2019: Emotion and Theme Recognition in Music Using Jamendo.  ...  We present an efficient architecture to detect mood/themes in music tracks on the autotagging-moodtheme subset of the MTG-Jamendo dataset.  ...  MediaEval 2019: Emotion and Theme Recognition in Music Using Jamendo aims to improve the machine learning algorithms to automatically recognize the emotions and themes conveyed in a music recording [3  ...
arXiv:1911.07041v1 fatcat:s4k2ob5wojh3nabrey5um5tc7e

Emotion and Theme Recognition in Music with Frequency-Aware RF-Regularized CNNs [article]

Khaled Koutini, Shreyan Chowdhury, Verena Haunschmid, Hamid Eghbal-zadeh, Gerhard Widmer
2019 arXiv   pre-print
We present CP-JKU submission to MediaEval 2019; a Receptive Field-(RF)-regularized and Frequency-Aware CNN approach for tagging music with emotion/mood labels.  ...  We improve the performance of such architectures using techniques such as Frequency Awareness and Shake-Shake regularization, which were used in previous work on general acoustic recognition tasks.  ...  The Emotion and Theme Recognition Task of MediaEval 2019 uses a subset of this dataset containing relevant emotion tags, and the task objective is to predict scores and decisions for these tags from audio  ... 
arXiv:1911.05833v1 fatcat:lp23k3stmna5rkcgcdablktvse

Semi-supervised music emotion recognition using noisy student training and harmonic pitch class profiles [article]

Hao Hao Tan
2021 arXiv   pre-print
We present Mirable's submission to the 2021 Emotions and Themes in Music challenge.  ...  In this work, we intend to address the question: can we leverage semi-supervised learning techniques on music emotion recognition?  ...  MediaEval 2021: Emotion and Theme Recognition in Music Using Jamendo. In Proc. of the MediaEval 2021 Workshop, Online, 13-15 December 2021.  ...
arXiv:2112.00702v2 fatcat:nmedslhhmzdjxfuchbgeglheua

Receptive-Field Regularized CNNs for Music Classification and Tagging [article]

Khaled Koutini, Hamid Eghbal-Zadeh, Verena Haunschmid, Paul Primus, Shreyan Chowdhury, Gerhard Widmer
2020 arXiv   pre-print
Convolutional Neural Networks (CNNs) have been successfully used in various Music Information Retrieval (MIR) tasks, both as end-to-end models and as feature extractors for more complex systems.  ...  In particular, we analyze the recently introduced Receptive-Field Regularization and Shake-Shake, and show that they significantly improve the generalization of deep CNNs on music-related tasks, and that  ...  The data comes from a subset of the MTG-Jamendo dataset [30] released in the Emotion and Theme Recognition in Music Task at the MediaEval 2019 Benchmark [31].  ...
arXiv:2007.13503v1 fatcat:tayceqsc3ra2xailpcmwy33nhi

MusiCoder: A Universal Music-Acoustic Encoder Based on Transformers [article]

Yilun Zhao, Xinda Wu, Yuqing Ye, Jia Guo, Kejun Zhang
2020 arXiv   pre-print
Music annotation has always been one of the critical topics in the field of Music Information Retrieval (MIR). Traditional models use supervised learning for music annotation tasks.  ...  The results show that MusiCoder outperforms the state-of-the-art models in both music genre classification and auto-tagging tasks.  ...  The MusiCoder model was compared with other state-of-the-art models competing in the challenge of MediaEval 2019: Emotion and Theme Recognition in Music Using Jamendo [4] .  ... 
arXiv:2008.00781v1 fatcat:ipszwgwtzvfszda3wpnamfrkfa

Receptive Field Regularization Techniques for Audio Classification and Tagging with Deep Convolutional Neural Networks

Khaled Koutini, Hamid Eghbal-zadeh, Gerhard Widmer
2021 IEEE/ACM Transactions on Audio Speech and Language Processing  
The proposed CNNs achieve state-of-the-art results in multiple tasks, from acoustic scene classification to emotion and theme detection in music to instrument recognition, as demonstrated by top ranks in several pertinent challenges (DCASE, MediaEval).  ...  We thank the members of the Institute of Computational Perception for the useful discussions and feedback.  ...
doi:10.1109/taslp.2021.3082307 fatcat:5qzi63vw7vemfecalp2t2tja74

Toward a Musical Sentiment (MuSe) Dataset for Affective Distant Hearing

Christopher Akiki, Manuel Burghardt
2020 Workshop on Computational Humanities Research  
In this short paper we present work in progress that tries to leverage crowdsourced music metadata and crowdsourced affective word norms to create a comprehensive dataset of music emotions, which can be used for sentiment analyses in the music domain.  ...  Among other kinds of information, the dataset also contains mood annotations, which are also used in the MediaEval task on emotion and theme recognition in music.  ...
dblp:conf/chr/AkikiB20 fatcat:j52dxboafvgatij57uqo3tp66m