8,344 Hits in 8.9 sec

Multi-Objective Investigation of Six Feature Source Types for Multi-Modal Music Classification

Igor Vatolkin, Cory McKay
2022 Transactions of the International Society for Music Information Retrieval  
These techniques permit an exploration of how different modalities and feature types contribute to class discrimination. ... In contrast to more typical MIR setups, where supervised classification models are trained on only one or two types of data, we propose a more diversified approach to music classification and analysis based ...
doi:10.5334/tismir.67 fatcat:glhczbyvfjgabmergrixspzs3m
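
The snippet describes a multi-objective comparison of feature source types. As a minimal sketch of the underlying idea, not the paper's actual method, the following Python code (data, source names, and classifier are synthetic stand-ins) scores every subset of feature sources on two objectives, cross-validated error and number of sources, and keeps the Pareto-optimal subsets:

```python
# Sketch: multi-objective comparison of feature source subsets.
# Objectives: cross-validated error (minimize) and number of source
# types used (minimize). Data and sources are synthetic stand-ins.
from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, n)
# Hypothetical feature sources (e.g., audio, lyrics, album art),
# each with a different amount of class signal.
sources = {name: rng.normal(y[:, None] * w, 1.0, (n, 5))
           for name, w in [("audio", 0.8), ("lyrics", 0.5), ("image", 0.2)]}

results = []
for k in range(1, len(sources) + 1):
    for subset in combinations(sources, k):
        X = np.hstack([sources[s] for s in subset])
        err = 1 - cross_val_score(RandomForestClassifier(random_state=0),
                                  X, y, cv=5).mean()
        results.append((subset, k, err))

# Keep Pareto-optimal points: no other subset is at least as good on
# both objectives and strictly better on one.
pareto = [r for r in results
          if not any((o[1] <= r[1] and o[2] < r[2]) or
                     (o[1] < r[1] and o[2] <= r[2]) for o in results)]
for subset, k, err in sorted(pareto, key=lambda r: r[1]):
    print(f"{k} source(s) {subset}: CV error {err:.3f}")
```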

Multi-Modal Integration of EEG-fNIRS for Characterization of Brain Activity Evoked by Preferred Music

Lina Qiu, Yongshi Zhong, Qiuyou Xie, Zhipeng He, Xiaoyun Wang, Yingyue Chen, Chang'an A. Zhan, Jiahui Pan
2022 Frontiers in Neurorobotics  
For the multi-modal features of EEG and fNIRS, we proposed an improved Normalized-ReliefF method to fuse and optimize them, and found that it can effectively improve the accuracy of distinguishing between ... Our work provides an objective reference, based on neuroimaging, for the research and application of personalized music therapy.
doi:10.3389/fnbot.2022.823435 pmid:35173597 pmcid:PMC8841473 fatcat:wati5wkwsfcibjnywyx6hqxfri
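
The paper's "improved Normalized-ReliefF" is not specified in the snippet; as one plausible reading, the sketch below applies plain ReliefF weighting to concatenated EEG and fNIRS features and min-max normalizes the weights. All data, dimensions, and names are illustrative assumptions, not the authors' setup:

```python
# Sketch: ReliefF-style feature weighting to fuse EEG and fNIRS features.
# Plain ReliefF (one nearest hit/miss per sampled instance) followed by
# min-max normalization of the weights.
import numpy as np

def relieff_weights(X, y, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    X = (X - X.min(0)) / (np.ptp(X, axis=0) + 1e-12)  # scale features to [0, 1]
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(1)           # L1 distance to every instance
        d[i] = np.inf                          # exclude the instance itself
        same, diff = y == y[i], y != y[i]
        hit = np.argmin(np.where(same, d, np.inf))   # nearest same-class point
        miss = np.argmin(np.where(diff, d, np.inf))  # nearest other-class point
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return (w - w.min()) / (np.ptp(w) + 1e-12)       # normalize to [0, 1]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)                        # preferred vs. non-preferred music
eeg = rng.normal(y[:, None] * 0.7, 1, (200, 8))    # stand-in EEG features
fnirs = rng.normal(y[:, None] * 0.3, 1, (200, 4))  # stand-in fNIRS features
w = relieff_weights(np.hstack([eeg, fnirs]), y)
print("per-feature weights:", np.round(w, 2))      # weight/select features before classification
```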

Multi-modal sensing and analysis of poster conversations with smart posterboard

Tatsuya Kawahara, Takuma Iwatate, Koji Inoue, Soichiro Hayashi, Hiromasa Yoshimoto, Katsuya Takanashi
2016 APSIPA Transactions on Signal and Information Processing  
We have developed a smart posterboard for multi-modal recording and analysis of poster conversations. ... Moreover, high-level indexing of the audience's interest and comprehension level is explored based on multi-modal behaviors during the conversation. ... The combination of multi-modal information sources was investigated to enhance performance; first, multi-modal behaviors prior to turn-taking events were examined.
doi:10.1017/atsip.2016.2 fatcat:thsnh2j5abevzjae77nqcy57pq

When Lyrics Outperform Audio For Music Mood Classification: A Feature Analysis

Xiao Hu, J. Stephen Downie
2010 Zenodo  
In this paper, we continue our previous work on multi-modal mood classification [4] and go one step further to investigate these research questions: 1) Which source is more useful in music classification ... After identifying the best lyric feature types, audio-based, lyric-based, and multi-modal classification systems were compared.
doi:10.5281/zenodo.1415540 fatcat:6szbhsuvijctlomvn6qa5rwloy
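
To make the audio-vs-lyrics comparison concrete, here is a minimal sketch, assuming toy lyrics, synthetic audio features, and simple logistic-regression classifiers (none of which come from the paper), that trains single-modality mood classifiers and fuses them by averaging class probabilities:

```python
# Sketch: audio-only, lyric-only, and fused mood classification.
# Data, features, and the fusion rule are toy stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

lyrics = ["tears fall in the lonely night", "dance all night feel the joy",
          "broken heart and cold rain", "sunshine party happy day"] * 25
moods = [0, 1, 0, 1] * 25                      # 0 = sad, 1 = happy (toy labels)
rng = np.random.default_rng(0)
audio = rng.normal(np.array(moods)[:, None] * 0.6, 1, (100, 6))  # stand-in audio features

X_text = TfidfVectorizer().fit_transform(lyrics).toarray()  # bag-of-words lyric features
y = np.array(moods)
tr, te = train_test_split(np.arange(100), test_size=0.3,
                          random_state=0, stratify=y)

clf_a = LogisticRegression(max_iter=1000).fit(audio[tr], y[tr])
clf_l = LogisticRegression(max_iter=1000).fit(X_text[tr], y[tr])

# Late fusion: average the two classifiers' class probabilities.
p_fused = (clf_a.predict_proba(audio[te]) + clf_l.predict_proba(X_text[te])) / 2
for name, pred in [("audio", clf_a.predict(audio[te])),
                   ("lyrics", clf_l.predict(X_text[te])),
                   ("fused", p_fused.argmax(1))]:
    print(name, accuracy_score(y[te], pred))
```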

Music Emotion Recognition: From Content- to Context-Based Models [chapter]

Mathieu Barthet, György Fazekas, Mark Sandler
2013 Lecture Notes in Computer Science  
First, this paper provides a thorough review of studies on the relation of music and emotions from different disciplines. ... The striking ability of music to elicit emotions assures its prominent status in human culture and everyday life.
doi:10.1007/978-3-642-41248-6_13 fatcat:24xnjll7pzfaxmtwhh2sr4rwlm

A computational lens into how music characterizes genre in film

Benjamin Ma, Timothy Greer, Dillon Knox, Shrikanth Narayanan, Stavros Ntalampiras
2021 PLoS ONE  
This work adds to our understanding of music's use in multi-modal contexts and offers the potential for future inquiry into human affective experiences. ... We investigate the interaction between musical and visual features with a cross-modal analysis, and do not find compelling evidence that music characteristic of a certain genre implies low-level visual ... In this work, we objectively examine the effect of musical features on the perception of film.
doi:10.1371/journal.pone.0249957 pmid:33831109 fatcat:wfny65oy5jg7tltefq3kngs77i

Emotion Embedding Spaces for Matching Music to Stories

Minz Won, Justin Salamon, Nicholas J. Bryan, Gautham Mysore, Xavier Serra
2021 Zenodo  
Content creators often use music to enhance their stories, as it can be a powerful tool to convey emotion. In this paper, our goal is to help creators find music to match the emotion of their story. ... (e.g., books), use multiple sentences as input queries, and automatically retrieve matching music. We formalize this task as a cross-modal text-to-music retrieval problem. ... We propose six different deep learning strategies to extract relevant features and bridge the modality gap between text and music, including (1) classification, (2) multi-head classification, and (3) valence-arousal ...
doi:10.5281/zenodo.5624482 fatcat:uqlm3s5korb5rm2ybkbvr42qpi
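
The retrieval step is easy to illustrate once both modalities live in a shared emotion space. The sketch below assumes hypothetical two-dimensional valence-arousal embeddings (the paper's six embedding strategies are not reproduced here) and ranks a music library against a story query by cosine similarity:

```python
# Sketch: cross-modal retrieval given text and music embedded in a
# shared valence-arousal space. Embeddings are illustrative placeholders.
import numpy as np

def cosine_sim(a, B):
    a = a / np.linalg.norm(a)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B @ a

# Hypothetical (valence, arousal) embeddings in [-1, 1]^2.
music_library = {
    "calm_piano":   np.array([ 0.6, -0.7]),
    "dark_ambient": np.array([-0.8, -0.4]),
    "upbeat_pop":   np.array([ 0.8,  0.8]),
    "tense_score":  np.array([-0.6,  0.7]),
}
story_query = np.array([-0.7, 0.6])   # e.g., output of a text emotion encoder

names = list(music_library)
sims = cosine_sim(story_query, np.stack([music_library[n] for n in names]))
for name, s in sorted(zip(names, sims), key=lambda t: -t[1]):
    print(f"{name}: similarity {s:+.2f}")   # tense_score should rank first
```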

Learning Multimodal Latent Attributes

Yanwei Fu, Timothy M. Hospedales, Tao Xiang, Shaogang Gong
2014 IEEE Transactions on Pattern Analysis and Machine Intelligence  
... model for learning multi-modal semi-latent attributes, which dramatically reduces the requirements for an exhaustive, accurate attribute ontology and expensive annotation effort. ... The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. ... Figure 6: Exploiting multi-modality, LATM vs. M2LATM on the USAA dataset (left: multi-task classification).
doi:10.1109/tpami.2013.128 pmid:24356351 fatcat:tlchipuvl5evflsw6ewqs2oqyu

Statistical and Visual Analysis of Audio, Text, and Image Features for Multi-Modal Music Genre Recognition

Ben Wilkes, Igor Vatolkin, Heinrich Müller
2021 Entropy  
We present a multi-modal genre recognition framework that considers the modalities audio, text, and image via features extracted from audio signals, album cover images, and lyrics of music tracks. ... Genre recognition is performed by binary classification of a music track with respect to each genre, based on combinations of elementary features. ... For a recent overview, we refer to Simonetta et al. [17]. Most studies on multi-modal music classification combine two sources; audio together with lyrics seems to be the most frequent case.
doi:10.3390/e23111502 pmid:34828199 pmcid:PMC8621318 fatcat:fibxn23ayvhzxgkcoe2dp6cbsy
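
A minimal sketch of the per-genre binary classification described here, assuming synthetic stand-ins for the audio, lyric, and album-cover features and scikit-learn's one-vs-rest wrapper rather than the authors' framework:

```python
# Sketch: one binary classifier per genre over concatenated multi-modal
# features. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n, genres = 400, ["rock", "jazz", "electronic"]
Y = rng.integers(0, 2, (n, len(genres)))          # multi-label: a track may have several genres
audio = rng.normal(Y[:, [0]] * 0.9, 1, (n, 10))   # stand-ins for audio,
text = rng.normal(Y[:, [1]] * 0.9, 1, (n, 10))    # lyric,
image = rng.normal(Y[:, [2]] * 0.9, 1, (n, 10))   # and album-cover features
X = np.hstack([audio, text, image])

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.3, random_state=0)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(Xtr, Ytr)
pred = clf.predict(Xte)
for g, f1 in zip(genres, f1_score(Yte, pred, average=None)):
    print(f"{g}: F1 = {f1:.2f}")   # one independent binary decision per genre
```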

Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information

Ghoniem, Algarni, Shaalan
2019 Information  
To overcome issues of feature extraction and multi-modal fusion, hybrid fuzzy-evolutionary computation methodologies are employed to demonstrate a strong capability of learning features and dimensionality ... In all likelihood, while features from several modalities may enhance the classification performance, they might exhibit high dimensionality and make the learning process complex for the most used machine ... Deep belief network + SVM: 85.69% on the eNTERFACE'05 dataset for multi-modal classification based upon six discrete emotions; 91.3% and 91.8%, respectively, for the binary arousal-valence model.
doi:10.3390/info10070239 fatcat:b4bq47h5cjckxlij4j6iiuxafu

Affective Computing for Large-scale Heterogeneous Multimedia Data

Sicheng Zhao, Shangfei Wang, Mohammad Soleymani, Dhiraj Joshi, Qiang Ji
2019 ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)  
We then summarize and compare representative methods on AC of different multimedia types, i.e., images, music, videos, and multimodal data, with a focus on both handcrafted-feature-based methods ... The wide popularity of digital photography and social networks has generated a rapidly growing volume of multimedia data (i.e., image, music, and video), resulting in a great demand for managing, retrieving ... Multi-level deep representations (MldrNet) are learned in [101] for image emotion classification.
doi:10.1145/3363560 fatcat:m56udtjlxrauvmj6d5z2r2zdeu

Video genre categorization and representation using audio-visual information

Bogdan Ionescu
2012 Journal of Electronic Imaging (JEI)  
... correct classification of up to 97%. ... The main contribution of our work lies in harnessing the descriptive power of the combination of these descriptors in genre classification. ... We also acknowledge the 2011 Genre Tagging Task of the MediaEval Multimedia Benchmark [2] for providing the test data set.
doi:10.1117/1.jei.21.2.023017 fatcat:ftmpjzlx5rdndbcmwiqx7jktga

Sports Video Analysis: Semantics Extraction, Editorial Content Creation and Adaptation

Changsheng Xu, Jian Cheng, Yi Zhang, Yifan Zhang, Hanqing Lu
2009 Journal of Multimedia  
We first propose a generic multi-layer, multi-modal framework for sports video analysis. ... Then we introduce several mid-level audio/visual features which are able to bridge the semantic gap between low-level features and high-level understanding. ... The framework starts with low-level feature extraction from the source video; three modalities of low-level features, namely visual, audio, and text, can be directly obtained.
doi:10.4304/jmm.4.2.69-79 fatcat:xytusontr5cyxlxpyqgljnhkqu

A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances [article]

Yan Wang, Wei Song, Wei Tao, Antonio Liotta, Dawei Yang, Xinlei Li, Shuyong Gao, Yixuan Sun, Weifeng Ge, Wei Zhang, Wenqiang Zhang
2022 arXiv   pre-print
Thus, the fusion of physical information and physiological signals can provide useful features of emotional states and lead to higher accuracy. ... baseline dataset, fusion strategies for multimodal affective analysis, and unsupervised learning models. ... Lugger and Yang [172] investigated the effect of prosodic features, voice quality parameters, and different combinations of both types on emotion classification.
arXiv:2203.06935v3 fatcat:h4t3omkzjvcejn2kpvxns7n2qe

A multi-modal dance corpus for research into interaction between humans in virtual environments

Slim Essid, Xinyu Lin, Marc Gowing, Georgios Kordelas, Anil Aksay, Philip Kelly, Thomas Fillon, Qianni Zhang, Alfred Dielmann, Vlado Kitanovski, Robin Tournemenne, Aymeric Masurelle (+4 others)
2012 Journal on Multimodal User Interfaces  
Furthermore, for unsynchronised sensor modalities, the corpus also includes distinctive events for data stream synchronisation. ... As the dance corpus is focused on this scenario, it consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised ...
doi:10.1007/s12193-012-0109-5 fatcat:4lt7adj3qzb4tlk274muawcmde
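
Event-based synchronisation of unsynchronised streams can be sketched with cross-correlation: given a distinctive event (such as a clap) visible in both streams, the correlation peak gives the relative lag. The signals, sampling rate, and offset below are synthetic assumptions, not taken from the corpus:

```python
# Sketch: aligning two unsynchronised sensor streams via a shared
# distinctive event; the cross-correlation peak locates the offset.
import numpy as np

rate = 100                                        # Hz, assumed common sampling rate
t = np.arange(0, 10, 1 / rate)
event = np.exp(-((t - 4.0) ** 2) / 0.001)         # sharp spike at t = 4 s
rng = np.random.default_rng(0)
stream_a = event + 0.05 * rng.normal(size=t.size)
true_lag = 123                                    # samples (1.23 s offset)
stream_b = np.roll(event, true_lag) + 0.05 * rng.normal(size=t.size)

# Full cross-correlation; the peak gives the offset of b relative to a.
xc = np.correlate(stream_b, stream_a, mode="full")
lag = xc.argmax() - (t.size - 1)
print(f"estimated lag: {lag} samples ({lag / rate:.2f} s)")  # ~123 samples

aligned_b = np.roll(stream_b, -lag)               # shift b back into sync with a
```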
Showing results 1 — 15 out of 8,344 results