
On the role of user-generated metadata in audio visual collections

Riste Gligorov, Michiel Hildebrand, Jacco van Ossenbruggen, Guus Schreiber, Lora Aroyo
2011 Proceedings of the sixth international conference on Knowledge capture - K-CAP '11  
The results suggest that the tags complement the metadata provided by professional cataloguers, the tags describe both the audio and the visual aspects of the video, and the users primarily describe objects  ...  We report on an analysis of the tags collected with Waisda?.  ...  We also like to thank Q42 for the development of Waisda? and making the collected data available.  ... 
doi:10.1145/1999676.1999702 dblp:conf/kcap/GligorovHOSA11 fatcat:4mavz7s5tvcn7bqigshkfgwt5y

The 2019 Multimedia for Recommender System Task: MovieREC and NewsREEL at MediaEval

Yashar Deldjoo, Benny Kille, Markus Schedl, Andreas Lommatzsch, Jialie Shen
2019 MediaEval Benchmarking Initiative for Multimedia Evaluation  
In this task, participants use a wealth of information from text, images, and audio to predict the success of items.  ...  Thereby, we advance the state-of-the-art of content-based recommender systems by leveraging multimedia content.  ...  CBF-metadata models, on the other hand, solely resort to metadata (editorial or user-generated) to generate recommendations, disregarding the perception of media content [5].  ... 
dblp:conf/mediaeval/DeldjooKSLS19 fatcat:s4v7gc622ndsjfg6ca7rs5pbhq

Toward a Structural and Semantic Metadata Framework for Efficient Browsing and Searching of Web Videos

Hyun-Hee Kim
2017 Journal of the Korean Society for Library and Information Science  
Precisely, the metadata framework was constructed on the basis of Chatman's narrative theory, three multimedia metadata formats (PBCore, MPEG-7, and TV-Anytime), and social metadata.  ...  This study proposed a structural and semantic framework for the characterization of events and segments in Web videos that permits content-based searches and dynamic video summarization.  ...  visual/audio Information for basic visual features (e.g., color, or object recognition) and audio content (e.g., melody contour) visual and audio information userPreference User preference information  ... 
doi:10.4275/kslis.2017.51.1.227 fatcat:3zlaebvw5vasph462xxjeea6vy

Overview of the MPEG-7 standard

Shih-Fu Chang, T. Sikora, A. Puri
2001 IEEE transactions on circuits and systems for video technology  
MPEG-7, formally known as Multimedia Content Description Interface, includes standardized tools (descriptors, description schemes, and language) enabling structural, detailed descriptions of audio-visual  ...  information at different granularity levels (region, image, video segment, collection) and in different areas (content description, management, organization, navigation, and user interaction).  ...  As in the case of Ds, DSs can be categorized to audio, visual, or generic. Generic DSs usually represent generic meta information related to all kinds of media (audio, visual, text, graphic, etc).  ... 
doi:10.1109/76.927421 fatcat:ghj4a2ou2bdcxdjv3surnpvisq

An Intelligent Data Repository Consolidating Artifacts of Music Learning

Michael Kalochristianakis, Panagiotis Zervas, Chrisoula Alexandraki
2022 Zenodo  
This paper presents current and ongoing developments towards the implementation of a web-based data repository that is tailored to the needs of networked mediated music education.  ...  A distinct focus of this research concerns the integration of intelligent functionalities for sound analysis, description, and processing with the aim of providing efficient mechanisms for search, retrieval  ...  The role of each user can be retrieved using structured queries from the role assignment table in the database of the LMS, provided that the identification id of the user is known.  ... 
doi:10.5281/zenodo.6768630 fatcat:7fel25qwmbe2pntu57doayv4ta

Semantic audio content-based music recommendation and visualization based on user preference examples

Dmitry Bogdanov, Martín Haro, Ferdinand Fuhrmann, Anna Xambó, Emilia Gómez, Perfecto Herrera
2013 Information Processing & Management  
The former relies on user surveys in order to obtain qualitative statements and ratings about particular items or more general semantic properties of the data.  ...  Acknowledgements The authors thank all participants involved in the evaluation and Justin Salamon for proofreading.  ... 
doi:10.1016/j.ipm.2012.06.004 fatcat:y6bcjarzandwljl5qxqruv5yh4

Curating Ethnomusicology in Cyberworlds for Ethnomusicological Research

Michael Frishkopf, Michael Cohen, Rasika Ranaweera
2015 Ethnologies  
Acknowledgment Development of WMiW was funded primarily by the Social Sciences and Humanities Research Council of Canada, with additional support from folkwaysAlive! at the University of Alberta.  ...  The user may also embark on a tour, using a window like that shown in the lower right. The metadata window shows details of the musical track last clicked by the user.  ...  learning, and aesthetic contemplation, as well as contributing towards our general understanding of the role of music in human interaction and community formation.  ... 
doi:10.7202/1039658ar fatcat:y3zof6lswvfwbmwma3pyfkpnju

PlaySOM and PocketSOMPlayer, Alternative Interfaces to Large Music Collections

Robert Neumayer, Michael Dittenbach, Andreas Rauber
2005 Zenodo  
The need for advanced visualization to support selection of audio tracks in ever larger audio collections was also addressed in Torrens et al. (2004), whereby different representation techniques of grouping audio by metadata attributes using Tree-Maps and a disc visualization are presented.  ... 
doi:10.5281/zenodo.1414817 fatcat:gea7rx33l5awfe3plr2tv4vi7a

Sensate abstraction: hybrid strategies for multi-dimensional data in expressive virtual reality contexts

Ruth West, Joachim Gossmann, Todd Margolis, Jurgen P. Schulze, J. P. Lewis, Ben Hackbarth, Iman Mostafavi, Ian E. McDowall, Margaret Dolinsky
2009 The Engineering Reality of Virtual Reality 2009  
It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development.  ...  The installation creates a visceral experience of the abstraction of nature into vast data collections, a practice that connects expeditionary science of the 19th Century with 21st Century expeditions  ...  Girardo, Sam Fernald, Toshiro Yamada, and student researchers from both the UCSD ECE 191 design practicum and UCSD Computer Science.  ... 
doi:10.1117/12.806928 fatcat:ssz77u5g2ngo5fx5sq54f4s76i

Social multimedia: highlighting opportunities for search and mining of multimedia data in social media applications

Mor Naaman
2010 Multimedia tools and applications  
The approach is based on the experience of building a number of successful applications that are based on mining multimedia content analysis in social multimedia context.  ...  Uploaded by individual participants, content in these immense pools of content is accompanied by varied types of metadata, such as social network data or descriptive textual information.  ...  Research Berkeley, whose ideas, expertise and excitement made this work possible-or, in fact, made this work [period]. In particular, Lyndon Kennedy made many of the key contributions described here.  ... 
doi:10.1007/s11042-010-0538-7 fatcat:4rolpe4c35gbdhgrsvowywao4i

Exploratory Search in an Audio-Visual Archive: Evaluating a Professional Search Tool for Non-Professional Users

Marc Bron, Jasmijn van Gorp, Frank Nack, Maarten de Rijke
2011 European Workshop on Human-Computer Interaction and Information Retrieval  
A more direct presentation of entities present in the metadata fields of items in a result list can be beneficial for non-professional users on exploratory search tasks.  ...  We conduct a small-scale user study where non-professionals perform exploratory search tasks with a search tool originally developed for media professionals and archivists in an audio visual archive.  ...  Such tools were originally developed to support professional users in searching through the metadata descriptions in a collection.  ... 
dblp:conf/eurohcir/BronGNR11 fatcat:dqgojurixnbr5pm3drodoamldi

User-generated metadata in audio-visual collections

Riste Gligorov
2012 Proceedings of the 21st international conference companion on World Wide Web - WWW '12 Companion  
To this end, we address the following four issues. First, we perform a comparative analysis between user-generated tags and professional annotations in terms of what aspects of videos they describe.  ...  launched by the Netherlands Institute for Sound and Vision. The goal of this PhD research is to investigate the value of the user tags collected with this video labeling game.  ...  We also like to thank Q42 for the development of Waisda? and making the collected data available.  ... 
doi:10.1145/2187980.2187998 dblp:conf/www/Gligorov12 fatcat:odsdd7wrbjfadi6pfyr6ptseem

Hear the World's Sounds: Locality as Metadata in Two Music Platforms

Michael Audette-Longo
2017 Imaginations: Journal of Cross-Cultural Media Studies  
The platform provides users with the option to embed one hashtag in the audio player.  ...  The visualization of waveform typical in Soundcloud's audio player can be seen in the below screengrab of the track "Pools of Iris" associated with the account of the Ottawa-based independent electro-pop  ... 
doi:10.17742/image.ld.8.2.7 fatcat:7ky5oelyijbxbiiolhvbjacgp4

Backend Infrastructure Supporting Audio Augmented Reality and Storytelling [chapter]

Kari Salo, Diana Giova, Tommi Mikkonen
2016 Lecture Notes in Computer Science  
We have successfully implemented ADAM system and evaluated it in the Museum of Technology in Helsinki, Finland.  ...  In augmented reality (AR) and interactive digital storytelling (IDS) systems, visual presentation has been dominant. In contrast to this trend, we have chosen to concentrate on auditory presentation.  ...  We thank Outi Putkonen and Riina Linna from the Museum of Technology for all the support.  ... 
doi:10.1007/978-3-319-40397-7_31 fatcat:uqrezpg3ybe5jgnac34tinei6i

Visual Collaging of Music in a Digital Library

David Bainbridge, Sally Jo Cunningham, J. Stephen Downie
2004 Zenodo  
On the opposite end of the spectrum, the music collaging technique could equally run as a Personal Digital Assistant (PDA) enhancement for users to access their MP3 collections.  ...  We also envision implementations of the music collaging system that are presented on large, touch-sensitive screens located in record stores and music libraries.  ...  The use of visual surrogates for audio files in a collage also seems appropriate given evidence that appearance plays a significant role in the organization and display of personal music collections-the  ... 
doi:10.5281/zenodo.1415832 fatcat:h7ut3vxjgbhmve2wcoslfnjgva