Information Mining from Multimedia Databases

Ling Guan, Horace HS Ip, Paul H Lewis, Hau San Wong, Paisarn Muneesawang
2006, EURASIP Journal on Advances in Signal Processing
Welcome to the special issue on "Information mining from multimedia databases." The main focus of this issue is on information mining techniques for the extraction and interpretation of semantic content in multimedia databases. Advances in multimedia production technologies have resulted in a rapid proliferation of various forms of media data on the Internet. Given these high volumes of multimedia data, it is essential to extract and interpret their underlying semantic content from the original signal-based representations without the need for extensive user interaction, and multimedia information mining plays an important role in this automatic content interpretation process.

Due to the spatio-temporal nature of most multimedia data streams, an important requirement for this information mining process is the accurate extraction and characterization of salient events from the original signal-based representation, and the discovery of possible relationships between these events in the form of high-level association rules. The availability of these high-level representations will play an important role in applications such as content-based multimedia information retrieval, preservation of cultural heritage, surveillance, and automatic image/video annotation. For these problems, the main challenges are the design and analysis of mapping techniques between the signal-level and semantic-level representations, and the adaptive characterization of the notion of saliency for multimedia events, in view of its dependence on the preferences of individual users and on specific contexts.

The focus of the first two papers is on the automatic analysis and interpretation of video content. X.-P. Zhang and Chen describe a new approach to extracting objects from video sequences based on spatio-temporal independent component analysis and multiscale analysis. Specifically, spatio-temporal independent component analysis is first performed to identify a set of preliminary source images containing moving objects. These data are then further processed using wavelet-based multiscale analysis to improve the accuracy of video object extraction (a minimal sketch of this two-stage idea appears after this overview). Liu et al. propose a new approach for performing semantic analysis and annotation of basketball video. The technique is based on the extraction and analysis of multimodal features, including visual, motion, and audio information. These features are first combined to form a low-level representation of the video sequence. Based on this representation, the authors then utilize domain information to detect interesting events in the basketball video, such as a player making a successful shot at the basket or a penalty being imposed for a rule violation.

The topic of the next two papers is video analysis in the compressed domain. Hesseler and Eickeler propose a set of algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Based on the extracted motion vector field, these algorithms can infer the camera motion, perform motion detection within a limited region of interest for the purpose of object tracking, and perform cut detection (see the second sketch below). In the next paper, Fonseca and Nesvadba introduce a new technique for face detection and tracking in the compressed domain. In particular, face detection is performed using DCT coefficients only, and motion information is extracted from the forward and backward motion vectors. The low computational requirement of the proposed technique facilitates its adoption on mobile platforms.

The next two papers describe new information mining techniques based on the extraction and characterization of audio features. Radhakrishnan et al. propose a content-adaptive representation framework for event discovery using audio features from "unscripted" multimedia such as sports and surveillance data (the third sketch below gives a generic illustration of this outlier-based view of event discovery). Based on the assumption that interesting events occur infrequently in a background of uninteresting events, the audio sequence is regarded as a time series,
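The following is a minimal sketch of the kind of two-stage pipeline summarized for Zhang and Chen: an ICA decomposition over video frames followed by a wavelet cleanup of a selected source image. It is not the authors' algorithm; FastICA, the db2 wavelet, the kurtosis-based component ranking, and all thresholds are illustrative assumptions.

import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def extract_moving_object_map(frames, n_components=5, wavelet="db2"):
    """Rough two-stage sketch: ICA over frames, then wavelet denoising.
    frames: float array of shape (T, H, W); all parameter choices are
    illustrative, not taken from the paper."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)

    # Stage 1: treat each frame as a mixture of independent source images;
    # FastICA's components_ then live in pixel space (one image per row).
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(X)
    sources = ica.components_.reshape(n_components, H, W)

    # Heuristic: a compact moving object yields a spatially "spiky"
    # (high-kurtosis) source image; pick the spikiest component.
    best = max(range(n_components), key=lambda k: kurtosis(sources[k].ravel()))
    obj = sources[best]

    # Stage 2: crude multiscale cleanup; soft-threshold the detail
    # coefficients of a 2-level wavelet decomposition.
    coeffs = pywt.wavedec2(obj, wavelet, level=2)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # MAD noise estimate
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, 3 * sigma, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)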
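In the same spirit, here is one generic way a decoded MPEG-2 motion vector field can serve the three tasks Hesseler and Eickeler target: camera motion estimation, region-of-interest motion detection, and cut detection. The vector-field layout, the median-based global motion estimate, and the intra-block cut heuristic are assumptions for illustration, not the paper's method.

import numpy as np

def camera_pan(mv):
    """mv: (rows, cols, 2) per-macroblock motion vectors (assumed layout).
    A component-wise median is robust to moving foreground objects, so it
    serves as a crude global (camera) motion estimate."""
    return np.median(mv.reshape(-1, 2), axis=0)

def roi_has_motion(mv, roi, thresh=2.0):
    """Motion detection inside a region of interest after compensating
    for camera motion; roi = (r0, r1, c0, c1) in macroblock units."""
    r0, r1, c0, c1 = roi
    residual = mv[r0:r1, c0:c1] - camera_pan(mv)
    return float(np.linalg.norm(residual, axis=-1).mean()) > thresh

def looks_like_cut(intra_fraction, prev_intra_fraction, jump=0.5):
    """Cut heuristic: a sudden jump in the fraction of intra-coded
    macroblocks (motion compensation failing) suggests a shot boundary."""
    return intra_fraction - prev_intra_fraction > jump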
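Finally, the Radhakrishnan et al. summary frames audio event discovery as finding rare foreground events against a common background in a time series. One generic, much simpler way to operationalize that assumption is to model per-window audio features with a background density and flag low-likelihood windows as candidate events; the Gaussian mixture model and the 5% outlier quantile below are illustrative choices, not the paper's framework.

import numpy as np
from sklearn.mixture import GaussianMixture

def discover_rare_events(features, outlier_frac=0.05, n_components=2):
    """features: (n_windows, d) array of per-window audio features
    (e.g., MFCC means); returns indices of candidate "interesting" windows.

    Since uninteresting background dominates by assumption, a density
    model fit on all windows acts as a background model, and the rare
    events surface as its low-likelihood outliers."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(features)
    loglik = gmm.score_samples(features)
    cutoff = np.quantile(loglik, outlier_frac)
    return np.flatnonzero(loglik < cutoff)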
doi:10.1155/asp/2006/49073