3,816 Hits in 7.5 sec

Table of Contents

2020 IEEE transactions on multimedia  
Big Data Analytics on Multimedia Data and Crowd Sourcing for Multimedia Applications: Character-Oriented Video Summarization With Visual and Textual Cues  ...  Image/Video/Graphics Analysis and Synthesis: Variational Single Image Dehazing for Enhanced Visualization, F. Fang, T. Wang, Y. Wang, T. Zeng, and G.  ... 
doi:10.1109/tmm.2020.3020756 fatcat:low77squq5euveow7m3jjv75eq

Words Matter: Scene Text for Image Classification and Retrieval

Sezer Karaoglu, Ran Tao, Theo Gevers, Arnold W. M. Smeulders
2017 IEEE transactions on multimedia  
Combining the proposed textual and visual cues outperforms visual-only classification and retrieval by a large margin.  ...  Second, to extract the textual cues, a generic and fully unsupervised word box proposal method is introduced.  ...  Video captions (textual) are extensively used in combination with visual cues for video classification. An overview of these methods can be found in [28] .  ... 
doi:10.1109/tmm.2016.2638622 fatcat:5einurcv2vhxhfw2vvttca4xte

The use of video source in analogical problem solving in two experimental studies

Mongsong Goh, Ai-Girl Tan, William Choy
2012 Procedia - Social and Behavioral Sciences  
In experiment 2, 70 subjects watched a video and solved a social interaction problem under two conditions: with and without a cue to the video source analogue.  ...  In experiment 1, 70 subjects read a story (source analogue) and solved a social interaction problem under two conditions (with and without cues to the source analogue).  ...  Table 1 summarizes means and standard deviations of scores for experiment 1 in three conditions: without cue, with cue, and control.  ... 
doi:10.1016/j.sbspro.2011.12.108 fatcat:ce5mh542nfecfkq7bg7ogdxcum

Towards multimodal sentiment analysis

Louis-Philippe Morency, Rada Mihalcea, Payal Doshi
2011 Proceedings of the 13th international conference on multimodal interfaces - ICMI '11  
With more than 10,000 new videos posted online every day on social websites such as YouTube and Facebook, the internet is becoming an almost infinite source of information.  ...  This paper addresses the task of multimodal sentiment analysis, and conducts proof-of-concept experiments that demonstrate that a joint model that integrates visual, audio, and textual features can be  ...  Acknowledgments The authors are grateful to the three annotators who helped with the sentiment annotations.  ... 
doi:10.1145/2070481.2070509 dblp:conf/icmi/MorencyMD11 fatcat:aotezmrt2fgjbdbc7vr5djn3cm

Socially motivated multimedia topic timeline summarization

Mathilde Sahuguet, Benoit Huet
2013 Proceedings of the 2nd international workshop on Socially-aware multimedia - SAM '13  
Contrasting with traditional man-made topic summarization, which provides the personal view of its author, we want to focus on public reaction to events.  ...  Each event, relevant to the specified topic, is illustrated on a timeline by videos mined from social media sharing platforms that give context to the events and offer an overview of what has caught  ...  We aim at building a time-oriented visual summary of events, using videos to illustrate events along a timeline.  ... 
doi:10.1145/2509916.2509925 dblp:conf/mm/SahuguetH13 fatcat:wipwaymqvzejblaq4pjos5tt3e

SalAd: A Multimodal Approach for Contextual Video Advertising

Chen Xiang, Tam V. Nguyen, Mohan Kankanhalli
2015 2015 IEEE International Symposium on Multimedia (ISM)  
In this regard, our selected ads are contextually relevant to online video content in terms of both textual information and visual content.  ...  with online videos.  ...  The textual information is a drastic summarization of the video, and the visual content reflects the user's attention directly.  ... 
doi:10.1109/ism.2015.75 dblp:conf/ism/XiangNK15 fatcat:h24o2ws3n5hvtdof44zmbrgvia

Exploiting subclass information in one-class support vector machine for video summarization

Vasileios Mygdalis, Alexandros Iosifidis, Anastasios Tefas, Ioannis Pitas
2015 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
A user attention model is proposed in [10] , where visual, audio and textual features are extracted by applying multimodal analysis.  ...  A textual cue representation detecting text presence within the video frames and extracting the corresponding video segments is proposed in [7] .  ... 
doi:10.1109/icassp.2015.7178373 dblp:conf/icassp/MygdalisITP15 fatcat:wqasrl54vzbcxbsuz5mdae7d3e


Snehal S Gaikwad ., S. L. Nalbalwar .
2019 International Journal of Engineering Applied Sciences and Technology  
Multilingual character detection and recognition from video subtitles, scenes and documents is also receiving considerable attention on this subject.  ...  Various workshops and conferences are being organized at the global level, giving further rise to advances in the field of character detection and recognition.  ...  The experimental method performs well for character recognition, with a recall of 79%. The results can be improved by combining the textual and visual cues, which gives better results for logo retrieval [8] .  ... 
doi:10.33564/ijeast.2019.v04i03.062 fatcat:6tswkhkmwbcfnne6tk6z2jm6jm

Towards auto-documentary

Pinar Duygulu, Jia-Yu Pan, David A. Forsyth
2004 Proceedings of the 12th annual ACM international conference on Multimedia - MULTIMEDIA '04  
The proposed method exploits both visual cues and textual information to summarize evolving news stories.  ...  News videos constitute an important source of information for tracking and documenting important events.  ...  How do we make smart use of the multi-modal (visual and textual) information in video clips?  ... 
doi:10.1145/1027527.1027719 dblp:conf/mm/DuyguluPF04 fatcat:4ovas2xc5jexlf5hheriwscpf4

Towards Micro-video Understanding by Joint Sequential-Sparse Modeling

Meng Liu, Liqiang Nie, Meng Wang, Baoquan Chen
2017 Proceedings of the 2017 ACM on Multimedia Conference - MM '17  
Like the traditional long videos, micro-videos are the unity of textual, acoustic, and visual modalities. These modalities sequentially tell a real-life event from distinct angles.  ...  In the light of this, we have to characterize and jointly model the sparseness and multiple sequential structures for better micro-video understanding.  ...  Sequence in the Textual Modality. The textual descriptions of micro-videos, including user generated text and hashtags, can provide strong cues on micro-video venue estimation.  ... 
doi:10.1145/3123266.3123341 dblp:conf/mm/LiuNWC17 fatcat:al3o7oazqjaptmlu2djhps23ey


Jianglong Zhang, Liqiang Nie, Xiang Wang, Xiangnan He, Xianglin Huang, Tat Seng Chua
2016 Proceedings of the 2016 ACM on Multimedia Conference - MM '16  
In particular, we first crawl a representative set of micro-videos from Vine and extract a rich set of features from textual, visual and acoustic modalities.  ...  According to our statistics on over 2 million micro-videos, only 1.22% of them are associated with venue information, which greatly hinders the location-oriented applications and personalized services.  ...  videos by fusing the textual metadata and visual or acoustic cues [12, 8] .  ... 
doi:10.1145/2964284.2964307 dblp:conf/mm/ZhangNWHHC16 fatcat:bjevfopncjh2th5e24dn3drq5a

A Multimodal Scheme for Program Segmentation and Representation in Broadcast Video Streams

Jinqiao Wang, Lingyu Duan, Qingshan Liu, Hanqing Lu, J.S. Jin
2008 IEEE transactions on multimedia  
The scheme aims to recover the temporal and structural characteristics of TV programs with visual, auditory, and textual information.  ...  In terms of visual cues, we develop a novel concept named program-oriented informative images (POIM) to identify the candidate points correlated with the boundaries of individual programs.  ...  Different from the scene-based approaches [10] , our solution makes use of program-level broadcast video production knowledge, which is characterized by explicit structural information and richer program-oriented  ... 
doi:10.1109/tmm.2008.917362 fatcat:olc7ct3eyrftviv4a6ervidxx4

A theory of multiformat communication: mechanisms, dynamics, and strategies

Jordan W. Moffett, Judith Anne Garretson Folse, Robert W. Palmatier
2020 Journal of the Academy of Marketing Science  
(e.g., face-to-face, email) rather than digital or characteristic-level (e.g., visual cues, synchronicity) design decisions.  ...  to identify any gaps (e.g., AI agents, simulated cues).  ...  ., when coupled with proximal and visual cues), verbal cues can indicate competence (e.g., knowledge, skills) and problem-solving orientation (e.g., engaged, proactive), as well as compassion (e.g., empathy  ... 
doi:10.1007/s11747-020-00750-2 pmid:33199929 pmcid:PMC7658432 fatcat:z2ltc2wrencwflwx2hg5wb5wq4

Text or Pictures? An Eyetracking Study of How People View Digital Video Surrogates [chapter]

Anthony Hughes, Todd Wilkens, Barbara M. Wildemuth, Gary Marchionini
2003 Lecture Notes in Computer Science  
This study reports on an investigation of digital video results pages that use textual and visual surrogates.  ...  One important user-oriented facet of digital video retrieval research involves how to abstract and display digital video surrogates.  ...  Many claims have been made about the value of non-textual cues in supporting video retrieval.  ... 
doi:10.1007/3-540-45113-7_27 fatcat:pwxxd6nbmvfrtjejstl4f7yjju

Arousal, Mood, and The Mozart Effect

William Forde Thompson, E. Glenn Schellenberg, Gabriela Husain
2001 Psychological Science  
doi:10.1111/1467-9280.00345 pmid:11437309 fatcat:6lyeeiaxvngd5nyskhqa7bwyge
Showing results 1 — 15 out of 3,816 results