533,423 Hits in 4.6 sec

Visual Interfaces Designed for Searching Text Content on Mobile Devices

Todd Welch, Gregory Short, Beomjin Kim
2015 International Journal of Knowledge Engineering  
We present visualization techniques designed for presenting lengthy text-based documents on mobile devices. The system uses two different visualizations, Overview and Detail.  ...  A pilot experiment was conducted to evaluate the effectiveness of these visual interfaces compared to a traditional text-based interface.  ...  Thus, for this study, we chose mid-size text-based documents that already had an index and a table of contents, in particular: student handbooks.  ...
doi:10.18178/ijke.2015.1.3.040 fatcat:vh3teh2pgrgvtaz6jlqmqkpz7e

Mining Text and Visual Links to Browse TV Programs in a Web-Like Way

Xin Fan, Hisashi Miyamori, Katsumi Tanaka, Mingjing Li
2006 2006 IEEE International Conference on Multimedia and Expo  
In this paper, we use both text information from closed captions and visual information from video frames to generate links to enable one to explore not only the original video content but also augmented  ...  As the amount of recorded TV content is increasing rapidly, people need active and interactive browsing methods.  ...  Creation of text-based links: In this text-based processing step, a complementary information retrieval method [6] is used to find related Web content and similar topics in the TV programs.  ...
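As an illustration of the kind of text-based link creation described in this snippet, the sketch below links one closed-caption segment to its most similar segment via TF-IDF cosine similarity. It is only a minimal sketch of the general idea; the segment texts, tokenization, and scoring are assumptions, not the authors' complementary retrieval method [6].

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized documents."""
    df = Counter(term for doc in docs for term in set(doc))
    n = len(docs)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical closed-caption text for three TV program segments.
caption_segments = [
    "volcano eruption lava ash cloud evacuation".split(),
    "election results vote count parliament".split(),
    "volcanic activity ash flights cancelled".split(),
]
vecs = tfidf_vectors(caption_segments)
# Link segment 0 to its most similar other segment (a candidate "text link").
scores = [(cosine(vecs[0], v), i) for i, v in enumerate(vecs) if i != 0]
print(max(scores))  # highest-scoring related segment
```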
doi:10.1109/icme.2006.262519 dblp:conf/icmcs/FanMTL06 fatcat:ckznbvplmzat5i755uj42sh5ru

Mining Text and Visual Links to Browse TV Programs in a Web-Like Way

X. FAN, H. MIYAMORI, K. TANAKA, M. LI
2007 IEICE transactions on information and systems  
In this paper, we use both text information from closed captions and visual information from video frames to generate links to enable one to explore not only the original video content but also augmented  ...  As the amount of recorded TV content is increasing rapidly, people need active and interactive browsing methods.  ...  Creation of text-based links: In this text-based processing step, a complementary information retrieval method [6] is used to find related Web content and similar topics in the TV programs.  ...
doi:10.1093/ietisy/e90-d.8.1304 fatcat:ikmxt4fklvfg5nrgj2tx4ozpke

Video Browser Showdown by NUS [chapter]

Jin Yuan, Huanbo Luan, Dejun Hou, Han Zhang, Yan-Tao Zheng, Zheng-Jun Zha, Tat-Seng Chua
2012 Lecture Notes in Computer Science  
Our system integrates visual content-based, text-based and concept-based search approaches. It allows users to flexibly choose the search approaches.  ...  Moreover, two novel feedback schemes are employed: first, users can specify the temporal order in visual and conceptual inputs; second, users can label related samples with respect to visual, textual and  ...  Related Sample-Based Search While browsing the search results in the interface, users can label "related samples" with respect to three features (visual content, text, or concept).  ... 
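The combination of visual content-based, text-based, and concept-based search could, in the simplest case, be realized as a late fusion of the three ranked lists. The sketch below uses weighted reciprocal-rank fusion with made-up shot IDs and weights; it is an assumption about one plausible scheme, not the NUS system's actual implementation.

```python
def fuse_rankings(ranked_lists, weights):
    """Weighted reciprocal-rank fusion of several ranked lists of shot IDs."""
    scores = {}
    for ranking, w in zip(ranked_lists, weights):
        for rank, shot in enumerate(ranking, start=1):
            scores[shot] = scores.get(shot, 0.0) + w / rank
    return sorted(scores, key=scores.get, reverse=True)

# Made-up shot IDs returned by each of the three search approaches.
visual  = ["shot_12", "shot_7", "shot_3"]   # visual content-based results
textual = ["shot_7", "shot_9", "shot_12"]   # text-based results
concept = ["shot_3", "shot_7", "shot_1"]    # concept-based results

print(fuse_rankings([visual, textual, concept], weights=[0.4, 0.4, 0.2]))
```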
doi:10.1007/978-3-642-27355-1_64 fatcat:vooehnaxkbh2je5evo5ploxnoy

Location Prediction of Social Images via Generative Model

Xiaoming Zhang, Zhoujun Li, Senzhang Wang, Yang Yang, Xueqiang Lv
2015 Proceedings of the 5th ACM on International Conference on Multimedia Retrieval - ICMR '15  
topic is modeled on both the text vocabulary and visual features.  ...  Most existing research uses text-based or vision-based methods to predict location.  ...  Usually, the text content and the image's visual content are highly correlated [19], and they also relate to the location.  ...
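In the spirit of the generative formulation sketched in this snippet, a candidate location can be scored by how well it explains both the post's words and its quantized visual words. The per-location distributions below are toy numbers, and the sketch omits the latent-topic layer of the actual model.

```python
import math

# Toy per-location distributions over text words and quantized visual words.
locations = {
    "paris":  {"text": {"tower": 0.4, "cafe": 0.3, "beach": 0.05},
               "visual": {"vw_iron": 0.5, "vw_sand": 0.1}},
    "hawaii": {"text": {"tower": 0.05, "cafe": 0.1, "beach": 0.5},
               "visual": {"vw_iron": 0.1, "vw_sand": 0.6}},
}

def log_score(loc, words, visual_words, smooth=1e-3):
    """Log-likelihood of a post's text and visual words under one location model."""
    model = locations[loc]
    s = sum(math.log(model["text"].get(w, smooth)) for w in words)
    s += sum(math.log(model["visual"].get(v, smooth)) for v in visual_words)
    return s

post_words = ["tower", "cafe"]
post_visual = ["vw_iron"]
best = max(locations, key=lambda loc: log_score(loc, post_words, post_visual))
print(best)  # -> "paris" for this toy example
```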
doi:10.1145/2671188.2749308 dblp:conf/mir/ZhangLWYL15 fatcat:j2rmsmgfyzcchegs6i55ws6j7m

Multimodal Sentimental Analysis for Tweets

T. Nandhini, S. Nivetha, R. Pavithra, Dr.S.T. Veena
2020 International Journal of Recent Trends in Engineering and Research  
To tackle the challenge of how to effectively exploit the information from both visual content and textual content from image-text posts.  ...  Social media users are increasingly using both images and text to express their opinions and share their experiences, instead of only using text in the conventional social media.  ...  To tackle the challenge of analyzing both text content and visual content in image-text posts, a text-image consistency driven multimodal sentiment analysis approach is proposed in this paper.  ... 
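One minimal way to realize a consistency-driven fusion is to weight the joint prediction by how much the text and image sentiment scores agree, falling back toward the text signal when they conflict. The scores, thresholds, and heuristic below are illustrative assumptions, not the paper's model.

```python
def fuse_sentiment(text_score, image_score):
    """
    Combine text and image sentiment scores in [-1, 1]: when the modalities
    agree, trust their average; when they conflict, lean on the text score.
    (Illustrative heuristic, not the paper's consistency model.)
    """
    consistency = 1.0 - abs(text_score - image_score) / 2.0  # 1 = agree, 0 = oppose
    fused = consistency * (text_score + image_score) / 2.0 + (1.0 - consistency) * text_score
    if fused > 0.1:
        return "positive", fused
    if fused < -0.1:
        return "negative", fused
    return "neutral", fused

print(fuse_sentiment(0.8, 0.6))   # agreeing modalities -> positive
print(fuse_sentiment(0.7, -0.6))  # conflicting -> result leans on the text score
```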
doi:10.23883/ijrter.conf.20200315.031.nlm7o fatcat:y63ai7p2yfehpon2aeb7ypo4aa

Retrieving biomedical images through content-based learning from examples using fine granularity

Hao Jiang, Songhua Xu, Francis C. M. Lau, William W. Boonn, Brent J. Liu
2012 Medical Imaging 2012: Advanced PACS-based Imaging Informatics and Therapeutic Applications  
Such variability in image visual composition poses great challenges to content-based image retrieval methods that operate at the granularity of entire images.  ...  In this study, we explore a new content-based image retrieval algorithm that mines visual patterns of finer granularities inside a whole image to identify visual instances which can more reliably and generically  ...  In the image search phase, we also save the text content of a source webpage that contains a reference image in G(Q).  ... 
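A rough sketch of retrieval at finer-than-image granularity: each database image is represented by several patch descriptors, and a query patch ranks images by their best-matching patch rather than by a whole-image descriptor. The random vectors stand in for real visual features, and the matching rule is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: each image is described by several patch descriptors (rows).
database = {f"img_{i}": rng.normal(size=(5, 64)) for i in range(10)}

def best_patch_distance(query_patch, patches):
    """Distance between a query patch and the closest patch of one image."""
    return float(np.min(np.linalg.norm(patches - query_patch, axis=1)))

def retrieve(query_patch, top_k=3):
    """Rank images by their best-matching patch (finer granularity than whole images)."""
    scored = sorted(database.items(),
                    key=lambda kv: best_patch_distance(query_patch, kv[1]))
    return [name for name, _ in scored[:top_k]]

query = rng.normal(size=64)
print(retrieve(query))
```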
doi:10.1117/12.913765 fatcat:uhaadhvombhwncgpiyknzvxazy

The Teaching Website Design Based on Visual Communication

LinLin Nong
2017 DEStech Transactions on Engineering and Technology Research  
Using visual communication design theory as a guide and combining it with the specific teaching content results in a creative design that is intuitively demonstrated, making the teaching content of the website  ...  Visual communication design in teaching websites is an art of two-dimensional space, based on the principle of conveying teaching information effectively and built on network and multimedia technology,  ...  From the perspective of visual communication, each specific page is analyzed based on the principle of overall design: the balance of primary and secondary elements, the arrangement of colors, text  ...
doi:10.12783/dtetr/mcee2016/6415 fatcat:kyx44ax2rvaonenfdiy3s7po3u

Brand Data Gathering From Live Social Media Streams

Yue Gao, Fanglin Wang, Huanbo Luan, Tat-Seng Chua
2014 Proceedings of International Conference on Multimedia Retrieval - ICMR '14  
as visual content, as an increasing number of social media posts are in multimedia form.  ...  To address these problems, we propose a multi-faceted brand tracking method that gathers relevant data based on not just evolving keywords, but also social factors (users, relations and locations) as well  ...  In our experiments, there are three data sources in total, i.e., the text-based results M_t, the social context-based results M_c, and the visual content-based results M_v.  ...
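The three result pools named in the snippet (text-based M_t, social context-based M_c, visual content-based M_v) could be merged by a simple weighted score fusion, as in the sketch below; the post IDs, scores, and weights are placeholders, not the paper's fusion method.

```python
def fuse_pools(pools, weights):
    """Merge several {post_id: relevance} result pools into one ranked list."""
    merged = {}
    for pool, w in zip(pools, weights):
        for post_id, score in pool.items():
            merged[post_id] = merged.get(post_id, 0.0) + w * score
    return sorted(merged, key=merged.get, reverse=True)

# Placeholder relevance scores for each retrieval facet.
M_t = {"post_1": 0.9, "post_2": 0.4}   # text-based results
M_c = {"post_2": 0.8, "post_3": 0.7}   # social context-based results
M_v = {"post_1": 0.3, "post_3": 0.9}   # visual content-based results

print(fuse_pools([M_t, M_c, M_v], weights=[0.5, 0.2, 0.3]))
```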
doi:10.1145/2578726.2578748 dblp:conf/mir/GaoWLC14 fatcat:dxsxkn7cnbdljpsitgazjdygnm

Content-enriched classifier for web video classification

Bin Cui, Ce Zhang, Gao Cong
2010 Proceeding of the 33rd international ACM SIGIR conference on Research and development in information retrieval - SIGIR '10  
Previous work shows that, in addition to text features, content features of videos are also useful for Web video classification.  ...  The main idea of our approach is to utilize the content features extracted from training data to enrich the text-based semantic kernels, yielding content-enriched semantic kernels.  ...  Related Work on Text Classification: Since text-based classifiers play significant roles in video classification [26, 4], we also introduce related work on text classification.  ...
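The kernel-enrichment idea can be illustrated as a convex combination of a text kernel and a content-feature kernel fed to a single kernel classifier. The linear kernels, random features, and mixing weight below are placeholders; the paper's construction of the semantic kernels themselves is more involved.

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def enriched_kernel(X_text, Y_text, X_content, Y_content, alpha=0.3):
    """Content-enriched kernel: mix a text kernel with a content-feature kernel."""
    return (1.0 - alpha) * linear_kernel(X_text, Y_text) \
           + alpha * linear_kernel(X_content, Y_content)

# Placeholder random features standing in for text and video-content representations.
rng = np.random.default_rng(1)
X_text, X_content = rng.normal(size=(6, 100)), rng.normal(size=(6, 20))
K = enriched_kernel(X_text, X_text, X_content, X_content)
print(K.shape)  # (6, 6) Gram matrix, usable by any kernel classifier such as an SVM
```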
doi:10.1145/1835449.1835553 dblp:conf/sigir/CuiZC10 fatcat:ol573uak7fc37go5kfcjlgj6se

Visual Sentiment Analysis on Social Media Data

Harshala Bhoir, K. Jayamalini
2021 International Journal of Scientific Research in Computer Science Engineering and Information Technology  
The proposed system will extract and employ an objective text description of images, automatically extracted from the visual content, rather than the classic subjective text provided by the user.  ...  The proposed system will extract three views of a social media image (a visual view, a subjective text view and an objective text view) and will assign a sentiment polarity of positive, negative or neutral based on hypothesis  ...  The objective text is automatically extracted from the visual content rather than being the classic subjective text provided by the users, and the visual view of the image is based on hypothesis tags for different  ...
doi:10.32628/cseit2174101 fatcat:6kenjwhwtnez7gbjhidjmkehly

Modal Keywords, Ontologies, and Reasoning for Video Understanding [chapter]

Alejandro Jaimes, Belle L. Tseng, John R. Smith
2003 Lecture Notes in Computer Science  
Our framework consists of an expert system that uses a rule-based engine, domain knowledge, visual detectors (for objects and scenes), and metadata (text from automatic speech recognition, related text  ...  These operation results are used in our system by a rule-based engine that uses context information (e.g., text from speech) to enhance visual detection results.  ...  based on ASR text.  ... 
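As a toy illustration of a rule-based engine that combines visual detectors with ASR text, a rule can require both a detector firing above a threshold and a supporting keyword in the speech transcript. The detector names, keywords, and threshold are hypothetical, not taken from the paper's framework.

```python
def annotate_shot(detector_scores, asr_text, threshold=0.6):
    """
    Toy rule engine: label a shot when a visual detector fires above threshold
    AND the ASR transcript contains a supporting keyword for that label.
    """
    # Hypothetical rules: label -> (required visual detector, supporting ASR keywords).
    rules = {
        "sports_highlight": ("crowd", {"goal", "score", "match"}),
        "weather_report":   ("map",   {"forecast", "temperature", "rain"}),
    }
    tokens = set(asr_text.lower().split())
    labels = []
    for label, (detector, keywords) in rules.items():
        if detector_scores.get(detector, 0.0) >= threshold and tokens & keywords:
            labels.append(label)
    return labels

print(annotate_shot({"crowd": 0.8, "map": 0.2},
                    "and what a goal that was in the final minute"))
```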
doi:10.1007/3-540-45113-7_25 fatcat:yup2ep6gfbahfadczhhsftcrsm

Recovering semantic relations from web pages based on visual cues

Peifeng Xiang, Yuanchun Shi
2006 Proceedings of the 11th international conference on Intelligent user interfaces - IUI '06  
This paper presents a visual-cues-based approach, which is independent of the tag-tree structure, to automatically detect such semantic relations in web pages.  ...  Compared with other existing techniques, such as DOM-based methods, this approach mostly depends on the interface's perceptible visual information, which is more reliable.  ...  It is driven by two underlying principles. Vision based: users understand semantic relations well based on their visual perception.  ...
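A minimal illustration of the vision-based principle: instead of walking the DOM tree, group rendered elements whose bounding boxes are vertically close on screen. The boxes, coordinate convention (y grows downward), and gap threshold are invented for the example.

```python
def group_by_proximity(boxes, max_gap=20):
    """
    Group rendered elements (x, y, width, height) whose boxes are vertically
    close, approximating 'these look related' without touching the tag tree.
    """
    ordered = sorted(boxes, key=lambda b: b[1])      # sort by top edge
    groups, current = [], [ordered[0]]
    for box in ordered[1:]:
        prev = current[-1]
        if box[1] - (prev[1] + prev[3]) <= max_gap:  # small vertical gap -> same group
            current.append(box)
        else:
            groups.append(current)
            current = [box]
    groups.append(current)
    return groups

# Hypothetical layout: a headline + caption pair, then a distant footer.
elements = [(10, 10, 200, 30), (10, 45, 200, 20), (10, 400, 200, 20)]
print(len(group_by_proximity(elements)))  # -> 2 visually coherent groups
```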
doi:10.1145/1111449.1111531 dblp:conf/iui/XiangS06 fatcat:3jsjbj6b7rdynhhiurh34k3a3m

SAVE: A framework for semantic annotation of visual events

Mun Wai Lee, Asaad Hakeem, Niels Haering, Song-Chun Zhu
2008 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops  
This is an enabling technology for content-based video annotation, query and retrieval with applications in Internet video search and video data mining.  ...  In this paper we propose a framework that performs automatic semantic annotation of visual events (SAVE).  ...  The goal of content-based visual event retrieval is to allow queries based on specific events and event attributes in the video.  ... 
doi:10.1109/cvprw.2008.4562954 dblp:conf/cvpr/LeeHHZ08 fatcat:63q64oofnnggtbvxlgmu6eutum

What can visual content analysis do for text based image search?

Gang Hua, Qi Tian
2009 2009 IEEE International Conference on Multimedia and Expo  
To search for images, users type in a text query and the search engines rank the result images almost solely based on the text meta-words.  ...  Recently, we have observed several new features released in the aforementioned image search engines, especially Microsoft Live image search, which are clearly based on analysis of the visual content  ...  CONCLUSION: In this paper, we present an extensive review of what and how visual content analysis technologies can help improve the quality of modern text-based image search engines.  ...
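One common way visual content analysis can assist text-based image search is visual reranking: the top text-ranked results are treated as pseudo-relevant and every result is re-scored by visual similarity to their centroid. The image IDs and random feature vectors below are placeholders, and the scheme is a generic sketch rather than any specific engine's feature.

```python
import numpy as np

rng = np.random.default_rng(2)

def visual_rerank(text_ranked_ids, features, num_pseudo=3):
    """Re-rank text search results by visual similarity to the top results' centroid."""
    centroid = np.mean([features[i] for i in text_ranked_ids[:num_pseudo]], axis=0)
    def similarity(i):
        f = features[i]
        return float(f @ centroid / (np.linalg.norm(f) * np.linalg.norm(centroid)))
    return sorted(text_ranked_ids, key=similarity, reverse=True)

image_ids = [f"img_{i}" for i in range(8)]               # text-based ranking order
features = {i: rng.normal(size=128) for i in image_ids}  # stand-in visual features
print(visual_rerank(image_ids, features))
```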
doi:10.1109/icme.2009.5202783 dblp:conf/icmcs/HuaT09 fatcat:ciy6kr3o5zbgvakjcbjuvyjbze
Showing results 1 — 15 out of 533,423 results