
Predicting Search User Examination with Visual Saliency

Yiqun Liu, Zeyang Liu, Ke Zhou, Meng Wang, Huanbo Luan, Chao Wang, Min Zhang, Shaoping Ma
2016 Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '16
Visual saliency, which is designed to measure the likelihood of a given area to attract human visual attention, is used to predict users' attention distribution on heterogeneous search components.  ...  Predicting users' examination of search results is one of the key concerns in Web search related studies.  ...  to predict user examination with visual saliency information on SERPs.  ...
doi:10.1145/2911451.2911517 dblp:conf/sigir/LiuLZWLWZM16 fatcat:6t6zqhf6sbd3tcmsrcq4kayxcq

Mobile Interface Attentional Priority Model

Jeremiah D. Still, John M. Hicks
2020 SN Computer Science  
We examined the predictive performance of a saliency model, compared to the mobile-specific convention map.  ...  Broadly, searches are guided by a combination of visual salience and previous experiences.  ...  Conclusion: User searches are guided by a combination of visual salience and previous experiences. Our goals were to help make the implicit influences visible to designers.  ...
doi:10.1007/s42979-020-00166-3 fatcat:rcpkwuh4azerxdeqntt2eidjwe

Investigating Examination Behavior of Image Search Users

Xiaohui Xie, Yiqun Liu, Xiaochuan Wang, Meng Wang, Zhijing Wu, Yingying Wu, Min Zhang, Shaoping Ma
2017 Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '17  
We predict users' examination behavior with different impact factors. Results show that combining position and visual content features can improve prediction in image searches.  ...  , the content of image results (e.g., visual saliency) affects examination behavior, and (3) some popular behavior assumptions in general Web search (e.g., examination hypothesis) do not hold in image search  ...  In [18], visual saliency features are demonstrated to significantly improve the success of examination prediction.  ...
doi:10.1145/3077136.3080799 dblp:conf/sigir/XieLWWWWZM17 fatcat:vakyaklybbfobazkly52z7xsai

Inferring Searcher Attention by Jointly Modeling User Interactions and Content Salience

Dmitry Lagun, Eugene Agichtein
2015 Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '15  
To our knowledge, our model is the first to effectively combine user interaction data with visual prominence, or salience, of the page content elements.  ...  This problem is exacerbated when moving beyond the traditional search result pages to other domains, where high diversity of content and visual presentation often affect how users examine a page.  ...  In addition, we aim to combine Web page content salience with user interactions on the page. More recently, reference [35] predicted aggregated salience of Web pages based on visual content alone.  ... 
doi:10.1145/2766462.2767745 dblp:conf/sigir/LagunA15 fatcat:d5qqtmfssnb5jitsw6h4dkxkya

Looking Into Saliency Model via Space-Time Visualization

Haoran Liang, Ronghua Liang, Guodao Sun
2016 IEEE Transactions on Multimedia
By using a space-time cube visualization in combination with clustering, the dynamic stimuli and associated eye gazes as well as the attention maps from saliency models can be analyzed in a static three-dimensional  ...  We introduce a visual analytics method to analyze eye-tracking data and saliency models for dynamic stimuli, such as video or animated graphics.  ...  Users could click any single frame to examine the detailed difference between the predicted saliency map and the ground truth.  ...
doi:10.1109/tmm.2016.2613681 fatcat:pnxdxc52ynbpfh5uyw65kca2ey

Feature congestion

Ruth Rosenholtz, Yuanzhen Li, Jonathan Mansfield, Zhenlan Jin
2005 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '05
Management of clutter is an important factor in the design of user interfaces and information visualizations, allowing improved usability and aesthetics.  ...  This measure is based upon extensive modeling of the saliency of elements of a display, and upon a new operational definition of clutter.  ...  Researchers have developed a number of predictive models of visual search [30, 17, 12, 22].  ...
doi:10.1145/1054972.1055078 dblp:conf/chi/RosenholtzLMJ05 fatcat:abql43on7nccljl3qkp5x2dtuu

Memory for found targets interferes with subsequent performance in multiple-target visual search

Matthew S. Cain, Stephen R. Mitroff
2013 Journal of Experimental Psychology: Human Perception and Performance  
However, replacing found targets with random distractor items did not improve subsequent search accuracy.  ...  Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory.  ...  From a cognitive psychology perspective, multiple-target visual search provides a means to examine questions that single-target searches cannot address: Multiple-target searches are more intricate than  ... 
doi:10.1037/a0030726 pmid:23163788 fatcat:2o4imrfihzcnzck4ho7foultqy

One Explanation is Not Enough: Structured Attention Graphs for Image Classification [article]

Vivswan Shitole, Li Fuxin, Minsuk Kahng, Prasad Tadepalli, Alan Fern
2021 arXiv   pre-print
We propose an approach to compute SAGs and a visualization for SAGs so that deeper insight can be gained into a classifier's decisions.  ...  Our results show that the users are more correct when answering comparative counterfactual questions based on SAGs compared to the baselines.  ...  Figure 1: An image (a) predicted as Goldfinch with two saliency maps (b) and (c) obtained from different approaches as explanations for the classifier's (VGGNet [27]) prediction.  ...
arXiv:2011.06733v4 fatcat:4x4bnd7sijefbla7dmsea447dy

The role of highlighting in visual search through maps

Christopher Wickens, Amy Alexander, Marieke Martens, Michael Ambinder
2004 Spatial Vision  
These results are also discussed in conjunction with a computational model of the effects of discriminability and salience on performance in a cluttered display with variable intensity codings used to  ...  visually segregate different domains of information.  ...  , 2001), in a manner consistent with serial models of visual search.  ...
doi:10.1163/1568568041920195 pmid:15559110 fatcat:6sudnzkgv5awzdw2p7azfdrjam

Page 4622 of Psychological Abstracts Vol. 91, Issue 12 [page]

2004 Psychological Abstracts  
The visual saliency algorithm allows us to dynamically maintain a model of the evolving visual context.  ...  We present a collaborative recommender that uses a user-based model to predict user ratings for specified items.  ...

Do predictions of visual perception aid design?

Ruth Rosenholtz, Amal Dorai, Rosalind Freeman
2011 ACM Transactions on Applied Perception  
of the design of usable user interfaces and information visualizations.  ...  However, overall "goodness" values were not very useful, showed signs of interfering with a natural process of trading off perceptual vs. other design issues, and would likely interfere with acceptance  ...  In all cases, we provided users with an explanation of the visual behavior predicted by our saliency model and clutter measure.  ... 
doi:10.1145/1870076.1870080 fatcat:wv2fmrblrjb6tmdeijwop5oi64

Adaptive picture-in-picture technology based on visual saliency

Shijian Lu, Byung-Uck Kim, Nicolas Lomenie, Joo-Hwee Lim
2013 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference  
We propose an automatic and adaptive PiP technology that makes use of computational modeling of visual saliency.  ...  This process is, however, not user-friendly, as it involves a manual step, and once specified, the size and the location of the sub-program will be fixed even when they block some key visual information  ...  compression, visual search, etc.  ...
doi:10.1109/apsipa.2013.6694202 dblp:conf/apsipa/LuKLL13 fatcat:4za3hkkmjvc55f64qgq5oytx6a

Decisions about objects in real-world scenes are influenced by visual saliency before and during their inspection

Geoffrey Underwood, Katherine Humphrey, Editha van Loon
2011 Vision Research  
Evidence from eye-tracking experiments has provided mixed support for saliency map models of inspection, with the task set for the viewer accounting for some of the discrepancies between predictions and  ...  Given that the vehicles were invariably inspected, this may relate to the high incidence of "looked-but-failed-to-see" crashes involving motorcycles and to prevalence effects in visual search.  ...  can predict minimal effects of saliency.  ... 
doi:10.1016/j.visres.2011.07.020 pmid:21820003 fatcat:gh7wllql25eqdpn667lo37jpra

Understanding Visual Saliency in Mobile User Interfaces

Luis A. Leiva, Yunfei Xue, Avya Bansal, Hamed R. Tavakoli, Tuğçe Köroğlu, Jingzhou Du, Niraj R. Dayama, Antti Oulasvirta
2020 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services  
For graphical user interface (UI) design, it is important to understand what attracts visual attention.  ...  We also release the first annotated dataset for investigating visual saliency in mobile UIs.  ...  ACKNOWLEDGMENTS We thank Marko Repo for his help with UI element annotation and the anonymous referees for their feedback.  ... 
doi:10.1145/3379503.3403557 dblp:conf/mhci/LeivaXBTKDDO20 fatcat:g4nfomgleze2bcxqvzab6dt66a

Vignette: Perceptual Compression for Video Storage and Processing Systems [article]

Amrita Mazumdar, Brandon Haynes, Magdalena Balazinska, Luis Ceze, Alvin Cheung, Mark Oskin
2019 arXiv   pre-print
Vignette's saliency-based optimizations reduce storage by up to 95% with minimal quality loss, and Vignette videos lead to power savings of 50% on mobile phones during video playback.  ...  Vignette's compression technique uses a neural network to predict saliency information used during transcoding, and its storage manager integrates perceptual information into the video storage system to  ...  MLNet uses Keras with Theano [8, 52] to perform saliency prediction from video frames.  ... 
arXiv:1902.01372v1 fatcat:zf67dkfsjndxhjlybur2lidvcq
Showing results 1–15 of 8,920