4,011 Hits in 4.3 sec

Eliciting Perceptual Ground Truth for Image Segmentation [chapter]

Victoria Hodge, Garry Hollier, John Eakins, Jim Austin
2006 Lecture Notes in Computer Science  
In this paper, we investigate human visual perception and establish a body of ground truth data elicited from human visual studies.  ...  These rankings were then used to score the perceptions, identify the preferred human breakdowns and thus allow us to induce perceptual rules for human decomposition of figurative images.  ...  These ground truth images may be further analysed to elicit statistics and preference scores regarding the decomposition preferences of humans: i.e., which decomposition is generally preferred for each  ... 
doi:10.1007/11788034_33 fatcat:75xfwvv7x5f55abg4d5ytvqzbu

Boundary Detection Benchmarking: Beyond F-Measures

Xiaodi Hou, Alan Yuille, Christof Koch
2013 IEEE Conference on Computer Vision and Pattern Recognition  
For an ill-posed problem like boundary detection, human labeled datasets play a critical role.  ...  Finally, we assess the performances of 9 major algorithms on different ways of utilizing the dataset, suggesting new directions for improvements.  ...  Acknowledgments The first author would like to thank Zhuowen Tu, Yin Li and Liwei Wang for their thoughtful discussions.  ... 
doi:10.1109/cvpr.2013.276 dblp:conf/cvpr/HouYK13 fatcat:dg72w36haran3icn3h3g55pyp4
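The F-measure that this entry's title refers to is the standard weighted harmonic mean of precision and recall used in boundary-detection benchmarks. A minimal sketch of the general F-beta form (the function name and example values are illustrative, not taken from the paper):

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall (F-beta score).

    beta > 1 weights recall more heavily; beta < 1 weights precision.
    """
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: a boundary detector with precision 0.8 and recall 0.6
print(round(f_measure(0.8, 0.6), 4))  # 0.6857
```

With beta = 1 this is the familiar F1 score; benchmark papers sometimes report the best F-measure over a sweep of detector thresholds.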

Inducing a perceptual relevance shape classifier

Victoria J. Hodge, John Eakins, James Austin
2007 Proceedings of the 6th ACM international conference on Image and video retrieval - CIVR '07  
We previously investigated human visual perception of trademark images and established a body of ground truth data in the form of trademark images and their respective human segmentations.  ...  The work indicated that there is a core set of segmentations for each image that people perceive.  ...  This core set of segmentations forms the ground truth for our evaluations into inducing a perceptual relevance classifier.  ... 
doi:10.1145/1282280.1282306 dblp:conf/civr/HodgeEA07 fatcat:lo7av235qzhuvj77y7t7g6vzkm

OpenEDS2020: Open Eyes Dataset [article]

Cristina Palmero, Abhishek Sharma, Karsten Behrendt, Kapil Krishnakumar, Oleg V. Komogortsev, Sachin S. Talathi
2020 arXiv   pre-print
Intersection over union score of 84.1% for semantic segmentation.  ...  approaches; and 2) Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz, with up to 29,500 images, of which 5% contain a semantic segmentation label, devised to encourage the use of temporal  ...  Annotations: Ground truth 3D gaze vectors are provided for each eye image.  ... 
arXiv:2005.03876v1 fatcat:vzws3c3ddrfhvbfqgfwuob6f6q

Learning Data Augmentation for Brain Tumor Segmentation with Coarse-to-Fine Generative Adversarial Networks [article]

Tony C.W. Mok, Albert C.S. Chung
2018 arXiv   pre-print
Also, our proposed method successfully boosts a common segmentation network to reach the state-of-the-art performance on the BRATS15 Challenge.  ...  In our experiments, we show the efficacy of our approach on a Magnetic Resonance Imaging (MRI) image, achieving improvements of 3.5% Dice coefficient on the BRATS15 Challenge dataset as compared to traditional  ...  L_b refers to the mean-square-error loss for the boundary extraction task. x_{n,i} and y_{n,i} are the i-th pixel and ground truth in the n-th image used for training, respectively.  ... 
arXiv:1805.11291v2 fatcat:ikvianu6jzduxezi5yubsanyym
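The Dice coefficient cited in this entry is the standard overlap score for segmentation, defined as 2|A∩B| / (|A| + |B|); it is closely related to IoU but weights the intersection more generously. A minimal sketch on binary masks (the example masks are illustrative, not taken from BRATS15):

```python
def dice(pred, gt):
    """Dice coefficient 2|A∩B| / (|A|+|B|) for two binary masks.

    Masks are lists of rows of 0/1 values. Returns 1.0 when both masks
    are empty, by convention.
    """
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    total = sum(sum(row) for row in pred) + sum(sum(row) for row in gt)
    return 2 * inter / total if total else 1.0

pred = [[1, 1, 0],
        [0, 1, 0]]
gt   = [[1, 0, 0],
        [0, 1, 1]]
print(round(dice(pred, gt), 4))  # 2*2 / (3+3) = 0.6667
```

Reported "improvements of 3.5% Dice" in papers like this one typically mean an absolute increase in this score over a baseline.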

Super-Fine Attributes with Crowd Prototyping

Daniel Martinho-Corbishley, Mark Nixon, John N. Carter
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Crowd prototyping facilitates efficient crowdsourcing of super-fine labels by pre-discovering salient perceptual concepts for prototype matching.  ...  However, most works assume coarse, expertly-defined categories, ineffective in describing challenging images.  ...  They are essential for communicating eye-witness testimony in forensic investigations and when biological ground-truths are unknown.  ... 
doi:10.1109/tpami.2018.2836900 pmid:29994759 fatcat:z7cf52y4jrdmnl7fke5yvekshe

Empirical validation of directed functional connectivity [article]

Ravi D Mill, Anto Bagic, Walter Schneider, Michael W Cole
2016 bioRxiv   pre-print
Such simulations rely on many generative assumptions, and we hence utilized a different strategy involving empirical data in which a ground truth directed connectivity pattern could be anticipated with  ...  However, a host of methodological uncertainties have impeded the application of directed connectivity methods, which have primarily been validated via 'ground truth' connectivity patterns embedded in simulated  ...  The MEG IMAGES analyses also recovered the ground truth pattern.  ... 
doi:10.1101/070979 fatcat:2pxqvzyyjzfklg6zsqxlj57jtu

The Veiled Virgin illustrates visual segmentation of shape by cause

Flip Phillips, Roland W. Fleming
2020 Proceedings of the National Academy of Sciences of the United States of America  
Three-dimensional scans of the objects with and without the textile provided ground-truth measures of the true physical surface reliefs, against which observers' judgments could be compared.  ...  It is crucial for many tasks, from object recognition to tool use, and yet how the brain represents shape remains poorly understood.  ...  We further thank Tom Eckert of Arizona State University, Wanita Bates at Presentation Sisters, Betteanne Seabase, George Chakalos, Leah Kramberg, and Maria Muttergottes for their assistance and advice.  ... 
doi:10.1073/pnas.1917565117 pmid:32414926 fatcat:kzx5rh4rhbewzae2c7bjbabw7u

OpenEDS2020 Challenge on Gaze Tracking for VR: Dataset and Results

Cristina Palmero, Abhishek Sharma, Karsten Behrendt, Kapil Krishnakumar, Oleg V. Komogortsev, Sachin S. Talathi
2021 Sensors  
The dataset, which we make publicly available for the research community, consists of 87 subjects performing several gaze-elicited tasks, and is divided into 2 subsets, one for each competition task.  ...  , and (2) Sparse Temporal Semantic Segmentation Challenge, with the goal of using temporal information to propagate semantic eye labels to contiguous eye image frames.  ...  Acknowledgments: We gratefully thank Robert Cavin and Facebook Reality Labs Research for supporting this work.  ... 
doi:10.3390/s21144769 fatcat:dcehxnvvv5ax5ioch3wktjhi2m

Perceptive agents and systems in virtual reality

Demetri Terzopoulos
2003 Proceedings of the ACM symposium on Virtual reality software and technology - VRST '03  
The observer soldier obtains its perceptual information by continually processing the image streams acquired by its foveated virtual eyes.  ...  In an effort to liberate a substantial segment of the computer vision research community from the "tyranny of hardware", we have proposed an alternative, software-based research methodology that relies  ...  Our approach offers additional crucial advantages: • Exact ground truth data.  ... 
doi:10.1145/1008653.1008655 dblp:conf/vrst/Terzopoulos03 fatcat:jkkvwzeqdrdmdffb5lpmmattyi

Empirical validation of directed functional connectivity

Ravi D. Mill, Anto Bagic, Andreea Bostan, Walter Schneider, Michael W. Cole
2017 NeuroImage  
Such simulations rely on many generative assumptions, and we hence utilized a different strategy involving empirical data in which a ground truth directed connectivity pattern could be anticipated with  ...  However, a host of methodological uncertainties have impeded the application of directed connectivity methods, which have primarily been validated via "ground truth" connectivity patterns embedded in simulated  ...  Acknowledgments We would like to thank Stephen Hanson, Catherine Hanson, Dana Mastrovito, Mark Wheeler, and Robert Kass for helpful feedback.  ... 
doi:10.1016/j.neuroimage.2016.11.037 pmid:27856312 pmcid:PMC5321749 fatcat:l56jxk6g5fesbb4cczzzw7b3au

Salient Region Extraction based on Global Contrast Enhancement and Saliency Cut for Image Information Recognition of the Visually Impaired

2018 KSII Transactions on Internet and Information Systems  
In this study, a novel method is proposed for extracting salient regions based on global contrast enhancement and saliency cuts in order to improve the process of recognizing images for the visually impaired  ...  Extracting key visual information from images containing natural scene is a challenging task and an important step for the visually impaired to recognize information based on tactile graphics.  ...  Top row: input images; second row: RC [2] method; third row: our method; and bottom row: ground truth images. Fig. 5. Experimental results of outer contour generation.  ... 
doi:10.3837/tiis.2018.05.021 fatcat:hpqx2rblvrce5frntdyvskgcom

Evaluation of video artifact perception using event-related potentials

Lea Lindemann, Stephan Wenger, Marcus Magnor
2011 Proceedings of the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization - APGV '11  
When new computer graphics algorithms for image and video editing, rendering or compression are developed, the quality of the results has to be evaluated and compared.  ...  In this paper we show that artifacts appearing in videos elicit a measurable brain response which can be analyzed using the event-related potentials technique.  ...  It shows that ground truth videos do not elicit a response, unlike videos with artifacts.  ... 
doi:10.1145/2077451.2077461 dblp:conf/apgv/LindemannWM11 fatcat:2pqklzg5lnf4zakgtc2ouqezx4

A survey of perceptual image processing methods

A. Beghdadi, M.-C. Larabi, A. Bouzerdoum, K.M. Iftekharuddin
2013 Signal processing. Image communication  
This paper presents an overview of perceptual based approaches for image enhancement, segmentation and coding.  ...  Therefore, for each topic, we identify the main contributions of perceptual approaches and their limitations, and outline how perceptual vision has influenced current state-of-the-art techniques in image  ...  The most intuitive and popular approaches are based on the a priori knowledge of the segmentation results or the ground truth. Unfortunately, in many applications the ground truth is not available.  ... 
doi:10.1016/j.image.2013.06.003 fatcat:maiih2fhvjdb3aq6lkb23ltfiy

Symmetry reCAPTCHA

Christopher Funk, Yanxi Liu
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We demonstrate statistically significant outcomes for using symmetry perception as a powerful, alternative, image-based reCAPTCHA.  ...  Using a set of ground-truth symmetries automatically generated from noisy human labels, the effectiveness of our work is evidenced by a separate test where over 96% success rate is achieved.  ...  Ground Truth Extraction: The ground-truth (GT) rotation or reflection symmetry in an image is computed from a consensus of human labels based on each rater's perception of a real-world symmetry in that  ... 
doi:10.1109/cvpr.2016.558 dblp:conf/cvpr/FunkL16 fatcat:dambj246ubeudfyxpufpaz74xu
Showing results 1 — 15 out of 4,011 results