
Introduction to the special issue: Egocentric Vision and Lifelogging

Mariella Dimiccoli, Cathal Gurrin, David Crandall, Xavier Giró-i-Nieto, Petia Radeva
2018 Journal of Visual Communication and Image Representation  
The papers of this Special Issue provide a snapshot of the current state of the art in egocentric vision and lifelogging.  ...  , aiming to extract valuable semantic information from huge volumes of imagery. The goal of this special issue is to present recent developments and applications of egocentric vision and lifelogging  ... 
doi:10.1016/j.jvcir.2018.06.010 fatcat:rhrhgajxhfa6np4a4ufmjg4o5a

Overview of Lifelogging: Current Challenges and Advances

Amel Ksibi, Ala Saleh Alluhaidan, Amina Salhi, Sahar A. El-Rahman
2021 IEEE Access  
A framework specialized for dementia care, based on lifelogging monitoring with activity recognition from egocentric vision and semantic context enrichment, is presented in [8].  ...  A number of security issues arise in the lifelogging trend that need to be identified and solved to proceed into the future connected world.  ... 
doi:10.1109/access.2021.3073469 fatcat:qm32iyh77jc43dbux6ftauhtpu

Serious Games Application for Memory Training Using Egocentric Images [article]

Gabriel Oliveira-Barra and Marc Bolaños and Estefania Talavera and Adrián Dueñas and Olga Gelonch and Maite Garolera
2017 arXiv   pre-print
To do so, we introduce a novel computer vision technique that classifies rich and non-rich egocentric images and uses them in serious games.  ...  In this work, we address the use of lifelogging as a tool to obtain pictures from a patient's daily life from an egocentric point of view.  ...  SGR 1219, CERCA, ICREA Academia 2014, Grant 20141510 (Marató TV3) and Grant FPU15/01347. The funders had no role in the study design, data collection, analysis, and preparation of the manuscript.  ... 
arXiv:1707.08821v1 fatcat:gwjj5ht53naspj7af5a7wii5gy

Ten Questions in Lifelog Mining and Information Recall [article]

An-Zi Yen, Hen-Hsen Huang, Hsin-Hsi Chen
2020 arXiv   pre-print
The main challenge is how to store and manage personal knowledge from various sources.  ...  With the advance of science and technology, people have become used to recording their daily life events by writing blogs, uploading social media posts, taking photos, or filming videos.  ...  Introduction The concept of lifelogging was first introduced in the proposal of Memex [Bush, 1945] , a hypothetical system allowing a person to store all the knowledge collected in her/his lifetime and  ... 
arXiv:2005.01535v1 fatcat:fjvyzlsui5f6tobytbyh5fcqbi

EgoVQA - An Egocentric Video Question Answering Benchmark Dataset

Chenyou Fan
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)  
To address this issue, we collected a novel egocentric VideoQA dataset called EgoVQA, with 600 question-answer pairs and visual content across 5,000 frames from 16 first-person videos.  ...  A typical meaningful scenario is an intelligent agent that assists people with disabilities by perceiving the environment in response to queries, localizing objects and persons based on descriptions, and identifying  ...  Therefore, we expect that egocentric VideoQA requires special feature and attention designs to suit its special needs.  ... 
doi:10.1109/iccvw.2019.00536 dblp:conf/iccvw/Fan19 fatcat:qnp2d6cwyrhgnnrm3dkgsd7fxq

A Test Collection for Interactive Lifelog Retrieval [chapter]

Cathal Gurrin, Klaus Schoeffmann, Hideo Joho, Bernd Munzer, Rami Albatal, Frank Hopfgartner, Liting Zhou, Duc-Tien Dang-Nguyen
2019 MultiMedia Modeling (MMM), Lecture Notes in Computer Science  
We describe the features of the dataset and we report on the outcome of the first Lifelog Search Challenge (LSC), which used the dataset in an interactive competition at ACM ICMR 2018.  ...  However, thus far, no shared test collection exists that has been designed to support interactive lifelog retrieval.  ...  Acknowledgements We acknowledge the financial support of Science Foundation Ireland (SFI) under grant number SFI/12/RC/2289 and JSPS KAKENHI under Grant Number 18H00974.  ... 
doi:10.1007/978-3-030-05710-7_26 fatcat:uykue4nlvjbjjg5ow442atzl2q

Summarization of Egocentric Videos: A Comprehensive Survey

Ana Garcia del Molino, Cheston Tan, Joo-Hwee Lim, Ah-Hwee Tan
2016 IEEE Transactions on Human-Machine Systems  
This increasing flow of first-person video has led to a growing need for automatic video summarization adapted to the characteristics and applications of egocentric video.  ...  The introduction of wearable video cameras (e.g., GoPro) in the consumer market has promoted video lifelogging, motivating users to generate large amounts of video data.  ...  This issue has been addressed in different ways since lifelogging (the practice of continuously capturing and recording images and videos of one's life) was first introduced.  ... 
doi:10.1109/thms.2016.2623480 fatcat:nerpk2cb55gebkqfqkdmrjvlki

Enhanced Self-Perception in Mixed Reality: Egocentric Arm Segmentation and Database with Automatic Labelling [article]

Ester Gonzalez-Sosa, Pablo Perez, Ruben Tolosana, Redouane Kachach, Alvaro Villegas
2020 arXiv   pre-print
In this study, we focus on the egocentric segmentation of arms to improve self-perception in Augmented Virtuality (AV).  ...  We provide all details required for the automated generation of groundtruth and semi-synthetic images; iii) the use of deep learning for the first time for segmenting arms in AV; iv) to showcase the usefulness  ...  Indeed, egocentric vision has the advantage that the objects tend to appear at the center of the image, but also the challenge of the camera moving with the human body, which creates fast movements and  ... 
arXiv:2003.12352v1 fatcat:gfnhmhhijjhn7d66fzmsdimj7u

Enhanced Self-Perception in Mixed Reality: Egocentric Arm Segmentation and Database with Automatic Labeling

Ester Gonzalez-Sosa, Pablo Perez, Ruben Tolosana, Redouane Kachach, Alvaro Villegas
2020 IEEE Access  
and qualitative evaluation to showcase the usefulness of the deep network and EgoArm dataset, reporting results on different real egocentric hand datasets, including GTEA Gaze+, EDSH, EgoHands, Ego Youtube  ...  In this study, we focus on the egocentric segmentation of arms to improve self-perception in Augmented Virtuality (AV).  ...  INTRODUCTION Most computer vision applications are traditionally focused on third-person view (TPV) actions that happen while interacting directly or indirectly with a camera [6] .  ... 
doi:10.1109/access.2020.3013016 fatcat:3tigmijzqrfi3bku3l5lr63wiu

Data augmentation techniques for the Video Question Answering task [article]

Alex Falcon, Oswald Lanz, Giuseppe Serra
2020 arXiv   pre-print
Video Question Answering (VideoQA) is a task that requires a model to analyze and understand both the visual content given by the input video and the textual part given by the question, and the interaction  ...  social assistance and industrial training.  ...  Conclusion Egocentric VideoQA is a task recently introduced in [6] which specializes the VideoQA task to an egocentric setting.  ... 
arXiv:2008.09849v1 fatcat:ccdbgfqqwzhadehdxizc3jg3uq

Predicting Important Objects for Egocentric Video Summarization

Yong Jae Lee, Kristen Grauman
2015 International Journal of Computer Vision  
To accomplish this, we develop region cues indicative of high-level saliency in egocentric video---such as the nearness to hands, gaze, and frequency of occurrence---and learn a regressor to predict the  ...  Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.  ...  The former two contrast our method with existing techniques that target the generic video summarization problem, highlighting the need to specialize to egocentric data as we propose.  ... 
doi:10.1007/s11263-014-0794-5 fatcat:6cbcxftomjalhpn3bn5lykuvv4
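The saliency-regression idea in the snippet above — scoring regions by cues such as nearness to hands, gaze, and frequency of occurrence, then learning a regressor over them — can be sketched with ordinary least squares on synthetic data (a minimal illustration; the cue values, weights, and linear model here are assumptions, not the paper's actual features or regressor):

```python
import numpy as np

# Each candidate region is described by three egocentric cues, all in [0, 1]:
# [nearness_to_hands, nearness_to_gaze, frequency_of_occurrence]
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))

# Synthetic "importance" scores: assume hand-nearness matters most.
true_w = np.array([0.5, 0.3, 0.2])
y_train = X_train @ true_w + 0.01 * rng.standard_normal(200)

# Fit a linear regressor by ordinary least squares.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Score new candidate regions and rank them for inclusion in a summary.
regions = np.array([
    [0.9, 0.8, 0.7],   # near hands and gaze, seen often -> should rank high
    [0.1, 0.2, 0.1],   # background clutter -> should rank low
])
scores = regions @ w
ranking = np.argsort(-scores)  # indices from most to least important
```

In the real system the regressor would be trained on human-annotated summaries; the point here is only the pipeline shape: per-region cue vector in, importance score out, rank to build the summary.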

Foreground-Aware Stylization and Consensus Pseudo-Labeling for Domain Adaptation of First-Person Hand Segmentation

Takehiko Ohkawa, Takuma Yagi, Atsushi Hashimoto, Yoshitaka Ushiku, Yoichi Sato
2021 IEEE Access  
To resolve the domain shift that the stylization has not addressed, we apply careful pseudo-labeling by taking a consensus between the models trained on the source and stylized source images.  ...  Hand segmentation is a crucial task in first-person vision.  ...  INTRODUCTION Mobile cameras have become popular thanks to advances in photography, and a massive number of videos are recorded nowadays.  ... 
doi:10.1109/access.2021.3094052 fatcat:nelc65iu7bfc7nsult3rnjttze
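The consensus pseudo-labeling step described above — keeping a pixel's label only when the model trained on source images and the model trained on stylized source images agree — might look roughly like this (a sketch under assumptions; the confidence threshold and ignore-label convention are illustrative, not taken from the paper):

```python
import numpy as np

IGNORE = 255  # conventional "ignore" label for segmentation losses

def consensus_pseudo_label(prob_a, prob_b, thresh=0.7):
    """Keep a pixel's hand/background label only when both models are
    confident (prob >= thresh or prob <= 1 - thresh) and agree."""
    hard_a = (prob_a >= 0.5).astype(np.uint8)
    hard_b = (prob_b >= 0.5).astype(np.uint8)
    conf_a = (prob_a >= thresh) | (prob_a <= 1.0 - thresh)
    conf_b = (prob_b >= thresh) | (prob_b <= 1.0 - thresh)
    keep = conf_a & conf_b & (hard_a == hard_b)
    labels = np.full(prob_a.shape, IGNORE, dtype=np.uint8)
    labels[keep] = hard_a[keep]
    return labels

# Per-pixel hand probabilities from the two models on a target image:
p_src  = np.array([0.90, 0.10, 0.80])
p_styl = np.array([0.95, 0.05, 0.20])
labels = consensus_pseudo_label(p_src, p_styl)
# -> [1, 0, 255]: the last pixel is ignored because the models disagree.
```

Pixels marked IGNORE simply drop out of the loss when the pseudo-labels are used to fine-tune on the target domain.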

STAC: Spatial-Temporal Attention on Compensation Information for Activity Recognition in FPV

Yue Zhang, Shengli Sun, Linjian Lei, Huikai Liu, Hui Xie
2021 Sensors  
Egocentric activity recognition in first-person video (FPV) requires fine-grained matching of the camera wearer's action and the objects being operated.  ...  It achieved state-of-the-art performance on two egocentric datasets.  ...  Acknowledgments: The authors would like to acknowledge the Georgia Institute of Technology for making their Egocentric Activity Datasets available.  ... 
doi:10.3390/s21041106 pmid:33562612 pmcid:PMC7914484 fatcat:ktcxuueywzb43fan7tra3cdi6i

Hierarchical Hidden Markov Model in detecting activities of daily living in wearable videos for studies of dementia

Svebor Karaman, Jenny Benois-Pineau, Vladislavs Dovgalecs, Rémi Mégret, Julien Pinquier, Régine André-Obrecht, Yann Gaëstel, Jean-François Dartigues
2012 Multimedia Tools and Applications  
The videos may last up to two hours; therefore, a tool for efficient navigation in terms of activities of interest is crucial for the doctors.  ...  In the context of dementia diagnosis by doctors, the videos are recorded at patients' houses and later visualized by the medical practitioners.  ...  Acknowledgements This work is partly supported by a grant from the ANR (Agence Nationale de la Recherche) with reference ANR-09-BLAN-0165-02, within the IMMED project. References  ... 
doi:10.1007/s11042-012-1117-x fatcat:iilquwk3yffzjoal2qnnq4jf2e
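At the core of any HMM-based activity detector like the one above is Viterbi decoding of the most likely activity sequence from frame observations; a minimal flat-HMM sketch follows (the activity names, probabilities, and the paper's two-level hierarchy are not reproduced — this only illustrates the decoding step):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete observation sequence.
    pi: initial state probs, A: transition matrix, B: emission probs."""
    T, N = len(obs), len(pi)
    delta = pi * B[:, obs[0]]          # best path prob ending in each state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    for t in range(1, T):
        trans = delta[:, None] * A     # trans[i, j]: best path via i into j
        psi[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) * B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy example: two hypothetical activities (0 = meal preparation,
# 1 = hygiene) and two observation types (kitchen vs bathroom features).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1], pi, A, B))  # -> [0, 0, 1]
```

A hierarchical HMM adds a second level of states (activities containing sub-activities), but the per-level decoding follows the same dynamic-programming recursion.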

EGO-CH: Dataset and Fundamental Tasks for Visitors Behavioral Understanding using Egocentric Vision

Francesco Ragusa, Antonino Furnari, Sebastiano Battiato, Giovanni Signorello, Giovanni Maria Farinella
2019 Pattern Recognition Letters  
Moreover, egocentric video can be processed using computer vision and machine learning to enable an automated analysis of visitors' behavior.  ...  To address this issue, in this paper we propose EGOcentric-Cultural Heritage (EGO-CH), the first large dataset of egocentric videos for visitors' behavior understanding in cultural sites.  ... 
doi:10.1016/j.patrec.2019.12.016 fatcat:kkyvdyiswvfajglcwi7hqct7im
Showing results 1–15 of 42.