Human movement capture and analysis in intelligent environments

Mohan M. Trivedi
2003 Machine Vision and Applications  
The spaces are monitored by multiple audio and video sensors, which can be unobtrusively embedded in the infrastructure.  ...  Our research is characterized by its emphasis on using large numbers of channels, both video and audio, to augment the precision and robustness of our algorithms.  ... 
doi:10.1007/s00138-002-0109-7 fatcat:g2vjhcf2yngnllcodoxo5hwhfa

The connector

M. Danninger, G. Flaherty, K. Bernardin, H. K. Ekenel, T. Köhler, R. Malkin, R. Stiefelhagen, A. Waibel
2005 Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05  
The Connector also uses any available multimodal interface (e.g. a speech interface to the smart phone, steerable camera-projector, targeted loudspeakers) in the smart meeting room, to deliver information to users in the most unobtrusive way possible.  ...  Figure 4. Targeted audio device (left) and a camera-projector pair (right) mounted on motorized computer-controlled pan-tilt units are used as interaction devices in the smart meeting room.  ... 
doi:10.1145/1088463.1088478 dblp:conf/icmi/DanningerFBEKMSW05 fatcat:omzy64iyznd7ncilrevugkxyye
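
The steerable pan-tilt devices mentioned above must convert a target's position in the room into pan and tilt angles. A minimal geometric sketch follows; the coordinate conventions are assumed here, not taken from the paper:

```python
# Geometry sketch: aim a pan-tilt unit at a 3-D point in the room.
# Assumed conventions (not from the paper): unit at the origin,
# z pointing forward, x to the right, y up; angles in degrees.

import math

def pan_tilt_to(x: float, y: float, z: float) -> tuple[float, float]:
    """Return (pan, tilt) angles that point the unit at (x, y, z)."""
    pan = math.degrees(math.atan2(x, z))                  # rotation about the vertical axis
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # elevation toward the target
    return pan, tilt

# Example: a listener 2 m to the right, 0.5 m up, 3 m ahead of the unit.
print(pan_tilt_to(2.0, 0.5, 3.0))  # ~ (33.7, 7.9)
```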

Computer vision for ambient intelligence

Albert Ali Salah, Theo Gevers, Nicu Sebe, Alessandro Vinciarelli
2011 Journal of Ambient Intelligence and Smart Environments  
Computer vision is an essential part of building context-aware environments that adapt to and anticipate their human users by understanding their behavior.  ...  This thematic issue explores state-of-the-art computer vision approaches for ambient intelligence applications.  ...  While audio is dominantly used in this kind of research, a great number of facial actions, postures, head and hand actions are also revealing [7].  ... 
doi:10.3233/ais-2011-0113 fatcat:spgxpoqu5nhdvccdksvxwreipa

Kinect-Based Systems For Maritime Operation Simulators?

Girts Strazdins, Sashidharan Komandur, Arne Styve
2013 ECMS 2013 Proceedings edited by: Webjorn Rekdalsbakken, Robin T. Bye, Houxiang Zhang  
The unobtrusiveness of vision-based technologies is highly important to avoid user resistance and preserve the realism of the simulation.  ...  This paper surveys different NUI technologies and advocates that vision-based systems, such as Microsoft Kinect, can provide gesture recognition with high fidelity.  ...  ACKNOWLEDGMENT The authors would like to thank Qi Xu for contribution to the NUI device survey, Robert Rylander and Hans Petter Hildre for providing feedback and access to expert knowledge in maritime operation  ... 
doi:10.7148/2013-0205 dblp:conf/ecms/StrazdinsKS13 fatcat:nnztjx5m4fe6jka5o35sdjuciq

The multimodal music stand

Bo Bell, Jim Kleban, Dan Overholt, Lance Putnam, John Thompson, JoAnn Kuchera-Morin
2007 Proceedings of the 7th international conference on New interfaces for musical expression - NIME '07  
Using e-field sensing, audio analysis, and computer vision, the MMMS captures a performer's continuous expressive gestures and robustly identifies discrete cues in a musical performance.  ...  We present the Multimodal Music Stand (MMMS) for the untethered sensing of performance gestures and the interactive control of music.  ...  The authors would like to thank JoAnn Kuchera-Morin and B.S. Manjunath for their oversight in this project. Support was provided by IGERT NSF Grant# DGE-0221713.  ... 
doi:10.1145/1279740.1279750 dblp:conf/nime/BellKOPTK07 fatcat:3jhravkydngbromeiwuvqxxwbu

A framework for virtual videography

Michael L. Gleicher, Rachel M. Heck, Michael N. Wallick
2002 Proceedings of the 2nd international symposium on Smart graphics - SMARTGRAPH '02  
We continue by surveying the tools provided by computer vision and computer graphics that allow us to determine syntactic information about images.  ...  In this paper, we describe one possible way to inexpensively and unobtrusively capture and produce video in a classroom lecture environment.  ...  Acknowledgements This work was supported in part by NSF grants CCR-9984506 and IIS-0097456, Microsoft Research, and equipment donations from IBM, NVidia and Intel.  ... 
doi:10.1145/569005.569007 fatcat:djbf7ivzcbal3lq3ahfqbqmc2q

Computer vision in autism spectrum disorder research: a systematic review of published studies from 2009 to 2019

Ryan Anthony J. de Belen, Tomasz Bednarz, Arcot Sowmya, Dennis Del Favero
2020 Translational Psychiatry  
The primary objective of this systematic review is to examine how computer vision analysis has been useful in ASD diagnosis, therapy and autism research in general.  ...  The findings in this review suggest that computer vision analysis is useful for the quantification of behavioural/biological markers which can further lead to a more objective analysis in autism research  ...  Toshniwal et al. [79] proposed an assistive technology that tracks attention using a mobile camera and uses haptic feedback to recapture attention.  ... 
doi:10.1038/s41398-020-01015-w pmid:32999273 fatcat:7sfld3ry2baznav5c7ztwxhxkq

Active Camera Networks and Semantic Event Databases for Intelligent Environments

Mohan M Trivedi, Ivana Mikić, Shailendra K Bhonsle
2002 Journal of the Institution of Electronics and Telecommunication Engineers  
We also present details of the modules associated with the control and interpretation of video information acquired by a network of cameras and a novel semantic event database for characterization and  ...  In the future, intelligent rooms, with embedded multimodal sensory systems and semantic event databases, will support effective and efficient transactions of human activities and interactions.  ...  The authors gratefully acknowledge participation and contributions of Kim Ng, Kohsia Huang, Rick Capella, Nils Lassiter, Jonathon Vance, and Sadahiro Iwamoto to the overall AVIARY research.  ... 
doi:10.1080/03772063.2002.11416289 fatcat:x4uklc47jjcnhnkaa7zgdqtnci

Page 952 of SMPTE Motion Imaging Journal Vol. 83, Issue 12 [page]

1974 SMPTE Motion Imaging Journal  
It was found that the automatic exposure control in the camera worked very well, that in-camera editing was possible without flashing any frames, and that the cameraman was indeed, as expected, very mobile and his camera unobtrusive.  ... 

Head-mounted eye gaze tracking devices: An overview of modern devices and recent advances

Matteo Cognolato, Manfredo Atzori, Henning Müller
2018 Journal of Rehabilitation and Assistive Technologies Engineering  
Current wearable devices make it possible to capture and exploit visual information unobtrusively and in real time, leading to new applications in wearable technologies that can also be used to improve rehabilitation  ...  An increasing number of wearable devices performing eye gaze tracking have been released in recent years. Such devices can lead to unprecedented opportunities in many applications.  ...  Rücker at Chronos Vision, Urs Zimmermann at Usability.ch, Rasmus Petersson at Tobii and their teams for their kindness and helpfulness in providing the information used in this work.  ... 
doi:10.1177/2055668318773991 pmid:31191938 pmcid:PMC6453044 fatcat:zggcwdhrvvcmrixgxouj4bkuvq

Dynamic Context Capture and Distributed Video Arrays for Intelligent Spaces

M.M. Trivedi, K.S. Huang, I. Mikic
2005 IEEE transactions on systems, man and cybernetics. Part A. Systems and humans  
Accurate and efficient capture, analysis, and summarization of the dynamic context requires the vision system to work at multiple levels of semantic abstractions in a robust manner.  ...  Details of panoramic (omnidirectional) video camera arrays, calibration, video stream synchronization, and real-time capture/processing are discussed.  ...  ACKNOWLEDGMENT The authors would like to thank the colleagues from the Computer Vision and Robotics Research Laboratory, especially N. Lassiter, K. Ng, R. Capella, and S.  ... 
doi:10.1109/tsmca.2004.838480 fatcat:53tkybaq45c57exl6fjisvkufi
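
The entry above mentions video stream synchronization across a camera array. A simple software-level approach is nearest-timestamp frame matching; the sketch below illustrates that idea only, as an assumption, and is not the paper's capture pipeline, which synchronizes streams at a lower level:

```python
# Illustrative nearest-timestamp synchronization of two video streams;
# a software-level stand-in, not the paper's hardware-assisted method.

import bisect
from typing import List, Sequence

def nearest_frame_indices(ref_ts: Sequence[float], stream_ts: Sequence[float]) -> List[int]:
    """For each reference timestamp, pick the index of the closest frame
    in another stream (timestamps assumed sorted, in seconds)."""
    out = []
    for t in ref_ts:
        i = bisect.bisect_left(stream_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(stream_ts)]
        out.append(min(candidates, key=lambda j: abs(stream_ts[j] - t)))
    return out

# Example: align a 24 fps stream to a 30 fps reference.
ref = [k / 30 for k in range(6)]
other = [k / 24 for k in range(6)]
print(nearest_frame_indices(ref, other))  # [0, 1, 2, 2, 3, 4]
```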

Automatic Museum Audio Guide

Noelia Vallez, Stephan Krauss, Jose Luis Espinosa-Aranda, Alain Pagani, Kasra Seirafi, Oscar Deniz
2020 Sensors  
An automatic "museum audio guide" is presented as a new type of audio guide for museums.  ...  The device consists of a headset equipped with a camera that captures exhibit pictures and the eyes of things computer vision device (EoT).  ...  The camera module provides control functionality for the camera. It provides frame rate control, auto-exposure (AE) and automatic gain control (AGC).  ... 
doi:10.3390/s20030779 pmid:32023954 pmcid:PMC7038402 fatcat:qvg5m7a7kbhzlkmaalooot675u
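
The camera module described in this entry exposes frame rate, auto-exposure (AE), and automatic gain control (AGC) settings. A hypothetical sketch of such a control surface follows; the class name, parameter ranges, and defaults are invented for illustration and are not the EoT API:

```python
# Hypothetical sketch of an embedded camera control surface, loosely modeled
# on the frame-rate / auto-exposure (AE) / automatic gain control (AGC)
# features the snippet attributes to the EoT camera module.
# All names and ranges are illustrative only.

from dataclasses import dataclass

@dataclass
class CameraConfig:
    frame_rate_fps: int = 30           # target capture rate
    auto_exposure: bool = True         # AE: sensor adjusts shutter time itself
    auto_gain: bool = True             # AGC: sensor adjusts analog gain itself
    manual_exposure_us: int = 10_000   # used only when auto_exposure is False
    manual_gain_db: float = 0.0        # used only when auto_gain is False

    def validate(self) -> None:
        if not 1 <= self.frame_rate_fps <= 60:
            raise ValueError("frame rate out of the assumed 1-60 fps range")
        if not self.auto_exposure and self.manual_exposure_us <= 0:
            raise ValueError("manual exposure must be positive")

# Example: fixed exposure for flicker-free capture under museum lighting.
cfg = CameraConfig(frame_rate_fps=15, auto_exposure=False, manual_exposure_us=8_000)
cfg.validate()
```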

Automated Video Exposure Assessment of Repetitive Hand Activity Level for a Load Transfer Task

Chia-Hsiung Chen, Yu Hen Hu, Thomas Y. Yen, Robert G. Radwin
2012 Human Factors  
Application: The video assessment method for repetitive motion is promising for automatic, unobtrusive, and objective exposure assessment, which may offer broad availability using a camera-enabled mobile device for helping evaluate, prevent and control exposure to repetitive motions related to upper extremity injuries in the workplace.  ...  The authors wish to thank Steven Nelms for assistance building the repetitive motion task apparatus, and for the insightful suggestions by the reviewers and Associate Editor Tom Armstrong.  ... 
doi:10.1177/0018720812458121 pmid:23691826 pmcid:PMC3979623 fatcat:xv46ntptzrax7og75kkgsagnvi
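
Automated exposure assessment of this kind typically reduces tracked hand motion to a one-dimensional speed signal and estimates its repetition rate. The sketch below shows one generic way to do that with autocorrelation; it is not the authors' published pipeline, and the signal here is synthetic:

```python
# Illustrative only: estimate the dominant repetition frequency of a 1-D
# hand-speed signal via autocorrelation. The paper's actual hand activity
# level metric is derived differently; this is a generic sketch.

import numpy as np

def repetition_frequency(speed: np.ndarray, fps: float) -> float:
    """Return the dominant cycle frequency (Hz) of a motion-speed signal."""
    x = speed - speed.mean()                           # remove DC offset
    ac = np.correlate(x, x, mode="full")[x.size - 1:]  # one-sided autocorrelation
    # Skip the central lobe: start at the first lag where ac turns negative,
    # then take the strongest peak, which marks one full motion cycle.
    neg = np.where(ac < 0)[0]
    start = int(neg[0]) if neg.size else 1
    lag = start + int(np.argmax(ac[start:]))
    return fps / lag

# Synthetic test: a 1.5 Hz periodic speed signal sampled at 30 fps.
fps = 30.0
t = np.arange(0, 10, 1 / fps)
speed = 1.0 + np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(f"estimated ~{repetition_frequency(speed, fps):.2f} Hz")
```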

Write-it-Yourself with the Aid of Smartwatches

Syed Masum Billah, Vikas Ashok, IV Ramakrishnan
2018 Proceedings of the 23rd International Conference on Intelligent User Interfaces - IUI '18  
In this paper, we explore the idea of using off-the-shelf smartwatches (paired with smartphones) to assist blind people in both reading and writing paper forms including checks and receipts.  ...  Towards this, we performed a Wizard-of-Oz evaluation of different smartwatch-based interfaces that provide user-customized audio-haptic feedback in real-time, to guide blind users to different form fields  ...  The exact audio-haptic pattern for each direction can be configured using the "Haptic Controller".  ... 
doi:10.1145/3172944.3173005 pmid:30027159 pmcid:PMC6049082 dblp:conf/iui/BillahAR18 fatcat:ruqlbdamhveuvljran6yw3ojbu
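
The "Haptic Controller" mentioned above maps each guidance direction to a configurable audio-haptic pattern. A hypothetical sketch of such a mapping follows; the direction names and pulse timings are invented for illustration, not taken from the paper's configuration:

```python
# Hypothetical per-direction vibration patterns, in the spirit of the
# paper's configurable "Haptic Controller". All timing values are invented.

from typing import Dict, List, Tuple

Pulse = Tuple[int, int]  # (vibrate_ms, pause_ms)

HAPTIC_PATTERNS: Dict[str, List[Pulse]] = {
    "left":  [(100, 100)],              # one short buzz
    "right": [(100, 100), (100, 100)],  # two short buzzes
    "up":    [(300, 100)],              # one long buzz
    "down":  [(300, 100), (100, 100)],  # long then short
}

def pattern_for(direction: str) -> List[Pulse]:
    try:
        return HAPTIC_PATTERNS[direction]
    except KeyError:
        raise ValueError(f"unknown direction: {direction!r}") from None

# Example: drive a (stubbed) watch vibration motor toward the next form field.
for on_ms, off_ms in pattern_for("right"):
    print(f"vibrate {on_ms} ms, pause {off_ms} ms")
```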

A Robust Finger Tracking Method for Multimodal Wearable Computer Interfacing

S.M. Dominguez, T. Keaton, A.H. Sayed
2006 IEEE transactions on multimedia  
This paper presents a vision-based robust finger tracking algorithm combined with audio-based control commands that is integrated into a multimodal unobtrusive user interface, wherein the interface may  ...  In order to quickly extract the objects encircled by the user from a complex scene, this unobtrusive interface uses a single head-mounted camera to capture color images, which are then processed using  ...  wearable computer interface comprised of a vision-based robust finger tracking algorithm combined with simple audio-based control commands.  ... 
doi:10.1109/tmm.2006.879872 fatcat:ckc56swlwfar3gojkenwfrlnei
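
As a rough illustration of the kind of color-based hand localization such head-mounted interfaces build on, the generic OpenCV sketch below segments skin-colored pixels and returns the blob centroid; the paper's tracker is considerably more robust than this, so treat the thresholds and pipeline as assumptions:

```python
# Generic skin-color segmentation and centroid localization with OpenCV;
# a simple stand-in for the much more robust finger tracker in the paper.

import cv2
import numpy as np

def hand_centroid(frame_bgr: np.ndarray):
    """Return (x, y) of the largest skin-colored blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Loose skin-tone range in HSV; thresholds are illustrative, not tuned.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# Example on a synthetic frame (no skin pixels, so the result is None).
print(hand_centroid(np.zeros((120, 160, 3), dtype=np.uint8)))
```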