Interactive Line Drawing Recognition and Vectorization with Commodity Camera
2014
Proceedings of the ACM International Conference on Multimedia - MM '14
This paper presents a novel method that interactively recognizes and vectorizes hand-drawn strokes in front of a commodity webcam. ...
By this, we can avoid various stroke recognition ambiguities, enhance the vectorization quality, and recover the stroke drawing order. ...
This work is funded in part by MOE Tier-2 grant (MOE2011-T2-2-041 (ARC 5/12)) and MOE Tier-1 grant (RG 29/11). ...
doi:10.1145/2647868.2654939
dblp:conf/mm/JayaramanF14
fatcat:kjnon4pfejagbozhxn2jt5dh5m
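The snippet above describes vectorizing hand-drawn strokes. A standard building block for turning dense stroke samples into a compact polyline is Ramer–Douglas–Peucker simplification; the following is a minimal sketch of that generic step, not the paper's interactive method:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification.

    Keeps the interior point farthest from the chord between the two
    endpoints whenever its perpendicular distance exceeds epsilon,
    then recurses on both halves.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0  # guard against coincident endpoints
    # Perpendicular distance of every interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm
             for x, y in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > epsilon:
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right  # drop the shared split point once
    return [points[0], points[-1]]
```

For example, a nearly straight run of samples collapses to its endpoints, while a sharp excursion is preserved as a vertex.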
Digital Facial Augmentation for Interactive Entertainment
2015
Proceedings of the 7th International Conference on Intelligent Technologies for Interactive Entertainment
Leveraging the capabilities of reasonably accurate object tracking using commodity cameras and/or depth sensors to determine the 3D position and pose of objects in real time, it is possible to project ...
Similarly, integrating 2D rigid-body, fluid and gravity simulation, one may interact with virtual objects projected on their own face or body. ...
Akihiko Shirai of the Kanagawa Institute of Technology for valuable advice, and Mr. Daniel Biléu for designing the tattoo-like pattern. ...
doi:10.4108/icst.intetain.2015.259444
dblp:journals/eeel/HiedaC15
fatcat:tgske7mwdbfvjkekntilo4kzqa
Control Your Home with a Smartwatch
2020
IEEE Access
INDEX TERMS Smart watch, MEMS sensor, action recognition, human-computer interaction. ...
and effective interaction using low-cost smartwatches, and the interaction accuracy is >87%. ...
As an example, take one set of circle-drawing test data: the tester draws a circle arbitrarily in space with his watch. Figure 8 shows the map corresponding to the trajectory lines of the elbow and wrist. ...
doi:10.1109/access.2020.3007328
fatcat:2vvxgfvysbei5oybpjtr6r2b2i
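The circle-drawing test mentioned in the snippet suggests a simple geometric baseline: fitting a circle to the recorded trajectory points. Below is a sketch using the algebraic (Kåsa) least-squares fit — an illustrative stand-in, not the paper's recognition method:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) circle fit.

    Solves x^2 + y^2 = a*x + b*y + c in the least-squares sense;
    the center is (a/2, b/2) and the radius follows from c.
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = a / 2.0, b / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```

Comparing the residual of the fit against a threshold would give a crude "is this a circle?" test for a wrist trajectory.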
WiDraw
2015
Proceedings of the 21st Annual International Conference on Mobile Computing and Networking - MobiCom '15
We use WiDraw to implement an in-air handwriting application that allows the user to draw letters, words, and sentences, and achieves a mean word recognition accuracy of 91%. ...
Our software prototype using commodity wireless cards can track the user's hand with a median error lower than 5 cm. ...
Recent works use depth cameras (e.g., [1]) or infrared cameras (e.g., [2, 19]) to enable in-air 3D human-computer interactions. ...
doi:10.1145/2789168.2790129
dblp:conf/mobicom/SunSKK15
fatcat:quwuvjcmozdo7gx5iqzkqfikgm
A method for image-based shadow interaction with virtual objects
2015
Journal of Computational Design and Engineering
The gesture recognition method is based on the screen image obtained by a single web camera. ...
In this paper, a vision-based shadow gesture recognition method is proposed for interactive projection systems. ...
Acknowledgments: The research is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Korean Ministry of Education, Science and Technology ...
doi:10.1016/j.jcde.2014.11.003
fatcat:m4ha6vdhzfertkgidwurx673yu
Augmented reality virtual glasses try-on technology based on iOS platform
2018
EURASIP Journal on Image and Video Processing
Face information was collected by the input device, a monocular camera. After face detection with an SVM classifier, local facial features were extracted using robust SIFT. ...
Combined with SDM, the feature points were solved iteratively to obtain a more accurate feature-point alignment model. ...
Thanks to the editor and reviewers.
Funding: The paper is subsidized by the science and technology key project of Henan Province, China (No. 172102210462). ...
doi:10.1186/s13640-018-0373-8
fatcat:6u2zmhqn2zgkfl7wfmgjb33yg4
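The snippet names SDM (Supervised Descent Method), which learns linear regressors mapping local features at the current landmark estimate to a landmark update. A toy one-dimensional sketch of that cascaded-regression idea on synthetic features — `phi` and `x_true` are invented for illustration, not face data:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    """Toy "image feature" extracted at landmark position x (nonlinear in x)."""
    return np.stack([x, np.sin(x)], axis=-1)

x_true = 1.5  # ground-truth landmark location, fixed across training samples

# --- Training: learn a linear map (R, b) from features to the ideal update.
x0 = x_true + rng.normal(scale=0.5, size=200)  # perturbed initializations
F = np.column_stack([phi(x0), np.ones(len(x0))])  # features plus bias column
targets = x_true - x0                             # ideal update delta-x
coef, *_ = np.linalg.lstsq(F, targets, rcond=None)
R, b = coef[:-1], coef[-1]

# --- Inference: apply the learned update iteratively. (A real SDM cascade
# learns a fresh (R, b) per stage; one is reused here for brevity.)
x = np.array(0.2)
for _ in range(3):
    x = x + phi(x) @ R + b
```

In this toy setup the target update is linear in the feature vector, so the regressor recovers it exactly and the estimate converges to `x_true`.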
3DTouch: Towards a Wearable 3D Input Device for 3D Applications
[article]
2017
arXiv
pre-print
We present 3DTouch, a novel 3D wearable input device worn on the fingertip for interacting with 3D applications. 3DTouch is self-contained, and designed to universally work on various 3D platforms. ...
Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices such as Kinect. ...
Users can draw a curve line in the VE by simply performing touch interaction on the curve of a physical object. ...
arXiv:1706.00176v1
fatcat:le6e5sxcfrccpewr2qfj3pfwka
PlayAnywhere
2005
Proceedings of the 18th annual ACM symposium on User interface software and technology - UIST '05
We introduce PlayAnywhere, a front-projected computer vision-based interactive table system which uses a new commercially available projection technology to obtain a compact, self-contained form factor ...
PlayAnywhere's configuration addresses installation, calibration, and portability issues that are typical of most vision-based table systems, and thereby is particularly motivated in consumer applications ...
commodity hardware with all features enabled, no GPU use but high CPU consumption). • Most commodity camera systems acquire images at only 30Hz, which is not fast enough to support certain kinds of high ...
doi:10.1145/1095034.1095047
dblp:conf/uist/Wilson05
fatcat:b6p45smttfgz7p6ffuwty74ep4
In touch with the remote world
2014
Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology - VRST '14
In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live ...
We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings ...
This work was supported by NSF grants IIS-1219261 and IIS-0747520, and ONR grant N00014-14-1-0133. ...
doi:10.1145/2671015.2671016
dblp:conf/vrst/GauglitzNTH14
fatcat:wgrgtugtfrbwbdggl2a2pxjqeq
3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit
[article]
2015
arXiv
pre-print
On the other hand, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. ...
We propose a set of 3D interaction techniques including selection, translation, and rotation using 3DTouch. ...
Jerry Hamann, George Janack, and Dr. Steven Barrett for their invaluable advice and help with the electrical aspects of this project. ...
arXiv:1406.5581v2
fatcat:wmq6aqvanfgsfmitmmcnoqi3qm
CaMeRa: A Computational Model of Multiple Representations
1997
Cognitive Science
Data in the experimental literature and concurrent verbal protocols were used to guide construction of a linked production system and parallel network, CaMeRa (Computation with Multiple Representations) ...
CaMeRa, like the expert, uses the diagrammatic and verbal representations to complement one another, thus exploiting the unique advantages of each. ...
CaMeRa uses the external display as the expert does, for drawing, reasoning, recognition, and input to STM. ...
doi:10.1207/s15516709cog2103_3
fatcat:5vlvesxiuzbrzdlfd46zsxxkd4
CaMeRa: A computational model of multiple representations
1997
Cognitive Science
Data in the experimental literature and concurrent verbal protocols were used to guide construction of a linked production system and parallel network, CaMeRa (Computation with Multiple Representations) ...
CaMeRa, like the expert, uses the diagrammatic and verbal representations to complement one another, thus exploiting the unique advantages of each. ...
CaMeRa uses the external display as the expert does, for drawing, reasoning, recognition, and input to STM. ...
doi:10.1016/s0364-0213(99)80026-3
fatcat:kpdqd5wy3jd3jmffukipcjc6da
Grassmannian Representation of Motion Depth for 3D Human Gesture and Action Recognition
2014
2014 22nd International Conference on Pattern Recognition
Recently developed commodity depth sensors open up new possibilities of dealing with rich descriptors, which capture geometrical features of the observed scene. ...
Results reveal that our approach outperforms the state-of-the-art methods, with accuracy of 98.21% on MSR-Gesture3D and 95.25% on UT-kinect, and achieves a competitive performance of 86.21% on MSR-action ...
ACKNOWLEDGEMENTS The authors would like to thank Anuj Srivastava for his assistance and useful discussions about this work. ...
doi:10.1109/icpr.2014.602
dblp:conf/icpr/SlamaWD14
fatcat:wodskg2p2zeghixfpomtbw3kqu
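Grassmannian representations compare linear subspaces through their principal angles, which can be read off the SVD of the product of orthonormal bases. A minimal sketch of that generic computation (not the paper's full descriptor pipeline):

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance on the Grassmann manifold between the column
    spans of A and B (both n x k, full column rank).

    The singular values of Qa^T Qb are the cosines of the principal
    angles; the distance is the 2-norm of the angle vector.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    theta = np.arccos(np.clip(s, -1.0, 1.0))  # principal angles
    return float(np.linalg.norm(theta))
```

Identical subspaces give distance 0; fully orthogonal k-dimensional subspaces give k angles of π/2.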
3D spatial interaction
2011
ACM SIGGRAPH 2011 Courses on - SIGGRAPH '11
Acknowledgements A special thanks to Doug Bowman, Ernst Kruijff, Ivan Poupyrev, Chad Wingrave, Richard Marks, Salman Cheema, Kris Rivera, and members of the Interactive Systems and User Experience Lab ...
Malbraaten, Fedor Korsakov, David Laidlaw, Fritz Drury, Robert Zeleznik, Andy Forsberg, Jason Sobel, and the research groups in the University of Minnesota Interactive Visualization Lab and the Brown ...
mode reminiscent of the 3D line drawings of 3-Draw [Sachs et al. 1991], and a mode in which clouds of random small triangle particles were left behind the wand as it was swept through the air. ...
doi:10.1145/2037636.2037637
fatcat:txg7452wv5b63evfefwqaod2wy
Extended multitouch
2012
Proceedings of the 25th annual ACM symposium on User interface software and technology - UIST '12
Extracting finger and hand posture from a Kinect depth camera (left) and integrating with a pen+touch sketch interface (right). ...
We further present a practical solution to achieve this on tabletop displays based on mounting a single commodity depth camera above a horizontal surface. ...
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. ...
doi:10.1145/2380116.2380177
dblp:conf/uist/MurugappanVER12
fatcat:iu7gwawqwnd4ta4to2xdcrt7na
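With a single depth camera mounted above a horizontal surface, touch detection commonly reduces to thresholding each pixel's height above a background (empty-table) depth map. A minimal sketch of that idea — the threshold values are invented for illustration:

```python
import numpy as np

def touch_mask(depth, background, near_mm=3.0, far_mm=15.0):
    """Label pixels whose depth sits slightly above the tabletop surface.

    depth, background: 2-D arrays of per-pixel distances (mm) from an
    overhead depth camera; `background` is captured with the table empty.
    A pixel is a touch candidate when the hand is within
    [near_mm, far_mm] of the surface; anything higher is a hovering
    hand or arm, anything lower is sensor noise.
    """
    height = background - depth  # how far above the table each pixel is
    return (height >= near_mm) & (height <= far_mm)
```

Connected-component analysis over this mask would then yield individual fingertip contacts.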
Showing results 1 — 15 out of 1,966 results