
A New Anthropomorphic Visual Sensor
(Japanese title: "A Visual Sensor with Human Retinal Characteristics")

Cheon Woo SHIN, Seiji INOKUCHI
1995 Transactions of the Society of Instrument and Control Engineers  
The motion equation that relates the ego-motion and/or the motion of an object in the scene to the optical flow is considerably simplified if the velocity is represented in a polar coordinate system, as  ...  This paper describes the development of an anthropomorphic visual sensor with a retina-like structure to perform the polar mapping.  ...  O'Brien: Motion Stereo Using Ego-motion Complex Logarithmic Mapping, IEEE PAMI, 9-3, 356/369 (1987) 24) M. Tistarelli and G.  ... 
doi:10.9746/sicetr1965.31.1817 fatcat:3vvcmjl7bvcb7mtkrz6pyk2brq
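
The complex-logarithmic (log-polar) mapping this entry refers to is easy to state concretely. Below is a minimal numpy sketch of the coordinate transform only, not the sensor's actual readout circuitry; the function name and the epsilon guard are illustrative choices.

```python
import numpy as np

def to_log_polar(x, y, cx, cy, eps=1e-6):
    """Complex-logarithmic (log-polar) mapping of image coordinates.

    Maps Cartesian pixel offsets from the image center (cx, cy) to
    (log radius, angle). In this representation, scaling about the
    center (e.g., translation along the optical axis) becomes a shift
    in rho, which is what simplifies the ego-motion equations.
    """
    dx, dy = x - cx, y - cy
    rho = np.log(np.hypot(dx, dy) + eps)  # log of radial distance
    theta = np.arctan2(dy, dx)            # polar angle in [-pi, pi]
    return rho, theta

# A point twice as far from the center differs only by log(2) in rho:
r1, _ = to_log_polar(10.0, 0.0, 0.0, 0.0)
r2, _ = to_log_polar(20.0, 0.0, 0.0, 0.0)
print(r2 - r1)  # ~0.693
```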

Continuous-Time Trajectory Estimation for Event-based Vision Sensors

Elias Mueggler, Guillermo Gallego, Davide Scaramuzza
2015 Robotics: Science and Systems XI  
In this paper, we address ego-motion estimation for an event-based vision sensor using a continuous-time framework to directly integrate the information conveyed by the sensor.  ...  Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), do not output a sequence of video frames like standard cameras, but a stream of asynchronous events.  ...  We aim to use the DVS for ego-motion estimation.  ... 
doi:10.15607/rss.2015.xi.036 dblp:conf/rss/MuegglerGS15 fatcat:64mib77vnraf7mi747q4bu5nd4
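
For readers unfamiliar with event-based sensors, the snippet's contrast with frame cameras can be made concrete. The sketch below shows one plausible in-memory representation of the asynchronous event stream; the type and field names are assumptions for illustration, not the DVS driver's actual API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One asynchronous DVS event: a brightness change at a single pixel."""
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp in seconds (microsecond resolution on real hardware)
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def events_in_window(events, t0, t1):
    """Select events in [t0, t1). A continuous-time estimator like the
    paper's can consume each event at its exact timestamp instead of
    binning the stream into artificial frames."""
    return [e for e in events if t0 <= e.t < t1]
```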

Unsupervised Monocular Depth Learning in Dynamic Scenes [article]

Hanhan Li, Ariel Gordon, Hang Zhao, Vincent Casser, Anelia Angelova
2020 arXiv   pre-print
We present a method for jointly training the estimation of depth, ego-motion, and a dense 3D translation field of objects relative to the scene, with monocular photometric consistency being the sole source  ...  We show that this regularization alone is sufficient to train monocular depth prediction models that exceed the accuracy achieved in prior work for dynamic scenes, including methods that require semantic  ...  The latter predicts a 3D translation map T_obj(u, v) at the original resolution for the moving objects and a 6D ego-motion vector M_ego.  ... 
arXiv:2010.16404v2 fatcat:dwf7hypltnbijn3somdt6hv2zu
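
The photometric-consistency supervision described in the snippet rests on a standard reprojection step. Here is a minimal single-pixel sketch under assumed names (K for intrinsics, R and t_ego for ego-motion, t_obj for the per-pixel object translation the paper predicts); a real implementation would vectorize this and sample the source image differentiably.

```python
import numpy as np

def warp_pixel(u, v, depth, K, R, t_ego, t_obj):
    """Reproject one pixel into the other view of a frame pair.

    Back-project (u, v) using its predicted depth, move the 3D point by
    the rigid ego-motion (R, t_ego) plus the pixel's predicted object
    translation t_obj, then project it back with the intrinsics K.
    Training penalizes photometric differences at corresponding pixels.
    """
    p = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # back-project
    p = R @ p + t_ego + t_obj                               # camera + object motion
    q = K @ p
    return q[0] / q[2], q[1] / q[2]                         # perspective divide
```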

Peripheral Processing Facilitates Optic Flow-Based Depth Perception

Jinglin Li, Jens P. Lindemann, Martin Egelhaaf
2016 Frontiers in Computational Neuroscience  
We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions.  ...  Insects are thought to obtain depth information visually from the retinal image displacements ("optic flow") during translational ego-motion.  ...  ACKNOWLEDGMENTS We would like to thank Hanno Meyer for his critical reading of the manuscript, and Daniel Klimeck (Cognitronics and Sensor Systems, Faculty of Technology and CITEC, Bielefeld University  ... 
doi:10.3389/fncom.2016.00111 pmid:27818631 pmcid:PMC5073142 fatcat:b2s2ltjri5hazovtt4fol43i74
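
The local brightness adaptation mentioned in the snippet can be approximated with a simple divisive-normalization model. The sketch below is one common Naka-Rushton-style formulation, not the authors' exact photoreceptor model; the neighborhood radius is an arbitrary illustrative parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adapt_brightness(image, radius=7, eps=1e-6):
    """Divisively normalize each pixel by its local mean luminance.

    This compresses the dynamic range so that local contrast (and hence
    optic-flow estimation) survives large global changes in illumination,
    the functional role the paper attributes to photoreceptor adaptation.
    """
    img = image.astype(np.float32)
    local_mean = uniform_filter(img, size=2 * radius + 1)
    return img / (img + local_mean + eps)
```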

ClusterSLAM: A SLAM Backend for Simultaneous Rigid Body Clustering and Motion Estimation

Jiahui Huang, Sheng Yang, Zishuo Zhao, Yu-Kun Lai, Shimin Hu
2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
In this paper, we exploit the consensus of 3D motions among the landmarks extracted from the same rigid body for clustering and estimating static and dynamic objects in a unified manner.  ...  Specifically, our algorithm builds a noise-aware motion affinity matrix upon landmarks, and uses agglomerative clustering for distinguishing those rigid bodies.  ...  We thank anonymous reviewers for the valuable discussions.  ... 
doi:10.1109/iccv.2019.00597 dblp:conf/iccv/HuangYZLH19 fatcat:r33c7b53kfeqlemxbqtk3w6abq
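
The snippet's pipeline, an affinity matrix over landmarks followed by agglomerative clustering, can be sketched with SciPy's hierarchical-clustering tools. The noise-aware affinity construction is the paper's contribution and is taken as given here; the distance threshold is an illustrative parameter.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_landmarks(affinity, dist_threshold=0.5):
    """Group landmarks into rigid bodies by motion consensus.

    affinity[i, j] in [0, 1] scores how consistently landmarks i and j
    move as one rigid body over the sequence. Average-linkage
    agglomerative clustering on the complementary distances yields one
    cluster label per landmark; one cluster ends up being the static
    background.
    """
    distance = 1.0 - affinity
    np.fill_diagonal(distance, 0.0)
    condensed = squareform(distance, checks=False)
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=dist_threshold, criterion="distance")
```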

Event-based Vision: A Survey

Guillermo Gallego, Tobi Delbruck, Garrick Michael Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Jorg Conradt, Kostas Daniilidis, Davide Scaramuzza
2020 IEEE Transactions on Pattern Analysis and Machine Intelligence  
We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow  ...  Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low latency, high speed, and high dynamic range.  ...  (whose apparent motion is induced by the camera's ego-motion) and the goal is to infer this causal classification for each event.  ... 
doi:10.1109/tpami.2020.3008413 pmid:32750812 fatcat:vlxvlv4uynh5rpw4qlmaywqlqq
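
The "working principle" the survey opens with is the contrast-threshold event model: a pixel fires whenever its log intensity has changed by a fixed threshold since its last event. A deliberately idealized per-pixel sketch (the threshold value and names are illustrative):

```python
def pixel_events(log_I_prev, log_I_now, C=0.2):
    """Idealized event-camera pixel: emit one event per crossing of the
    contrast threshold C in log intensity, with polarity +/-1. Real
    sensors add noise, refractory periods, and asymmetric thresholds.
    """
    delta = log_I_now - log_I_prev
    n_events = int(abs(delta) // C)    # number of threshold crossings
    polarity = 1 if delta > 0 else -1
    return [polarity] * n_events
```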

ClusterSLAM: A SLAM backend for simultaneous rigid body clustering and motion estimation

Jiahui Huang, Sheng Yang, Zishuo Zhao, Yu-Kun Lai, Shi-Min Hu
2021 Computational Visual Media  
In this paper, we exploit the consensus of 3D motions for landmarks extracted from the same rigid body for clustering, and to identify static and dynamic objects in a unified manner.  ...  tracking ego-motion and multiple objects.  ...  Output: cluster assignments θ: i → q (q = 0 for the static cluster), the MAP relative positions of the 3D landmarks w.r.t. their cluster, ^i X_{q,i}, the ego-motion of the stereo camera, ^t P_c, and the trajectory  ... 
doi:10.1007/s41095-020-0195-3 fatcat:hip62un4fzc4bchigrgj7aaoce

Neuromorphic vision sensors and preprocessors in system applications

Joerg Kramer, Giacomo Indiveri, Thierry M. Bernard
1998 Advanced Focal Plane Arrays and Electronic Cameras II  
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented.  ...  Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.  ...  ACKNOWLEDGMENTS Fuyuki Okamoto designed and implemented the scanner logic for the timing-controller interface. Paul Verschure helped to set up the optical flow matching experiment.  ... 
doi:10.1117/12.324013 fatcat:mowopeofnjfdbmvkq6andxasva

Early Cognitive Vision: Using Gestalt-Laws for Task-Dependent, Active Image-Processing

Florentin Wörgötter, Norbert Krüger, Nicolas Pugeault, Dirk Calow, Markus Lappe, Karl Pauwels, Marc Van Hulle, Sovira Tan, Alan Johnston
2004 Natural Computing  
We will ask to what degree such strategies are also useful in a computer vision context.  ...  Specifically we will discuss, how to adapt them to technical systems where the substrate for the computations is vastly different from that in the brain.  ...  Acknowledgements This paper describes concepts and results from the ECOVISION project, funded by the European Commission, which we gratefully acknowledge. Notes  ... 
doi:10.1023/b:naco.0000036817.38320.fe fatcat:ryl3o7lkyjfq5nqngzx42ekrie

Event-based Vision: A Survey [article]

Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Joerg Conradt, Kostas Daniilidis, Davide Scaramuzza
2020 arXiv   pre-print
We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow  ...  Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as high speed and high dynamic range.  ...  In [129], a learning-based approach to segmentation using motion compensation is proposed: ANNs are used to estimate depth, ego-motion, segmentation masks of independently moving objects, and object 3D  ... 
arXiv:1904.08405v2 fatcat:ffh6el7ojfg6jag5qm2hwnhweu

Learning image representations tied to ego-motion [article]

Dinesh Jayaraman, Kristen Grauman
2016 arXiv   pre-print
Specifically, we enforce that our learned features exhibit equivariance, i.e., they respond predictably to transformations associated with distinct ego-motions.  ...  Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected  ...  They also depend on the depth map of the scene and the motion of dynamic objects in the scene. One could easily augment either the frames x_i or the ego-pose y_i with depth maps, when available.  ... 
arXiv:1505.02206v2 fatcat:rre3wfxaeje7decl2hikeabkby
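
The equivariance property in the snippet has a compact algebraic form: the feature of the transformed image should equal a transformed feature of the original image. Below is a minimal sketch of such a training objective, with M_g standing in for the linear map tied to ego-motion g; in the paper such maps are learned jointly with the features themselves.

```python
import numpy as np

def equivariance_loss(z_before, z_after, M_g):
    """Penalize deviation from feature equivariance.

    z_before and z_after are feature vectors of the same scene before and
    after an ego-motion g; M_g is the linear map associated with g. The
    features are equivariant when M_g @ z_before matches z_after.
    """
    return float(np.sum((M_g @ z_before - z_after) ** 2))
```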

Understanding social relationships in egocentric vision

Stefano Alletto, Giuseppe Serra, Simone Calderara, Rita Cucchiara
2015 Pattern Recognition  
We propose adopting the unique first-person perspective of head-mounted cameras (ego-vision) to promptly detect people's interactions in different social contexts.  ...  The proposed system shows competitive performance on both publicly available ego-vision datasets and ad hoc benchmarks built from real-life situations.  ...  Acknowledgments This work was partially supported by the Fondazione Cassa di Risparmio di Modena project: "Vision for Augmented Experience" and the PON R&C project DICET-INMOTO (Cod. PON04a2 D).  ... 
doi:10.1016/j.patcog.2015.06.006 fatcat:cffs4aivg5cflabtdwzcies56y

EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras [article]

Anton Mitrokhin, Chengxi Ye, Cornelia Fermuller, Yiannis Aloimonos, Tobi Delbruck
2020 arXiv   pre-print
We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset - EV-IMO - which includes accurate pixel-wise motion masks, ego-motion and ground  ...  By 3D scanning the room and the objects, accurate depth map ground truth and pixel-wise object masks are obtained, which are reliable even in poor lighting conditions and during fast motion.  ...  Network Input The raw data from the Dynamic Vision Sensor (DVS) is a continuous stream of events.  ... 
arXiv:1903.07520v2 fatcat:56uvc4u6cba3firmurxmwip244

Continuous-Time Visual-Inertial Odometry for Event Cameras

Elias Mueggler, Guillermo Gallego, Henri Rebecq, Davide Scaramuzza
2018 IEEE Transactions on robotics  
The event camera trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines.  ...  They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds.  ...  This research was supported by the National Centre of Competence in Research (NCCR) Robotics, the Qualcomm Innovation Fellowship, and the UZH Forschungskredit  ... 
doi:10.1109/tro.2018.2858287 fatcat:w4i6tihc35hsxcvqhbptsnx7te
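
The "smooth curve in the space of rigid-body motions" is built from cubic B-splines. The sketch below evaluates one uniform cubic B-spline segment for the translational part only, which conveys the C2-smooth, local-support structure without the SE(3) Lie-group machinery the paper actually uses; control-point names are illustrative.

```python
import numpy as np

def cubic_bspline_segment(p0, p1, p2, p3, u):
    """Evaluate a uniform cubic B-spline at u in [0, 1) from four control
    translations. The trajectory is C2-continuous across segments, and
    each query depends on only four neighboring control poses, which is
    what makes continuous-time, event-by-event residuals tractable.
    """
    basis = np.array([
        (1 - u) ** 3,
        3 * u**3 - 6 * u**2 + 4,
        -3 * u**3 + 3 * u**2 + 3 * u + 1,
        u**3,
    ]) / 6.0
    return basis[0] * p0 + basis[1] * p1 + basis[2] * p2 + basis[3] * p3
```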

A review of log-polar imaging for visual perception in robotics

V. Javier Traver, Alexandre Bernardino
2010 Robotics and Autonomous Systems  
It has been studied for about three decades and has surpassed conventional approaches in robotics applications, mainly the ones where real-time constraints make it necessary to utilize resource-economic  ...  The concise yet comprehensive review offered in this paper is intended to provide novel and experienced roboticists with a quick and gentle overview of log-polar vision and to motivate vision researchers  ...  for telling us about Ref  ... 
doi:10.1016/j.robot.2009.10.002 fatcat:sscwtx3nfbfvxmyn5gi36acssu
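
To complement the coordinate-level sketch given for the first result above, here is a whole-image resampling onto a log-polar grid, the space-variant, resource-economic sampling this review surveys. Nearest-neighbor sampling and the grid sizes are simplifications chosen for brevity.

```python
import numpy as np

def log_polar_resample(image, n_rho=64, n_theta=128):
    """Resample a grayscale image onto a log-polar grid.

    Ring radii grow exponentially: fine resolution near the center
    ("fovea"), coarse toward the periphery, so the output uses far fewer
    samples than the uniform input image.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))          # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)  # ring angles
    ys = cy + rho[:, None] * np.sin(theta[None, :])
    xs = cx + rho[:, None] * np.cos(theta[None, :])
    return image[np.clip(np.rint(ys).astype(int), 0, h - 1),
                 np.clip(np.rint(xs).astype(int), 0, w - 1)]
```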
Showing results 1–15 of 499.