A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Learning Visual Motion Segmentation Using Event Surfaces
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Event-based cameras have been designed for scene motion perception: their high temporal resolution and spatial data sparsity convert the scene into a volume of boundary trajectories and allow the evolution of the scene to be tracked and analyzed over time. Analyzing this data is computationally expensive, and there is a substantial lack of theory on dense-in-time object motion to guide the development of new algorithms; hence, many works resort to a simple solution of discretizing the event stream and …
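The abstract excerpt mentions the common baseline of discretizing the event stream into fixed time windows. As a hedged illustration only (the function name and event layout are assumptions, not the paper's actual pipeline), a minimal sketch of such binning might look like:

```python
# Illustrative sketch: grouping (t, x, y, polarity) events into
# consecutive fixed-width time bins, the simple discretization
# baseline the abstract alludes to. Names are hypothetical.

def discretize_events(events, window):
    """Group (t, x, y, p) events into consecutive time bins.

    events: list of (t, x, y, p) tuples, assumed sorted by t (seconds).
    window: bin width in seconds.
    Returns a list of bins, each a list of events.
    """
    if not events:
        return []
    t0 = events[0][0]          # start time of the first bin
    bins = []
    for e in events:
        idx = int((e[0] - t0) / window)   # which bin this event falls in
        while len(bins) <= idx:           # create empty bins as needed
            bins.append([])
        bins[idx].append(e)
    return bins

# Example: six events over ~30 ms, binned at 10 ms.
stream = [(0.000, 5, 5, 1), (0.004, 6, 5, -1),
          (0.012, 7, 5, 1), (0.019, 7, 6, 1),
          (0.021, 8, 6, -1), (0.029, 8, 7, 1)]
frames = discretize_events(stream, window=0.010)
# → three bins with 2 events each
```

Each bin can then be collapsed into a frame-like representation for conventional vision algorithms, which is precisely the discretization step the abstract argues loses the dense temporal structure of the event volume.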
doi:10.1109/cvpr42600.2020.01442
dblp:conf/cvpr/MitrokhinHFA20
fatcat:ibmz64rxrnefpedu6hicgzckpu