110,831 Hits in 3.5 sec

Conditional Prior Networks for Optical Flow [chapter]

Yanchao Yang, Stefano Soatto
2018 Lecture Notes in Computer Science  
We introduce a novel architecture, called Conditional Prior Network (CPN), and show how to train it to yield a conditional prior.  ...  Once the prior is learned in a supervised fashion, one can easily learn the full map to infer optical flow directly from two or more images, without any need for (additional) supervision.  ...  However, it provides a base for unsupervised learning of optical flow, and a stage to show the benefit of semi-unsupervised optical flow learning, that utilizes both the conditional prior (CPN) learned  ... 
doi:10.1007/978-3-030-01267-0_17 fatcat:q4z7topa7vaa3hftp22zyyxalq

Perceptual Loss for Convolutional Neural Network Based Optical Flow Estimation

Zong-qing LU, Xiang ZHU, Qing-min LIAO
2017 DEStech Transactions on Computer Science and Engineering  
Motivated by the success in image transformation tasks, a perceptual loss function is used for training the network for optical flow estimation.  ...  In this work, rather than training feature descriptors via CNNs, an end-to-end fully convolutional network is developed for solving optical flow from a pair of images.  ...  Variational Auto-encoder For the optical flow field, there is no label to train a network for a classification task.  ... 
doi:10.12783/dtcse/smce2017/12437 fatcat:6oeq4zzshffjbfrhimeuytf36q

Learned Video Compression via Joint Spatial-Temporal Correlation Exploration

Haojie Liu, Han Shen, Lichao Huang, Ming Lu, Tong Chen, Zhan Ma
2020 PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE  
Thus, in this paper, we propose to exploit the temporal correlation using both first-order optical flow and second-order flow prediction.  ...  We suggest a one-stage learning approach to encapsulate flow as quantized features from consecutive frames, which are then entropy coded with adaptive contexts conditioned on joint spatial-temporal priors  ...  Our unsupervised flow learning does not rely on a well pre-trained optical flow estimation network, such as FlowNet2 (Ilg et al. 2017; Sun et al. 2018), and can derive the compressed optical flow from  ... 
doi:10.1609/aaai.v34i07.6825 fatcat:naduixdarnfy3ebtcw55ht2h5e
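The "second-order flow prediction" in the entry above can be illustrated as linear motion extrapolation from the two most recent flow fields. This is a hedged sketch under a constant-acceleration assumption; the paper's actual predictor is learned, and `extrapolate_flow` is a hypothetical helper name.

```python
import numpy as np

def extrapolate_flow(flow_prev2, flow_prev1):
    """Predict the next optical-flow field from the two most recent
    ones, assuming locally constant acceleration.

    flow_prev2: flow from frame t-2 to t-1, shape (H, W, 2)
    flow_prev1: flow from frame t-1 to t,   shape (H, W, 2)
    returns:    predicted flow from frame t to t+1, shape (H, W, 2)
    """
    return 2.0 * flow_prev1 - flow_prev2

# Under constant motion, the prediction equals the repeated flow.
f = np.full((4, 4, 2), 0.5)
print(np.allclose(extrapolate_flow(f, f), f))  # True
```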

Learned Video Compression via Joint Spatial-Temporal Correlation Exploration [article]

Haojie Liu, Han Shen, Lichao Huang, Ming Lu, Tong Chen, Zhan Ma
2019 arXiv   pre-print
Thus, in this paper, we propose to exploit the temporal correlation using both first-order optical flow and second-order flow prediction.  ...  We suggest a one-stage learning approach to encapsulate flow as quantized features from consecutive frames, which are then entropy coded with adaptive contexts conditioned on joint spatial-temporal priors  ...  Our unsupervised flow learning does not rely on a well pre-trained optical flow estimation network, such as FlowNet2 (Ilg et al. 2017; Sun et al. 2018), and can derive the compressed optical flow from  ... 
arXiv:1912.06348v1 fatcat:h6chbcl52nbwtbpx6hrrzj7fme

Multimodal reconstruction of microvascular-flow distributions using combined two-photon microscopy and Doppler optical coherence tomography

Louis Gagnon, Sava Sakadžić, Frédéric Lesage, Emiri T. Mandeville, Qianqian Fang, Mohammad A. Yaseen, David A. Boas
2015 Neurophotonics  
Here, we investigated whether the use of Doppler optical coherence tomography (DOCT) flow measurements in individual vessel segments can help in reconstructing [Formula: see text] across the entire vasculature  ...  Computing microvascular cerebral blood flow ([Formula: see text]) in real cortical angiograms is challenging.  ...  Acknowledgments We thank Axel Pries and David Kleinfeld for fruitful discussions regarding this work.  ... 
doi:10.1117/1.nph.2.1.015008 pmid:26157987 pmcid:PMC4478873 fatcat:djgrl6mwffdtjh54ek54hqknji

All One Needs to Know about Priors for Deep Image Restoration and Enhancement: A Survey [article]

Yunfan Lu, Yiqi Lin, Hao Wu, Yunhao Luo, Xu Zheng, Lin Wang
2022 arXiv   pre-print
Due to its ill-posed nature, many works have explored priors to facilitate training deep neural networks (DNNs).  ...  Our work covers five primary contents: (1) A theoretical analysis of priors for deep image restoration and enhancement; (2) A hierarchical and structural taxonomy of priors commonly used in the DL-based  ...  For example, [17] , [68] use optical flow to guide the DCN or self-attention for VSR, and [69] , [70] , [71] use the optical flow to generate the temporal sharpness prior for video deblurring.  ... 
arXiv:2206.02070v1 fatcat:icu7hwua3jggbp7owl2l5mgyfu

Optical Flow Estimation for Spiking Camera [article]

Liwen Hu, Rui Zhao, Ziluo Ding, Lei Ma, Boxin Shi, Ruiqin Xiong, Tiejun Huang
2022 arXiv   pre-print
Codes and datasets refer to https://github.com/Acnext/Optical-Flow-For-Spiking-Camera.  ...  Further, for training SCFlow, we synthesize two sets of optical flow data for the spiking camera, SPIkingly Flying Things and Photo-realistic High-speed Motion, denoted as SPIFT and PHM respectively, corresponding  ...  Due to the high similarity of motion at adjacent times, the last predicted optical flow is used as a prior motion for the current estimate, i.e., the prior motion for estimating W_{i,i+Δt} is Ŵ_{i−Δt,i}.  ... 
arXiv:2110.03916v3 fatcat:zulze65yvzg55dlhubfp6suwkm

Using Visual Anomaly Detection for Task Execution Monitoring [article]

Santosh Thoduka and Juergen Gall and Paul G. Plöger
2021 arXiv   pre-print
A probabilistic U-Net architecture is used to learn to predict optical flow, and the robot's kinematics and 3D model are used to model camera and body motion.  ...  We find that modeling camera and body motion, in addition to the learning-based optical flow prediction, results in an improvement of the area under the receiver operating characteristic curve from 0.752  ...  The network predicts a future optical flow image conditioned on a past optical flow image and a latent vector sampled from the distribution output by the posterior network (during training) or the prior  ... 
arXiv:2107.14206v1 fatcat:cyncui2gjjexfka2rl7uzgehqe

MoNet: Deep Motion Exploitation for Video Object Segmentation

Huaxin Xiao, Jiashi Feng, Guosheng Lin, Yu Liu, Maojun Zhang
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Concretely, MoNet exploits the computed motion cue (i.e., optical flow) to reinforce the representation of the target frame by aligning and integrating representations from its neighbors.  ...  Moreover, MoNet exploits motion inconsistency and transforms such motion cues into a foreground/background prior to eliminate distraction from confusing instances and noisy regions.  ...  The triple inputs are passed to a segmentation network [4] and an optical flow estimation network [9] , outputting their appearance features and optical flow.  ... 
doi:10.1109/cvpr.2018.00125 dblp:conf/cvpr/XiaoFLLZ18 fatcat:cn2j6jdzmrajjasc7gl7g77jxi
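The alignment step described in the MoNet entry above amounts to backward-warping a neighbor frame's features with the estimated optical flow. Below is a minimal sketch of such warping using bilinear sampling; `warp_with_flow` is an illustrative helper, not MoNet's actual (learned, in-network) alignment module.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_flow(feat, flow):
    """Backward-warp a single feature channel with an optical-flow field.

    feat: (H, W) feature channel from the neighbor frame
    flow: (H, W, 2) flow (dx, dy) mapping target pixels into the neighbor
    returns: (H, W) warped features aligned to the target frame
    """
    h, w = feat.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample the neighbor frame at positions displaced by the flow.
    coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])
    return map_coordinates(feat, coords, order=1, mode='nearest')

# Sanity check: zero flow leaves the features unchanged.
feat = np.arange(16, dtype=np.float64).reshape(4, 4)
print(np.allclose(warp_with_flow(feat, np.zeros((4, 4, 2))), feat))  # True
```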

Consistent depth of moving objects in video

Zhoutong Zhang, Forrester Cole, Richard Tucker, William T. Freeman, Tali Dekel
2021 ACM Transactions on Graphics  
By recursively unrolling the scene-flow prediction MLP over varying time steps, we compute both short-range scene flow to impose local smooth motion priors directly in 3D, and long-range scene flow to  ...  We formulate this objective in a new test-time training framework where a depth-prediction CNN is trained in tandem with an auxiliary scene-flow prediction MLP over the entire input video.  ...  The depth network is first initialized using a data-driven prior (pretrained weights), and then finetuned in tandem with the scene-flow network for a given input video, using a smooth-motion prior and  ... 
doi:10.1145/3450626.3459871 fatcat:3syvszwnl5cwhbaqkphv63vdca

Dance with Flow: Two-in-One Stream Action Detection [article]

Jiaojiao Zhao, Cees G.M. Snoek
2019 arXiv   pre-print
We propose to embed RGB and optical-flow into a single two-in-one stream network with new layers.  ...  A motion condition layer extracts motion information from flow images, which is leveraged by the motion modulation layer to generate transformation parameters for modulating the low-level RGB features.  ...  Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.  ... 
arXiv:1904.00696v3 fatcat:vu4tytftizdvdazzdxgbxyne5y

Dance With Flow: Two-In-One Stream Action Detection

Jiaojiao Zhao, Cees G. M. Snoek
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
We propose to embed RGB and optical-flow into a single two-in-one stream network with new layers.  ...  A motion condition layer extracts motion information from flow images, which is leveraged by the motion modulation layer to generate transformation parameters for modulating the low-level RGB features.  ...  Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.  ... 
doi:10.1109/cvpr.2019.01017 dblp:conf/cvpr/ZhaoS19 fatcat:dn6dihzvufeshnmt6kdvm46dmq
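The "transformation parameters for modulating the low-level RGB features" in the two entries above suggest a feature-wise scale-and-shift, in the spirit of FiLM-style conditioning. A hedged sketch follows; the paper's condition and modulation layers are learned convolutions, and `modulate` with per-channel `gamma`/`beta` is only an illustrative simplification.

```python
import numpy as np

def modulate(rgb_feat, gamma, beta):
    """Scale and shift RGB features with parameters derived from motion
    (flow) features.

    rgb_feat: (C, H, W) low-level RGB features
    gamma, beta: (C,) per-channel transformation parameters produced by
                 the motion condition branch (illustrative names)
    """
    return gamma[:, None, None] * rgb_feat + beta[:, None, None]

x = np.ones((2, 3, 3))
out = modulate(x, gamma=np.array([2.0, 0.5]), beta=np.array([1.0, -1.0]))
print(out[0, 0, 0], out[1, 0, 0])  # 3.0 -0.5
```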

Unsupervised Domain Adaptation by Optical Flow Augmentation in Semantic Segmentation [article]

Oluwafemi Azeez
2019 arXiv   pre-print
Hence, by augmenting images with a dense optical flow map, domain adaptation in semantic segmentation can be improved.  ...  Solving this could eliminate the need to label real-life datasets entirely. Class-balanced self-training is one of the existing techniques that attempt to reduce the domain gap.  ...  We augment RGB images with optical flow maps and then use class-balanced self-training for domain adaptation. Optical Flow Generator [7] was created for the sole purpose of generating flow maps.  ... 
arXiv:1911.09652v1 fatcat:qq5shbwxpvd2neetnvx45x2qmi

Neural Video Compression using Spatio-Temporal Priors [article]

Haojie Liu, Tong Chen, Ming Lu, Qiu Shen, Zhan Ma
2019 arXiv   pre-print
In this work, we propose a neural video compression framework, leveraging the spatial and temporal priors, independently and jointly to exploit the correlations in intra texture, optical flow based temporal  ...  Spatial priors are generated using downscaled low-resolution features, while temporal priors (from previous reference frames and residuals) are captured using a convolutional neural network based long-short  ...  optical flow encoder and decoder network [18] .  ... 
arXiv:1902.07383v2 fatcat:brynmcohtzdtdo3nyymhsshubi

Im2Flow: Motion Hallucination from Static Images for Action Recognition

Ruohan Gao, Bo Xiong, Kristen Grauman
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
First, we devise an encoder-decoder convolutional neural network and a novel optical flow encoding that can translate a static image into an accurate flow map.  ...  Second, we show the power of hallucinated flow for recognition, successfully transferring the learned motion into a standard two-stream network for activity recognition.  ...  We thank Suyog Jain, Chao-Yeh Chen, Aron Yu, Yu-Chuan Su, Tushar Nagarajan and Zhengpei Yang for helpful input on experiments or reading paper drafts, and also gratefully acknowledge a GPU donation from  ... 
doi:10.1109/cvpr.2018.00622 dblp:conf/cvpr/GaoXG18 fatcat:ofat4dobw5aj7apx4ikmjymyqe
Showing results 1 — 15 out of 110,831 results