
All-focused light field rendering [article]

Akira Kubota, Keita Takahashi, Kiyoharu Aizawa, Tsuhan Chen
2004 Symposium on Rendering  
The presented method consists of two steps: 1) rendering multiple views at a given viewpoint by performing light field rendering with different focal plane depths; 2) iteratively reconstructing the all  ...  We present a novel reconstruction method that can synthesize an all-in-focus view from under-sampled light fields, significantly suppressing aliasing artifacts.  ...  All-focused light field rendering through fusion. Light field parameterization and rendering: in this section, we define the light field parameterizations used in this paper and describe the conventional  ... 
doi:10.2312/egwr/egsr04/235-242 fatcat:iouq546h3za45oxyrwsvuqpl6y
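The first step described above, rendering the same viewpoint while sweeping the focal plane depth, amounts to repeated synthetic-aperture (shift-and-add) refocusing. A minimal sketch of that step, assuming a regular camera grid and hypothetical helper names (not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(views, cam_coords, disparity):
    """Synthetic-aperture (shift-and-add) refocusing.
    views: (N, H, W) grayscale camera images on a regular grid.
    cam_coords: (N, 2) camera offsets (u, v) from the reference camera.
    disparity: pixel shift per unit camera offset for the chosen focal plane."""
    acc = np.zeros(views[0].shape, dtype=np.float64)
    for img, (u, v) in zip(views, cam_coords):
        # Views shifted so the focal plane aligns add coherently;
        # content off that plane is averaged out (synthetic defocus).
        acc += nd_shift(img.astype(np.float64), (v * disparity, u * disparity), order=1)
    return acc / len(views)

def focal_stack(views, cam_coords, disparities):
    """Step 1 of the described pipeline: one refocused image per focal plane depth."""
    return [refocus(views, cam_coords, d) for d in disparities]
```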

An efficient method for all-in-focused light field rendering

Wei Wen, Zhi Jiang Zhang, Si Cong Yao, Dan Zeng
2010 3rd International Conference on Computer Science and Information Technology  
Light field rendering (LFR) is an image-based method for generating novel views from a set of camera images.  ...  When the camera array is sampled sparsely, conventional light field rendering causes aliasing artifacts.  ...  Figure 1: Algorithm flow of all-in-focus LFR. Figure 2: Sparse disparity map and distribution function of "Tsukuba". Figure 3: Light field parameterizations and rendering method.  ... 
doi:10.1109/iccsit.2010.5563964 fatcat:6dgxhkpvdnf7zl3hpvocggykoy

Real-time per-pixel focusing method for light field rendering

T. Chlubna, T. Milet, P. Zemčík
2021 Computational Visual Media  
Without information about depths in the scene, proper focusing of the light field scene is limited to a single focusing distance.  ...  The correct focusing method is addressed in this work and a real-time solution is proposed for focusing of light field scenes, based on statistical analysis of the pixel values contributing to the final  ...  Light field focusing: the original light field rendering approach [3] and other derived methods support a single focusing plane in which the image is constructed and focused.  ... 
doi:10.1007/s41095-021-0205-0 fatcat:3rjxelfv65badki7xqae24gdqq
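The per-pixel focusing described above can be illustrated by picking, for each pixel, the candidate focus distance at which the contributing views agree best. The sketch below uses variance across views as that agreement measure; the array layout and this particular statistic are assumptions for illustration, not necessarily the paper's exact analysis or its real-time implementation.

```python
import numpy as np

def per_pixel_focus(contributions):
    """contributions: (D, N, H, W) array of the N per-view samples that would be
    averaged into each pixel, for each of D candidate focus distances."""
    # Low variance across the contributing views means the views agree,
    # i.e. the pixel is (approximately) in focus at that candidate distance.
    variance = contributions.var(axis=1)                    # (D, H, W)
    best = variance.argmin(axis=0)                          # (H, W) index of best distance
    refocused = contributions.mean(axis=1)                  # (D, H, W) shift-and-add images
    focused = np.take_along_axis(refocused, best[None, ...], axis=0)[0]
    return focused, best
```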

Edge detection with meta-lens: from one dimension to three dimensions

Mu Ku Chen, Yue Yan, Xiaoyuan Liu, Yongfeng Wu, Jingcheng Zhang, Jiaqi Yuan, Zhengnan Zhang, Din Ping Tsai
2021 Nanophotonics  
All of the light field information of objects in the scene can be captured and computed.  ...  The focused edge images can be obtained by the sub-image reconstruction of the light field image.  ...  The light field rendering process is shown in Figure S5.  ... 
doi:10.1515/nanoph-2021-0239 fatcat:sqbhacydwrehthvtjqt7p7hrny

A Real Time Interactive Dynamic Light Field Transmission System

Yebin Liu, Qionghai Dai, Wenli Xu
2006 IEEE International Conference on Multimedia and Expo  
In this work, we implemented a 3D TV system with real-time data acquisition, compression, internet transmission, light field rendering, and free-viewpoint control of dynamic scenes.  ...  Our system consists of an 8×8 light field camera array, 16 producer PCs, a streaming server system and several clients.  ...  Fig. 4: Block diagram for light field transmission. Fig. 5: Half of the views in the first field of the DLF sequence taken with our light field camera. Fig. 6: Rendering focused on different objects.  ... 
doi:10.1109/icme.2006.262686 dblp:conf/icmcs/LiuDX06 fatcat:lloqqmwlfrfqrjwielyrrbzer4

Aperture Supervision for Monocular Depth Estimation

Pratul P. Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, Jonathan T. Barron
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
We train a monocular depth estimation network end-to-end to predict the scene depths that best explain these finite aperture images as defocus-blurred renderings of the input all-in-focus image.  ...  To enable learning algorithms to use aperture effects as supervision, we introduce two differentiable aperture rendering functions that use the input image and predicted depths to simulate the depth-of-field  ...  Our light field model uses a CNN to predict a depth map that is then used to warp the input 2D all-in-focus image into an estimate of the 4D light field inside the camera, which is then focused and integrated  ... 
doi:10.1109/cvpr.2018.00669 dblp:conf/cvpr/SrinivasanGWNB18 fatcat:kniwjsoppfci3bjpkk5lejnm2e
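The differentiable aperture rendering idea can be sketched as a soft, layered defocus composite: the predicted depth map assigns each pixel softly to depth planes, each plane of the all-in-focus image is blurred in proportion to its distance from the focus plane, and the blurred planes are blended. This is a simplified PyTorch sketch with assumed forms for the soft assignment and the blur; it is not the paper's exact light field or compositional rendering function.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, radius=7):
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g = torch.exp(-x ** 2 / (2 * sigma ** 2 + 1e-8))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, 2 * radius + 1, 2 * radius + 1)

def render_defocus(image, depth, plane_depths, focus_depth, blur_scale=2.0):
    """image: (1, 3, H, W) all-in-focus input; depth: (1, 1, H, W) predicted depth map.
    Blurs each depth plane in proportion to its distance from the focus plane and
    composites with soft per-pixel plane weights, so gradients reach the depth network."""
    out = torch.zeros_like(image)
    weight_sum = torch.zeros_like(depth)
    for d in plane_depths:
        w = torch.exp(-((depth - d) ** 2) / 0.01)          # soft plane assignment (assumed form)
        sigma = blur_scale * abs(d - focus_depth) + 1e-3    # defocus grows away from the focus plane
        k = gaussian_kernel(sigma).to(image)
        blurred = F.conv2d(image, k.expand(3, 1, -1, -1), padding=7, groups=3)
        out = out + w * blurred
        weight_sum = weight_sum + w
    return out / (weight_sum + 1e-8)
```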

Real Time Focusing and Directional Light Projection Method for Medical Endoscope Video

Yuxiong Chen (Guangdong Nanfang Vocational College, Guangdong 529000, China), Ronghe Wang, Jian Wang, Shilong Ma (State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China)
2017 International Journal of Future Computer and Communication  
Index Terms: light field capture, medical light field camera, microlens array of light field, projection direction light.  ...  We use shoot-first, refocus-later capture together with automatic focusing to complete the endoscopic procedure, and use software refocusing to turn blurred views into sharp ones.  ...  In 1996, Levoy proposed light field rendering theory. In 2005, Ng invented the first handheld light field camera. In 2006, Levoy developed a light field microscope.  ... 
doi:10.18178/ijfcc.2017.6.4.506 fatcat:x6jbwqe6c5g7zdxtgn6obaqb54

Aperture Supervision for Monocular Depth Estimation [article]

Pratul P. Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, Jonathan T. Barron
2018 arXiv   pre-print
We train a monocular depth estimation network end-to-end to predict the scene depths that best explain these finite aperture images as defocus-blurred renderings of the input all-in-focus image.  ...  To enable learning algorithms to use aperture effects as supervision, we introduce two differentiable aperture rendering functions that use the input image and predicted depths to simulate the depth-of-field  ...  Our light field model uses a CNN to predict a depth map that is then used to warp the input 2D all-in-focus image into an estimate of the 4D light field inside the camera, which is then focused and integrated  ... 
arXiv:1711.07933v2 fatcat:cjyja2zyvnh43obvq3nsxugjoq

A Study on Visual Perception of Light Field Content [article]

Ailbhe Gill, Emin Zerman, Cagri Ozcinar, Aljosa Smolic
2020 arXiv   pre-print
As they may be rendered and consumed in various ways, a primary challenge that arises is the definition of what visual perception of light field content should be.  ...  Our analysis highlights characteristics of user behaviour in light field imaging applications. The light field data set and attention data are provided with this paper.  ...  Slices of a light field focused at a sequence of depths form what is known as a focal stack.  ... 
arXiv:2008.03195v1 fatcat:ow3gpm5gkrc75hen4z4cidjahe

Estimation of Signal Distortion Using Effective Sampling Density for Light Field-Based Free Viewpoint Video

Hooman Shidanshidi, Farzad Safaei, Wanqing Li
2015 IEEE Transactions on Multimedia  
In a light field-based free viewpoint video (LF-based FVV) system, effective sampling density (ESD) is defined as the number of rays per unit area of the scene that has been acquired and is selected in  ...  These include layered light field [8], surface light field [9], scam light field [10], pop-up light field [11], all-in-focused light field [12], and dynamic reparameterized light field [13]  ... 
doi:10.1109/tmm.2015.2447274 fatcat:5dze6a3rkzglbokl3zf76ekjji
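Taking the definition in this snippet literally, ESD is a ratio of selected rays to scene area, which can be tabulated per scene patch. A toy sketch with hypothetical inputs (the paper instead derives ESD analytically for given acquisition and rendering configurations):

```python
import numpy as np

def esd_map(ray_hits_xy, scene_bounds, grid=(32, 32)):
    """Toy ESD map: count the rays selected by the renderer that hit each scene patch,
    then divide by the patch area to get rays per unit area.
    ray_hits_xy: (N, 2) scene-plane intersection points of the selected rays.
    scene_bounds: ((xmin, xmax), (ymin, ymax)) in scene units."""
    (xmin, xmax), (ymin, ymax) = scene_bounds
    counts, _, _ = np.histogram2d(ray_hits_xy[:, 0], ray_hits_xy[:, 1],
                                  bins=grid, range=[[xmin, xmax], [ymin, ymax]])
    patch_area = ((xmax - xmin) / grid[0]) * ((ymax - ymin) / grid[1])
    return counts / patch_area
```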

Aliasing Detection and Reduction in Plenoptic Imaging

Zhaolin Xiao, Qing Wang, Guoqing Zhou, Jingyi Yu
2014 IEEE Conference on Computer Vision and Pattern Recognition  
Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, etc.  ...  All these techniques attempt to avoid aliasing before light field rendering, whereas we aim to detect potential aliasing regions and then reduce aliasing at the rendering stage.  ... 
doi:10.1109/cvpr.2014.425 dblp:conf/cvpr/XiaoWZY14 fatcat:aasruhkt6nauxdijuu52a6i5gy

Masking Light Fields to Remove Partial Occlusion

Scott McCloskey
2014 22nd International Conference on Pattern Recognition  
We address partial occlusion due to objects close to a microlens-based light field camera.  ...  Relative to past approaches for light field completion, we show significantly better performance for the small viewpoint changes inherent to a handheld light field camera, and avoid the need for time-domain  ...  Ng's shift-and-add method renders the light field to an image focused on a particular object by shifting the sub-aperture images so that they are all aligned with respect to the object.  ... 
doi:10.1109/icpr.2014.358 dblp:conf/icpr/McCloskey14 fatcat:mj2xoivgr5bt3dr2oegyspecd4
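The shift-and-add refocusing quoted above extends naturally to the occlusion-masking theme of this paper's title: weight each sub-aperture sample, zeroing samples covered by the near occluder, and normalize by the accumulated weights. A rough sketch under an assumed data layout; the occlusion detection itself is not shown, and the mask handling here is illustrative rather than the paper's method.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def masked_shift_and_add(sub_apertures, offsets, disparity, masks):
    """sub_apertures: (N, H, W) sub-aperture images; offsets: (N, 2) aperture coordinates (u, v);
    masks: (N, H, W) weights, 0 where a near occluder covers the sample, 1 where it is valid.
    Shifts each view so the target object aligns, then averages only the unmasked samples."""
    num = np.zeros(sub_apertures[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, (u, v), m in zip(sub_apertures, offsets, masks):
        s = (v * disparity, u * disparity)
        num += nd_shift(img.astype(np.float64) * m, s, order=1)
        den += nd_shift(m.astype(np.float64), s, order=1)
    return num / np.maximum(den, 1e-6)
```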

Computer Graphics Optique: Optical Superposition of Projected Computer Graphics [article]

Aditi Majumder, Greg Welch
2001 ICAT-EGVE - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments  
We present some ideas and demonstrations for a hybrid projector-based rendering and display technique we call Computer Graphics Optique.  ...  We believe that this technique offers the possibility of a new paradigm for combined rendering and projector-based display.  ...  We use a focused projector to render the inset region, and a defocused projector to render the rest.  ... 
doi:10.2312/egve/egve01/209-218 fatcat:ubxb2w6yfbcmph66lti2roocx4

Real-time Depth of Field Rendering via Dynamic Light Field Generation and Filtering

Xuan Yu, Rui Wang, Jingyi Yu
2010 Computer Graphics Forum  
We then directly synthesize DoF effects from the sampled light field.  ...  We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects.  ...  All results are rendered by generating a dynamic light field with pixel resolution 512x512 and a default spatial resolution of 36.  ... 
doi:10.1111/j.1467-8659.2010.01797.x fatcat:hibobftc7zhbnovemnxogeyanq

Partial light field tomographic reconstruction from a fixed-camera focal stack [article]

A. Mousnier, E. Vural, C. Guillemot
2015 arXiv   pre-print
This paper describes a novel approach to partially reconstruct high-resolution 4D light fields from a stack of differently focused photographs taken with a fixed camera.  ...  Thanks to the high angular resolution we achieve by suitably exploiting the image content captured over a large interval of focus distances, we are able to render puzzling perspective shifts although the  ...  It opens new perspectives for high resolution light field rendering.  ... 
arXiv:1503.01903v1 fatcat:pgruzqytr5en5huefn4cc32ofi
Showing results 1 — 15 out of 316,242 results