Theory and methods of lightfield photography
2009
ACM SIGGRAPH ASIA 2009 Courses on - SIGGRAPH ASIA '09
The mathematical foundations will be used to develop computational methods for lightfield processing and image rendering, including digital refocusing and perspective viewing. ...
As part of the course, we will demonstrate a number of working light-field cameras that implement different methods for radiance capture, including the microlens approach of Lippmann and the plenoptic ...
doi:10.1145/1665817.1665835
dblp:conf/siggraph/GeorgievL09
fatcat:6m3aawpuzzhcnfgs6or5my5wta
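The course abstract above mentions digital refocusing as one of the computational methods built on the captured lightfield. As a rough illustration of what such a computation looks like, here is a minimal shift-and-add refocusing sketch in Python/NumPy; the 4D array layout (angular u, v first, spatial s, t last), the alpha focus parameter and the shift_image helper are assumptions for this example, not the course's actual implementation.

```python
import numpy as np

def shift_image(img, du, dv):
    """Sub-pixel translation of a 2D image via the Fourier shift theorem."""
    S, T = img.shape
    fs = np.fft.fftfreq(S)[:, None]
    ft = np.fft.fftfreq(T)[None, :]
    phase = np.exp(-2j * np.pi * (fs * du + ft * dv))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

def refocus_shift_and_add(lightfield, alpha):
    """Synthesize a photograph focused at a new depth from a 4D lightfield.

    lightfield : array of shape (U, V, S, T), angular (u, v) by spatial (s, t).
    alpha      : relative focal depth; alpha = 1 reproduces the captured focus.
    """
    U, V, S, T = lightfield.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0      # centre of the angular grid
    beta = 1.0 - 1.0 / alpha
    image = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is translated in proportion to its
            # angular offset, then all views are averaged.
            image += shift_image(lightfield[u, v],
                                 beta * (u - u0), beta * (v - v0))
    return image / (U * V)
```

Perspective views, by contrast, simply read out a single sub-aperture image lightfield[u, v] for a chosen angular position.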
The Discrete Focal Stack Transform
2008
Zenodo
Publication in the conference proceedings of EUSIPCO, Lausanne, Switzerland, 2008 ...
The FSP method is based on the extraction of an appropriate dilated 2D slice in the 4D Fourier transform of the lightfield. ...
In this paper we have presented a new discretization of the Photography operator based on the trigonometric interpolation of the lightfield and an approximation of the integration process. ...
doi:10.5281/zenodo.40839
fatcat:iukd5xlnxrb3rcfo6ias6t77ue
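The abstract above describes the Photography operator as the extraction of an appropriately dilated 2D slice of the 4D Fourier transform of the lightfield. Below is a minimal sketch of that idea, assuming a (u, v, s, t) lightfield layout and a relative focus parameter alpha; the dilation convention (f_u, f_v) = (1 - 1/alpha)(f_s, f_t) and the linear interpolation of the spectrum are simplifying assumptions, not the paper's trigonometric-interpolation discretization.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def refocus_fourier_slice(lightfield, alpha):
    """Refocus by sampling a dilated 2D slice of the 4D lightfield spectrum.

    lightfield : array of shape (U, V, S, T); alpha is the relative focal depth.
    """
    U, V, S, T = lightfield.shape
    beta = 1.0 - 1.0 / alpha
    # Centre the zero frequency so array indices map linearly to frequencies.
    spec = np.fft.fftshift(np.fft.fftn(lightfield))
    fu = np.fft.fftshift(np.fft.fftfreq(U))    # ascending, cycles per sample
    fv = np.fft.fftshift(np.fft.fftfreq(V))
    fs = np.fft.fftshift(np.fft.fftfreq(S))
    ft = np.fft.fftshift(np.fft.fftfreq(T))
    # The photograph spectrum lives on the plane (f_u, f_v) = beta * (f_s, f_t);
    # convert those frequencies to fractional indices into the 4D spectrum.
    Fs, Ft = np.meshgrid(fs, ft, indexing="ij")
    iu = np.interp(beta * Fs, fu, np.arange(U))
    iv = np.interp(beta * Ft, fv, np.arange(V))
    i_s = np.broadcast_to(np.arange(S)[:, None], (S, T))
    i_t = np.broadcast_to(np.arange(T)[None, :], (S, T))
    coords = np.stack([iu, iv, i_s, i_t])
    slice_spec = (map_coordinates(spec.real, coords, order=1)
                  + 1j * map_coordinates(spec.imag, coords, order=1))
    # Invert the extracted 2D spectrum and average over the angular views.
    image = np.fft.ifft2(np.fft.ifftshift(slice_spec))
    return np.real(image) / (U * V)
```

Evaluating this slice for a set of alpha values yields a discrete focal stack without integrating over the angular dimensions for each slice.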
Theory and Methods of Light-Field Photography
[article]
2008
Eurographics State of the Art Reports
The mathematical foundations will be used to develop computational methods for lightfield processing and image rendering, including digital refocusing and perspective viewing. ...
As part of the course, we will demonstrate a number of working light-field cameras that implement different methods for radiance capture, including the microlens approach of Lippmann and the plenoptic ...
Radiance Theory and Modeling: The theory and practice of radiance photography requires a precise mathematical model of the radiance function and of the basic transformations that can be applied to it. ...
doi:10.2312/egt.20081057
fatcat:mavdokwsvrfand6jjuaec24ake
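The snippet above refers to the basic transformations applied to the radiance function. In the paraxial picture these are linear maps of ray coordinates (position q, angle p); the sketch below uses the standard ABCD ray-transfer matrices to illustrate the idea. Treating the course's radiance transforms this way is an assumption based on the abstract; under such a map A (which has unit determinant), the radiance itself transforms as r'(x) = r(A^{-1} x).

```python
import numpy as np

def translation(d):
    """Free-space propagation over distance d, acting on (q, p)."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def lens(f):
    """Refraction by a thin lens of focal length f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

def transform_ray(ray, elements):
    """Push a ray (q, p) through a sequence of optical elements."""
    M = np.eye(2)
    for A in elements:
        M = A @ M
    return M @ ray

# Example: a ray 1 mm off-axis, parallel to the axis, passes through a 50 mm
# lens and 50 mm of free space, landing on the axis at the focal point.
ray_out = transform_ray(np.array([1.0, 0.0]), [lens(50.0), translation(50.0)])
print(ray_out)   # approximately [0.0, -0.02]
```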
Unified Frequency Domain Analysis of Lightfield Cameras
[chapter]
2008
Lecture Notes in Computer Science
The resulting method is applicable to all lightfield cameras, lens-based and mask-based. The generality of our approach suggests new designs for lightfield cameras. ...
Using this interpretation, we derive a mathematical theory of recovering the 4D spatial and angular information from the multiplexed 2D frequency representation. ...
It is a new method of demultiplexing captured lightfield data from any lightfield camera, mask based or microlens based. ...
doi:10.1007/978-3-540-88690-7_17
fatcat:rh6arhkhbbccnnn3yyqek6ku7m
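The abstract above describes recovering 4D spatial and angular information from a multiplexed 2D frequency representation. The sketch below illustrates the general frequency-multiplexing picture only: the sensor spectrum is assumed to contain a regular grid of replicated lightfield spectra, each of which is cropped and inverse-transformed into one angular view. The tile layout, ordering and absence of any demodulation or weighting step are assumptions for this example; the paper's unified treatment of lens-based and mask-based cameras is not reproduced here.

```python
import numpy as np

def demultiplex_frequency(sensor_image, n_u, n_v):
    """Recover angular views from a mask/microlens-modulated sensor image.

    Assumes the 2D sensor spectrum holds an (n_u x n_v) grid of replicated
    lightfield spectra; each tile is cropped and inverse-transformed into
    one angular slice.
    """
    H, W = sensor_image.shape
    th, tw = H // n_u, W // n_v                 # size of one spectral tile
    spec = np.fft.fftshift(np.fft.fft2(sensor_image))
    views = np.empty((n_u, n_v, th, tw), dtype=complex)
    for i in range(n_u):
        for j in range(n_v):
            tile = spec[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            # Each tile is one modulated copy of the lightfield spectrum;
            # inverse-transform it to obtain the corresponding view.
            views[i, j] = np.fft.ifft2(np.fft.ifftshift(tile))
    return np.real(views)
```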
Lytro camera technology: theory, algorithms, performance analysis
2013
Multimedia Content and Mobile Devices
We consider it a successful example of the miniaturization, aided by the increase in computational power, that characterizes mobile computational photography. ...
This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box and using our interpretation of the Lytro image data saved by the camera. ...
The method proposed by Lippmann uses microlenses to capture individual rays. ...
doi:10.1117/12.2013581
fatcat:jjpa2dzkfrggxjlej5i7r7epv4
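The Lippmann microlens approach mentioned above stores a small image of the main-lens aperture under each microlens. The sketch below shows the basic regrouping of such a raw sensor image into a 4D lightfield, where each within-microlens pixel offset becomes one sub-aperture view. The rectified, axis-aligned, integer microlens pitch is an assumption for this example; real Lytro data additionally requires demosaicing and microlens-grid calibration.

```python
import numpy as np

def microlens_raw_to_lightfield(raw, ml_pitch):
    """Rearrange a plenoptic (microlens-array) raw image into a 4D lightfield.

    raw      : 2D sensor image whose microlens images are assumed to be
               axis-aligned squares of ml_pitch x ml_pitch pixels.
    Returns an array of shape (ml_pitch, ml_pitch, n_s, n_t): for each pixel
    offset under a microlens (an angular sample), gather that pixel from
    every microlens to form one sub-aperture view.
    """
    H, W = raw.shape
    n_s, n_t = H // ml_pitch, W // ml_pitch
    crop = raw[:n_s * ml_pitch, :n_t * ml_pitch]
    # Split into (n_s, pitch, n_t, pitch) blocks, then move the within-lens
    # offsets (the angular coordinates) to the front.
    blocks = crop.reshape(n_s, ml_pitch, n_t, ml_pitch)
    return blocks.transpose(1, 3, 0, 2)
```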
The radon image as plenoptic function
2014
2014 IEEE International Conference on Image Processing (ICIP)
Finally, we demonstrate how various 3D views and differently-focused pictures can be rendered directly from this new representation. ...
Moreover, we show that the original 3D luminous density of the scene can be recovered via the inverse Radon transform. ...
One advantage of our approach compared to prior art lightfield photography is that our plenoptic function is the Radon image, which is 3D instead of 4D. ...
doi:10.1109/icip.2014.7025385
dblp:conf/icip/GeorgievTLGV14
fatcat:6n3uxehpvrep7dew55t3r57twm
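The abstract above states that the 3D luminous density of the scene can be recovered from the Radon image via the inverse Radon transform. Below is a toy 2D illustration of that inversion using scikit-image's radon/iradon; working slice by slice in 2D with filtered back-projection is a simplification for this sketch, not the paper's 3D formulation.

```python
import numpy as np
from skimage.transform import radon, iradon

# A toy luminous density and its Radon transform (sinogram); filtered
# back-projection recovers an approximation of the original density.
density = np.zeros((128, 128))
density[40:60, 50:90] = 1.0

angles = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(density, theta=angles)          # forward Radon transform
recovered = iradon(sinogram, theta=angles)       # inverse Radon transform

print(np.abs(recovered - density).mean())        # small reconstruction error
```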
Optical sectioning microscopy through single-shot lightfield protocol
2020
IEEE Access
In this paper, we change the paradigm and report a method that is based in the lightfield concept, and that provides optical sectioning for 3D microscopy images after a single-shot capture. ...
We provide the theoretical derivation of the algorithm, and demonstrate its utility by applying it to simulations and to experimental data. ...
Experimental verification: To perform the experimental validation of the theory, we first implemented the Fourier lightfield microscope. ...
doi:10.1109/access.2020.2966323
fatcat:cptxmr3xtrartdvoynx76o6skm
Widening Viewing Angles of Automultiscopic Displays Using Refractive Inserts
2018
IEEE Transactions on Visualization and Computer Graphics
We analyze the consequences of this warp and build a prototype with a 93% increase in the effective viewing angle. ...
Lightfields have been analyzed from the perspective of computational photography [31] , computer vision [47] and in rendering [4, 9, 33 ]. ...
doi:10.1109/tvcg.2018.2794599
pmid:29543173
fatcat:3hkrsxtffrg6pnoq7n4ytnqmkq
Progressive versus Random Projections for Compressive Capture of Images, Lightfields and Higher Dimensional Visual Signals
[article]
2011
arXiv
pre-print
Computational photography involves sophisticated capture methods. ...
A new trend is to capture projection of higher dimensional visual signals such as videos, multi-spectral data and lightfields on lower dimensional sensors. ...
Introduction: Computational photography involves sophisticated capture methods to capture high-dimensional visual signals using invertible multiplexing of signals. ...
arXiv:1109.1865v1
fatcat:5c5jksf6a5ddniafoqhsn5fz2a
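The abstract above concerns capturing projections of high-dimensional visual signals on lower-dimensional sensors. The toy sketch below shows the random-projection end of that comparison: a sparse signal is measured through a Gaussian projection matrix and recovered with a small orthogonal matching pursuit loop. The signal model, matrix and solver are illustrative assumptions; the paper's progressive measurement schemes are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                   # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse ground truth
A = rng.normal(size=(m, n)) / np.sqrt(m)                   # random projections
y = A @ x                                                   # compressive capture

# Orthogonal matching pursuit: greedily pick the column most correlated with
# the residual, then refit the coefficients on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))        # small relative error
```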
The Sightfield: Visualizing Computer Vision, and Seeing Its Capacity to "See"
2014
2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops
I will introduce and describe the concept of a "sightfield", a time-reversed lightfield that can be visualized with time-exposure photography, to make vision (i.e. the capacity to see) visible. ...
Figure 1: Seeing Surveillance and visualizing the capacity of a surveillance camera to "see"... Leftmost: a linear array of computer-controlled LED lights is held in front of a surveillance camera.
We have also developed new methods of data visualization using long-exposure photographic methods known as abakography [35]. ...
doi:10.1109/cvprw.2014.127
dblp:conf/cvpr/Mann14
fatcat:afu2jgmhivcrjn2ulmrxn4stey
Fourier depth of field
2009
ACM Transactions on Graphics
... and FREDO DURAND (MIT CSAIL), NICOLAS HOLZSCHUCH (INRIA, Grenoble University, CNRS), and FRANCOIS SILLION (INRIA, Grenoble University, CNRS). Optical systems used in photography and cinema produce depth of ...
This paper introduces an analysis of focusing and depth of field in the frequency domain, allowing a practical characterization of a light field's frequency content both for image and aperture sampling ...
We are also grateful to the MIT/ARTIS pre-reviewers and to Laurence Boissieux for the Kitchen model. ...
doi:10.1145/1516522.1516529
fatcat:t4wgkesg4zdmxecmm32dtrfsfq
Three Techniques for Rendering Generalized Depth of Field Effects
[chapter]
2010
Proceedings of the 2009 SIAM Conference on "Mathematics for Industry"
Each of these methods has a different set of strengths and weaknesses, so it is useful to have all three available. ...
The ray tracing approach allows the amount of blur to vary with depth in an arbitrary way. The compositing method creates a synthetic image with focus and aperture settings that vary per-pixel. ...
Application: Computational Photography. The direct extension of our method to computational photography would involve programming a digital camera to capture dozens of images, while changing focus and aperture ...
doi:10.1137/1.9781611973303.6
fatcat:r5lxyjk7ynb3hmtzxw4c425hly
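The compositing technique described above produces an image whose focus and aperture settings vary per pixel. Below is a minimal stand-in for that idea: given a captured focal stack and a per-pixel focus map, each output pixel is interpolated between the two nearest slices. The array shapes and the simple linear blending are assumptions for this sketch, not the paper's compositing method.

```python
import numpy as np

def composite_generalized_dof(focal_stack, focus_index):
    """Compose an image with per-pixel focus from a captured focal stack.

    focal_stack  : array (N, H, W) of photographs focused at N depths.
    focus_index  : float array (H, W) giving, per pixel, the desired
                   (possibly fractional) slice of the stack.
    """
    N, H, W = focal_stack.shape
    idx = np.clip(focus_index, 0, N - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, N - 1)
    w = idx - lo                                # blend weight toward slice hi
    rows, cols = np.indices((H, W))
    return (1 - w) * focal_stack[lo, rows, cols] + w * focal_stack[hi, rows, cols]
```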
The multifocus plenoptic camera
2012
Digital Photography VIII
This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed. ...
The solution to the above problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. ...
High dynamic range (HDR) [1], panoramas [2], stereo 3D [3, 4] and lightfield imaging [5, 6] are some examples of innovations that extend photography beyond its traditional boundaries. ...
doi:10.1117/12.908667
dblp:conf/dphoto/GeorgievL12
fatcat:ko6tzv5akndkvhr7rxxjc5sqjm
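The abstract above describes interleaved microlenses focused at different planes, i.e. a focused (plenoptic 2.0) camera. For context, here is a minimal patch-based rendering sketch for such data: a small window from each microlens image is tiled into the output, with the patch size acting as a focus/depth control. The axis-aligned integer pitch, and the omission of patch flipping, blending and the handling of the interleaved focal lengths, are simplifications for this example.

```python
import numpy as np

def render_focused_plenoptic(raw, ml_pitch, patch):
    """Basic patch-based rendering for a focused (plenoptic 2.0) raw image.

    raw      : 2D sensor image with ml_pitch x ml_pitch microlens images.
    patch    : side length of the window taken from each microlens image.
    """
    H, W = raw.shape
    n_i, n_j = H // ml_pitch, W // ml_pitch
    off = (ml_pitch - patch) // 2               # centre the window
    out = np.empty((n_i * patch, n_j * patch), dtype=raw.dtype)
    for i in range(n_i):
        for j in range(n_j):
            tile = raw[i * ml_pitch + off : i * ml_pitch + off + patch,
                       j * ml_pitch + off : j * ml_pitch + off + patch]
            out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = tile
    return out
```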
Computational Photography
[article]
2007
Eurographics State of the Art Reports
just some of the new applications found in Computational Photography. ...
Computational photography combines plentiful computing, digital sensors, modern optics, actuators, probes and smart lights to escape the limitations of traditional film cameras and enables novel imaging methods. ...
Film-like photography methods limit digital photography to film-like results or less. ...
doi:10.2312/egt.20071066
fatcat:btr4b2hrszegnhrukfnwe7i7oe
Computational Photography
[article]
2006
Eurographics State of the Art Reports
just some of the new applications found in Computational Photography. ...
A class of modern reconstruction methods is also emerging. ...
(iii) Confocal Methods and Synthetic Aperture Methods: As described above, one can achieve a very narrow depth-of-field image by collecting widely divergent rays from each imaged point, and these methods ...
doi:10.2312/egst.20061051
fatcat:6ig2c522qve3bejhvvb4jemipu