Rendering of densely recorded light fields
Light fields offer an alternative approach to producing images with a high degree of realism. The underlying data can be captured from the real world in the form of images, or generated with traditional techniques such as ray tracing a synthetic scene. In both cases, the data can be used to render images from positions that were not originally recorded, or with different camera parameters and configurations. The resolution and spatial density at which a light field is recorded determine the amount of data that has to be handled. This work focuses on densely recorded light fields and synthesizes images from the available data for a camera moving through space. Synthesized cameras can also change their aperture size and focus setting, and behave according to the thin lens model. A method for extracting the relevant light field images is proposed. For rendering the data, two different approaches are evaluated. The first approach collects the rays present in the light field in synthetic sensor plates. In the alternative approach, rays are collected in a standard hash map and rendered by constructing and querying a kd-tree. Each approach has a set of properties that make it useful in different scenarios, and the two can also be combined into a hybrid renderer. The proposed system is designed to run on several machines in parallel.
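As a brief illustration of the thin lens model mentioned above, the sketch below relates focal length f, object distance d_o, and image distance d_i via the standard thin lens equation 1/f = 1/d_o + 1/d_i, and derives the resulting circle of confusion for an out-of-focus point. The function names and parameters are illustrative only and are not taken from the work itself; this is a minimal sketch of the underlying optics, not of the proposed renderer.

```python
def image_distance(focal_length: float, object_distance: float) -> float:
    """Thin lens equation: 1/f = 1/d_o + 1/d_i, solved for d_i.

    Valid for object_distance > focal_length (real image).
    All distances in meters.
    """
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)


def circle_of_confusion(focal_length: float,
                        aperture_diameter: float,
                        focus_distance: float,
                        object_distance: float) -> float:
    """Blur-spot diameter on the sensor for a point at object_distance,
    when the sensor is positioned to focus at focus_distance.
    """
    # Sensor plane placed where points at focus_distance are sharp.
    sensor_plane = image_distance(focal_length, focus_distance)
    # A point at object_distance converges at d_i instead.
    d_i = image_distance(focal_length, object_distance)
    # Similar triangles through the aperture give the blur diameter.
    return aperture_diameter * abs(d_i - sensor_plane) / d_i


# A point at the focus distance is sharp; points elsewhere blur,
# and a wider aperture increases the blur.
print(circle_of_confusion(0.05, 0.02, 2.0, 2.0))  # focused: 0.0
print(circle_of_confusion(0.05, 0.02, 2.0, 1.0))  # out of focus: > 0
```

Changing the aperture diameter and focus distance in this model is what allows the synthesized cameras to produce depth-of-field effects from the recorded rays.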