dc.contributor.author | Kubota, Akira | en_US |
dc.contributor.author | Takahashi, Keita | en_US |
dc.contributor.author | Aizawa, Kiyoharu | en_US |
dc.contributor.author | Chen, Tsuhan | en_US |
dc.contributor.editor | Alexander Keller and Henrik Wann Jensen | en_US |
dc.date.accessioned | 2014-01-27T14:30:28Z | |
dc.date.available | 2014-01-27T14:30:28Z | |
dc.date.issued | 2004 | en_US |
dc.identifier.isbn | 3-905673-12-6 | en_US |
dc.identifier.issn | 1727-3463 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/EGWR/EGSR04/235-242 | en_US |
dc.description.abstract | We present a novel reconstruction method that can synthesize an all-in-focus view from under-sampled light fields, significantly suppressing aliasing artifacts. The method consists of two steps: 1) rendering multiple views at a given viewpoint by performing light field rendering with different focal plane depths; 2) iteratively reconstructing the all-in-focus view by fusing these multiple views. We model the multiple views and the desired all-in-focus view as a set of linear equations in the textures at the focal depths, where aliasing artifacts are modeled as spatially varying (shift-variant) filters. We solve this set of linear equations using an iterative reconstruction approach. The method effectively integrates the focused regions of each view into the all-in-focus view without any local processing such as depth estimation or segmentation of the focused regions. (An illustrative sketch of the iterative fusion follows this record.) | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | All-focused light field rendering | en_US |
dc.description.seriesinformation | Eurographics Workshop on Rendering | en_US |
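The iterative fusion step described in the abstract can be illustrated with a small numerical sketch. The Python code below is not the authors' algorithm: the function name fuse_all_in_focus, the Gaussian blurs standing in for the spatially varying aliasing filters, the Landweber (gradient) iteration used as the iterative solver, and the step size are all assumptions made for illustration. It only shows the shape of the linear model, in which each view rendered at focal depth k mixes a sharp texture at that depth with filtered textures from the other depths, and the all-in-focus image is recovered as the sum of the estimated per-depth textures.

```python
# Hedged sketch: fuse views rendered at different focal-plane depths into an
# all-in-focus image by solving a linear model with a Landweber-style
# iteration.  The Gaussian kernels are an illustrative stand-in for the
# spatially varying aliasing filters, not the paper's exact operators.
import numpy as np
from scipy.ndimage import gaussian_filter


def fuse_all_in_focus(views, blur_sigmas, n_iters=50, step=0.1):
    """Estimate per-depth textures t_d from views v_k, modelling each view as
    v_k = sum_d H_{k,d} t_d, where H_{k,d} is the identity for d == k and a
    blur otherwise.  The all-in-focus image is returned as sum_d t_d.

    views       : list of K arrays (H, W), one per focal-plane depth
    blur_sigmas : K x K table; blur_sigmas[k][d] is the Gaussian sigma used as
                  a proxy for the aliasing filter of texture d in view k
                  (zero on the diagonal, i.e. the focused depth stays sharp)
    """
    K = len(views)
    views = [np.asarray(v, dtype=float) for v in views]
    textures = [np.zeros(views[0].shape, dtype=float) for _ in range(K)]

    def apply_H(k, d, img):
        # Texture at the focused depth passes through unchanged; textures at
        # other depths are blurred (proxy for the aliasing filter).
        s = blur_sigmas[k][d]
        return img if s == 0 else gaussian_filter(img, s)

    for _ in range(n_iters):
        # Residual of each modelled view against the rendered view.
        residuals = [
            views[k] - sum(apply_H(k, d, textures[d]) for d in range(K))
            for k in range(K)
        ]
        # Gradient step; Gaussian blur is self-adjoint, so H^T == H here.
        for d in range(K):
            textures[d] += step * sum(
                apply_H(k, d, residuals[k]) for k in range(K)
            )

    return sum(textures)
```

With K rendered views and a K x K table of blur strengths, the returned array approximates the all-in-focus image; the step size and iteration count would need tuning in practice, and the actual method solves the linear system with the aliasing filters derived from the light field sampling rather than generic Gaussians.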