Show simple item record

dc.contributor.author	Li, Zhong	en_US
dc.contributor.author	Song, Liangchen	en_US
dc.contributor.author	Liu, Celong	en_US
dc.contributor.author	Yuan, Junsong	en_US
dc.contributor.author	Xu, Yi	en_US
dc.contributor.editor	Ghosh, Abhijeet	en_US
dc.contributor.editor	Wei, Li-Yi	en_US
dc.date.accessioned	2022-07-01T15:38:02Z
dc.date.available	2022-07-01T15:38:02Z
dc.date.issued	2022
dc.identifier.isbn	978-3-03868-187-8
dc.identifier.issn	1727-3463
dc.identifier.uri	https://doi.org/10.2312/sr.20221156
dc.identifier.uri	https://diglib.eg.org:443/handle/10.2312/sr20221156
dc.description.abstract	In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a function that indexes rays to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. Then, the scene-specific model is used to synthesize novel views. Different from previous light field approaches, which require dense view sampling to reliably render novel views, our method can render novel views by sampling rays and querying the color for each ray from the network directly, thus enabling high-quality light field rendering with a sparser set of training images. Per-ray depth can be optionally predicted by the network, thus enabling applications such as auto refocus. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.	en_US
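The abstract describes parameterizing each ray by its intersections with two parallel planes, giving a 4D coordinate (u, v, s, t), and training a fully connected network to map that coordinate to an RGB color. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the plane depths, network width, and random (untrained) weights are arbitrary assumptions for demonstration.

```python
import numpy as np

def two_plane_params(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with two parallel planes z = z_uv and z = z_st,
    returning the 4D two-plane light-field coordinate (u, v, s, t).
    Plane depths are arbitrary here; the paper fixes its own convention."""
    t1 = (z_uv - origin[2]) / direction[2]
    t2 = (z_st - origin[2]) / direction[2]
    u, v = (origin + t1 * direction)[:2]
    s, t = (origin + t2 * direction)[:2]
    return np.array([u, v, s, t])

class TinyLightFieldMLP:
    """Toy fully connected network f: (u, v, s, t) -> (r, g, b).
    Untrained random weights; stands in for the paper's deep MLP,
    which would be optimized per scene to memorize the light field."""
    def __init__(self, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (4, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, 3))
        self.b2 = np.zeros(3)

    def __call__(self, uvst):
        h = np.maximum(uvst @ self.W1 + self.b1, 0.0)      # ReLU hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.W2 + self.b2)))  # sigmoid RGB
```

Rendering a novel view then amounts to casting one ray per pixel, converting each ray to its (u, v, s, t) coordinate, and querying the network, with no volumetric integration along the ray.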
dc.publisher	The Eurographics Association	en_US
dc.rights	Attribution 4.0 International License
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/
dc.subject	CCS Concepts: Computing methodologies --> Rendering; Computer vision problems; Virtual reality
dc.subject	Computing methodologies
dc.subject	Rendering
dc.subject	Computer vision problems
dc.subject	Virtual reality
dc.title	NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field	en_US
dc.description.seriesinformation	Eurographics Symposium on Rendering
dc.description.sectionheaders	Neural Rendering
dc.identifier.doi	10.2312/sr.20221156
dc.identifier.pages	59-69
dc.identifier.pages	11 pages



