Show simple item record

dc.contributor.author	Sun, Tiancheng	en_US
dc.contributor.author	Lin, Kai-En	en_US
dc.contributor.author	Bi, Sai	en_US
dc.contributor.author	Xu, Zexiang	en_US
dc.contributor.author	Ramamoorthi, Ravi	en_US
dc.contributor.editor	Bousseau, Adrien and McGuire, Morgan	en_US
dc.date.accessioned	2021-07-12T12:13:36Z
dc.date.available	2021-07-12T12:13:36Z
dc.date.issued	2021
dc.identifier.isbn	978-3-03868-157-1
dc.identifier.issn	1727-3463
dc.identifier.uri	https://doi.org/10.2312/sr.20211299
dc.identifier.uri	https://diglib.eg.org:443/handle/10.2312/sr20211299
dc.description.abstract	Human portraits exhibit various appearances when observed from different views under different lighting conditions. We can easily imagine what a face will look like in another setup, but computer algorithms still fail at this task given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) produce a portrait from a new camera view under new environmental lighting. Our system is trained on a large number of synthetic models, and can generalize to different synthetic and real portraits under various lighting conditions. Our method achieves simultaneous view synthesis and relighting given multi-view portraits as the input, and achieves state-of-the-art results.	en_US
dc.publisher	The Eurographics Association	en_US
dc.subject	Computing methodologies
dc.subject	Image-based rendering
dc.subject	Computational photography
dc.title	NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting	en_US
dc.description.seriesinformation	Eurographics Symposium on Rendering - DL-only Track
dc.description.sectionheaders	Faces and Body
dc.identifier.doi	10.2312/sr.20211299
dc.identifier.pages	155-166
