Show simple item record

dc.contributor.author: Einabadi, Farshad
dc.contributor.author: Guillemaut, Jean-Yves
dc.contributor.author: Hilton, Adrian
dc.contributor.editor: Ritschel, Tobias
dc.contributor.editor: Weidlich, Andrea
dc.date.accessioned: 2023-06-27T06:41:45Z
dc.date.available: 2023-06-27T06:41:45Z
dc.date.issued: 2023
dc.identifier.isbn: 978-3-03868-229-5
dc.identifier.isbn: 978-3-03868-228-8
dc.identifier.issn: 1727-3463
dc.identifier.uri: https://doi.org/10.2312/sr.20231125
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/sr20231125
dc.description.abstract: This contribution introduces a novel two-step neural rendering framework that learns the transformation from a 2D human silhouette mask to the corresponding cast shadows on background scene geometry. In the first step, the proposed neural renderer learns a binary shadow texture (canonical shadow) from the 2D foreground subject for each point light source, independent of the background scene geometry. Next, the generated binary shadows are texture-mapped onto transparent virtual shadow-map planes, which are used seamlessly in a traditional rendering pipeline to project hard or soft shadows for arbitrary scenes and light sources of different sizes. The neural renderer is trained with shadow images rendered by a fast, scalable, synthetic data generation framework. We introduce the 3D Virtual Human Shadow (3DVHshadow) dataset as a public benchmark for training and evaluating human shadow generation. Evaluation on the 3DVHshadow test set and on real 2D silhouette images of people demonstrates that the proposed framework achieves performance comparable to traditional geometry-based renderers, without requiring knowledge of, or computationally intensive explicit estimation of, the 3D human shape. We also show the benefit of learning intermediate canonical shadow textures, compared to learning to generate shadows directly in camera image space. Further experiments evaluate the effect of multiple light sources in the scene, model performance with respect to the relative camera-light 2D angular distance, potential aliasing artefacts related to output image resolution, and the effect of light source dimensions on shadow softness.
dc.publisher: The Eurographics Association
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies -> Computer graphics; Neural networks
dc.subject: Computing methodologies
dc.subject: Computer graphics
dc.subject: Neural networks
dc.title: Learning Projective Shadow Textures for Neural Rendering of Human Cast Shadows from Silhouettes
dc.description.seriesinformation: Eurographics Symposium on Rendering
dc.description.sectionheaders: Patterns and Shadows
dc.identifier.doi: 10.2312/sr.20231125
dc.identifier.pages: 63-75
dc.identifier.pages: 13 pages




