dc.contributor.author | Einabadi, Farshad | en_US |
dc.contributor.author | Guillemaut, Jean-Yves | en_US |
dc.contributor.author | Hilton, Adrian | en_US |
dc.contributor.editor | Ritschel, Tobias | en_US |
dc.contributor.editor | Weidlich, Andrea | en_US |
dc.date.accessioned | 2023-06-27T06:41:45Z | |
dc.date.available | 2023-06-27T06:41:45Z | |
dc.date.issued | 2023 | |
dc.identifier.isbn | 978-3-03868-229-5 | |
dc.identifier.isbn | 978-3-03868-228-8 | |
dc.identifier.issn | 1727-3463 | |
dc.identifier.uri | https://doi.org/10.2312/sr.20231125 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/sr20231125 | |
dc.description.abstract | This contribution introduces a novel two-step neural rendering framework to learn the transformation from a 2D human silhouette mask to the corresponding cast shadows on background scene geometries. In the first step, the proposed neural renderer learns a binary shadow texture (canonical shadow) from the 2D foreground subject, for each point light source, independently of the background scene geometry. Next, the generated binary shadows are texture-mapped onto transparent virtual shadow map planes, which are seamlessly used in a traditional rendering pipeline to project hard or soft shadows for arbitrary scenes and light sources of different sizes. The neural renderer is trained with shadow images rendered by a fast, scalable, synthetic data generation framework. We introduce the 3D Virtual Human Shadow (3DVHshadow) dataset as a public benchmark for training and evaluating human shadow generation. Evaluation on the 3DVHshadow test set and on real 2D silhouette images of people demonstrates that the proposed framework achieves performance comparable to traditional geometry-based renderers without requiring knowledge of the 3D human shape or its computationally intensive, explicit estimation. We also show the benefit of learning intermediate canonical shadow textures compared to learning to generate shadows directly in camera image space. Further experiments evaluate the effect of multiple light sources in the scene, model performance with respect to the relative camera-light 2D angular distance, potential aliasing artefacts related to output image resolution, and the effect of light source dimensions on shadow softness. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies -> Computer graphics; Neural networks | |
dc.subject | Computing methodologies | |
dc.subject | Computer graphics | |
dc.subject | Neural networks | |
dc.title | Learning Projective Shadow Textures for Neural Rendering of Human Cast Shadows from Silhouettes | en_US |
dc.description.seriesinformation | Eurographics Symposium on Rendering | |
dc.description.sectionheaders | Patterns and Shadows | |
dc.identifier.doi | 10.2312/sr.20231125 | |
dc.identifier.pages | 63-75 | |
dc.identifier.pages | 13 pages | |
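The abstract above describes texture-mapping the learned canonical shadow onto a transparent virtual shadow plane that a traditional renderer then projects into the scene. As a rough illustration of that second, purely geometric step only (a minimal sketch, not the authors' implementation: K, R, t, plane_size and the function names are illustrative assumptions, and a single planar receiver with nearest-neighbour sampling is all that is handled), a canonical shadow texture lying on a ground plane can be composited into a camera image via a plane-to-image homography:

    # Minimal sketch (illustrative, NOT the paper's code): project a canonical
    # binary shadow texture, assumed to lie on a planar receiver, into a
    # background image via a plane-to-image homography and attenuation.
    import numpy as np

    def plane_to_image_homography(K, R, t):
        # Local plane coordinates (X, Y, 1) on the plane Z = 0 of the plane's
        # frame map to homogeneous pixel coordinates as H = K [r1 r2 t],
        # where (R, t) is the plane-to-camera pose and K the intrinsics.
        return K @ np.column_stack((R[:, 0], R[:, 1], t))

    def composite_shadow(background, shadow_tex, H, plane_size, darkening=0.5):
        # background : (h, w, 3) float image in [0, 1]
        # shadow_tex : (th, tw) array in [0, 1]; 1 = fully shadowed texel
        # H          : 3x3 plane-to-image homography
        # plane_size : (width, height) of the textured plane in plane units
        # darkening  : attenuation applied where the shadow falls
        h, w = background.shape[:2]
        th, tw = shadow_tex.shape
        H_inv = np.linalg.inv(H)

        # Inverse warp: find, for every image pixel, its position on the plane.
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # 3 x N
        plane = H_inv @ pix
        plane = plane[:2] / plane[2]                               # dehomogenise

        # Convert plane coordinates to texel indices (nearest neighbour).
        u = plane[0] / plane_size[0] * (tw - 1)
        v = plane[1] / plane_size[1] * (th - 1)
        inside = (u >= 0) & (u <= tw - 1) & (v >= 0) & (v <= th - 1)
        shadow = np.zeros(h * w)
        shadow[inside] = shadow_tex[np.round(v[inside]).astype(int),
                                    np.round(u[inside]).astype(int)]

        # Attenuate shadowed pixels; unshadowed pixels are left untouched.
        return background * (1.0 - darkening * shadow.reshape(h, w, 1))

In this toy setup a fractional-valued (e.g. pre-blurred) shadow_tex would give softer edges, whereas the paper ties shadow softness to the light source dimensions, as noted in the abstract.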