dc.contributor.author	Tewari, Ayush	en_US
dc.contributor.author	Thies, Justus	en_US
dc.contributor.author	Mildenhall, Ben	en_US
dc.contributor.author	Srinivasan, Pratul	en_US
dc.contributor.author	Tretschk, Edith	en_US
dc.contributor.author	Wang, Yifan	en_US
dc.contributor.author	Lassner, Christoph	en_US
dc.contributor.author	Sitzmann, Vincent	en_US
dc.contributor.author	Martin-Brualla, Ricardo	en_US
dc.contributor.author	Lombardi, Stephen	en_US
dc.contributor.author	Simon, Tomas	en_US
dc.contributor.author	Theobalt, Christian	en_US
dc.contributor.author	Nießner, Matthias	en_US
dc.contributor.author	Barron, Jon T.	en_US
dc.contributor.author	Wetzstein, Gordon	en_US
dc.contributor.author	Zollhöfer, Michael	en_US
dc.contributor.author	Golyanik, Vladislav	en_US
dc.contributor.editor	Meneveaux, Daniel	en_US
dc.contributor.editor	Patanè, Giuseppe	en_US
dc.date.accessioned	2022-04-22T07:00:38Z
dc.date.available	2022-04-22T07:00:38Z
dc.date.issued	2022
dc.identifier.issn	1467-8659
dc.identifier.uri	https://doi.org/10.1111/cgf.14507
dc.identifier.uri	https://diglib.eg.org:443/handle/10.1111/cgf14507
dc.description.abstract	Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects and scene editing and composition. While most of these approaches are scene-specific, we also discuss techniques that generalize across object classes and can be used for generative tasks. In addition to reviewing these state-of-the-art methods, we provide an overview of fundamental concepts and definitions used in the current literature. We conclude with a discussion on open challenges and social implications.	en_US
dc.publisher	The Eurographics Association and John Wiley & Sons Ltd.	en_US
dc.title	Advances in Neural Rendering	en_US
dc.description.seriesinformation	Computer Graphics Forum
dc.description.sectionheaders	State of the Art Reports
dc.description.volume	41
dc.description.number	2
dc.identifier.doi	10.1111/cgf.14507
dc.identifier.pages	703-735
dc.identifier.pages	33 pages
dc.description.documenttype	star

