Show simple item record

dc.contributor.author: Huang, Tao [en_US]
dc.contributor.author: Song, Yadong [en_US]
dc.contributor.author: Guo, Jie [en_US]
dc.contributor.author: Tao, Chengzhi [en_US]
dc.contributor.author: Zong, Zijing [en_US]
dc.contributor.author: Fu, Xihao [en_US]
dc.contributor.author: Li, Hongshan [en_US]
dc.contributor.author: Guo, Yanwen [en_US]
dc.contributor.editor: Umetani, Nobuyuki [en_US]
dc.contributor.editor: Wojtan, Chris [en_US]
dc.contributor.editor: Vouga, Etienne [en_US]
dc.date.accessioned: 2022-10-04T06:41:03Z
dc.date.available: 2022-10-04T06:41:03Z
dc.date.issued: 2022
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14675
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14675
dc.description.abstract: Real-time global illumination is a highly desirable yet challenging task in computer graphics. Existing methods that solve this problem well mostly rely on some form of precomputed data (caches), and the quality of the final results depends significantly on the quality of those caches. In this paper, we propose a learning-based pipeline that can reproduce a wide range of complex light transport phenomena, including high-frequency glossy interreflection, at any viewpoint in real time (> 90 frames per second), using information from imperfect caches stored at the barycentre of every triangle in a 3D scene. These caches are generated in a precomputation stage by a physically based offline renderer at a low sampling rate (e.g., 32 samples per pixel) and a low image resolution (e.g., 64×16). At runtime, a deep radiance reconstruction method based on a dedicated neural network reconstructs a high-quality radiance map of full global illumination at any viewpoint from these imperfect caches, without introducing noise or aliasing artifacts. To further improve reconstruction accuracy, a new feature fusion strategy is designed in the network to better exploit useful content from cheap G-buffers generated at runtime. The proposed framework ensures high-quality rendering of moderate-sized scenes with full global illumination effects, at the cost of reasonable precomputation time. We demonstrate the effectiveness and efficiency of the proposed pipeline by comparing it with alternative strategies, including real-time path tracing and precomputed radiance transfer. [en_US]
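The abstract states that one cache is stored at the barycentre of every triangle in the scene. A minimal sketch of that placement step, using made-up example geometry (the mesh data and variable names below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical example mesh: vertex positions and triangle vertex indices.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
])
triangles = np.array([[0, 1, 2], [1, 3, 2]])

# Barycentre of a triangle = mean of its three vertex positions.
# Each of these points would hold one low-resolution, low-sample-count
# radiance cache (e.g., 64x16 at 32 spp in the paper's setup).
cache_positions = vertices[triangles].mean(axis=1)

print(cache_positions)  # one 3D position per triangle
```

The per-triangle radiance maps rendered at these positions are the "imperfect caches" that the runtime network later reconstructs into full-resolution radiance.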
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. [en_US]
dc.subject: CCS Concepts: Computing methodologies → Ray tracing; Neural networks
dc.subject: Computing methodologies → Ray tracing
dc.subject: Neural networks
dc.title: Real-time Deep Radiance Reconstruction from Imperfect Caches [en_US]
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Rendering - Modeling Nature and Material
dc.description.volume: 41
dc.description.number: 7
dc.identifier.doi: 10.1111/cgf.14675
dc.identifier.pages: 267-278
dc.identifier.pages: 12 pages


This item appears in the following Collection(s)

  • 41-Issue 7
    Pacific Graphics 2022 - Symposium Proceedings
