dc.contributor.author | Kim, Sang Min | en_US |
dc.contributor.author | Choi, Changwoon | en_US |
dc.contributor.author | Heo, Hyeongjun | en_US |
dc.contributor.author | Kim, Young Min | en_US |
dc.contributor.editor | Chaine, Raphaëlle | en_US |
dc.contributor.editor | Deng, Zhigang | en_US |
dc.contributor.editor | Kim, Min H. | en_US |
dc.date.accessioned | 2023-10-09T07:34:01Z | |
dc.date.available | 2023-10-09T07:34:01Z | |
dc.date.issued | 2023 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14931 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14931 | |
dc.description.abstract | Advances in the Neural Radiance Field (NeRF) and its variants have demonstrated remarkable capabilities in generating photo-realistic novel views from a small set of input images. While recent works suggest various techniques and model architectures that enhance speed or reconstruction quality, little attention has been paid to exploring the RGB color space of input images. In this paper, we propose a universal color transform module that maximally harnesses the captured evidence for the neural network at hand. The color transform module uses an encoder-decoder framework that maps the RGB color space into a new latent space, enhancing the expressiveness of the input domain. We attach the encoder and the decoder at the input and output of a NeRF model of choice, respectively, and jointly optimize them to maintain the cycle consistency of the proposed transform, in addition to minimizing the reconstruction errors in the feature domain. Our comprehensive experiments demonstrate that the learned color space can significantly improve reconstruction quality compared to the conventional RGB representation. Its benefits are particularly pronounced in challenging scenarios such as low-light environments and scenes with low-textured regions. The proposed color transform pushes the limits of the input domain and offers a promising avenue for advancing the reconstruction capabilities of various neural representations. Source code is available at https://github.com/sangminkim-99/ColorTransformModule. | en_US |
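A minimal PyTorch sketch of the training scheme the abstract describes: an encoder-decoder color transform trained jointly with a NeRF that renders latent features, combining a feature-domain reconstruction loss with a cycle-consistency loss. The module names (ColorTransform, training_loss), layer sizes, latent dimension, and loss weight below are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

import torch
import torch.nn as nn

class ColorTransform(nn.Module):
    # Learned per-pixel map between RGB and a latent color space (hypothetical sketch).
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        # Encoder: RGB -> latent feature, attached at the NeRF input side.
        self.encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        # Decoder: latent feature -> RGB, attached at the NeRF output side.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, rgb):
        z = self.encoder(rgb)
        return z, self.decoder(z)

def training_loss(transform, rendered_features, gt_rgb, w_cycle=0.1):
    # rendered_features: latent colors rendered by the NeRF for a batch of rays.
    # gt_rgb: ground-truth pixel colors for the same rays, in [0, 1].
    z_gt, rgb_cycle = transform(gt_rgb)
    # Reconstruction error measured in the learned feature domain.
    loss_feat = torch.mean((rendered_features - z_gt) ** 2)
    # Cycle consistency: decode(encode(c)) should recover the original color.
    loss_cycle = torch.mean((rgb_cycle - gt_rgb) ** 2)
    return loss_feat + w_cycle * loss_cycle

At inference, novel views would then be obtained by decoding the NeRF's latent output back to RGB with the trained decoder.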
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | CCS Concepts: Computing methodologies -> Reconstruction; Rendering | |
dc.subject | Computing methodologies | |
dc.subject | Reconstruction | |
dc.subject | Rendering | |
dc.title | Robust Novel View Synthesis with Color Transform Module | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Neural Rendering | |
dc.description.volume | 42 | |
dc.description.number | 7 | |
dc.identifier.doi | 10.1111/cgf.14931 | |
dc.identifier.pages | 14 pages | |