CP-NeRF: Conditionally Parameterized Neural Radiance Fields for Cross-scene Novel View Synthesis
Date
2023
Author
He, Hao
Liang, Yixun
Xiao, Shishi
Chen, Jierun
Chen, Yingcong
Abstract
Neural radiance fields (NeRF) have demonstrated a promising research direction for novel view synthesis. However, existing approaches either require per-scene optimization that takes significant computation time, or condition on local features, overlooking the global context of images. To address these shortcomings, we propose Conditionally Parameterized Neural Radiance Fields (CP-NeRF), a plug-in module that enables NeRF to leverage contextual information at different scales. Instead of optimizing the model parameters of NeRFs directly, we train a Feature Pyramid hyperNetwork (FPN) that extracts view-dependent global and local information from images within or across scenes to produce the model parameters. Our model can be trained end-to-end with the standard NeRF photometric loss. Extensive experiments demonstrate that our method can significantly boost the performance of NeRF, achieving state-of-the-art results on various benchmark datasets.
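The abstract describes a hypernetwork that predicts the parameters of a NeRF MLP from image-conditioned features rather than optimizing them per scene. The snippet below is a minimal, illustrative sketch of that idea, not the authors' implementation: the feature-pyramid encoder is stubbed out with a random context vector, and all layer sizes, module names, and the two-layer MLP depth are assumptions made for brevity.

```python
# Minimal sketch of conditional parameterization (assumed sizes, not CP-NeRF's code):
# a hypernetwork head maps a context feature to the weights and biases of a
# small NeRF-style MLP, so the radiance field is conditioned on the input views.
import torch
import torch.nn as nn


class HyperLinear(nn.Module):
    """Linear layer whose weight and bias are predicted from a context feature."""

    def __init__(self, ctx_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Hypernetwork head: context feature -> flattened (weight, bias).
        self.head = nn.Linear(ctx_dim, in_dim * out_dim + out_dim)

    def forward(self, x: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        params = self.head(ctx)  # shape: [in_dim * out_dim + out_dim]
        w = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim :]
        return x @ w.t() + b


class ConditionedNeRF(nn.Module):
    """Tiny NeRF-style MLP whose layers are produced by hypernetwork heads."""

    def __init__(self, ctx_dim: int = 256, pos_dim: int = 63, hidden: int = 128):
        super().__init__()
        self.l1 = HyperLinear(ctx_dim, pos_dim, hidden)
        self.l2 = HyperLinear(ctx_dim, hidden, hidden)
        self.out = HyperLinear(ctx_dim, hidden, 4)  # RGB + density

    def forward(self, pts_enc: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.l1(pts_enc, ctx))
        h = torch.relu(self.l2(h, ctx))
        return self.out(h, ctx)


# Usage: in the paper, ctx would come from a feature-pyramid encoder over the
# input views; here it is a random placeholder. Training would use the standard
# photometric loss on rendered pixels, as in NeRF.
ctx = torch.randn(256)                   # image-conditioned context (assumed size)
pts = torch.randn(1024, 63)              # positionally encoded sample points
rgb_sigma = ConditionedNeRF()(pts, ctx)  # shape: [1024, 4]
```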
BibTeX
@article{10.1111:cgf.14940,
journal = {Computer Graphics Forum},
title = {{CP-NeRF: Conditionally Parameterized Neural Radiance Fields for Cross-scene Novel View Synthesis}},
author = {He, Hao and Liang, Yixun and Xiao, Shishi and Chen, Jierun and Chen, Yingcong},
year = {2023},
publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14940}
}