Neural Adaptive Scene Tracing (NAScenT)
Date
2022
Author
Li, Rui
Rückert, Darius
Wang, Yuanhao
Idoughi, Ramzi
Heidrich, Wolfgang
Abstract
Neural rendering with implicit neural networks has recently emerged as an attractive proposition for scene reconstruction, achieving excellent quality albeit at high computational cost. While the most recent generation of such methods has made progress on rendering (inference) times, very little progress has been made on improving reconstruction (training) times. In this work we present Neural Adaptive Scene Tracing (NAScenT), which directly trains a hybrid explicit-implicit neural representation. NAScenT uses a hierarchical octree representation with one neural network per leaf node, and combines this representation with a two-stage sampling process that concentrates ray samples where they matter most: near object surfaces. As a result, NAScenT is capable of reconstructing challenging scenes, including both large, sparsely populated volumes such as UAV-captured outdoor environments and small scenes with high geometric complexity. NAScenT outperforms existing neural rendering approaches in terms of both quality and training time.
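The abstract names two core ideas: a hierarchical octree with one small network per leaf node, and a two-stage sampling process that concentrates ray samples near object surfaces. Below is a minimal NumPy sketch of how such a structure could be organized; the class names, network sizes, sample counts, and the peak-based refinement heuristic are illustrative assumptions, not the paper's actual implementation.

import numpy as np

class LeafMLP:
    """Tiny stand-in for the per-leaf network (sizes are illustrative)."""
    def __init__(self, hidden=32, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w1 = rng.normal(0.0, 0.1, (3, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))  # outputs RGB + density

    def __call__(self, x):
        h = np.maximum(x @ self.w1, 0.0)  # ReLU hidden layer
        return h @ self.w2

class OctreeNode:
    """Axis-aligned cube; internal nodes hold 8 children, leaves hold an MLP."""
    def __init__(self, center, half, depth, max_depth):
        self.center, self.half = np.asarray(center, float), half
        if depth == max_depth:
            self.children, self.net = None, LeafMLP()
        else:
            offs = half / 2.0
            self.children = [
                OctreeNode(self.center + offs * np.array([sx, sy, sz]), offs,
                           depth + 1, max_depth)
                for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)
            ]
            self.net = None

    def query(self, p):
        """Route a 3D point to the leaf containing it and evaluate that leaf's MLP."""
        if self.children is None:
            local = (p - self.center) / self.half  # normalize to the leaf's [-1, 1]^3 cube
            return self.net(local)
        idx = (int(p[0] > self.center[0]) * 4 +
               int(p[1] > self.center[1]) * 2 +
               int(p[2] > self.center[2]))
        return self.children[idx].query(p)

def two_stage_samples(origin, direction, tree, n_coarse=32, n_fine=32):
    """Coarse uniform samples along the ray, then extra samples clustered around
    the coarse density peak (a stand-in for surface-concentrated sampling)."""
    t_coarse = np.linspace(0.1, 2.0, n_coarse)
    pts = origin + t_coarse[:, None] * direction
    density = np.array([tree.query(p)[3] for p in pts])
    t_peak = t_coarse[np.argmax(density)]
    t_fine = np.clip(t_peak + np.random.default_rng(1).normal(0.0, 0.05, n_fine), 0.1, 2.0)
    return np.sort(np.concatenate([t_coarse, t_fine]))

# Example usage (hypothetical scene bounds and ray):
tree = OctreeNode(center=(0.0, 0.0, 0.0), half=1.0, depth=0, max_depth=2)
ts = two_stage_samples(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]), tree)

The sketch only illustrates the routing of sample points to per-leaf networks and the idea of refining samples around a detected surface; the paper's training procedure, octree subdivision criteria, and rendering integration are not shown here.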
BibTeX
@inproceedings{10.2312:vmv.20221199,
booktitle = {Vision, Modeling, and Visualization},
editor = {Bender, Jan and Botsch, Mario and Keim, Daniel A.},
title = {{Neural Adaptive Scene Tracing (NAScenT)}},
author = {Li, Rui and Rückert, Darius and Wang, Yuanhao and Idoughi, Ramzi and Heidrich, Wolfgang},
year = {2022},
publisher = {The Eurographics Association},
ISBN = {978-3-03868-189-2},
DOI = {10.2312/vmv.20221199}
}