DFR: Differentiable Function Rendering for Learning 3D Generation from Images
Abstract
Learning-based 3D generation is an active research area in computer graphics. Recently, several works have adopted implicit functions defined by neural networks to represent 3D objects and have achieved state-of-the-art results. However, training such networks requires precise ground-truth 3D data and heavy pre-processing, which is often impractical. To tackle this problem, we propose DFR, a differentiable process for rendering implicit-function representations of 3D objects into 2D images. Briefly, our method simulates the physical imaging process by casting multiple rays through the image plane into the function space, aggregating all information along each ray, and performing differentiable shading according to each ray's state. We also propose several strategies to optimize the rendering pipeline, making it efficient in both time and memory so that it can support network training. With DFR, many 3D modeling tasks can be performed with only 2D supervision. We conduct experiments on a variety of applications; both quantitative and qualitative evaluations demonstrate the effectiveness of our method.
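The imaging process the abstract describes (one ray per pixel, evaluation of the implicit function at samples along each ray, then a smooth aggregation of those samples) can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: `sphere_occupancy` is a hypothetical analytic stand-in for the neural network, the camera is orthographic, and the soft "any sample inside" product is just one common differentiable aggregation choice.

```python
import numpy as np

def sphere_occupancy(points, center=np.zeros(3), radius=0.5):
    # Toy implicit function standing in for a trained network:
    # returns a soft occupancy in (0, 1), above 0.5 inside the sphere.
    d = np.linalg.norm(points - center, axis=-1)
    return 1.0 / (1.0 + np.exp(20.0 * (d - radius)))

def render_silhouette(res=32, n_samples=64):
    # Cast one ray per pixel along +z through an orthographic
    # image plane spanning [-1, 1]^2.
    xs = np.linspace(-1.0, 1.0, res)
    u, v = np.meshgrid(xs, xs)                       # pixel coordinates
    ts = np.linspace(-2.0, 2.0, n_samples)           # depths along each ray
    pts = np.stack(
        [np.broadcast_to(u[..., None], (res, res, n_samples)),
         np.broadcast_to(v[..., None], (res, res, n_samples)),
         np.broadcast_to(ts, (res, res, n_samples))],
        axis=-1)                                     # (res, res, n_samples, 3)
    occ = sphere_occupancy(pts)                      # occupancy per sample
    # Aggregate along the ray: a pixel is covered if any sample lies
    # inside the shape; the soft product keeps this differentiable,
    # so gradients can flow back to the implicit function's parameters.
    return 1.0 - np.prod(1.0 - occ, axis=-1)

img = render_silhouette()   # silhouette: ~1 near the center, ~0 at corners
```

A full renderer would additionally shade each ray from the surface point and normal it finds, but the silhouette case already shows how 2D pixel losses can supervise the 3D function.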
BibTeX
@article{10.1111:cgf.14082,
journal = {Computer Graphics Forum},
title = {{DFR: Differentiable Function Rendering for Learning 3D Generation from Images}},
author = {Wu, Yunjie and Sun, Zhengxing},
year = {2020},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14082}
}