Joint SVBRDF Recovery and Synthesis From a Single Image using an Unsupervised Generative Adversarial Network
Date
2020

Abstract
We want to recreate spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a single image. Producing these SVBRDFs from single images will allow designers to incorporate many new materials into their virtual scenes, increasing their realism. A single image contains incomplete information about the SVBRDF, making reconstruction difficult. Existing algorithms can produce high-quality SVBRDFs from a single photograph or a few photographs using supervised deep learning. The learning step relies on a huge dataset containing both input photographs and the ground truth SVBRDF maps. This is a weakness, as ground truth maps are not easy to acquire. For practical use, it is also important to produce large SVBRDF maps. Existing algorithms rely on a separate texture synthesis step to generate these large maps, which leads to a loss of consistency between the generated SVBRDF maps. In this paper, we address both issues simultaneously. We present an unsupervised generative adversarial neural network that performs both SVBRDF capture from a single image and synthesis at the same time. From a low-resolution input image, we generate a high-resolution SVBRDF, much larger than the input image. We train a generative adversarial network (GAN) to produce SVBRDF maps that have both a large spatial extent and detailed texels. We employ a two-stream generator that divides the training of the maps into two groups (normal and roughness as one, diffuse and specular as the other) to better optimize those four maps. In the end, our method is able to generate high-quality, large-scale SVBRDF maps from a single input photograph with repetitive structures, and it provides higher-quality rendering results with more details compared to previous work. Each input for our method requires individual training, which takes about 3 hours.
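The abstract does not give the exact network architecture, but the two-stream idea can be illustrated with a minimal PyTorch sketch: a shared encoder on the low-resolution photograph feeds two decoder streams, one predicting normal and roughness, the other diffuse and specular. Layer counts, channel widths, the 2x upsampling factor, and all names below are placeholder assumptions, not the authors' implementation.

    # Minimal sketch of a two-stream SVBRDF generator (illustrative only).
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

    class TwoStreamGenerator(nn.Module):
        def __init__(self, in_ch=3, width=64):
            super().__init__()
            # Shared encoder on the low-resolution input photograph.
            self.encoder = nn.Sequential(conv_block(in_ch, width),
                                         conv_block(width, width))
            # Stream A: normal (3 channels) + roughness (1 channel).
            self.stream_nr = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                conv_block(width, width),
                nn.Conv2d(width, 4, kernel_size=3, padding=1),
            )
            # Stream B: diffuse (3 channels) + specular (3 channels).
            self.stream_ds = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                conv_block(width, width),
                nn.Conv2d(width, 6, kernel_size=3, padding=1),
            )

        def forward(self, x):
            feats = self.encoder(x)
            nr = self.stream_nr(feats)   # normal + roughness, upsampled
            ds = self.stream_ds(feats)   # diffuse + specular, upsampled
            normal, roughness = nr[:, :3], nr[:, 3:4]
            diffuse, specular = ds[:, :3], ds[:, 3:]
            return normal, roughness, diffuse, specular

    # Example: a 256x256 input photograph yields 512x512 SVBRDF maps.
    if __name__ == "__main__":
        gen = TwoStreamGenerator()
        photo = torch.rand(1, 3, 256, 256)
        n, r, d, s = gen(photo)
        print(n.shape, r.shape, d.shape, s.shape)

Splitting the four maps into two streams lets each stream specialize: normal and roughness describe fine geometric and glossiness variation, while diffuse and specular describe color, so they benefit from separate decoding paths even though they share encoder features.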
BibTeX
@inproceedings {10.2312:sr.20201136,
booktitle = {Eurographics Symposium on Rendering - DL-only Track},
editor = {Dachsbacher, Carsten and Pharr, Matt},
title = {{Joint SVBRDF Recovery and Synthesis From a Single Image using an Unsupervised Generative Adversarial Network}},
author = {Zhao, Yezi and Wang, Beibei and Xu, Yanning and Zeng, Zheng and Wang, Lu and Holzschuch, Nicolas},
year = {2020},
publisher = {The Eurographics Association},
ISSN = {1727-3463},
ISBN = {978-3-03868-117-5},
DOI = {10.2312/sr.20201136}
}