A Deep Residual Network for Geometric Decontouring
Abstract
Grayscale images are widely used to construct or represent geometric details in the field of computer graphics. In practice, the displacement mapping technique often takes an 8-bit grayscale image as input to manipulate the positions of vertices. The human eye is insensitive to intensity changes between consecutive gray levels, so a grayscale image provides only 256 levels of luminance. However, when these luminance values are converted into geometric elements, artifacts such as false contours become obvious. In this paper, we formulate geometric decontouring as a constrained optimization problem from a geometric perspective. Instead of solving this optimization problem directly, we propose a data-driven method to learn a residual mapping function. We design a Geometric DeContouring Network (GDCNet) to eliminate false contours effectively. To this end, we adopt a ResNet-based network structure and a normal-based loss function. Extensive experimental results demonstrate that our method achieves accurate reconstructions. Our method can serve as a compressed representation for reliefs and enhances the traditional displacement mapping technique, efficiently augmenting 3D models with high-quality geometric details from grayscale images.
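The abstract names a ResNet-based structure and a normal-based loss but gives no implementation details. The following is a minimal PyTorch sketch of the general idea only: a residual CNN that predicts the quantization residual of a height map, and a loss on surface normals computed by finite differences. All layer counts, channel sizes, names, and the exact loss formulation are assumptions for illustration, not the paper's GDCNet.

# Illustrative sketch only -- NOT the paper's GDCNet. It assumes the idea
# described in the abstract: a ResNet-style network predicts a residual that
# refines an 8-bit quantized height map, trained with a loss on surface
# normals derived from height-map gradients. All names/sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        # Local skip connection: learn a small correction on top of x.
        return x + self.conv2(F.relu(self.conv1(x)))

class GDCNetSketch(nn.Module):
    """Maps a quantized height map to a decontoured one via a learned residual."""
    def __init__(self, ch=64, num_blocks=8):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, h_quantized):
        # Predict the quantization residual and add it back (global skip).
        return h_quantized + self.tail(self.body(self.head(h_quantized)))

def normals_from_height(h):
    """Unit surface normals from a (B,1,H,W) height map via finite differences."""
    dx = h[:, :, :, 1:] - h[:, :, :, :-1]   # gradient along x
    dy = h[:, :, 1:, :] - h[:, :, :-1, :]   # gradient along y
    dx = F.pad(dx, (0, 1, 0, 0))            # pad back to the input size
    dy = F.pad(dy, (0, 0, 0, 1))
    n = torch.cat([-dx, -dy, torch.ones_like(h)], dim=1)
    return F.normalize(n, dim=1)

def normal_loss(h_pred, h_gt):
    """Penalize angular deviation between predicted and ground-truth normals."""
    cos = (normals_from_height(h_pred) * normals_from_height(h_gt)).sum(dim=1)
    return (1.0 - cos).mean()

Under these assumptions, training would minimize normal_loss(model(h_q), h_gt) over pairs of 8-bit quantized and high-precision height maps, so that the network learns to remove the stair-stepping (false contours) that quantization introduces.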
BibTeX
@article{10.1111:cgf.14124,
journal = {Computer Graphics Forum},
title = {{A Deep Residual Network for Geometric Decontouring}},
author = {Ji, Zhongping and Zhou, Chengqin and Zhang, Qiankan and Zhang, Yu-Wei and Wang, Wenping},
year = {2020},
publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14124}
}