dc.description.abstract | Structured light scanning is ubiquitous in 3D acquisition. It captures high geometric detail at low cost under a variety of challenging scene conditions. Recent methods have demonstrated robustness to artifacts caused by global illumination, such as inter-reflections and sub-surface scattering, as well as imperfections caused by projector defocus. Quantitative evaluation of structured lighting schemes, however, is hindered by the difficulty of obtaining ground-truth data, resulting in a poor understanding of how these methods perform across a wide range of shapes, materials, and lighting configurations. In this paper, we present a benchmark for studying the performance of structured lighting algorithms in the presence of errors caused by these scene properties. To do this, we construct a synthetic structured lighting scanner that uses advanced physically based rendering techniques to simulate the point cloud acquisition process. We show that, under conditions similar to those of a real scanner, our synthetic scanner reproduces the same artifacts found in the output of a real scanner. Using this synthetic scanner, we perform a quantitative evaluation of four structured lighting techniques: Gray-code patterns, micro-phase shifting, ensemble codes, and unstructured light scanning. The evaluation, performed on a variety of scenes, demonstrates that no single method adequately handles all sources of error; each method is suited to addressing distinct sources of error. | en_US |