dc.description.abstract | Synthesizing novel views from image data is a widely investigated topic in both
computer graphics and computer vision, and has many applications, such as stereo
and multi-view rendering for virtual reality, light field reconstruction, and image
post-processing. While image-based approaches have the advantage of reduced
computational load compared to classical model-based rendering, efficiency is still
a major concern. This thesis demonstrates how concepts and tools from artificial
intelligence can be used to increase the efficiency of image-based view synthesis
algorithms. In particular, it is shown how machine learning can help generate point
patterns useful for a variety of computer graphics tasks, how path planning can guide
image warping, how sparsity-enforcing optimization can lead to significant speedups
in interactive distribution effect rendering, and how probabilistic inference can be
used to perform real-time 2D-to-3D conversion. | en_US |