dc.description.abstract | In the context of content creation, there is an increasing demand for high-quality digital models, including object shape, texture, environment illumination, and physical properties. As design and preview presentations become exclusively digital, the need for high-quality 3D assets has grown sharply. This demand, however, is challenging to meet because the process of creating such digital 3D assets remains mostly manual: heavy post-processing is still needed to clean up captures from commercial 3D capture devices, or models must be created manually from scratch. On the other hand, low-quality 3D data is much easier to obtain, e.g., modeled by hand, captured with a low-end device, or generated using a virtual simulator. In this thesis, we develop algorithms that consume such low-quality 3D data and 2D cues to automatically create enriched 3D content of higher quality. Specifically, with the help of low-quality underlying 3D geometry, we explore (i) how to recover 3D shape from 2D images while factorizing camera motion and object motion in a dynamic scene; (ii) how to transfer texture and illumination from a captured 2D image to 3D shapes of the same category; (iii) how to decompose a 360° environment map and BRDF materials from photos and reduce ambiguity through joint observation; and (iv) how to model 3D garment shape and its physical properties from a 2D sketch or image. | en_US |