dc.contributor.author    Wang, Yangtuanfeng
dc.date.accessioned      2020-01-22T13:37:27Z
dc.date.available        2020-01-22T13:37:27Z
dc.date.issued           2019-06-28
dc.identifier.uri        https://diglib.eg.org:443/handle/10.2312/2632871
dc.description.abstract  In the context of content creation, there is an increasing demand for high-quality digital models, including object shape, texture, environment illumination, and physical properties. As design and preview presentations become increasingly digital, the need for high-quality 3D assets has grown sharply. This demand, however, is challenging to meet, as the process of creating such digital 3D assets remains largely manual: heavy post-processing is still needed to clean up captures from commercial 3D capture devices, or models have to be created from scratch by hand. Low-quality 3D data, on the other hand, is much easier to obtain, e.g., modeled by hand, captured with a low-end device, or generated with a virtual simulator. In this thesis, we develop algorithms that consume such low-quality 3D data together with 2D cues to automatically create enriched 3D content of higher quality. Specifically, with the help of low-quality underlying 3D geometry, we explore (i) how to recover 3D shape from 2D images while factoring camera motion and object motion in a dynamic scene; (ii) how to transfer texture and illumination from a captured 2D image to 3D shapes of the same category; (iii) how to decompose a 360° environment map and BRDF material from photographs, reducing ambiguity through joint observation; and (iv) how to model 3D garment shape and its physical properties from a 2D sketch or image.  en_US
dc.language.iso          en  en_US
dc.publisher             University College London  en_US
dc.title                 Generating High-quality 3D Assets from Easy-to-access 2D content  en_US
dc.type                  Thesis  en_US

