
dc.contributor.author	Rodriguez-Pardo, Carlos
dc.date.accessioned	2023-08-01T09:10:29Z
dc.date.available	2023-08-01T09:10:29Z
dc.date.issued	2023-07
dc.identifier.uri	https://diglib.eg.org:443/handle/10.2312/3543873
dc.description.abstract	Realistic virtual scenes are becoming increasingly prevalent in our society, with a wide range of applications in areas such as manufacturing, architecture, fashion design, and entertainment, including movies, video games, and augmented and virtual reality. Generating realistic images of such scenes requires highly accurate illumination, geometry, and material models, which are challenging to obtain. Traditionally, such models have been created manually by skilled artists, a process that can be prohibitively time-consuming and costly. Alternatively, real-world examples can be captured, but this approach presents additional challenges in terms of accuracy and scalability. Moreover, while realism and accuracy are crucial, rendering efficiency is also a key requirement, so that lifelike images can be generated at the speed demanded by many real-world applications. One of the most significant challenges in this regard is the acquisition and representation of materials, which are a critical component of our visual world and, by extension, of virtual representations of it. However, existing approaches for material acquisition and representation fall short in efficiency and accuracy, which limits their real-world impact. Data-driven approaches that leverage machine learning may provide viable solutions to these challenges. Nevertheless, designing and training machine learning models that meet all these competing requirements remains a challenging task, requiring careful consideration of trade-offs between quality and efficiency.

In this thesis, we propose novel learning-based solutions to several key challenges in physically-based rendering and material digitization. Our approach leverages various forms of neural networks to introduce innovative algorithms for radiance encoding and for digital material generation, editing, and estimation. First, we present a visual attribute transfer framework for digital materials that generalizes effectively to new illumination conditions and geometric distortions, and we showcase a use case of this method for high-resolution material acquisition with a custom device. Additionally, we propose a generative model capable of synthesizing tileable textures from a single input image, which helps improve the quality of material rendering. Building upon recent work in neural fields, we also introduce a material representation that accurately encodes material reflectance while offering powerful editing and propagation capabilities. Beyond reflectance, we present a novel method for global illumination encoding that leverages carefully designed generative models to achieve significantly faster sampling than previous work. Finally, we propose two innovative methods for low-cost material digitization. Using flatbed scanners as capture devices, we present a generative model that provides high-resolution material reflectance estimations from a single input image, together with an uncertainty quantification algorithm that increases its reliability and efficiency. Additionally, we present a novel method for digitizing the mechanical properties of fabrics from depth images, which we extend with a perceptually validated drape similarity metric.

Overall, the contributions of this thesis represent significant advances in radiance encoding and in digital material acquisition and editing, enhancing the quality, scalability, and efficiency of physically-based rendering pipelines.	en_US
dc.description.sponsorship	Thesis fully funded by SEDDI.	en_US
dc.language.iso	en_US	en_US
dc.subject	Deep Learning	en_US
dc.subject	Digital Materials	en_US
dc.subject	Reflectance	en_US
dc.subject	Radiance	en_US
dc.subject	Machine Learning	en_US
dc.subject	Neural Networks	en_US
dc.subject	Material Acquisition	en_US
dc.subject	BRDF	en_US
dc.subject	Global Illumination	en_US
dc.subject	Generative Models	en_US
dc.subject	Fabrics	en_US
dc.subject	Attribute Transfer	en_US
dc.subject	Texture Synthesis	en_US
dc.subject	Intrinsic Decomposition	en_US
dc.title	Neural Networks for Digital Materials and Radiance Encoding	en_US
dc.type	Thesis	en_US

