
dc.contributor.author: Serrano, Ana
dc.date.accessioned: 2019-11-27T16:31:00Z
dc.date.available: 2019-11-27T16:31:00Z
dc.date.issued: 2019-04-29
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/2632859
dc.description.abstract: Visual computing is a recently coined term that embraces many subfields of computer science related to the acquisition, analysis, or synthesis of visual data through the use of computer resources. What brings all these fields together is that they are all related to the visual aspects of computing and, more importantly, that in recent years they have started to share similar goals and methods. This thesis presents contributions in three different areas within the field of visual computing: computational imaging, material appearance, and virtual reality. The first part of this thesis is devoted to computational imaging, and in particular to rich image and video acquisition. First, we deal with the capture of high dynamic range images in a single shot, where we propose a novel reconstruction algorithm based on sparse coding that recovers the full range of luminances of the captured scene from a single coded low dynamic range image. Second, we focus on the temporal domain, where we propose a novel reconstruction algorithm, again based on sparse coding, that recovers high-speed video sequences from a single photograph with encoded temporal information. The second part addresses the long-standing problem of visual perception and editing of real-world materials. We propose an intuitive, perceptually based editing space for captured data. We derive a set of meaningful attributes for describing appearance, and we build a control space based on these attributes by means of a large-scale user study. Finally, we propose a series of applications for this space. One application to which we devote particular attention is gamut mapping. The range of appearances displayable on a particular display or printer is called the gamut. Given a desired appearance, which may lie outside that gamut, gamut mapping consists of making it displayable without excessively distorting the final perceived appearance. For this task, we make use of our previously derived perceptually based space to introduce visual perception into the mapping process and help minimize the perceived distortions that may arise. The third part is devoted to virtual reality. We first focus on the study of human gaze behavior in static omnistereo panoramas. We collect gaze samples, provide an analysis of this data, and then propose a series of applications that make use of the derived insights. Then, we investigate more intricate behaviors in dynamic environments in a cinematographic context. We gather gaze data from viewers watching virtual reality videos containing different edits with varying parameters, and provide the first systematic analysis of viewers’ behavior and the perception of continuity in virtual reality video. Finally, we propose a novel method for adding parallax for 360° video visualization in virtual reality headsets. [en_US]
dc.description.sponsorship: This work has been funded by the European Research Council (project CHAMELEON), the Spanish Ministry of Economy and Competitiveness (projects TIN2013-41857-P, TIN2014-61696-EXP, TIN2016-79710-P, and TIN2016-78753-P), the Max Planck Institute for Informatics, Adobe Research, and the Nvidia Graduate Fellowship program. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Universidad de Zaragoza [en_US]
dc.subject: computational imaging [en_US]
dc.subject: material appearance [en_US]
dc.subject: virtual reality [en_US]
dc.subject: applied perception [en_US]
dc.title: Advances on computational imaging, material appearance, and virtual reality [en_US]
dc.type: Thesis [en_US]

