dc.description.abstract | Computer-generated imagery is ubiquitous, spanning fields such as games and movies, architecture, engineering, and virtual prototyping, while also helping to create novel ones such as computational materials. With increasing computational power and improved acquisition techniques, the field has undergone a paradigm shift towards data-driven techniques, yielding an unprecedented level of realism in visual appearance. Unfortunately, this shift brings a series of problems. First, there is a disconnect between the mathematical representation of the data and any meaningful parameters that humans understand; the captured data is machine-friendly, but not human-friendly. Second, the many different acquisition systems lead to heterogeneous formats and very large datasets. Third, real-world appearance functions are usually nonlinear and high-dimensional. As a result, visual appearance datasets are increasingly unfit for editing operations, which limits the creative process for scientists, engineers, artists, and practitioners in general. There is an immense gap between the complexity, realism, and richness of the captured data and the flexibility to edit such data. The current research path leads to a fragmented space of isolated solutions, each tailored to a particular dataset and problem. To define intuitive and predictable editing spaces, algorithms, and workflows, we must investigate at the theoretical, algorithmic, and application levels, putting the user at the core and learning the relevant appearance features in terms humans understand. | en_US |