Audio and Visual Rendering with Perceptual Foundations
Date: 2009-09-15
Item/paper (currently) not available via TIB Hannover.
Abstract
Realistic visual and audio rendering still remains a technical challenge. Typical computers cannot cope with the increasing complexity of today's virtual environments, both for audio and for visuals, and the graphic design of such scenes requires talented artists.

In the first part of this thesis, we focus on audiovisual rendering algorithms for complex virtual environments, which we improve using human perception of combined audio and visual cues. In particular, we developed a full perceptual audiovisual rendering engine integrating an efficient impact-sound rendering improved by exploiting our perception of audiovisual simultaneity, a way to cluster sound sources using human spatial tolerance between a sound and its visual representation, and a combined level-of-detail mechanism for both audio and visuals that varies the impact-sound quality and the visually rendered material quality of the objects. All our crossmodal effects are supported by prior work in neuroscience and demonstrated through our own experiments in virtual environments.

In the second part, we use information present in photographs to guide visual rendering. We provide two tools to assist casual artists such as gamers or engineers. The first extracts the visual appearance of hair from a photograph, allowing rapid customization of avatars in virtual environments. The second allows fast previewing of 3D scenes that reproduce the appearance of an input photograph following a user's 3D sketch.

We thus propose a first step toward crossmodal audiovisual rendering algorithms and develop practical tools that let non-expert users create virtual worlds from the appearance of photographs.
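
The combined audio-visual level-of-detail mechanism mentioned in the abstract can be pictured, in spirit, as a budgeted quality-selection problem. The C++ sketch below is purely illustrative and is not the thesis's algorithm: the per-object perceptual-importance score, the cost model, and the greedy selection are all assumptions made for the example; it simply raises impact-sound quality and material quality together for the most important objects until a frame budget is spent.

// Hypothetical sketch (not the thesis's actual method): a combined
// audio-visual level-of-detail selector that assigns each object an
// impact-sound quality and a material-rendering quality under a shared budget.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Object {
    const char* name;
    float perceptualImportance;  // assumed metric, e.g. distance or screen coverage
    int   audioLod;              // 0 = cheapest impact-sound synthesis, higher = better
    int   visualLod;             // 0 = cheapest material shading, higher = better
};

// Illustrative cost model only: higher LOD levels cost more per frame.
static float cost(const Object& o) {
    return 0.2f * o.audioLod + 0.3f * o.visualLod;
}

// Greedily raise audio and visual quality of the most important objects
// until the frame budget is exhausted.
void assignCombinedLod(std::vector<Object>& objects, float budget, int maxLod) {
    std::sort(objects.begin(), objects.end(),
              [](const Object& a, const Object& b) {
                  return a.perceptualImportance > b.perceptualImportance;
              });
    float spent = 0.0f;
    for (int level = 1; level <= maxLod; ++level) {
        for (auto& o : objects) {
            // Raise both modalities together so audio and visual quality stay coherent.
            Object candidate = o;
            candidate.audioLod = level;
            candidate.visualLod = level;
            float extra = cost(candidate) - cost(o);
            if (spent + extra <= budget) {
                o = candidate;
                spent += extra;
            }
        }
    }
}

int main() {
    std::vector<Object> scene = {
        {"crate",  0.9f, 0, 0},
        {"barrel", 0.4f, 0, 0},
        {"debris", 0.1f, 0, 0},
    };
    assignCombinedLod(scene, /*budget=*/2.0f, /*maxLod=*/3);
    for (const auto& o : scene)
        std::printf("%s: audio LOD %d, visual LOD %d\n", o.name, o.audioLod, o.visualLod);
    return 0;
}

Under this toy budget, the nearest object ends up with higher audio and visual quality than the distant ones; the actual thesis couples the two modalities using measured perceptual tolerances rather than the fixed costs assumed here.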