Show simple item record

dc.contributor.author: Jaenicke, H. (en_US)
dc.contributor.author: Borgo, R. (en_US)
dc.contributor.author: Mason, J. S. D. (en_US)
dc.contributor.author: Chen, M. (en_US)
dc.date.accessioned: 2015-02-23T16:40:32Z
dc.date.available: 2015-02-23T16:40:32Z
dc.date.issued: 2010 (en_US)
dc.identifier.issn: 1467-8659 (en_US)
dc.identifier.uri: http://dx.doi.org/10.1111/j.1467-8659.2009.01605.x (en_US)
dc.description.abstract: Sound is an integral part of most movies and videos. In many situations, viewers of a video are unable to hear the sound track, for example, when watching it in fast-forward mode, when the viewer is hearing impaired, or when the plot is given as a storyboard. In this paper, we present an automated visualization solution to such problems. The system first detects the common components (such as music, speech, rain, explosions, and so on) in a sound track, then maps them to a collection of programmable visual metaphors, and generates a composite visualization. This form of sound visualization, referred to as SoundRiver, can also be used to augment various forms of video abstraction and annotated key frames, and to enhance graphical user interfaces for video-handling software. The SoundRiver conveys more semantic information to the viewer than traditional graphical representations of sound, such as phonautographs, spectrograms or artistic audiovisual animations. (en_US)
dc.publisher: The Eurographics Association and Blackwell Publishing Ltd (en_US)
dc.title: SoundRiver: Semantically-Rich Sound Illustration (en_US)
dc.description.seriesinformation: Computer Graphics Forum (en_US)
dc.description.volume: 29 (en_US)
dc.description.number: 2 (en_US)
dc.identifier.doi: 10.1111/j.1467-8659.2009.01605.x (en_US)
dc.identifier.pages: 357-366 (en_US)

