Show simple item record

dc.contributor.authorDoukakis, E.en_US
dc.contributor.authorDebattista, K.en_US
dc.contributor.authorHarvey, C.en_US
dc.contributor.authorBashford‐Rogers, T.en_US
dc.contributor.authorChalmers, A.en_US
dc.contributor.editorChen, Min and Benes, Bedrichen_US
dc.date.accessioned2018-04-05T12:48:38Z
dc.date.available2018-04-05T12:48:38Z
dc.date.issued2018
dc.identifier.issn1467-8659
dc.identifier.urihttp://dx.doi.org/10.1111/cgf.13258
dc.identifier.urihttps://diglib.eg.org:443/handle/10.1111/cgf13258
dc.description.abstractFidelity is of key importance if virtual environments are to be used as authentic representations of real environments. However, simulating the multitude of senses that comprise the human sensory system is computationally challenging. With limited computational resources, it is essential to distribute these carefully in order to simulate the most ideal perceptual experience. This paper investigates this balance of resources across multiple scenarios where combined audiovisual stimulation is delivered to the user. A subjective experiment was undertaken where participants (N=35) allocated five fixed resource budgets across graphics and acoustic stimuli. In the experiment, increasing the quality of one of the stimuli decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget is increased, an approximately balanced distribution of resources is preferred between graphics and acoustics. Based on the results, an audiovisual quality prediction model is proposed and successfully validated against previously untested budgets and an untested scenario.en_US
dc.publisher© 2018 The Eurographics Association and John Wiley & Sons Ltd.en_US
dc.subjectmulti‐modal
dc.subjectcross‐modal
dc.subjectbi‐modal
dc.subjectsound
dc.subjectgraphics
dc.subjectI.3.3 [Computer Graphics]: Picture/Image Generation - Viewing Algorithms; I.4.8 [Computer Graphics]: Image Processing and Computer Vision - Scene Analysis
dc.titleAudiovisual Resource Allocation for Bimodal Virtual Environmentsen_US
dc.description.seriesinformationComputer Graphics Forum
dc.description.sectionheadersArticles
dc.description.volume37
dc.description.number1
dc.identifier.doi10.1111/cgf.13258
dc.identifier.pages172-183

