dc.contributor.author | Doukakis, E. | en_US |
dc.contributor.author | Debattista, K. | en_US |
dc.contributor.author | Harvey, C. | en_US |
dc.contributor.author | Bashford‐Rogers, T. | en_US |
dc.contributor.author | Chalmers, A. | en_US |
dc.contributor.editor | Chen, Min and Benes, Bedrich | en_US |
dc.date.accessioned | 2018-04-05T12:48:38Z | |
dc.date.available | 2018-04-05T12:48:38Z | |
dc.date.issued | 2018 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | http://dx.doi.org/10.1111/cgf.13258 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13258 | |
dc.description.abstract | Fidelity is of key importance if virtual environments are to be used as authentic representations of real environments. However, simulating the multitude of senses that comprise the human sensory system is computationally challenging. With limited computational resources, it is essential to distribute these carefully in order to simulate the most ideal perceptual experience. This paper investigates this balance of resources across multiple scenarios where combined audiovisual stimulation is delivered to the user. A subjective experiment was undertaken where participants (N=35) allocated five fixed resource budgets across graphics and acoustic stimuli. In the experiment, increasing the quality of one of the stimuli decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget is increased, an approximately balanced distribution of resources is preferred between graphics and acoustics. Based on the results, an audiovisual quality prediction model is proposed and successfully validated against previously untested budgets and an untested scenario. | en_US |
dc.publisher | © 2018 The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | multi‐modal | |
dc.subject | cross‐modal | |
dc.subject | bi‐modal | |
dc.subject | sound | |
dc.subject | graphics | |
dc.subject | I.3.3 [Computer Graphics]: Picture/Image Generation - Viewing Algorithms; I.4.8 [Image Processing and Computer Vision]: Scene Analysis | |
dc.title | Audiovisual Resource Allocation for Bimodal Virtual Environments | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Articles | |
dc.description.volume | 37 | |
dc.description.number | 1 | |
dc.identifier.doi | 10.1111/cgf.13258 | |
dc.identifier.pages | 172-183 | |