Show simple item record

dc.contributor.author: Surace, Luca [en_US]
dc.contributor.author: Tursun, Cara [en_US]
dc.contributor.author: Celikcan, Ufuk [en_US]
dc.contributor.author: Didyk, Piotr [en_US]
dc.contributor.editor: Ritschel, Tobias [en_US]
dc.contributor.editor: Weidlich, Andrea [en_US]
dc.date.accessioned: 2023-06-27T06:41:59Z
dc.date.available: 2023-06-27T06:41:59Z
dc.date.issued: 2023
dc.identifier.isbn: 978-3-03868-229-5
dc.identifier.isbn: 978-3-03868-228-8
dc.identifier.issn: 1727-3463
dc.identifier.uri: https://doi.org/10.2312/sr.20231130
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/sr20231130
dc.description.abstract: New virtual reality headsets and wide field-of-view displays rely on foveated rendering techniques that lower rendering quality in peripheral vision to increase performance without a perceptible loss of quality. While the concept is simple, the practical realization and full exploitation of foveated rendering systems remain challenging. Existing techniques focus on modulating the spatial resolution of rendering or the shading rate according to the characteristics of human perception. However, most rendering systems also incur a significant cost for geometry processing. In this work, we investigate the problem of mesh simplification, also known as the level-of-detail (LOD) technique, for foveated rendering. We aim to maximize the amount of LOD simplification while keeping the visibility of changes to the object geometry under a selected threshold. We first propose two perceptually inspired visibility models for mesh simplification suitable for gaze-contingent rendering. The first model focuses on spatial distortions in the object silhouette and body. The second model accounts for the temporal visibility of switching between two LODs. We calibrate the two models using data from perceptual experiments and derive a computational method that predicts a suitable LOD for rendering an object at a specific eccentricity without objectionable quality loss. We apply the technique to the foveated rendering of static and dynamic objects and demonstrate its benefits in a validation experiment. Using our perceptually driven gaze-contingent LOD selection, we achieve up to 33% additional speedup in the rendering performance of complex-geometry scenes when combined with the most recent industrial solutions, i.e., Nanite from Unreal Engine. [en_US]
dc.publisher: The Eurographics Association [en_US]
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies -> Perception; Virtual reality
dc.subject: Computing methodologies
dc.subject: Perception
dc.subject: Virtual reality
dc.title: Gaze-Contingent Perceptual Level of Detail Prediction [en_US]
dc.description.seriesinformation: Eurographics Symposium on Rendering
dc.description.sectionheaders: Perception
dc.identifier.doi: 10.2312/sr.20231130
dc.identifier.pages: 119-130
dc.identifier.pages: 12 pages
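
The abstract describes a computational method that predicts the coarsest LOD an object can be rendered at, given its retinal eccentricity, without objectionable quality loss. As a rough illustration of that idea only, the sketch below selects an LOD from a table of eccentricity thresholds; this is not the authors' calibrated model, and the function names, threshold values, and LOD count are all hypothetical placeholders.

```python
import math

# Hypothetical eccentricity-to-LOD thresholds (degrees of visual angle).
# The paper calibrates such a mapping from perceptual experiments; the
# numbers below are placeholders, not the authors' measured values.
LOD_ECCENTRICITY_THRESHOLDS = [0.0, 5.0, 12.0, 25.0, 40.0]

def eccentricity_deg(gaze_dir, object_dir):
    """Angle in degrees between the gaze ray and the eye-to-object ray."""
    dot = sum(g * o for g, o in zip(gaze_dir, object_dir))
    norm = (math.sqrt(sum(g * g for g in gaze_dir))
            * math.sqrt(sum(o * o for o in object_dir)))
    # Clamp to [-1, 1] to guard acos against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def select_lod(gaze_dir, object_dir):
    """Pick the coarsest LOD whose eccentricity threshold is reached."""
    ecc = eccentricity_deg(gaze_dir, object_dir)
    lod = 0
    for level, threshold in enumerate(LOD_ECCENTRICITY_THRESHOLDS):
        if ecc >= threshold:
            lod = level
    return lod

# Example: an object 20 degrees off-gaze falls past the 12-degree
# threshold but short of 25 degrees, so it gets LOD 2.
gaze = (0.0, 0.0, 1.0)
obj = (math.sin(math.radians(20)), 0.0, math.cos(math.radians(20)))
print(select_lod(gaze, obj))  # -> 2
```

In a gaze-contingent renderer this selection would run per object per frame, using the eye tracker's gaze direction; the paper's actual models additionally account for silhouette versus body distortions and for the temporal visibility of LOD switches, which this sketch omits.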


Files in this item


This item appears in the following Collection(s)

