dc.contributor.author | Cheng, Irene | en_US |
dc.contributor.author | Basu, Anup | en_US |
dc.contributor.author | Pan, Yixin | en_US |
dc.date.accessioned | 2015-11-12T07:55:16Z | |
dc.date.available | 2015-11-12T07:55:16Z | |
dc.date.issued | 2003 | en_US |
dc.identifier.issn | 1017-4656 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/egp.20031012 | en_US |
dc.description.abstract | Spatially varying sensing (foveation) has been used in many areas of Computer Vision, such as image compression and video teleconferencing, and in perceptually driven Level of Detail (LOD) representations in graphics. In this work, we show that foveation is advantageous for interactive mesh and texture transmission in online 3D applications. Unlike traditional mesh representations, where all 3D vertex coordinates need to be transmitted, we only need to transmit a collection of points-of-interest (foveae) and information on one (rather than three) axis. We can thereby achieve a threefold reduction in the amount of data needed to represent a new 3D model. Our research differs from LOD-based approaches using perceptually driven simplification in that (i) the mesh and texture resolutions vary smoothly and continuously in our approach, compared to distinct levels of detail in adjoining regions in other foveated or multiresolution LOD-based methods; and (ii) the approach works for an integrated foveated texture and mesh representation. The current implementation extends our past research in image and video compression [1] and is restricted to regular-grid mesh representations produced by 3D scanners. | en_US |
dc.publisher | Eurographics Association | en_US |
dc.title | Parametric Foveation for Progressive Texture and Model Transmission | en_US |
dc.description.seriesinformation | Eurographics 2003 - Posters | en_US |
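The abstract describes the scheme only at a high level. A minimal sketch of the two ideas it names — smoothly varying sampling density around foveae, and reconstructing regular-grid vertices from a single transmitted axis — might look like the following. All names, the Gaussian falloff, and the parameters are illustrative assumptions, not the authors' actual parametric formulation:

```python
import numpy as np

def foveated_weights(h, w, foveae, sigma=0.2):
    """Smoothly varying sampling density over an h x w grid: for each
    grid point, take the maximum of Gaussian falloffs centred at each
    fovea (given in normalised [0, 1] x [0, 1] coordinates). The
    Gaussian is an assumed falloff, chosen so resolution varies
    continuously rather than in distinct LOD steps."""
    ys, xs = np.mgrid[0:h, 0:w]
    ys = ys / (h - 1)
    xs = xs / (w - 1)
    weight = np.zeros((h, w))
    for fy, fx in foveae:
        d2 = (ys - fy) ** 2 + (xs - fx) ** 2
        weight = np.maximum(weight, np.exp(-d2 / (2.0 * sigma ** 2)))
    return weight

def reconstruct_vertices(depth, spacing=1.0):
    """On a regular-grid mesh (e.g. from a 3D scanner), x and y follow
    directly from the grid indices and spacing, so only the depth
    values (one axis) need to be transmitted -- the source of the
    threefold reduction claimed in the abstract."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.dstack([xs * spacing, ys * spacing, depth])
```

A receiver holding the grid dimensions and spacing can thus rebuild full 3D vertex positions from the transmitted depth array alone, while the foveal weights steer where mesh and texture detail is kept during progressive refinement.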