dc.contributor.author | Petit, Antoine | en_US |
dc.contributor.author | Haouchine, Nazim | en_US |
dc.contributor.author | Roy, Frederick | en_US |
dc.contributor.author | Goldman, Daniel B. | en_US |
dc.contributor.author | Cotin, Stephane | en_US |
dc.contributor.editor | Vidal, Franck P. and Tam, Gary K. L. and Roberts, Jonathan C. | en_US |
dc.date.accessioned | 2019-09-11T05:08:58Z | |
dc.date.available | 2019-09-11T05:08:58Z | |
dc.date.issued | 2019 | |
dc.identifier.isbn | 978-3-03868-096-3 | |
dc.identifier.uri | https://doi.org/10.2312/cgvc.20191255 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/cgvc20191255 | |
dc.description.abstract | We present Deformed Reality, a new way of interacting with an augmented reality environment by manipulating 3D objects in an intuitive and physically consistent manner. Using the core principle of augmented reality to estimate rigid pose over time, our method makes it possible for the user to deform the targeted object while it is being rendered with its natural texture, giving the sense of an interactive scene-editing experience. Our framework follows a computationally efficient pipeline that uses a proxy CAD model for pose computation, physically-based manipulation, and scene appearance estimation. The final composition is built upon a continuous image completion and re-texturing process to preserve visual consistency. The presented results show that our method can open new ways of using augmented reality by not only augmenting the environment but also interacting with objects intuitively. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Deformed Reality | en_US |
dc.description.seriesinformation | Computer Graphics and Visual Computing (CGVC) | |
dc.description.sectionheaders | Virtual Reality | |
dc.identifier.doi | 10.2312/cgvc.20191255 | |
dc.identifier.pages | 27-34 | |