dc.contributor.author | Petitjean, Automne | en_US |
dc.contributor.author | Poirier-Ginter, Yohan | en_US |
dc.contributor.author | Tewari, Ayush | en_US |
dc.contributor.author | Cordonnier, Guillaume | en_US |
dc.contributor.author | Drettakis, George | en_US |
dc.contributor.editor | Ritschel, Tobias | en_US |
dc.contributor.editor | Weidlich, Andrea | en_US |
dc.date.accessioned | 2023-06-27T07:03:35Z | |
dc.date.available | 2023-06-27T07:03:35Z | |
dc.date.issued | 2023 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14888 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14888 | |
dc.description.abstract | Recent advances in Neural Radiance Fields enable the capture of scenes with motion. However, editing the motion is hard; no existing method allows editing beyond the space of motion present in the original video, nor editing based on physics. We present the first approach that allows physically-based editing of motion in a scene captured with a single hand-held video camera, containing vibrating or periodic motion. We first introduce a Lagrangian representation, representing motion as the displacement of particles, which is learned while training a radiance field. We use these particles to create a continuous representation of motion over the sequence, which is then used to perform a modal analysis of the motion thanks to a Fourier transform on the particle displacement over time. The resulting extracted modes allow motion synthesis and easy editing of the motion, while inheriting the ability for free-viewpoint synthesis in the captured 3D scene from the radiance field. We demonstrate our new method on synthetic and real captured scenes. | en_US |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.title | ModalNeRF: Neural Modal Analysis and Synthesis for Free-Viewpoint Navigation in Dynamically Vibrating Scenes | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | NeRF | |
dc.description.volume | 42 | |
dc.description.number | 4 | |
dc.identifier.doi | 10.1111/cgf.14888 | |
dc.identifier.pages | 13 pages | |