dc.contributor.author | Franke, Linus | en_US |
dc.contributor.author | Rückert, Darius | en_US |
dc.contributor.author | Fink, Laura | en_US |
dc.contributor.author | Stamminger, Marc | en_US |
dc.contributor.editor | Bermano, Amit H. | en_US |
dc.contributor.editor | Kalogerakis, Evangelos | en_US |
dc.date.accessioned | 2024-04-16T14:38:57Z | |
dc.date.available | 2024-04-16T14:38:57Z | |
dc.date.issued | 2024 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.15012 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf15012 | |
dc.description.abstract | Point-based radiance field rendering has demonstrated impressive results for novel view synthesis, offering a compelling blend of rendering quality and computational efficiency. However, even the latest approaches in this domain are not without shortcomings. 3D Gaussian Splatting [KKLD23] struggles when tasked with rendering highly detailed scenes, suffering from blurring and cloudy artifacts. ADOP [RFS22], on the other hand, can produce crisper images, but its neural reconstruction network reduces performance, it grapples with temporal instability, and it is unable to effectively address large gaps in the point cloud. In this paper, we present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP. The fundamental concept behind our novel technique is to rasterize points into a screen-space image pyramid, with the pyramid layer selected according to the projected point size. This approach allows rendering arbitrarily large points with a single trilinear write. A lightweight neural network is then used to reconstruct a hole-free image, including detail beyond splat resolution. Importantly, our render pipeline is entirely differentiable, allowing for automatic optimization of both point sizes and positions. Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in rendering quality while maintaining a real-time frame rate of 60 frames per second on readily available hardware. This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage. The project page is located at: https://lfranke.github.io/trips | en_US |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Rendering; Image-based rendering; Reconstruction | |
dc.subject | Computing methodologies | |
dc.subject | Rendering | |
dc.subject | Image-based rendering | |
dc.subject | Reconstruction | |
dc.title | TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Real-time Neural Rendering | |
dc.description.volume | 43 | |
dc.description.number | 2 | |
dc.identifier.doi | 10.1111/cgf.15012 | |
dc.identifier.pages | 12 pages | |
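
Note: the abstract describes rendering each point with a single trilinear write into a screen-space image pyramid, with the pyramid layer chosen from the projected point size. The Python sketch below illustrates that idea only; the function names, the log2-based layer selection, and the simple weight-accumulation blending are assumptions made for illustration, not the paper's actual differentiable, depth-aware pipeline, and the subsequent neural reconstruction network is not shown.

# Minimal sketch of trilinear point splatting into a screen-space image pyramid.
# A "trilinear write" here means: bilinear in x/y at a pyramid level, plus a
# linear blend between the two levels that bracket the point's projected size.
import numpy as np

def build_pyramid(height, width, channels, levels):
    """Allocate an image pyramid (color channels + one weight channel), level 0 full-res."""
    return [np.zeros((height >> l, width >> l, channels + 1), dtype=np.float32)
            for l in range(levels)]

def trilinear_splat(pyramid, x, y, radius, color):
    """Write one projected point into the pyramid with a single trilinear write.

    x, y   : projected position in full-resolution pixel coordinates
    radius : projected point radius in full-resolution pixels
    color  : (channels,) array
    """
    levels = len(pyramid)
    # Fractional pyramid level: a point covering ~2^l pixels maps to level l,
    # so a 2x2 bilinear footprint at that level covers the whole point.
    lf = np.clip(np.log2(max(radius, 1.0)), 0.0, levels - 1 - 1e-6)
    l0 = int(np.floor(lf))
    wl = lf - l0  # linear weight between levels l0 and l0 + 1
    for level, level_weight in ((l0, 1.0 - wl), (min(l0 + 1, levels - 1), wl)):
        if level_weight == 0.0:
            continue
        img = pyramid[level]
        scale = 1.0 / (1 << level)
        px, py = x * scale - 0.5, y * scale - 0.5
        ix, iy = int(np.floor(px)), int(np.floor(py))
        fx, fy = px - ix, py - iy
        # Bilinear footprint: 2x2 pixel neighbourhood at this pyramid level.
        for dy, wy in ((0, 1.0 - fy), (1, fy)):
            for dx, wx in ((0, 1.0 - fx), (1, fx)):
                u, v = ix + dx, iy + dy
                if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                    w = level_weight * wy * wx
                    img[v, u, :-1] += w * color
                    img[v, u, -1] += w  # accumulated weight for later normalisation

# Usage: splat a point; in the full method a reconstruction network would then
# merge the pyramid levels into a hole-free image with detail beyond splat resolution.
pyr = build_pyramid(256, 256, 3, levels=4)
trilinear_splat(pyr, x=100.3, y=57.8, radius=3.5, color=np.array([1.0, 0.5, 0.2]))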