dc.contributor.author | Zhang, Cha | en_US |
dc.contributor.author | Chen, Tsuhan | en_US |
dc.contributor.editor | Alexander Keller and Henrik Wann Jensen | en_US |
dc.date.accessioned | 2014-01-27T14:30:28Z | |
dc.date.available | 2014-01-27T14:30:28Z | |
dc.date.issued | 2004 | en_US |
dc.identifier.isbn | 3-905673-12-6 | en_US |
dc.identifier.issn | 1727-3463 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/EGWR/EGSR04/243-254 | en_US |
dc.description.abstract | This paper presents a self-reconfigurable camera array system that captures video sequences from an array of mobile cameras, renders novel views on the fly, and reconfigures the camera positions to achieve better rendering quality. The system is composed of 48 cameras mounted on mobile platforms. The contribution of this paper is twofold. First, we propose an efficient algorithm capable of rendering high-quality novel views from the captured images. The algorithm reconstructs a view-dependent multi-resolution 2D mesh model of the scene geometry on the fly and uses it for rendering. It combines region of interest (ROI) identification, JPEG image decompression, lens distortion correction, scene geometry reconstruction, and novel view synthesis seamlessly on a single Intel Xeon 2.4 GHz processor, generating novel views at 4-10 frames per second (fps). Second, we present a view-dependent adaptive capturing scheme that moves the cameras to further improve rendering quality. Such camera reconfiguration naturally leads to a nonuniform arrangement of the cameras on the camera plane, which is both view-dependent and scene-dependent. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | A Self-Reconfigurable Camera Array | en_US |
dc.description.seriesinformation | Eurographics Workshop on Rendering | en_US |
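The abstract lists lens distortion correction as one stage of the real-time rendering pipeline. As a minimal sketch of what that stage typically involves, the snippet below applies and inverts a standard two-coefficient Brown radial distortion model on normalized image coordinates. The coefficients `k1`, `k2` and the fixed-point inversion are assumptions for illustration; the paper's actual calibration model and implementation are not specified in this record.

```python
def distort(x, y, k1, k2):
    """Apply radial distortion to normalized image coordinates.

    Uses the common two-term Brown model: scale by
    1 + k1*r^2 + k2*r^4, where r^2 = x^2 + y^2.
    """
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f


def undistort(xd, yd, k1, k2, iters=10):
    """Invert the radial model by fixed-point iteration.

    Starting from the distorted point, repeatedly re-estimate the
    distortion factor and divide it out; for small coefficients
    this converges in a handful of iterations.
    """
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y
```

In a pipeline like the one described, the correction would be applied to each decompressed camera image (or to sampling coordinates during rendering) before the geometry reconstruction and view synthesis stages consume the pixels.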