
dc.contributor.author: Zhang, Cha
dc.contributor.author: Chen, Tsuhan
dc.contributor.editor: Alexander Keller and Henrik Wann Jensen
dc.date.accessioned: 2014-01-27T14:30:28Z
dc.date.available: 2014-01-27T14:30:28Z
dc.date.issued: 2004
dc.identifier.isbn: 3-905673-12-6
dc.identifier.issn: 1727-3463
dc.identifier.uri: http://dx.doi.org/10.2312/EGWR/EGSR04/243-254
dc.description.abstract: This paper presents a self-reconfigurable camera array system that captures video sequences from an array of mobile cameras, renders novel views on the fly and reconfigures the camera positions to achieve better rendering quality. The system is composed of 48 cameras mounted on mobile platforms. The contribution of this paper is twofold. First, we propose an efficient algorithm that is capable of rendering high-quality novel views from the captured images. The algorithm reconstructs a view-dependent multi-resolution 2D mesh model of the scene geometry on the fly and uses it for rendering. The algorithm combines region of interest (ROI) identification, JPEG image decompression, lens distortion correction, scene geometry reconstruction and novel view synthesis seamlessly on a single Intel Xeon 2.4 GHz processor, which is capable of generating novel views at 4-10 frames per second (fps). Second, we present a view-dependent adaptive capturing scheme that moves the cameras in order to show even better rendering results. Such camera reconfiguration naturally leads to a nonuniform arrangement of the cameras on the camera plane, which is both view-dependent and scene-dependent.
dc.publisher: The Eurographics Association
dc.title: A Self-Reconfigurable Camera Array
dc.description.seriesinformation: Eurographics Workshop on Rendering
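
The abstract above describes view-dependent novel-view synthesis from a camera array. A common ingredient of such renderers is weighting each capturing camera by the angular deviation between its ray to a scene point and the novel-view ray. The sketch below illustrates that idea only; the function name, parameters, and the specific inverse-deviation weighting are assumptions for illustration, not the paper's actual algorithm:

```python
import math

def blend_weights(point, novel_cam, cams, k=4, eps=1e-6):
    """Illustrative view-dependent blending weights (hypothetical helper).

    point     -- 3D scene point hit by the novel-view ray
    novel_cam -- position of the novel viewpoint
    cams      -- positions of the capturing cameras
    Returns (camera_index, weight) pairs for the k cameras whose rays
    deviate least from the novel-view ray; weights sum to 1.
    """
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    # Direction of the novel-view ray through the scene point.
    ray = normalize(tuple(p - c for p, c in zip(point, novel_cam)))

    # Angular deviation of each camera's ray to the same point.
    devs = []
    for i, cam in enumerate(cams):
        cam_ray = normalize(tuple(p - c for p, c in zip(point, cam)))
        cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(ray, cam_ray))))
        devs.append((math.acos(cos_a), i))

    # Keep the k closest-in-angle cameras, weight inversely by deviation.
    devs.sort()
    raw = [(i, 1.0 / (d + eps)) for d, i in devs[:k]]
    total = sum(w for _, w in raw)
    return [(i, w / total) for i, w in raw]
```

With the novel viewpoint coinciding with one of the array cameras, that camera dominates the blend, which matches the intuition that rendering quality improves as cameras move toward the desired view.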

