dc.contributor.author | Le, Hoang | en_US |
dc.contributor.author | Liu, Feng | en_US |
dc.contributor.editor | Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon | en_US |
dc.date.accessioned | 2019-10-14T05:09:39Z | |
dc.date.available | 2019-10-14T05:09:39Z | |
dc.date.issued | 2019 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.13860 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13860 | |
dc.description.abstract | Novel view synthesis from sparse and unstructured input views faces challenges such as difficult dense 3D reconstruction and large occlusions. This paper addresses these problems by estimating proper appearance flows from the target view to the input views to warp and blend the input views. Our method first estimates a sparse set of 3D scene points using an off-the-shelf 3D reconstruction method and calculates sparse flows from the target to the input views. Our method then performs appearance flow completion to estimate dense flows from the corresponding sparse ones. Specifically, we design a deep fully convolutional neural network that takes the sparse flows and input views as input and outputs the dense flows. Furthermore, we estimate the optical flows between input views and use them as references to guide the estimation of the dense flows between the target view and the input views. Besides the dense flows, our network also estimates the masks used to blend the multiple warped inputs to render the target view. Experiments on the KITTI benchmark show that our method can generate high-quality novel views from sparse and unstructured input views. | en_US |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.title | Appearance Flow Completion for Novel View Synthesis | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Image Based Rendering | |
dc.description.volume | 38 | |
dc.description.number | 7 | |
dc.identifier.doi | 10.1111/cgf.13860 | |
dc.identifier.pages | 555-565 | |
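
The abstract describes rendering the target view by backward-warping each input view with a dense appearance flow and blending the warped results with estimated masks. The following is a minimal illustrative sketch of that warp-and-blend step in NumPy, not the paper's implementation: the single-channel images, the bilinear sampling, and the mask normalization are all simplifying assumptions for illustration.

```python
import numpy as np

def warp(view, flow):
    """Backward-warp a single-channel view (H, W) by a dense flow field.

    flow[..., 0] / flow[..., 1] hold, for each target pixel, the x / y
    displacement to its source location in `view` (bilinear sampling,
    edge-clamped). Illustrative only; the paper predicts these dense
    flows with a fully convolutional network.
    """
    h, w = view.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = sx - x0, sy - y0
    top = view[y0, x0] * (1 - fx) + view[y0, x1] * fx
    bot = view[y1, x0] * (1 - fx) + view[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def blend(warped_views, masks):
    """Blend warped inputs with per-pixel masks normalized to sum to 1."""
    masks = np.stack(masks)
    weights = masks / np.clip(masks.sum(axis=0, keepdims=True), 1e-8, None)
    return (np.stack(warped_views) * weights).sum(axis=0)
```

Under zero flow, `warp` is the identity, and with equal masks `blend` averages the warped views; in the paper both the dense flows and the blending masks are outputs of the network.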