MP-NeRF: Neural Radiance Fields for Dynamic Multi-person synthesis from Sparse Views
Abstract
Multi-person novel view synthesis aims to generate free-viewpoint videos of dynamic scenes containing multiple people. However, current methods require numerous views to reconstruct a dynamic person and only achieve good performance when a single person is present in the video. This paper aims to reconstruct a multi-person scene from fewer views, in particular addressing the occlusion and interaction problems that arise in multi-person scenes. We propose MP-NeRF, a practical method for multi-person novel view synthesis from sparse cameras without pre-scanned template human models. We apply a multi-person SMPL template as the identity and human-motion prior. We then build a global latent code to integrate the relative observations among multiple people, so that multiple dynamic people can be represented as separate neural radiance representations from sparse views. Experiments on the multi-person dataset MVMP show that our method is superior to other state-of-the-art methods.
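As a rough illustration of the idea described in the abstract (not the authors' code), the sketch below models each person with a NeRF-style MLP that is conditioned on a shared global latent code summarizing observations of all people, so the per-person radiance fields can account for one another. All module names, layer sizes, and the learned global code are illustrative assumptions.

import torch
import torch.nn as nn

class PerPersonRadianceField(nn.Module):
    """NeRF-style MLP for one person, conditioned on a global latent code (sketch)."""
    def __init__(self, point_dim=3, latent_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB colour (3) + volume density (1)
        )

    def forward(self, points, global_code):
        # points: (N, 3) sample locations; global_code: (latent_dim,)
        code = global_code.expand(points.shape[0], -1)
        out = self.mlp(torch.cat([points, code], dim=-1))
        rgb = torch.sigmoid(out[:, :3])
        sigma = torch.relu(out[:, 3:])
        return rgb, sigma

class MultiPersonScene(nn.Module):
    """One radiance field per person plus a shared, learned global latent code (assumed design)."""
    def __init__(self, num_people, latent_dim=32):
        super().__init__()
        self.global_code = nn.Parameter(torch.zeros(latent_dim))
        self.people = nn.ModuleList(
            PerPersonRadianceField(latent_dim=latent_dim) for _ in range(num_people)
        )

    def forward(self, points, person_id):
        return self.people[person_id](points, self.global_code)

# Usage: query person 0's field at a batch of sample points.
scene = MultiPersonScene(num_people=2)
rgb, sigma = scene(torch.rand(1024, 3), person_id=0)

In the actual method, the SMPL template would additionally supply identity and motion priors (e.g. by warping sample points into a canonical pose before querying the field); that step is omitted here for brevity.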
BibTeX
@article {10.1111:cgf.14646,
journal = {Computer Graphics Forum},
title = {{MP-NeRF: Neural Radiance Fields for Dynamic Multi-person synthesis from Sparse Views}},
author = {Chao, Xian Jin and Leung, Howard},
year = {2022},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14646}
}