Show simple item record

dc.contributor.author: Chao, Xian Jin
dc.contributor.author: Leung, Howard
dc.contributor.editor: Dominik L. Michels
dc.contributor.editor: Soeren Pirk
dc.date.accessioned: 2022-08-10T15:20:04Z
dc.date.available: 2022-08-10T15:20:04Z
dc.date.issued: 2022
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14646
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14646
dc.description.abstract: Multi-person novel view synthesis aims to generate free-viewpoint videos of dynamic scenes containing multiple people. However, current methods require numerous views to reconstruct a dynamic person and perform well only when a single person is present in the video. This paper aims to reconstruct a multi-person scene from fewer views, in particular addressing the occlusion and interaction problems that arise in multi-person scenes. We propose MP-NeRF, a practical method for multi-person novel view synthesis from sparse cameras without pre-scanned template human models. We apply a multi-person SMPL template as the identity and human motion prior. We then build a global latent code to integrate the relative observations among multiple people, so that we can represent multiple dynamic people as multiple neural radiance representations from sparse views. Experiments on the multi-person dataset MVMP show that our method outperforms other state-of-the-art methods.
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.title: MP-NeRF: Neural Radiance Fields for Dynamic Multi-person Synthesis from Sparse Views
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Learning
dc.description.volume: 41
dc.description.number: 8
dc.identifier.doi: 10.1111/cgf.14646
dc.identifier.pages: 317-325
dc.identifier.pages: 9 pages


This item appears in the following Collection(s)

  • 41-Issue 8
    ACM SIGGRAPH / Eurographics Symposium on Computer Animation 2022