dc.contributor.author               Lee, Hanhung        en_US
dc.contributor.author               Savva, Manolis      en_US
dc.contributor.author               Chang, Angel Xuan   en_US
dc.contributor.editor               Aristidou, Andreas  en_US
dc.contributor.editor               Macdonnell, Rachel  en_US
dc.date.accessioned                 2024-04-16T15:45:21Z
dc.date.available                   2024-04-16T15:45:21Z
dc.date.issued                      2024
dc.identifier.issn                  1467-8659
dc.identifier.uri                   https://doi.org/10.1111/cgf.15061
dc.identifier.uri                   https://diglib.eg.org:443/handle/10.1111/cgf15061
dc.description.abstract             Recent years have seen an explosion of work and interest in text-to-3D shape generation. Much of the progress is driven by advances in 3D representations, large-scale pretraining and representation learning for text and image data enabling generative AI models, and differentiable rendering. Computational systems that can perform text-to-3D shape generation have captivated the popular imagination as they enable non-expert users to easily create 3D content directly from text. However, there are still many limitations and challenges remaining in this problem space. In this state-of-the-art report, we provide a survey of the underlying technology and methods enabling text-to-3D shape generation to summarize the background literature. We then derive a systematic categorization of recent work on text-to-3D shape generation based on the type of supervision data required. Finally, we discuss limitations of the existing categories of methods, and delineate promising directions for future work.    en_US
dc.publisher                        The Eurographics Association and John Wiley & Sons Ltd.    en_US
dc.title                            Text-to-3D Shape Generation    en_US
dc.description.seriesinformation    Computer Graphics Forum
dc.description.sectionheaders       State of the Art Reports
dc.description.volume               43
dc.description.number               2
dc.identifier.doi                   10.1111/cgf.15061
dc.identifier.pages                 27 pages
dc.description.documenttype         star

