
dc.contributor.author: Memery, Sean
dc.contributor.author: Cedron, Osmar
dc.contributor.author: Subr, Kartic
dc.contributor.editor: Chaine, Raphaëlle
dc.contributor.editor: Deng, Zhigang
dc.contributor.editor: Kim, Min H.
dc.date.accessioned: 2023-10-09T07:37:40Z
dc.date.available: 2023-10-09T07:37:40Z
dc.date.issued: 2023
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14980
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14980
dc.description.abstract: Artistic authoring of 3D environments is a laborious enterprise that also requires skilled content creators. There have been impressive improvements in using machine learning to address different aspects of generating 3D content, such as generating meshes, arranging geometry, synthesizing textures, etc. In this paper we develop a model to generate Bidirectional Reflectance Distribution Functions (BRDFs) from descriptive textual prompts. BRDFs are four-dimensional probability distributions that characterize the interaction of light with surface materials. They are either represented parametrically, or by tabulating the probability density associated with every pair of incident and outgoing angles. The former lends itself to artistic editing, while the latter is used when measuring the appearance of real materials. Numerous works have focused on hypothesizing BRDF models from images of materials. We learn a mapping from textual descriptions of materials to parametric BRDFs. Our model is first trained using a semi-supervised approach before being tuned via an unsupervised scheme. Although our model is general, in this paper we specifically generate parameters for MDL materials, conditioned on natural language descriptions, within NVIDIA's Omniverse platform. This enables use cases such as real-time text prompts, e.g. "dull plastic" or "shiny iron", to change materials of objects in 3D environments. Since the output of our model is a parametric BRDF, rather than an image of the material, it may be used to render materials on any shape under arbitrarily specified viewing and lighting conditions.
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: CCS Concepts: Computing methodologies -> Machine learning; Natural language processing; Computer graphics
dc.subject: Computing methodologies
dc.subject: Machine learning
dc.subject: Natural language processing
dc.subject: Computer graphics
dc.title: Generating Parametric BRDFs from Natural Language Descriptions
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Radiance and Appearance
dc.description.volume: 42
dc.description.number: 7
dc.identifier.doi: 10.1111/cgf.14980
dc.identifier.pages: 9 pages
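The abstract's distinction between parametric and tabulated BRDFs can be illustrated with a small sketch: a parametric BRDF is just a function of a few named scalars, so a text-to-parameters model only has to predict those scalars, and the result can then be evaluated for any incident/outgoing direction pair. The toy Blinn-Phong model below is an illustrative stand-in chosen for brevity, not the MDL material parameterization the paper actually targets; the parameter names (`albedo`, `specular`, `shininess`) are hypothetical.

```python
import math

def blinn_phong_brdf(albedo, specular, shininess, n, wi, wo):
    """Toy isotropic parametric BRDF: Lambertian diffuse + Blinn-Phong
    specular lobe. n, wi, wo are unit 3-vectors (surface normal,
    incident direction, outgoing direction). Illustrative only."""
    # Diffuse term is constant over the hemisphere.
    diffuse = albedo / math.pi
    # Half-vector between the incident and outgoing directions.
    h = [wi[i] + wo[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in h))
    h = [c / norm for c in h]
    n_dot_h = max(0.0, sum(n[i] * h[i] for i in range(3)))
    # Normalized specular lobe; higher shininess -> tighter highlight.
    spec = specular * (shininess + 2) / (2 * math.pi) * n_dot_h ** shininess
    return diffuse + spec

up = (0.0, 0.0, 1.0)
# "Dull plastic": weak, broad specular lobe (low shininess).
dull = blinn_phong_brdf(0.8, 0.05, 4, up, up, up)
# "Shiny iron": strong, tight specular lobe (high shininess).
shiny = blinn_phong_brdf(0.1, 0.9, 200, up, up, up)
```

Because the model's output is a handful of such parameters rather than a rendered image, the same predicted material can be re-evaluated under any view and light configuration, which is the editing and rendering flexibility the abstract points to.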


This item appears in the following Collection(s)

  • 42-Issue 7
    Pacific Graphics 2023 - Symposium Proceedings
