
dc.contributor.author: Qiao, Zhi
dc.contributor.author: Kanai, Takashi
dc.contributor.editor: Lee, Sung-hee and Zollmann, Stefanie and Okabe, Makoto and Wuensche, Burkhard
dc.date.accessioned: 2020-10-29T18:39:32Z
dc.date.available: 2020-10-29T18:39:32Z
dc.date.issued: 2020
dc.identifier.isbn: 978-3-03868-120-5
dc.identifier.uri: https://doi.org/10.2312/pg.20201222
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/pg20201222
dc.description.abstract: We present a novel approach for shading photorealistic hair animation, an essential visual element in depicting realistic virtual characters. Our model shades high-quality hair quickly by extending conditional Generative Adversarial Networks. Furthermore, our method is much faster than previous, computationally expensive rendering algorithms and produces fewer artifacts than other neural image-translation methods. In this work, we provide a novel energy-conserving hair shading model that retains the vast majority of the semi-transparent appearance and accurately reproduces the interaction with the lights of the scene. Our method is easy to implement and is faster and more computationally efficient than previous algorithms.
dc.publisher: The Eurographics Association
dc.subject: Computing methodologies
dc.subject: Image based rendering
dc.subject: Neural networks
dc.title: An Energy-Conserving Hair Shading Model Based on Neural Style Transfer
dc.description.seriesinformation: Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers
dc.description.sectionheaders: Rendering
dc.identifier.doi: 10.2312/pg.20201222
dc.identifier.pages: 1-6
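
The abstract above frames hair shading as conditional-GAN image translation. For context only, below is a minimal sketch of a pix2pix-style conditional-GAN training step, assuming PyTorch; the tiny generator G, discriminator D, the L1 weight of 100, and the placeholder tensors are hypothetical illustrations, not the paper's actual networks, losses, or data.

    # Minimal pix2pix-style conditional GAN step (illustrative only; the paper's
    # actual architecture and losses are not specified in this record).
    import torch
    import torch.nn as nn

    # Hypothetical stand-ins: a tiny generator and discriminator.
    G = nn.Sequential(                       # condition image -> shaded image
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    )
    D = nn.Sequential(                       # (condition, image) pair -> real/fake map
        nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 3, stride=2, padding=1),
    )

    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    cond = torch.rand(4, 3, 64, 64)    # e.g. unshaded hair buffers (placeholder data)
    target = torch.rand(4, 3, 64, 64)  # e.g. reference-rendered hair (placeholder data)

    # Discriminator step: classify real pairs vs. generated pairs.
    fake = G(cond)
    d_real = D(torch.cat([cond, target], dim=1))
    d_fake = D(torch.cat([cond, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: fool D, plus an L1 term pulling output toward the reference.
    d_fake = D(torch.cat([cond, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, target)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()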

