Show simple item record

dc.contributor.author: Kips, Robin [en_US]
dc.contributor.author: Jiang, Ruowei [en_US]
dc.contributor.author: Ba, Sileye [en_US]
dc.contributor.author: Duke, Brendan [en_US]
dc.contributor.author: Perrot, Matthieu [en_US]
dc.contributor.author: Gori, Pietro [en_US]
dc.contributor.author: Bloch, Isabelle [en_US]
dc.contributor.editor: Chaine, Raphaëlle [en_US]
dc.contributor.editor: Kim, Min H. [en_US]
dc.date.accessioned: 2022-04-22T06:26:21Z
dc.date.available: 2022-04-22T06:26:21Z
dc.date.issued: 2022
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14456
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14456
dc.description.abstract: Augmented reality applications have rapidly spread across online retail platforms and social media, allowing consumers to virtually try on a wide variety of products, such as makeup, hair dyeing, or shoes. However, parametrizing a renderer to synthesize realistic images of a given product remains a challenging task that requires expert knowledge. While recent work has introduced neural rendering methods for virtual try-on from example images, current approaches are based on large generative models that cannot be used in real time on mobile devices. This calls for a hybrid method that combines the advantages of computer graphics and neural rendering approaches. In this paper, we propose a novel deep learning framework for building a real-time inverse graphics encoder that learns to map a single example image into the parameter space of a given augmented reality rendering engine. Our method leverages self-supervised learning and does not require labeled training data, which makes it extendable to many virtual try-on applications. Furthermore, most augmented reality renderers are not differentiable in practice, due to algorithmic choices or implementation constraints imposed by real-time execution on portable devices. To relax the need for a graphics-based differentiable renderer in inverse graphics problems, we introduce a trainable imitator module. Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer. We propose a novel rendering sensitivity loss to train the imitator, which ensures that the network learns an accurate and continuous representation of each rendering parameter. Automatically learning a differentiable renderer, as proposed here, could benefit various inverse graphics tasks. Our framework enables novel applications where consumers can virtually try on a new, unseen product from an inspirational reference image found on social media. It can also be used by computer graphics artists to automatically create a realistic rendering from a reference product image. [en_US]
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. [en_US]
dc.subject: CCS Concepts: Computing methodologies --> Computer vision; Machine learning; Computer graphics
dc.subject: Computing methodologies
dc.subject: Computer vision
dc.subject: Machine learning
dc.subject: Computer graphics
dc.title: Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers [en_US]
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Learning for Rendering
dc.description.volume: 41
dc.description.number: 2
dc.identifier.doi: 10.1111/cgf.14456
dc.identifier.pages: 29-40
dc.identifier.pages: 12 pages
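The two-stage idea described in the abstract — first learn a differentiable imitator of a non-differentiable renderer, then solve the inverse graphics problem by optimizing parameters through that imitator — can be sketched in miniature. The toy "renderer" below (a linear map followed by quantization), the linear least-squares "imitator", and all dimensions are illustrative assumptions, not the paper's actual architecture or loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-differentiable "renderer": a linear map followed by
# quantization. The rounding step has zero gradient almost everywhere,
# standing in for the algorithmic non-differentiability the paper mentions.
A = rng.normal(size=(16, 3))  # toy setup: 3 parameters -> 16-pixel "image"

def renderer(p):
    return np.round(A @ p * 32) / 32  # quantized output, not differentiable

# Stage 1 (imitator): fit a differentiable surrogate on sampled
# (parameters, rendering) pairs. A linear least-squares fit stands in
# here for the paper's generative imitator network.
P = rng.uniform(-1, 1, size=(500, 3))      # random rendering parameters
Y = np.stack([renderer(p) for p in P])     # corresponding renderings
W, *_ = np.linalg.lstsq(P, Y, rcond=None)  # imitator weights: y ~ p @ W

# Stage 2 (inverse graphics): recover parameters for a target image by
# gradient descent through the differentiable imitator.
p_true = np.array([0.3, -0.5, 0.8])
y_target = renderer(p_true)

p_hat = np.zeros(3)
for _ in range(1000):
    residual = p_hat @ W - y_target
    grad = W @ residual        # gradient of 0.5 * ||p @ W - y_target||^2
    p_hat -= 0.02 * grad

print(np.round(p_hat, 2))  # close to p_true, up to quantization error
```

In the paper's full method the imitator is a generative network trained with a rendering sensitivity loss, and an inverse graphics encoder amortizes stage 2 by predicting parameters from a single example image in one forward pass; the sketch above only shows why a learned differentiable surrogate makes the inversion tractable.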

