dc.contributor.author | Jia, Biao | en_US |
dc.contributor.author | Brandt, Jonathan | en_US |
dc.contributor.author | Mech, Radomír | en_US |
dc.contributor.author | Kim, Byungmoon | en_US |
dc.contributor.author | Manocha, Dinesh | en_US |
dc.contributor.editor | Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon | en_US |
dc.date.accessioned | 2019-10-14T05:12:44Z | |
dc.date.available | 2019-10-14T05:12:44Z | |
dc.date.issued | 2019 | |
dc.identifier.isbn | 978-3-03868-099-4 | |
dc.identifier.issn | - | |
dc.identifier.uri | https://doi.org/10.2312/pg.20191336 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/pg20191336 | |
dc.description.abstract | We present a novel reinforcement learning-based natural media painting algorithm. Our goal is to reproduce a reference image using brush strokes, and we encode the objective through observations. Our formulation takes into account that the distribution of the reward in the action space is sparse and that training a reinforcement learning algorithm from scratch can be difficult. We present an approach that combines self-supervised learning and reinforcement learning to effectively transfer negative samples into positive ones and change the reward distribution. We demonstrate the benefits of our painting agent in reproducing reference images with brush strokes. The training phase takes about one hour, and the runtime algorithm takes about 30 seconds on a GTX 1080 GPU to reproduce a 1000x800 image with 20,000 strokes. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | LPaintB: Learning to Paint from Self-Supervision | en_US |
dc.description.seriesinformation | Pacific Graphics Short Papers | |
dc.description.sectionheaders | Images and Learning | |
dc.identifier.doi | 10.2312/pg.20191336 | |
dc.identifier.pages | 33-39 | |