dc.contributor.author | Xu, Pei | en_US |
dc.contributor.author | Karamouzas, Ioannis | en_US |
dc.contributor.editor | Narain, Rahul and Neff, Michael and Zordan, Victor | en_US |
dc.date.accessioned | 2022-02-07T13:32:37Z | |
dc.date.available | 2022-02-07T13:32:37Z | |
dc.date.issued | 2021 | |
dc.identifier.issn | 2577-6193 | |
dc.identifier.uri | https://doi.org/10.1145/3480148 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1145/3480148 | |
dc.description.abstract | We present a simple and intuitive approach for interactive control of physically simulated characters. Our work builds upon generative adversarial networks (GANs) and reinforcement learning, and introduces an imitation learning framework where an ensemble of classifiers and an imitation policy are trained in tandem given pre-processed reference clips. The classifiers are trained to discriminate the reference motion from the motion generated by the imitation policy, while the policy is rewarded for fooling the discriminators. Using our GAN-like approach, multiple motor control policies can be trained separately to imitate different behaviors. At runtime, our system can respond to external control signals provided by the user and interactively switch between different policies. Compared to existing methods, our proposed approach has the following attractive properties: 1) it achieves state-of-the-art imitation performance without manually designing and fine-tuning a reward function; 2) it directly controls the character without having to track any target reference pose, explicitly or implicitly through a phase state; and 3) it supports interactive policy switching without requiring any motion generation or motion matching mechanism. We highlight the applicability of our approach in a range of imitation and interactive control tasks, while also demonstrating its ability to withstand external perturbations and to recover balance. Overall, our approach has low runtime cost and can be easily integrated into interactive applications and games. | en_US |
dc.publisher | ACM | en_US |
dc.subject | Computing methodologies | |
dc.subject | Animation | |
dc.subject | Physical simulation | |
dc.subject | Reinforcement learning | |
dc.subject | character animation | |
dc.subject | physics-based control | |
dc.subject | reinforcement learning | |
dc.subject | GAN | |
dc.title | A GAN-Like Approach for Physics-Based Imitation Learning and Interactive Control | en_US |
dc.description.seriesinformation | Proceedings of the ACM on Computer Graphics and Interactive Techniques | |
dc.description.sectionheaders | papers | |
dc.description.volume | 4 | |
dc.description.number | 3 | |
dc.identifier.doi | 10.1145/3480148 | |
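
The abstract above describes a GAN-like training loop in which a discriminator separates reference motion from policy-generated motion and the policy is rewarded for fooling it. The sketch below is illustrative only, not the authors' implementation: it assumes PyTorch, a toy feature size (STATE_DIM), and one common choice of discriminator-based reward; all names are hypothetical.

```python
# Minimal sketch of a GAN-like imitation reward (assumptions: PyTorch, toy dimensions).
import torch
import torch.nn as nn

STATE_DIM = 32  # hypothetical size of a character state-transition feature vector

# Discriminator: scores how "reference-like" a state transition looks.
discriminator = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
disc_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(ref_batch, policy_batch):
    """Train the classifier to separate reference clips from policy rollouts."""
    logits_ref = discriminator(ref_batch)
    logits_pol = discriminator(policy_batch.detach())
    loss = bce(logits_ref, torch.ones_like(logits_ref)) + \
           bce(logits_pol, torch.zeros_like(logits_pol))
    disc_opt.zero_grad()
    loss.backward()
    disc_opt.step()
    return loss.item()

def imitation_reward(policy_batch):
    """Reward the policy for samples the discriminator mistakes for reference motion."""
    with torch.no_grad():
        p = torch.sigmoid(discriminator(policy_batch))
    # -log(1 - D) grows as the policy fools the discriminator (one standard GAN-style reward).
    return -torch.log(torch.clamp(1.0 - p, min=1e-6))
```

In an actual training loop, discriminator_step would be called on mini-batches drawn from the pre-processed reference clips and from recent policy rollouts, while imitation_reward would replace a hand-designed tracking reward in the reinforcement learning update, matching the paper's stated goal of avoiding manual reward engineering.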