Learning Locomotion Skills Using DeepRL: Does the Choice of Action Space Matter?
Abstract
The use of deep reinforcement learning allows for high-dimensional state descriptors, but little is known about how the choice of action representation impacts learning and the resulting performance. We compare the impact of four different action parameterizations (torques, muscle-activations, target joint angles, and target joint-angle velocities) in terms of learning time, policy robustness, motion quality, and policy query rates. Our results are evaluated on a gait-cycle imitation task for multiple planar articulated figures and multiple gaits. We demonstrate that the local feedback provided by higher-level action parameterizations can significantly impact the learning, robustness, and motion quality of the resulting policies.
BibTeX
@inproceedings{10.1145:3099564.3099567,
booktitle = {Eurographics/ACM SIGGRAPH Symposium on Computer Animation},
editor = {Bernhard Thomaszewski and KangKang Yin and Rahul Narain},
title = {{Learning Locomotion Skills Using DeepRL: Does the Choice of Action Space Matter?}},
author = {Peng, Xue Bin and van de Panne, Michiel},
year = {2017},
publisher = {ACM},
ISSN = {1727-5288},
ISBN = {978-1-4503-5091-4},
DOI = {10.1145/3099564.3099567}
}