dc.contributor.author | Chen, Luanmin | en_US |
dc.contributor.author | Xu, Juzhan | en_US |
dc.contributor.author | Wang, Chuan | en_US |
dc.contributor.author | Huang, Haibin | en_US |
dc.contributor.author | Huang, Hui | en_US |
dc.contributor.author | Hu, Ruizhen | en_US |
dc.contributor.editor | Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan | en_US |
dc.date.accessioned | 2021-10-14T11:12:24Z | |
dc.date.available | 2021-10-14T11:12:24Z | |
dc.date.issued | 2021 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14419 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14419 | |
dc.description.abstract | In this paper, we study the problem of 3D shape upright orientation estimation from the perspective of reinforcement learning, i.e., we teach a machine (agent) to orient 3D shapes step by step to upright given its current observation. Unlike previous methods, we treat this problem as a sequential decision-making process instead of a strongly supervised learning problem. To achieve this, we propose UprightRL, a deep network architecture designed for upright orientation estimation. UprightRL mainly consists of two submodules, an Actor module and a Critic module, which are learned in a reinforcement learning manner. Specifically, the Actor module selects an action from the action space to transform the point cloud and obtain the new point cloud for the next environment state, while the Critic module evaluates the strategy and guides the Actor to choose the next-step action. Moreover, we design a reward function that gives the agent a positive reward when the selected action moves the model towards the upright orientation and a negative reward otherwise. Extensive experiments demonstrate the effectiveness of the proposed model and show that our network outperforms the state-of-the-art. We also apply our method to a robot grasping-and-placing experiment to demonstrate its practicability. | en_US |
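The abstract describes a reward that is positive when an action rotates the shape closer to upright and negative otherwise. Below is a minimal, hypothetical sketch of such an environment step and reward for a point cloud; the discrete action set, the 15-degree step size, and all function names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical discrete action set: small rotations about the x/y/z axes.
# The angular step size and the action list are assumptions, not the paper's values.
STEP = np.deg2rad(15.0)
ACTIONS = [("x", +STEP), ("x", -STEP),
           ("y", +STEP), ("y", -STEP),
           ("z", +STEP), ("z", -STEP)]

def rotation_matrix(axis, angle):
    """Rotation matrix about a coordinate axis."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def upright_error(up_vector):
    """Angle between the shape's current up direction and the world +z axis."""
    up = up_vector / np.linalg.norm(up_vector)
    return np.arccos(np.clip(up @ np.array([0.0, 0.0, 1.0]), -1.0, 1.0))

def step(points, up_vector, action_idx):
    """Apply one rotation action to the point cloud (N x 3 array) and return
    the new state plus a signed reward: +1 if the action reduced the upright
    error, -1 otherwise."""
    axis, angle = ACTIONS[action_idx]
    R = rotation_matrix(axis, angle)
    new_points = points @ R.T
    new_up = R @ up_vector
    reward = 1.0 if upright_error(new_up) < upright_error(up_vector) else -1.0
    return new_points, new_up, reward
```

In the paper's actual setup, the Actor network would choose `action_idx` from the observed point cloud and the Critic would estimate the value of that choice; the sketch above only illustrates the sign convention of the reward.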
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | Computing methodologies | |
dc.subject | Reinforcement learning | |
dc.subject | Sequential decision-making process | |
dc.subject | Upright orientation | |
dc.title | UprightRL: Upright Orientation Estimation of 3D Shapes via Reinforcement Learning | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Modelling | |
dc.description.volume | 40 | |
dc.description.number | 7 | |
dc.identifier.doi | 10.1111/cgf.14419 | |
dc.identifier.pages | 265-275 | |