dc.contributor.author | Scherfgen, David | en_US |
dc.contributor.author | Schild, Jonas | en_US |
dc.contributor.editor | Maiero, Jens and Weier, Martin and Zielasko, Daniel | en_US |
dc.date.accessioned | 2021-09-07T13:40:07Z | |
dc.date.available | 2021-09-07T13:40:07Z | |
dc.date.issued | 2021 | |
dc.identifier.isbn | 978-3-03868-159-5 | |
dc.identifier.issn | 1727-530X | |
dc.identifier.uri | https://doi.org/10.2312/egve.20211334 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egve20211334 | |
dc.description.abstract | Virtual medical emergency training provides complex yet safe interactions with virtual patients. Haptically integrating a medical manikin into virtual training has the potential to improve the interaction with a virtual patient and the training experience. We present a system that estimates the 3D pose of a medical manikin in order to haptically augment a human model in a virtual reality training environment, allowing users to physically touch a virtual patient. The system uses an existing convolutional neural network (CNN)-based body keypoint detector to locate relevant 2D keypoints of the manikin in the images of the stereo camera built into a head-mounted display. The manikin's position, orientation and joint angles are found by non-linear optimization. A preliminary analysis reports an error of 4.3 cm. The system is not yet capable of real-time processing. | en_US |
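The abstract describes a two-stage pipeline: a CNN detects 2D manikin keypoints in both images of the HMD's stereo camera, and the manikin pose is then recovered by non-linear optimization. The following is a minimal sketch of that second stage only, not the authors' implementation: it assumes a simplified rigid manikin model given as 3D keypoints, known pinhole intrinsics shared by both stereo cameras, an assumed stereo baseline, and already-detected 2D keypoints; joint angles are omitted and would extend the parameter vector. All names and values are illustrative.

```python
# Hedged sketch: recover manikin position and orientation from stereo 2D
# keypoints by minimizing reprojection error (non-linear least squares).
# This is an assumption-laden illustration, not the paper's code.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_3d, K):
    """Project 3D camera-space points with pinhole intrinsics K (3x3)."""
    p = points_3d @ K.T
    return p[:, :2] / p[:, 2:3]


def residuals(params, model_pts, kp_left, kp_right, K, baseline):
    """Stacked 2D reprojection errors in the left and right stereo images.

    params = [tx, ty, tz, rx, ry, rz] (translation + rotation vector).
    """
    t, rvec = params[:3], params[3:6]
    R = Rotation.from_rotvec(rvec).as_matrix()
    pts_left = model_pts @ R.T + t                          # left-camera coords
    pts_right = pts_left - np.array([baseline, 0.0, 0.0])   # shift by baseline
    err_l = project(pts_left, K) - kp_left
    err_r = project(pts_right, K) - kp_right
    return np.concatenate([err_l.ravel(), err_r.ravel()])


# --- hypothetical example data (not from the paper) ---
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
baseline = 0.064                                   # assumed 64 mm stereo baseline
model_pts = np.array([[0.0, 0.0, 0.0],             # simplified manikin keypoints
                      [0.2, 0.0, 0.0],
                      [0.0, 0.5, 0.0],
                      [0.2, 0.5, 0.0]])

# Synthesize detections from a known pose so the demo is self-contained.
true_params = np.array([0.1, -0.05, 1.5, 0.0, 0.3, 0.0])
R_true = Rotation.from_rotvec(true_params[3:]).as_matrix()
pts_true = model_pts @ R_true.T + true_params[:3]
kp_left = project(pts_true, K)
kp_right = project(pts_true - np.array([baseline, 0.0, 0.0]), K)

# Non-linear least squares over position and orientation.
x0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])      # start in front of the camera
result = least_squares(residuals, x0,
                       args=(model_pts, kp_left, kp_right, K, baseline))
print("estimated pose parameters:", result.x)
```

In the system the paper describes, the residual would additionally depend on per-joint angles of an articulated manikin model; in this sketch that simply means appending those angles to `params` and posing the model keypoints accordingly before projection.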
dc.publisher | The Eurographics Association | en_US |
dc.subject | Human centered computing | |
dc.subject | Mixed / augmented reality | |
dc.subject | Virtual reality | |
dc.title | Estimating the Pose of a Medical Manikin for Haptic Augmentation of a Virtual Patient in Mixed Reality Training | en_US |
dc.description.seriesinformation | ICAT-EGVE 2021 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments - Posters and Demos | |
dc.description.sectionheaders | Posters | |
dc.identifier.doi | 10.2312/egve.20211334 | |
dc.identifier.pages | 3-4 | |