dc.contributor.author | Pham, Quang-Hieu | en_US |
dc.contributor.author | Tran, Minh-Khoi | en_US |
dc.contributor.author | Li, Wenhui | en_US |
dc.contributor.author | Xiang, Shu | en_US |
dc.contributor.author | Zhou, Heyu | en_US |
dc.contributor.author | Nie, Weizhi | en_US |
dc.contributor.author | Liu, Anan | en_US |
dc.contributor.author | Su, Yuting | en_US |
dc.contributor.author | Tran, Minh-Triet | en_US |
dc.contributor.author | Bui, Ngoc-Minh | en_US |
dc.contributor.author | Do, Trong-Le | en_US |
dc.contributor.author | Ninh, Tu V. | en_US |
dc.contributor.author | Le, Tu-Khiem | en_US |
dc.contributor.author | Dao, Anh-Vu | en_US |
dc.contributor.author | Nguyen, Vinh-Tiep | en_US |
dc.contributor.author | Do, Minh N. | en_US |
dc.contributor.author | Duong, Anh-Duc | en_US |
dc.contributor.author | Hua, Binh-Son | en_US |
dc.contributor.author | Yu, Lap-Fai | en_US |
dc.contributor.author | Nguyen, Duc Thanh | en_US |
dc.contributor.author | Yeung, Sai-Kit | en_US |
dc.contributor.editor | Telea, Alex and Theoharis, Theoharis and Veltkamp, Remco | en_US |
dc.date.accessioned | 2018-04-14T18:28:40Z | |
dc.date.available | 2018-04-14T18:28:40Z | |
dc.date.issued | 2018 | |
dc.identifier.isbn | 978-3-03868-053-6 | |
dc.identifier.issn | 1997-0471 | |
dc.identifier.uri | http://dx.doi.org/10.2312/3dor.20181052 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/3dor20181052 | |
dc.description.abstract | Recent advances in consumer-grade depth sensors have enabled the collection of massive real-world 3D objects. Together with the rise of deep learning, this brings great potential for large-scale 3D object retrieval. In this challenge, we aim to study and evaluate the performance of 3D object retrieval algorithms with RGB-D data. To support the study, we expanded the previous ObjectNN dataset [HTT 17] to include RGB-D objects from both SceneNN [HPN 16] and ScanNet [DCS 17], with the CAD models from ShapeNetSem [CFG 15]. Evaluation results show that while the RGB-D to CAD retrieval problem is indeed challenging due to incomplete RGB-D reconstructions, it can be addressed to a certain extent using deep learning techniques trained on multi-view 2D images or 3D point clouds. The best method in this track achieves an 82% retrieval accuracy. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | RGB-D Object-to-CAD Retrieval | en_US |
dc.description.seriesinformation | Eurographics Workshop on 3D Object Retrieval | |
dc.description.sectionheaders | SHREC Tracks | |
dc.identifier.doi | 10.2312/3dor.20181052 | |
dc.identifier.pages | 45-52 | |