Show simple item record

dc.contributor.author | Li, Wenhui | en_US
dc.contributor.author | Liu, Anan | en_US
dc.contributor.author | Nie, Weizhi | en_US
dc.contributor.author | Song, Dan | en_US
dc.contributor.author | Li, Yuqian | en_US
dc.contributor.author | Wang, Weijie | en_US
dc.contributor.author | Xiang, Shu | en_US
dc.contributor.author | Zhou, Heyu | en_US
dc.contributor.author | Bui, Ngoc-Minh | en_US
dc.contributor.author | Cen, Yunchi | en_US
dc.contributor.author | Chen, Zenian | en_US
dc.contributor.author | Chung-Nguyen, Huy-Hoang | en_US
dc.contributor.author | Diep, Gia-Han | en_US
dc.contributor.author | Do, Trong-Le | en_US
dc.contributor.author | Doubrovski, Eugeni L. | en_US
dc.contributor.author | Duong, Anh-Duc | en_US
dc.contributor.author | Geraedts, Jo M. P. | en_US
dc.contributor.author | Guo, Haobin | en_US
dc.contributor.author | Hoang, Trung-Hieu | en_US
dc.contributor.author | Li, Yichen | en_US
dc.contributor.author | Liu, Xing | en_US
dc.contributor.author | Liu, Zishun | en_US
dc.contributor.author | Luu, Duc-Tuan | en_US
dc.contributor.author | Ma, Yunsheng | en_US
dc.contributor.author | Nguyen, Vinh-Tiep | en_US
dc.contributor.author | Nie, Jie | en_US
dc.contributor.author | Ren, Tongwei | en_US
dc.contributor.author | Tran, Mai-Khiem | en_US
dc.contributor.author | Tran-Nguyen, Son-Thanh | en_US
dc.contributor.author | Tran, Minh-Triet | en_US
dc.contributor.author | Vu-Le, The-Anh | en_US
dc.contributor.author | Wang, Charlie C. L. | en_US
dc.contributor.author | Wang, Shijie | en_US
dc.contributor.author | Wu, Gangshan | en_US
dc.contributor.author | Yang, Caifei | en_US
dc.contributor.author | Yuan, Meng | en_US
dc.contributor.author | Zhai, Hao | en_US
dc.contributor.author | Zhang, Ao | en_US
dc.contributor.author | Zhang, Fan | en_US
dc.contributor.author | Zhao, Sicheng | en_US
dc.contributor.editor | Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco | en_US
dc.date.accessioned | 2019-05-04T14:06:05Z
dc.date.available | 2019-05-04T14:06:05Z
dc.date.issued | 2019
dc.identifier.isbn | 978-3-03868-077-2
dc.identifier.issn | 1997-0471
dc.identifier.uri | https://doi.org/10.2312/3dor.20191068
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/3dor20191068
dc.description.abstract | Monocular image based 3D object retrieval is a novel and challenging research topic in the field of 3D object retrieval. Given an RGB image captured in the real world, it aims to search for relevant 3D objects in a dataset. To advance this promising research, we organized this SHREC track and built the first monocular image based 3D object retrieval benchmark by collecting 2D images from ImageNet and 3D objects from popular 3D datasets such as NTU, PSB, ModelNet40, and ShapeNet. The benchmark contains 21,000 classified 2D images and 7,690 3D objects across 21 categories. The track attracted 9 groups from 4 countries and the submission of 20 runs. For a comprehensive comparison, 7 commonly used retrieval performance metrics were used to evaluate retrieval performance. The evaluation results show that supervised cross-domain learning achieves superior retrieval performance (best NN is 97.4%) by bridging the domain gap with label information. However, unsupervised cross-domain learning, which is more practical for real applications, remains a major challenge (best NN is 61.2%). Although we provided both view images and an OBJ file for each 3D model, all participants used the view images to represent the 3D models. An interesting direction for future work is to directly use both the 3D information and the 2D RGB information to solve the task of monocular image based 3D model retrieval. | en_US
dc.publisher | The Eurographics Association | en_US
dc.subject | H.3.3 [Computer Graphics]
dc.subject | Information Systems
dc.subject | Information Search and Retrieval
dc.title | Monocular Image Based 3D Model Retrieval | en_US
dc.description.seriesinformation | Eurographics Workshop on 3D Object Retrieval
dc.description.sectionheaders | SHREC Session 2
dc.identifier.doi | 10.2312/3dor.20191068
dc.identifier.pages | 103-110

