Show simple item record

dc.contributor.author: Jagadeesan, A. P.
dc.contributor.author: Wenzel, J.
dc.contributor.author: Corney, Jonathan R.
dc.contributor.author: Yan, X.
dc.contributor.author: Sherlock, A.
dc.contributor.author: Torres-Sanchez, C.
dc.contributor.author: Regli, William
dc.contributor.editor: Mohamed Daoudi and Tobias Schreck
dc.date.accessioned: 2013-10-21T16:10:02Z
dc.date.available: 2013-10-21T16:10:02Z
dc.date.issued: 2010
dc.identifier.isbn: 978-3-905674-22-4
dc.identifier.issn: 1997-0471
dc.identifier.uri: http://dx.doi.org/10.2312/3DOR/3DOR10/055-062
dc.description.abstract: Although a significant number of benchmark data sets for 3D object-based retrieval systems have been proposed over the last decade, their value depends on a robust classification of their content being available. Ideally, researchers would want hundreds of people to have classified thousands of parts, with the results recorded in a manner that explicitly shows how the similarity assessments vary with the precision used to make the judgement. This paper reports a study which investigated the proposition that Internet Crowdsourcing could be used to quickly and cheaply provide benchmark classifications of 3D shapes. The collective judgements of the anonymous workers produce a classification that has surprisingly fine granularity and precision. The paper reports the results of validating Crowdsourced judgements of 3D similarity against Purdue's ESB and concludes with an estimate of the overall costs associated with large-scale classification tasks involving many tens of thousands of models.
dc.publisher: The Eurographics Association
dc.title: Fast Human Classification of 3D Object Benchmarks
dc.description.seriesinformation: Eurographics Workshop on 3D Object Retrieval


