dc.contributor.author | Jagadeesan, A. P. | en_US |
dc.contributor.author | Wenzel, J. | en_US |
dc.contributor.author | Corney, Jonathan R. | en_US |
dc.contributor.author | Yan, X. | en_US |
dc.contributor.author | Sherlock, A. | en_US |
dc.contributor.author | Torres-Sanchez, C. | en_US |
dc.contributor.author | Regli, William | en_US |
dc.contributor.editor | Mohamed Daoudi and Tobias Schreck | en_US |
dc.date.accessioned | 2013-10-21T16:10:02Z | |
dc.date.available | 2013-10-21T16:10:02Z | |
dc.date.issued | 2010 | en_US |
dc.identifier.isbn | 978-3-905674-22-4 | en_US |
dc.identifier.issn | 1997-0471 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/3DOR/3DOR10/055-062 | en_US |
dc.description.abstract | Although a significant number of benchmark data sets for 3D object-based retrieval systems have been proposed over the last decade, their value depends on the availability of a robust classification of their content. Ideally, researchers would want hundreds of people to have classified thousands of parts, with the results recorded in a manner that explicitly shows how the similarity assessments vary with the precision used to make the judgement. This paper reports a study which investigated the proposition that Internet Crowdsourcing could be used to quickly and cheaply provide benchmark classifications of 3D shapes. The collective judgements of the anonymous workers produce a classification that has surprisingly fine granularity and precision. The paper reports the results of validating Crowdsourced judgements of 3D similarity against Purdue's ESB and concludes with an estimate of the overall costs associated with large-scale classification tasks involving many tens of thousands of models. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Fast Human Classification of 3D Object Benchmarks | en_US |
dc.description.seriesinformation | Eurographics Workshop on 3D Object Retrieval | en_US |