Show simple item record

dc.contributor.author: Shtof, Alex
dc.contributor.author: Agathos, Alexander
dc.contributor.author: Gingold, Yotam
dc.contributor.author: Shamir, Ariel
dc.contributor.author: Cohen-Or, Daniel
dc.contributor.editor: I. Navazo, P. Poulin
dc.date.accessioned: 2015-02-28T15:22:38Z
dc.date.available: 2015-02-28T15:22:38Z
dc.date.issued: 2013
dc.identifier.issn: 1467-8659
dc.identifier.uri: http://dx.doi.org/10.1111/cgf.12044
dc.description.abstract: Modeling 3D objects from sketches is a process that requires solving several challenging problems, including segmentation, recognition, and reconstruction. Some of these tasks are harder for humans and some are harder for the machine. At the core of the problem lies the need for semantic understanding of the shape's geometry from the sketch. In this paper we propose a method to model 3D objects from sketches by assigning to the user the semantic tasks that are very simple for humans and extremely difficult for the machine, while assigning to the machine the tasks that are harder for humans. The user assists recognition and segmentation by choosing and placing specific geometric primitives on the relevant parts of the sketch. The machine first snaps the primitive to the sketch by fitting its projection to the sketch lines, and then improves the model globally by inferring geosemantic constraints that link the different parts. The fitting occurs in real time, so the user needs to be only as precise as required to provide a good starting configuration for this non-convex optimization problem. We evaluate the accessibility of our approach with a user study.
dc.publisher: The Eurographics Association and Blackwell Publishing Ltd.
dc.title: Geosemantic Snapping for Sketch-Based Modeling
dc.description.seriesinformation: Computer Graphics Forum
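
The snapping step described in the abstract amounts to a non-convex fit of a primitive's projected outline to the sketch strokes, initialized by the user's rough placement. The Python sketch below is only an illustration of that idea, not the authors' implementation: the circle stand-in for a projected primitive, the use of SciPy's Nelder-Mead optimizer, and the synthetic stroke data are all assumptions made for this example.

```python
# Illustrative sketch (assumed, not the paper's code): "snap" a primitive to
# sketch strokes by minimizing the squared distance between the strokes and the
# primitive's outline, starting from the user's rough placement.
import numpy as np
from scipy.optimize import minimize

def snap_circle_to_strokes(stroke_points, x0):
    """Fit circle parameters (cx, cy, r) so the circle passes near the stroke points."""
    def residual(params):
        cx, cy, r = params
        # Signed distance from each stroke point to the circle outline.
        d = np.hypot(stroke_points[:, 0] - cx, stroke_points[:, 1] - cy) - r
        return np.sum(d ** 2)

    # The user's rough placement x0 gives the starting configuration for this
    # non-convex problem, so a local optimizer suffices to snap the shape.
    result = minimize(residual, x0, method="Nelder-Mead")
    return result.x

if __name__ == "__main__":
    # Synthetic "sketch strokes": noisy samples of a circle of radius 2 at (1, 1).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 200)
    strokes = np.column_stack([1 + 2 * np.cos(t), 1 + 2 * np.sin(t)])
    strokes += rng.normal(scale=0.05, size=strokes.shape)

    # A roughly correct center and radius is enough to converge.
    snapped = snap_circle_to_strokes(strokes, x0=np.array([0.5, 1.5, 1.5]))
    print("snapped circle (cx, cy, r):", snapped)
```

In the paper's setting the fitted primitive is three-dimensional and its projection is compared against the strokes, and a second, global optimization enforces geosemantic constraints between parts; the sketch above only shows the local fitting idea.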

