
dc.contributor.author: Dias, José
dc.contributor.author: Nande, Pedro
dc.contributor.author: Santos, Pedro
dc.contributor.author: Barata, Nuno
dc.contributor.author: Correia, André
dc.contributor.editor: Marcos, Adérito and Mendonça, Ana and Leitão, Miguel and Costa, António and Jorge, Joaquim
dc.date.accessioned: 2021-10-14T11:18:31Z
dc.date.available: 2021-10-14T11:18:31Z
dc.date.issued: 2021
dc.identifier.isbn: 978-3-03868-163-2
dc.identifier.uri: https://doi.org/10.2312/pt.20031431
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/pt20031431
dc.description.abstract: In this work we present a novel free-hand gesture user interface, based on detecting the trajectory of fiducial markers attached to the user's fingers and wrists, for interacting with the sequence of images of a digital video piece. The model adopted for the video representation is based on its decomposition into a sequence of frames, or filmstrip. Totally sensor-less and cable-less interfaces provide the means for a user to interact intuitively through gestures with the filmstrip, within the framework of an Augmented Virtuality usage scenario. By simply gesturing, users are able to select at random, drag, release, delete or zoom image frames, browse the filmstrip at a controlled, user-defined rate, and issue start, end, stop and play commands to better control the digital video sequence. A fixed video camera monitors the user's gesturing of the mentioned fiducial markers. This scheme lets the system sidestep the more complex problem of marker-less free-hand gesture tracking. Once the markers are detected and recognised in real-time by the computer vision layer, the system obtains the 3D pose (position and orientation) of the marker centres relative to a virtual camera reference frame whose mathematical model matches the real video camera. We are specifically interested in obtaining the poses of the left and right wrists, thumbs and index fingers. By projecting the positions of these poses onto the 2D visualization window, a simple topological analysis, based on the study of the kinematic evolution of distances and angles, can be implemented, enabling gesture recognition, the activation of system functions and, subsequently, specific gesture-based user interaction for a given active functionality. This interaction affects the shape, scale factor, position and visualisation of scene objects, that is, of filmstrip frames.
For the computer vision layer, our system adopts ARToolKit, a C/OpenGL-based open-source library that uses accurate vision-based tracking methods to determine the virtual camera pose through the real-time detection of fiducial markers. The graphical output is implemented with C++/OpenGL. Our proposed system is general, in the sense that it can interact with any filmstrip obtained ''a priori'' from a digital video source.
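The topological analysis the abstract describes, recognising gestures from the evolution of distances and angles between projected marker positions, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the threshold value and the pinch-based "select" gesture are assumptions for the example.

```python
import math

# Illustrative threshold (in pixels) for deciding that two projected
# marker centres are "close"; the actual value is an assumption.
PINCH_THRESHOLD = 30.0

def distance(p, q):
    """Euclidean distance between two projected 2D marker positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_deg(p, q):
    """Orientation, in degrees, of the segment from marker p to marker q."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def is_pinch(thumb, index):
    """Hypothetical 'select/drag' gesture: thumb and index markers of one
    hand projected close together in the 2D visualization window."""
    return distance(thumb, index) < PINCH_THRESHOLD

# Example: projected positions of the right-hand thumb and index markers.
thumb, index = (120.0, 200.0), (135.0, 210.0)
print(is_pinch(thumb, index))  # markers are ~18 px apart -> True
```

Tracking how these distances and angles change from frame to frame would then drive the activation of the system functions (drag, release, zoom, browse rate) mentioned in the abstract.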
dc.publisher: The Eurographics Association
dc.subject: Augmented Virtuality
dc.subject: Tangible Interfaces
dc.subject: AR Toolkit
dc.subject: Hand and Finger Gesture
dc.subject: Digital Video Editing
dc.subject: Image Browsing
dc.subject: Filmstrip
dc.title: Image Manipulation through Gestures
dc.description.seriesinformation: 12º Encontro Português de Computação Gráfica
dc.description.sectionheaders: Realidade Aumentada (Augmented Reality)
dc.identifier.doi: 10.2312/pt.20031431
dc.identifier.pages: 111-118

