dc.description.abstract | Today's technologies provide innovative tools that increase our knowledge of historic monuments in the field of preservation and promotion of cultural heritage. These tools aim to help experts create, enrich, and share information about historical buildings. Among the various documentary sources, photographs contain a high level of detail about shapes and colors. With the development of image analysis and image-based modeling techniques, large sets of images can be spatially oriented around a digital mock-up. For these reasons, digital photographs prove to be an easy-to-use, affordable, and flexible medium for heritage documentation. This article first presents an approach for 2D/3D semantic annotation of a set of spatially oriented photographs (whose positions and orientations in space are automatically estimated). It then focuses on a method for displaying those annotations on new images acquired in situ with mobile devices. First, an automated image-based reconstruction method produces 3D information (specifically, 3D coordinates) by processing a large set of images. The images are then semantically annotated, and a process uses the 3D information previously associated with each image to transfer the annotations between images. As a consequence, this protocol provides a simple way to finely annotate a large quantity of images at once instead of one by one. Because these image annotations are directly linked to 3D information, they can be stored as 3D files. To display the information related to a building, the user takes a picture in situ. An image-processing method estimates the orientation parameters of this new photograph within the already oriented image base. The annotations can then be precisely projected onto the oriented picture and sent back to the user. In this way, continuity of information can be established from the initial acquisition to the in situ visualization. | en_US |