Optimal Spatial Registration of SLAM for Augmented Reality
Date: 2019-03-15
Author: Wientapper, Folker
Abstract
Augmented reality (AR) is a paradigm that aims at fusing the perceived real environment of a human with digital information located in 3D space. Typically, virtual 3D graphics are overlaid onto the captured images of a moving camera or directly into the user's field of view by means of optical see-through (OST) displays. To align the visualization correctly with respect to perspective and viewpoint, various static and dynamic geometric registration problems must be solved so that the virtual and the real world appear seamlessly interconnected.
The advances of the last decade in the field of simultaneous localization and mapping (SLAM) represent an important contribution to this general problem. It is now possible to reconstruct the real environment and to simultaneously capture the dynamic movements of a camera from its images without having to instrument the environment in advance. However, SLAM alone can only partly solve the registration problem, because the retrieved 3D scene geometry and the calculated motion path are spatially related only to an arbitrarily selected coordinate system. Without a proper reconciliation of coordinate systems (spatial registration), the real world of the human observer remains decoupled from the virtual world. Existing approaches to this problem either require a virtual 3D model that represents a real object with sufficient accuracy (model-based tracking), or they rely on use-case-specific assumptions and additional sensor data (such as GPS signals or the Manhattan-world assumption). These approaches are therefore bound to additional prerequisites, which limits their general applicability. The circumstance that automated registration is desirable but not always possible creates the need for techniques that allow a user to specify connections between the real and the virtual world when setting up AR applications, so that the registration process can be supported and controlled. These techniques must be complemented by numerical algorithms that optimally exploit the provided information to obtain precise registration results.
Within this context, the present thesis provides the following contributions.
* We propose a novel, closed-form (non-iterative) algorithm for calculating a Euclidean or a similarity transformation. The presented algorithm is a generalization of recent state-of-the-art solvers for computing the camera pose from 2D measurement points in the image (the perspective-n-point problem), a fundamental problem in computer vision that has attracted research for many decades. The generalization consists of extending and unifying these algorithms so that they can handle types of input correspondences other than those they were originally designed for. With this algorithm, a SLAM system can be rigidly registered to a target coordinate system based on heterogeneous and partially indeterminate input data.
* We address the global refinement of structure and motion parameters by means of iterative sparse minimization (bundle adjustment, BA), which has become a standard technique inside SLAM systems. We propose a variant of BA in which information about the virtual domain is integrated as constraints by means of an optimization-on-manifold approach. This compensates low-frequency deformations (non-rigid registration) of the estimated camera path and the reconstructed scene geometry caused by accumulated measurement error and the ill-conditioning of the BA problem.
* We present two approaches in which users can contribute their knowledge to register a SLAM system. In the first variant, the user places markers in the real environment with predefined connections to the virtual coordinate system. Precise positioning of the markers is not required; they can be placed arbitrarily on surfaces or along edges, which notably reduces the preparation effort. During run-time, the dispersed information is collected and registration is accomplished automatically. In the second variant, the user marks salient points in an image sequence during a preparative preprocessing step and assigns corresponding points in the virtual 3D space via a simple point-and-click metaphor. The result of this preparative phase is a precisely registered, ready-to-use reference model for camera tracking at run-time.
* Finally, we propose an approach for the geometric calibration of optical see-through displays. We present a parametric model that allows the rendering of virtual 3D content to be dynamically adapted to the current viewpoint of the human observer, including a pre-correction of image aberrations caused by the optics or by irregularly curved combiners. To retrieve its parameters, we propose a camera-based approach in which elements of the real and the virtual domain are observed simultaneously. The calibration procedure was developed for a head-up display in a vehicle; a prototypical extension to head-mounted displays is also presented.
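The generalized solver of the first contribution is not reproduced here. As a minimal sketch of the underlying task, estimating a similarity transformation from 3D point correspondences in closed form, the classic SVD-based (Umeyama-style) alignment can be written as follows; all names are illustrative and this is not the thesis's algorithm:

```python
import numpy as np

def similarity_from_points(src, dst, with_scale=True):
    """Closed-form (SVD-based) estimate of s, R, t minimizing
    sum_i || dst_i - (s * R @ src_i + t) ||^2 (Umeyama-style alignment).
    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d
    # Cross-covariance of the centered point sets; its SVD yields the
    # optimal rotation.
    H = X.T @ Y / len(src)
    U, S, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    # Optimal scale relates the singular values to the source variance.
    s = (np.trace(np.diag(S) @ D) / X.var(axis=0).sum()) if with_scale else 1.0
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Unlike this point-to-point special case, the thesis's solver additionally handles heterogeneous and partially indeterminate correspondences.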
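The constrained bundle adjustment of the second contribution involves a full optimization-on-manifold formulation that is beyond the scope of an abstract. A deliberately simplified 1-D analogue can still illustrate the principle of appending virtual-domain information as weighted constraint residuals to compensate accumulated drift; the toy setup and all names are assumptions, not the thesis's formulation:

```python
import numpy as np

def register_path(rel_steps, anchors, weight=100.0):
    """Toy 1-D analogue of constrained adjustment: recover absolute
    positions x_0..x_n from noisy relative steps x_{i+1} - x_i ~ rel_steps[i],
    with extra residuals pinning selected indices to known
    ("virtual-world") coordinates.
    anchors: dict {index: known_position}. Returns the least-squares x."""
    n = len(rel_steps) + 1
    rows, rhs = [], []
    # Odometry-like residuals: x_{i+1} - x_i - d_i (unit weight).
    for i, d in enumerate(rel_steps):
        r = np.zeros(n)
        r[i + 1], r[i] = 1.0, -1.0
        rows.append(r)
        rhs.append(d)
    # Soft constraints: heavily weighted residuals pulling x_idx to pos.
    for idx, pos in anchors.items():
        r = np.zeros(n)
        r[idx] = weight
        rows.append(r)
        rhs.append(weight * pos)
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With biased step measurements (systematic drift), the anchored solution distributes the accumulated error smoothly along the path instead of letting it pile up at the end, which is the low-frequency deformation compensation the bullet describes, in miniature.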
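For the marker-based variant of the third contribution, markers placed arbitrarily on surfaces or along edges contribute partial constraints rather than full 3D positions. A hedged sketch of how such residuals might enter a least-squares registration (illustrative only; not taken from the thesis):

```python
import numpy as np

def point_to_plane_residual(p, plane_point, plane_normal):
    """Signed distance of p to a plane. A marker known only to lie
    'somewhere on a surface' contributes this single 1-D residual,
    constraining 1 of 3 positional degrees of freedom."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    return float(n @ (np.asarray(p, float) - np.asarray(plane_point, float)))

def point_to_line_residual(p, line_point, line_dir):
    """Distance of p to a line. A marker known only to lie
    'somewhere along an edge' constrains 2 of 3 degrees of freedom."""
    d = np.asarray(line_dir, float)
    d = d / np.linalg.norm(d)
    v = np.asarray(p, float) - np.asarray(line_point, float)
    # Remove the component along the line; the remainder is the offset.
    return float(np.linalg.norm(v - (v @ d) * d))
```

Collecting many such partial residuals across dispersed markers can jointly determine the full registration, even though no single marker is precisely positioned.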
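The parametric OST display model of the fourth contribution is not spelled out in the abstract. As a loose illustration of the pre-correction idea only, a polynomial pre-distortion map can be fitted by least squares from camera-observed correspondences; the function names and the monomial basis are assumptions:

```python
import numpy as np

def _poly_basis(xy, degree):
    """Monomial basis x^i * y^j with i + j <= degree for (N, 2) points."""
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([x**i * y**j
                     for i in range(degree + 1)
                     for j in range(degree + 1 - i)], axis=1)

def fit_predistortion(rendered_xy, observed_xy, degree=2):
    """Least-squares fit of a 2-D polynomial map P with P(observed) ~ rendered.
    Calibration data: points rendered at rendered_xy were seen by the
    camera at observed_xy after passing through the display optics.
    Rendering P(target) then makes a point appear at 'target'."""
    A = _poly_basis(np.asarray(observed_xy, float), degree)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(rendered_xy, float), rcond=None)
    return coeffs

def apply_predistortion(coeffs, xy, degree=2):
    """Evaluate the fitted pre-distortion map at points xy."""
    return _poly_basis(np.asarray(xy, float), degree) @ coeffs
```

A viewpoint-dependent model as in the thesis would additionally condition such a correction on the observer's eye position, rather than fitting one static map.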