Show simple item record

dc.contributor.authorLópez Ruiz, Alfonso
dc.date.accessioned2023-10-05T06:43:07Z
dc.date.available2023-10-05T06:43:07Z
dc.date.issued2023-06-30
dc.identifier.citationNew tools for modelling sensor data; Universidad de Jaén; López Ruiz, Alfonso; 2023en_US
dc.identifier.urihttps://diglib.eg.org:443/handle/10.2312/3543884
dc.description.abstractThe objective of this thesis is to develop a framework capable of handling multiple data sources, correcting and fusing them to monitor, predict, and optimize real-world processes. The scope is not limited to images; it also covers the reconstruction of 3D point clouds that integrate visible, multispectral, thermal, and hyperspectral data. However, working with real-world data is tedious, as it involves several steps that must be performed manually, such as collecting data, marking control points, and annotating points. An alternative is to generate synthetic data from realistic scenarios, avoiding the acquisition of prohibitively expensive technology and enabling the efficient construction of large datasets. In addition, models in virtual scenarios can carry semantic annotations and materials, among other properties. Unlike manual annotations, synthetic datasets do not introduce spurious information that could mislead the algorithms trained on them. Remotely sensed images, despite notable radiometric differences, can be fused by optimizing the correlation among them. This thesis exploits the Enhanced Correlation Coefficient image-matching algorithm to align visible, multispectral, and thermal data. Multispectral and thermal data are then projected onto a dense RGB point cloud reconstructed with photogrammetry. By projecting rather than reconstructing directly, the aim is to obtain geometrically accurate and dense point clouds from low-resolution imagery; this methodology is also notably more efficient than GPU-based photogrammetry in commercial software. The correctness of the radiometric data is ensured by detecting occluded points and by minimizing the dissimilarity between the aggregated data and the original samples. Hyperspectral data, on the other hand, is projected onto 2.5D point clouds with a pipeline adapted to push-broom scanning. The hyperspectral swaths are geometrically corrected and overlapped to compose an orthomosaic, which is then projected onto a voxelized point cloud. Owing to the large volume of the resulting hypercube, it is compressed with a stack-based representation along the radiometric dimension. Real-time rendering of the compressed hypercube is enabled by constructing the image iteratively over a few frames, thus spreading the cost that would otherwise fall on a single frame. The generation of synthetic data, in contrast, focuses on LiDAR technology. The baseline of this simulation is the indexing of highly detailed scenarios in state-of-the-art ray-tracing data structures that rapidly solve ray-triangle intersections. On top of this, random and systematic errors are introduced, such as outliers, ray jittering, and return losses. The construction of large LiDAR datasets is further supported by the procedural generation of scenes that can be enriched with semantic annotations and materials. Airborne and terrestrial scans are parameterized so that they can be fed with datasheets from commercial sensors. The airborne scans integrate several scan geometries, and the intensity of the returns is estimated with BRDF databases collected with a gonio-photometer. The simulated LiDAR can also operate at different wavelengths, including those used in bathymetry, and emulates multiple returns. The thesis concludes by showing the benefits of fused data and synthetic datasets with three case studies.
First, the LiDAR simulation is employed to optimize scanning plans in buildings: local searches determine optimal scan locations, while genetic algorithms minimize the number of required scans. These metaheuristics are guided by four objective functions that evaluate the accuracy, coverage, detail, and overlap of the LiDAR scans. Then, thermal infrared point clouds and orthorectified maps are used to locate buried remains and reconstruct the structure of a poorly conserved archaeological site, highlighting the potential of remotely sensed data to support the preservation of cultural heritage. Finally, hyperspectral data is corrected and transformed to train a convolutional neural network that classifies different grapevine varieties.en_US
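The Enhanced Correlation Coefficient (ECC) image matching named in the abstract is available in OpenCV as findTransformECC. Below is a minimal, illustrative sketch of aligning a thermal band to a visible band with it; the file names and the choice of an affine motion model are assumptions for the example, not details taken from the thesis.

import cv2
import numpy as np

# Illustrative sketch: register a thermal image to a visible-band image with the
# Enhanced Correlation Coefficient (ECC) algorithm as implemented in OpenCV.
# "visible.png" / "thermal.png" and the affine motion model are assumptions.
visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # reference band
thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)   # band to be registered

# ECC maximizes the correlation between the two images under a parametric warp.
warp = np.eye(2, 3, dtype=np.float32)                        # initial affine warp
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
_, warp = cv2.findTransformECC(visible, thermal, warp,
                               cv2.MOTION_AFFINE, criteria, None, 1)

# Resample the thermal band into the visible image's frame so both can be fused.
registered = cv2.warpAffine(thermal, warp,
                            (visible.shape[1], visible.shape[0]),
                            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)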
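A minimal sketch of the kind of projection step the abstract describes: sampling a registered 2D band onto a 3D point cloud while rejecting occluded points with a per-pixel depth buffer. The pinhole camera model (K, R, t), the depth tolerance, and the function name are assumptions made for illustration, not the thesis pipeline.

import numpy as np

def project_band(points, band, K, R, t, tol=0.05):
    """Assign one band value per 3D point; points hidden behind others get NaN.
    tol is a depth tolerance in scene units (an assumption for this sketch)."""
    cam = points @ R.T + t                      # world -> camera coordinates
    z = cam[:, 2]
    pix = cam @ K.T                             # pinhole projection
    u = np.round(pix[:, 0] / z).astype(int)
    v = np.round(pix[:, 1] / z).astype(int)

    h, w = band.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    depth = np.full((h, w), np.inf)             # per-pixel depth buffer
    values = np.full(len(points), np.nan)
    for i in np.argsort(z):                     # near-to-far, so the closest point wins
        if not valid[i]:
            continue
        if z[i] <= depth[v[i], u[i]] + tol:     # occluded points fail this test
            depth[v[i], u[i]] = min(depth[v[i], u[i]], z[i])
            values[i] = band[v[i], u[i]]
    return values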
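The LiDAR simulation rests on fast ray-triangle intersection inside ray-tracing acceleration structures. The sketch below shows only the standard per-triangle test (Möller-Trumbore) as a standalone illustration, assuming an acceleration structure supplies the candidate triangles; it is not the thesis implementation.

import numpy as np

# Illustrative sketch of the ray-triangle test at the core of a LiDAR simulator:
# the Moller-Trumbore intersection algorithm. Noise injection (jitter, outliers,
# return losses) would be applied to the returned distance afterwards.
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the ray parameter t of the hit point, or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None      # hit must lie in front of the sensor

# Example return: a ray fired from the scanner hits the triangle at distance t.
t = ray_triangle(np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, -1.0]),
                 np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]),
                 np.array([0.0, 1.0, 0.0]))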
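Scan planning in the first case study combines local search and genetic algorithms under four objective functions. The following sketch illustrates only the general genetic-algorithm idea: individuals encode subsets of candidate scan positions, and the fitness trades coverage against the number of scans. The coverage() stub and all weights are placeholders, not the objective functions of the thesis.

import random

CANDIDATES = 40          # candidate scanner positions inside the building (assumption)
POP, GENS = 30, 200

def coverage(selected):
    """Placeholder: fraction of the building surface covered by these scans."""
    return min(1.0, 0.08 * len(selected))              # hypothetical stand-in

def fitness(ind):
    selected = [i for i, bit in enumerate(ind) if bit]
    return coverage(selected) - 0.02 * len(selected)    # penalize extra scans

def crossover(a, b):
    cut = random.randrange(1, CANDIDATES)               # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in ind]

# Evolve a population of candidate scanning plans.
population = [[random.random() < 0.2 for _ in range(CANDIDATES)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("selected scan positions:", [i for i, bit in enumerate(best) if bit])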
dc.description.sponsorshipSpanish Ministry of Science and Innovation (FPU19/00100)en_US
dc.language.isoenen_US
dc.subjectPoint clouden_US
dc.subjectGPU-based computingen_US
dc.subjectHyperspectralen_US
dc.subjectThermographyen_US
dc.subjectLiDAR simulationen_US
dc.subjectClassificationen_US
dc.titleNew tools for modelling sensor dataen_US
dc.typeThesisen_US

