George Vosselman
Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Enschede, The Netherlands


Feed

Journal article
Published: 28 June 2021 in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences

Semantic segmentation models are often affected by illumination changes and fail to predict correct labels. Although there has been a lot of research on indoor semantic segmentation, low-light environments have not been studied. In this paper, we propose a new framework, LISU, for Low-light Indoor Scene Understanding. We first decompose the low-light images into reflectance and illumination components, and then jointly learn reflectance restoration and semantic segmentation. To train and evaluate the proposed framework, we introduce a new data set, LLRGBD, which consists of a large synthetic low-light indoor data set (LLRGBD-synthetic) and a small real data set (LLRGBD-real). The experimental results show that the illumination-invariant features effectively improve the performance of semantic segmentation. Compared with the baseline model, the mIoU of the proposed LISU framework increases by 11.5%. In addition, pre-training on our synthetic data set increases the mIoU by 7.2%. Our data sets and models are available on our project website.
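The mIoU figures reported above can be illustrated with the standard per-class intersection-over-union computation; a minimal NumPy sketch on toy labels, not the authors' evaluation code:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    # Per-class intersection-over-union, averaged over classes that occur.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1, 2, 2])   # toy predicted labels
gt   = np.array([0, 0, 1, 2, 2, 2])   # toy ground truth
print(round(mean_iou(pred, gt, 3), 3))  # → 0.722
```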

ACS Style

N. Zhang; F. Nex; N. Kerle; G. Vosselman. TOWARDS LEARNING LOW-LIGHT INDOOR SEMANTIC SEGMENTATION WITH ILLUMINATION-INVARIANT FEATURES. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2021, XLIII-B2-2, 427-432.

AMA Style

N. Zhang, F. Nex, N. Kerle, G. Vosselman. TOWARDS LEARNING LOW-LIGHT INDOOR SEMANTIC SEGMENTATION WITH ILLUMINATION-INVARIANT FEATURES. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2021; XLIII-B2-2: 427-432.

Chicago/Turabian Style

N. Zhang; F. Nex; N. Kerle; G. Vosselman. 2021. "TOWARDS LEARNING LOW-LIGHT INDOOR SEMANTIC SEGMENTATION WITH ILLUMINATION-INVARIANT FEATURES." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2: 427-432.

Journal article
Published: 17 June 2021 in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences

Semantic segmentation for aerial platforms is one of the fundamental scene understanding tasks for earth observation. Most semantic segmentation research has focused on scenes captured in nadir view, in which objects show relatively small scale variation compared with scenes captured in oblique view. The huge scale variation of objects in oblique images limits the performance of deep neural networks (DNNs) that process images in a single-scale fashion. To tackle the scale variation issue, in this paper we propose novel bidirectional multi-scale attention networks, which fuse features from multiple scales bidirectionally for more adaptive and effective feature extraction. Experiments conducted on the UAVid2020 dataset show the effectiveness of our method: our model achieves the state-of-the-art (SOTA) result with a mean intersection over union (mIoU) score of 70.80%.
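The bidirectional fusion itself is not specified in the abstract; purely as an illustration, per-pixel attention over scales can be sketched as a softmax-weighted sum of feature maps resampled to a common size (a NumPy stand-in, not the proposed network):

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_multiscale(features, attention_logits):
    # features: list of S feature maps (H, W, C), already resampled to one size.
    # attention_logits: (S, H, W) per-scale relevance scores.
    w = softmax(attention_logits, axis=0)        # per-pixel weights over scales
    stacked = np.stack(features, axis=0)         # (S, H, W, C)
    return (w[..., None] * stacked).sum(axis=0)  # convex combination per pixel

rng = np.random.default_rng(0)
feats = [rng.normal(size=(4, 4, 8)) for _ in range(3)]
logits = rng.normal(size=(3, 4, 4))
fused = fuse_multiscale(feats, logits)
print(fused.shape)  # (4, 4, 8)
```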

ACS Style

Y. Lyu; G. Vosselman; G.-S. Xia; M. Y. Yang. BIDIRECTIONAL MULTI-SCALE ATTENTION NETWORKS FOR SEMANTIC SEGMENTATION OF OBLIQUE UAV IMAGERY. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2021, V-2-2021, 75-82.

AMA Style

Y. Lyu, G. Vosselman, G.-S. Xia, M. Y. Yang. BIDIRECTIONAL MULTI-SCALE ATTENTION NETWORKS FOR SEMANTIC SEGMENTATION OF OBLIQUE UAV IMAGERY. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2021; V-2-2021: 75-82.

Chicago/Turabian Style

Y. Lyu; G. Vosselman; G.-S. Xia; M. Y. Yang. 2021. "BIDIRECTIONAL MULTI-SCALE ATTENTION NETWORKS FOR SEMANTIC SEGMENTATION OF OBLIQUE UAV IMAGERY." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2021: 75-82.

Journal article
Published: 30 April 2021 in ISPRS Journal of Photogrammetry and Remote Sensing

Interpretation of Airborne Laser Scanning (ALS) point clouds is a critical procedure for producing various geo-information products like 3D city models, digital terrain models and land use maps. In this paper, we present a local and global encoder network (LGENet) for semantic segmentation of ALS point clouds. Adapting the KPConv network, we first extract features by both 2D and 3D point convolutions to allow the network to learn more representative local geometry. Then global encoders are used in the network to exploit contextual information at the object and point level. We design a segment-based Edge Conditioned Convolution to encode the global context between segments. We apply a spatial-channel attention module at the end of the network, which not only captures the global interdependencies between points but also models interactions between channels. We evaluate our method on two ALS datasets, namely the ISPRS benchmark dataset and the DFC2019 dataset. For the ISPRS benchmark dataset, our model achieves state-of-the-art results with an overall accuracy of 0.845 and an average F1 score of 0.737. On the DFC2019 dataset, our proposed network achieves an overall accuracy of 0.984 and an average F1 score of 0.834.
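The spatial-channel attention module is described only at a high level; a squeeze-and-excite-style channel gate is one common form such a module takes. A hypothetical NumPy sketch (w1 and w2 are illustrative bottleneck weights, not the LGENet parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(point_feat, w1, w2):
    # point_feat: (N, C) per-point features. Squeeze over points to one
    # channel descriptor, then excite each channel with a gate in (0, 1).
    squeezed = point_feat.mean(axis=0)                   # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))  # (C,) channel weights
    return point_feat * gate

rng = np.random.default_rng(1)
feat = rng.normal(size=(100, 16))
w1 = 0.1 * rng.normal(size=(4, 16))   # illustrative bottleneck down-projection
w2 = 0.1 * rng.normal(size=(16, 4))   # illustrative up-projection
out = channel_attention(feat, w1, w2)
```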

ACS Style

Yaping Lin; George Vosselman; Yanpeng Cao; Michael Ying Yang. Local and global encoder network for semantic segmentation of Airborne laser scanning point clouds. ISPRS Journal of Photogrammetry and Remote Sensing 2021, 176, 151-168.

AMA Style

Yaping Lin, George Vosselman, Yanpeng Cao, Michael Ying Yang. Local and global encoder network for semantic segmentation of Airborne laser scanning point clouds. ISPRS Journal of Photogrammetry and Remote Sensing. 2021; 176: 151-168.

Chicago/Turabian Style

Yaping Lin; George Vosselman; Yanpeng Cao; Michael Ying Yang. 2021. "Local and global encoder network for semantic segmentation of Airborne laser scanning point clouds." ISPRS Journal of Photogrammetry and Remote Sensing 176: 151-168.

Journal article
Published: 09 February 2021 in International Journal of Applied Earth Observation and Geoinformation

Forest managers and nature conservationists rely on precise mapping of single trees from remote sensing data for efficient estimation of forest attributes. In recent years, additional quantification of dead wood in particular has garnered interest. However, tree-level approaches utilizing segmented single trees are still limited in accuracy and their application is therefore mostly restricted to research studies. Furthermore, the combined classification of presegmented single trees with respect to tree species and health status is important for practical use but has been insufficiently investigated so far. Therefore, we introduce Silvi-Net, an approach based on convolutional neural networks (CNNs) fusing airborne lidar data and multispectral (MS) images for 3D object classification. First, we segment single 3D trees from the lidar point cloud, render multiple silhouette-like side-view images, and enrich them with calibrated laser echo characteristics. Second, projected outlines of the segmented trees are used to crop and mask the MS orthomosaic and to generate MS image patches for each tree. Third, we independently train two ResNet-18 networks to learn meaningful features from both datasets. This optimization process is based on pretrained CNN weights and recursive retraining of model parameters. Finally, the extracted features are fused for a final classification step based on a standard multi-layer perceptron and majority voting. We analyzed the network’s performance on data captured in two study areas, the Chernobyl Exclusion Zone (ChEZ) and the Bavarian Forest National Park (BFNP). For both study areas, the lidar point density was approximately 55 points/m2 and the ground sampling distance values of the true orthophotos were 10 cm (ChEZ) and 20 cm (BFNP). 
In general, the trained models showed high generalization capacity on independent test data, achieving an overall accuracy (OA) of 96.1% for the classification of pines, birches, alders, and dead trees (ChEZ) - and 91.5% for coniferous, deciduous, snags, and dead trees (BFNP). Interestingly, lidar-based imagery increased the OA by 2.5% (ChEZ) and 5.9% (BFNP) compared to experiments only utilizing MS imagery. Moreover, Silvi-Net also demonstrated superior OA compared to the baseline method PointNet++ by 11.3% (ChEZ) and 2.2% (BFNP). Overall, the effectiveness of our approach was proven using 2D and 3D datasets from two natural forest areas (400–530 trees/ha), acquired with different sensor models, and varying geometric and spectral resolution. Using the technique of transfer learning, Silvi-Net facilitates fast model convergence, even for datasets with a reduced number of samples. Consequently, operators can generate reliable maps that are of major importance in applications such as automated inventory and monitoring projects.
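The final fusion step combines per-view results by majority voting, which admits a very small sketch; the class names below are illustrative:

```python
from collections import Counter

def majority_vote(view_predictions):
    # Final label for one tree from the labels predicted on its rendered views.
    return Counter(view_predictions).most_common(1)[0][0]

print(majority_vote(["pine", "pine", "birch", "pine"]))  # → pine
```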

ACS Style

S. Briechle; P. Krzystek; G. Vosselman. Silvi-Net – A dual-CNN approach for combined classification of tree species and standing dead trees from remote sensing data. International Journal of Applied Earth Observation and Geoinformation 2021, 98, 102292.

AMA Style

S. Briechle, P. Krzystek, G. Vosselman. Silvi-Net – A dual-CNN approach for combined classification of tree species and standing dead trees from remote sensing data. International Journal of Applied Earth Observation and Geoinformation. 2021; 98: 102292.

Chicago/Turabian Style

S. Briechle; P. Krzystek; G. Vosselman. 2021. "Silvi-Net – A dual-CNN approach for combined classification of tree species and standing dead trees from remote sensing data." International Journal of Applied Earth Observation and Geoinformation 98: 102292.

Journal article
Published: 17 January 2021 in ISPRS Journal of Photogrammetry and Remote Sensing

Multipath effects and signal obstruction by buildings in urban canyons can lead to inaccurate GNSS measurements and therefore errors in the estimated trajectory of Mobile Laser Scanning (MLS) systems; consequently, derived point clouds are distorted and lose spatial consistency. We obtain decimetre-level trajectory accuracy making use of corresponding points between the MLS data and aerial images with accurate exterior orientations instead of using ground control points. The MLS trajectory is estimated based on observation equations resulting from these corresponding points, the original IMU observations, and soft constraints on the pitch and yaw rotations of the vehicle. We analyse the quality of the trajectory enhancement under several conditions where the experiments were designed to test the influence of the number and quality of corresponding points and to test different settings for a B-spline representation of the vehicle trajectory. The method was tested on two independently acquired MLS datasets in Rotterdam by enhancing the trajectories and evaluating them using checkpoints. The RMSE values of the original GNSS/IMU based Kalman filter results at the checkpoints were 0.26 m, 0.30 m, and 0.47 m for the X-, Y- and Z-coordinates in the first dataset and 1.10 m, 1.51 m, and 1.81 m in the second dataset. The latter dataset was recorded with a lower quality IMU in an area with taller buildings. After trajectory adjustment these RMSE values were reduced to 0.09 m, 0.11 m, and 0.16 m for the first dataset and 0.12 m, 0.14 m, and 0.18 m for the second dataset. The results confirmed that, if sufficient tie points between the point cloud and aerial imagery are available, the method supports geo-referencing of MLS point clouds in urban canyons with a near-decimetre accuracy.
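The B-spline trajectory representation can be illustrated with a uniform cubic B-spline evaluated at a parameter t; this is the generic textbook form, not the paper's exact parameterization:

```python
import numpy as np

def cubic_bspline_point(ctrl, t):
    # Uniform cubic B-spline: position at parameter t in [0, len(ctrl) - 3].
    i = min(int(t), len(ctrl) - 4)
    u = t - i
    basis = np.array([(1 - u) ** 3,
                      3 * u ** 3 - 6 * u ** 2 + 4,
                      -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                      u ** 3]) / 6.0
    return basis @ ctrl[i:i + 4]   # basis sums to 1: a convex combination

# Illustrative 2D control points of a vehicle trajectory.
ctrl = np.array([[0., 0.], [1., 0.], [2., 1.], [3., 1.], [4., 0.]])
p = cubic_bspline_point(ctrl, 0.5)
print(p)  # [1.5 0.5]
```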

ACS Style

Zille Hussnain; Sander Oude Elberink; George Vosselman. Enhanced trajectory estimation of mobile laser scanners using aerial images. ISPRS Journal of Photogrammetry and Remote Sensing 2021, 173, 66-78.

AMA Style

Zille Hussnain, Sander Oude Elberink, George Vosselman. Enhanced trajectory estimation of mobile laser scanners using aerial images. ISPRS Journal of Photogrammetry and Remote Sensing. 2021; 173: 66-78.

Chicago/Turabian Style

Zille Hussnain; Sander Oude Elberink; George Vosselman. 2021. "Enhanced trajectory estimation of mobile laser scanners using aerial images." ISPRS Journal of Photogrammetry and Remote Sensing 173: 66-78.

Journal article
Published: 21 December 2020 in Remote Sensing

We present an unsupervised deep learning approach for post-disaster building damage detection that can transfer to different typologies of damage or geographical locations. Previous advances in this direction were limited by insufficient qualitative training data. We propose to use a state-of-the-art Anomaly Detecting Generative Adversarial Network (ADGAN) because it only requires pre-event imagery of buildings in their undamaged state. This approach aids the post-disaster response phase because the model can be developed in the pre-event phase and rapidly deployed in the post-event phase. We used the xBD dataset, containing pre- and post-event satellite imagery of several disaster types, and a custom-made Unmanned Aerial Vehicle (UAV) dataset, containing post-earthquake imagery. Results showed that models trained on UAV imagery were capable of detecting earthquake-induced damage. The best performing model for European locations obtained a recall, precision and F1-score of 0.59, 0.97 and 0.74, respectively. Models trained on satellite imagery were capable of detecting damage on the condition that the training dataset was void of vegetation and shadows. In this manner, the best performing model for (wild)fire events yielded a recall, precision and F1-score of 0.78, 0.99 and 0.87, respectively. Compared to other supervised and/or multi-epoch approaches, our results are encouraging. Moreover, in addition to image classifications, we show how contextual information can be used to create detailed damage maps without the need for a dedicated multi-task deep learning framework. Finally, we formulate practical guidelines to apply this single-epoch and unsupervised method to real-world applications.
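An anomaly-detecting model of this kind typically scores an image by its reconstruction residual: a model trained only on undamaged buildings reconstructs them well and fails on damaged ones. A schematic sketch with synthetic arrays (the GAN itself and the threshold choice are out of scope):

```python
import numpy as np

def anomaly_score(image, reconstruction):
    # Mean absolute reconstruction residual; large values flag anomalies.
    return float(np.abs(image - reconstruction).mean())

rng = np.random.default_rng(2)
healthy = rng.random((8, 8))
good_recon = healthy + rng.normal(scale=0.01, size=(8, 8))  # scene the model knows
bad_recon = healthy + rng.normal(scale=0.5, size=(8, 8))    # scene it cannot reproduce
s_ok = anomaly_score(healthy, good_recon)
s_bad = anomaly_score(healthy, bad_recon)
assert s_bad > s_ok   # the poorly reconstructed (anomalous) image scores higher
```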

ACS Style

Sofia Tilon; Francesco Nex; Norman Kerle; George Vosselman. Post-Disaster Building Damage Detection from Earth Observation Imagery using Unsupervised and Transferable Anomaly Detecting Generative Adversarial Networks. Remote Sensing 2020, 12, 4193.

AMA Style

Sofia Tilon, Francesco Nex, Norman Kerle, George Vosselman. Post-Disaster Building Damage Detection from Earth Observation Imagery using Unsupervised and Transferable Anomaly Detecting Generative Adversarial Networks. Remote Sensing. 2020; 12(24): 4193.

Chicago/Turabian Style

Sofia Tilon; Francesco Nex; Norman Kerle; George Vosselman. 2020. "Post-Disaster Building Damage Detection from Earth Observation Imagery using Unsupervised and Transferable Anomaly Detecting Generative Adversarial Networks." Remote Sensing 12, no. 24: 4193.

Journal article
Published: 14 September 2020 in ISPRS Journal of Photogrammetry and Remote Sensing

Supervised training of a deep neural network for semantic segmentation of point clouds requires a large amount of labelled data. Nowadays, it is easy to acquire a huge number of points with high density in large-scale areas using current LiDAR and photogrammetric techniques. However, it is extremely time-consuming to manually label point clouds for model training. In this paper, we propose an active and incremental learning strategy to iteratively query informative point cloud data for manual annotation, while the model is continuously trained to adapt to the newly labelled samples in each iteration. We evaluate the data informativeness step by step and effectively and incrementally enrich the model knowledge. The data informativeness is estimated by two data-dependent uncertainty metrics (point entropy and segment entropy) and one model-dependent metric (mutual information). The proposed methods are tested on two datasets. The results indicate that the proposed uncertainty metrics can enrich current model knowledge by selecting informative samples, such as points with difficult class labels and target objects with various geometries in the labelled training pool. Compared to random selection, our metrics provide valuable information to significantly reduce the number of labelled training samples. In contrast with training from scratch, the incremental fine-tuning strategy significantly saves training time.
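Of the three metrics, point entropy is the simplest to sketch: points whose predicted class distribution has high Shannon entropy are queried first. The probabilities below are illustrative, not the paper's data:

```python
import numpy as np

def point_entropy(probs):
    # probs: (N, C) class probabilities per point; Shannon entropy in nats.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_most_informative(probs, k):
    # Query the k points the current model is least certain about.
    return np.argsort(point_entropy(probs))[::-1][:k]

probs = np.array([[0.98, 0.01, 0.01],   # confident point
                  [0.34, 0.33, 0.33],   # highly uncertain point
                  [0.70, 0.20, 0.10]])
print(select_most_informative(probs, 1))  # → [1]
```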

ACS Style

Yaping Lin; George Vosselman; Yanpeng Cao; Michael Ying Yang. Active and incremental learning for semantic ALS point cloud segmentation. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 169, 73-92.

AMA Style

Yaping Lin, George Vosselman, Yanpeng Cao, Michael Ying Yang. Active and incremental learning for semantic ALS point cloud segmentation. ISPRS Journal of Photogrammetry and Remote Sensing. 2020; 169: 73-92.

Chicago/Turabian Style

Yaping Lin; George Vosselman; Yanpeng Cao; Michael Ying Yang. 2020. "Active and incremental learning for semantic ALS point cloud segmentation." ISPRS Journal of Photogrammetry and Remote Sensing 169: 73-92.

Journal article
Published: 03 August 2020 in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences

Knowledge of tree species mapping and of dead wood in particular is fundamental to managing our forests. Although individual tree-based approaches using lidar can successfully distinguish between deciduous and coniferous trees, the classification of multiple tree species is still limited in accuracy. Moreover, the combined mapping of standing dead trees after pest infestation is becoming increasingly important. New deep learning methods outperform baseline machine learning approaches and promise a significant accuracy gain for tree mapping. In this study, we performed a classification of multiple tree species (pine, birch, alder) and standing dead trees with crowns using the 3D deep neural network (DNN) PointNet++ along with UAV-based lidar data and multispectral (MS) imagery. Aside from 3D geometry, we also integrated laser echo pulse width values and MS features into the classification process. In a preprocessing step, we generated the 3D segments of single trees using a 3D detection method. Our approach achieved an overall accuracy (OA) of 90.2% and was clearly superior to a baseline method using a random forest classifier and handcrafted features (OA = 85.3%). All in all, we demonstrate that the performance of the 3D DNN is highly promising for the classification of multiple tree species and standing dead trees in practice.

ACS Style

S. Briechle; P. Krzystek; G. Vosselman. CLASSIFICATION OF TREE SPECIES AND STANDING DEAD TREES BY FUSING UAV-BASED LIDAR DATA AND MULTISPECTRAL IMAGERY IN THE 3D DEEP NEURAL NETWORK POINTNET++. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2020, V-2-2020, 203-210.

AMA Style

S. Briechle, P. Krzystek, G. Vosselman. CLASSIFICATION OF TREE SPECIES AND STANDING DEAD TREES BY FUSING UAV-BASED LIDAR DATA AND MULTISPECTRAL IMAGERY IN THE 3D DEEP NEURAL NETWORK POINTNET++. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2020; V-2-2020: 203-210.

Chicago/Turabian Style

S. Briechle; P. Krzystek; G. Vosselman. 2020. "CLASSIFICATION OF TREE SPECIES AND STANDING DEAD TREES BY FUSING UAV-BASED LIDAR DATA AND MULTISPECTRAL IMAGERY IN THE 3D DEEP NEURAL NETWORK POINTNET++." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020: 203-210.

Journal article
Published: 03 August 2020 in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences

With the development of LiDAR and photogrammetric techniques, more and more point clouds are available with high density and in large areas. Point cloud interpretation is an important step before many real applications like 3D city modelling. Many supervised machine learning techniques have been adapted to semantic point cloud segmentation, aiming to automatically label point clouds. Current deep learning methods have shown their potential to produce high accuracy in semantic point cloud segmentation tasks. However, these supervised methods require a large amount of labelled data for proper model performance and good generalization. In practice, manual labelling of point clouds is very expensive and time-consuming. Active learning can iteratively select unlabelled samples for manual annotation based on the current statistical model and then update the labelled data pool for the next round of training. In order to label point clouds effectively, we propose a segment-based active learning strategy to assess the informativeness of samples. The proposed strategy uses 40% of the whole training dataset to achieve a mean IoU of 75.2%, which is 99.1% of the mIoU obtained from the model trained on the full dataset, while the baseline method using the same amount of data reaches only 69.6% mIoU, corresponding to 90.9% of the full-dataset accuracy.
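One plausible reading of a segment-based informativeness score is the mean per-point uncertainty within each segment, so whole segments rather than isolated points are offered for annotation. The sketch below is an assumption for illustration, not the paper's exact definition:

```python
import numpy as np

def segment_entropy(point_entropies, segment_ids):
    # Mean per-point entropy within each segment; high-scoring segments
    # would be offered to the annotator first.
    return {int(s): float(point_entropies[segment_ids == s].mean())
            for s in np.unique(segment_ids)}

ent = np.array([0.1, 0.2, 1.0, 1.1])   # per-point entropies (illustrative)
segs = np.array([0, 0, 1, 1])          # segment id of each point
scores = segment_entropy(ent, segs)    # segment 1 is the informative one
```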

ACS Style

Y. Lin; G. Vosselman; Y. Cao; M. Y. Yang. EFFICIENT TRAINING OF SEMANTIC POINT CLOUD SEGMENTATION VIA ACTIVE LEARNING. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2020, V-2-2020, 243-250.

AMA Style

Y. Lin, G. Vosselman, Y. Cao, M. Y. Yang. EFFICIENT TRAINING OF SEMANTIC POINT CLOUD SEGMENTATION VIA ACTIVE LEARNING. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2020; V-2-2020: 243-250.

Chicago/Turabian Style

Y. Lin; G. Vosselman; Y. Cao; M. Y. Yang. 2020. "EFFICIENT TRAINING OF SEMANTIC POINT CLOUD SEGMENTATION VIA ACTIVE LEARNING." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020: 243-250.

Journal article
Published: 03 August 2020 in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences

Degradation and damage detection provides essential information to maintenance workers in routine monitoring and to first responders in post-disaster scenarios. Despite advances in Earth Observation (EO), image analysis and deep learning techniques, the quality and quantity of training data for deep learning is still limited. As a result, no robust method has yet been found that transfers and generalizes well over a variety of geographic locations and typologies of damage. Since damages can be seen as anomalies, occurring sparsely over time and space, we propose to use an anomaly detecting Generative Adversarial Network (GAN) to detect damages. The main advantages of using GANs are that only healthy, unannotated images are needed, and that a variety of damages, including previously unseen damage, can be detected. In this study we aimed to investigate 1) the ability of anomaly detecting GANs to detect degradation (potholes and cracks) in asphalt road infrastructures using Mobile Mapper imagery and building damage (collapsed buildings, rubble piles) using post-disaster aerial imagery, and 2) the sensitivity of this method to various types of pre-processing. Our results show that we can detect damages in urban scenes at satisfactory levels but not on asphalt roads. Future work will investigate how to further classify the found damages and how to improve damage detection for asphalt roads.

ACS Style

S. M. Tilon; F. Nex; D. Duarte; N. Kerle; G. Vosselman. INFRASTRUCTURE DEGRADATION AND POST-DISASTER DAMAGE DETECTION USING ANOMALY DETECTING GENERATIVE ADVERSARIAL NETWORKS. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2020, V-2-2020, 573-582.

AMA Style

S. M. Tilon, F. Nex, D. Duarte, N. Kerle, G. Vosselman. INFRASTRUCTURE DEGRADATION AND POST-DISASTER DAMAGE DETECTION USING ANOMALY DETECTING GENERATIVE ADVERSARIAL NETWORKS. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2020; V-2-2020: 573-582.

Chicago/Turabian Style

S. M. Tilon; F. Nex; D. Duarte; N. Kerle; G. Vosselman. 2020. "INFRASTRUCTURE DEGRADATION AND POST-DISASTER DAMAGE DETECTION USING ANOMALY DETECTING GENERATIVE ADVERSARIAL NETWORKS." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020: 573-582.

Journal article
Published: 03 August 2020 in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences

In recent years, the importance of indoor mapping has increased in a wide range of applications, such as facility management and mapping hazardous sites. The essential technique behind indoor mapping is simultaneous localization and mapping (SLAM), because SLAM offers suitable positioning estimates in environments where satellite positioning is not available. State-of-the-art indoor mobile mapping systems employ Visual-based SLAM or LiDAR-based SLAM. However, Visual-based SLAM is sensitive to textureless environments and, similarly, LiDAR-based SLAM is sensitive to a number of pose configurations where the geometry of the laser observations is not strong enough to reliably estimate the six-degree-of-freedom (6DOF) pose of the system. In this paper, we present different strategies that utilize the benefits of the inertial measurement unit (IMU) in the pose estimation and support LiDAR-based SLAM in overcoming these problems. The proposed strategies have been implemented and tested using different datasets, and our experimental results demonstrate that the proposed methods do indeed overcome these problems. We conclude that IMU observations increase the robustness of SLAM, which is expected, but also that the best reconstruction accuracy is obtained not by a blind use of all observations but by filtering the measurements with a proposed reliability measure. To this end, our results show promising improvements in reconstruction accuracy.
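The reliability-gated use of observations can be caricatured as a blend that trusts the LiDAR-derived pose update when its geometry is strong and falls back toward the IMU prediction otherwise; the gating rule and threshold below are purely illustrative, not the paper's reliability measure:

```python
def fuse_pose_update(lidar_update, imu_update, reliability, threshold=0.5):
    # Trust the LiDAR-derived update when its geometry is reliable enough;
    # otherwise blend toward the IMU prediction as reliability drops to 0.
    if reliability >= threshold:
        return list(lidar_update)
    w = reliability / threshold
    return [w * l + (1.0 - w) * i for l, i in zip(lidar_update, imu_update)]

assert fuse_pose_update([1.0, 2.0], [0.0, 0.0], reliability=1.0) == [1.0, 2.0]
assert fuse_pose_update([1.0, 2.0], [0.0, 0.0], reliability=0.0) == [0.0, 0.0]
```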

ACS Style

S. Karam; Ville Lehtola; G. Vosselman. STRATEGIES TO INTEGRATE IMU AND LIDAR SLAM FOR INDOOR MAPPING. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2020, V-1-2020, 223-230.

AMA Style

S. Karam, Ville Lehtola, G. Vosselman. STRATEGIES TO INTEGRATE IMU AND LIDAR SLAM FOR INDOOR MAPPING. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2020; V-1-2020: 223-230.

Chicago/Turabian Style

S. Karam; Ville Lehtola; G. Vosselman. 2020. "STRATEGIES TO INTEGRATE IMU AND LIDAR SLAM FOR INDOOR MAPPING." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2020: 223-230.

Journal article
Published: 01 August 2020 in ISPRS Journal of Photogrammetry and Remote Sensing

The severe accident at the Chornobyl Nuclear Power Plant (ChNPP) in 1986 resulted in extraordinary contamination of the surrounding territory, which necessitated the creation of the Chornobyl Exclusion Zone (ChEZ). During the accident, liquidation materials contaminated by radioactive fallout (e.g., contaminated soil and trees) were buried in so-called Radioactive Waste Temporary Storage Places (RWTSPs). The exact locations of these burials were not always sufficiently documented. However, for safety management, including eventual remediation works, it is crucial to know their locations and rely on precise hazard maps. Over the past 34 years, most of these so-called trenches and clamps have been exposed to natural processes. In addition to settlement and erosion, they have been overgrown with dense vegetation. To date, more than 700 burials have been thoroughly investigated, but a large number of burial sites (approximately 300) are still unknown. In the past, numerous burials were identified based on settlement or elevation in the decimeter range, and vegetation anomalies that tend to appear in the immediate vicinity. Nevertheless, conventional detection methods are time-, effort- and radiation dose-intensive. Airborne gamma spectrometry and visual ground inspection of morphology and vegetation can provide useful complementary information, but it is insufficient for precisely localizing unknown burial sites in many cases. Therefore, sensor technologies, such as UAV-based lidar and multispectral imagery, have been identified as potential alternative solutions. This paper presents a novel method to detect radioactive waste sites based on a set of prominent features generated from high-resolution remote sensing data in combination with a random forest (RF) classifier. Initially, we generate a digital terrain model (DTM) and 3D vegetation map from the data and derive tree-based features, including tree density, tree height, and tree species. 
Feature subsets compiled from normalized DTM height, fast point feature histograms (FPFH), and lidar metrics are then incorporated. Next, an RF classifier is trained on reference areas defined by visual interpretation of the DTM grid. A backward feature selection strategy reduces the feature space significantly and avoids overfitting. Feature relevance assessment clearly demonstrates that the members of all feature subsets represent a final list of the most prominent features. For three representative study areas, the mean overall accuracy (OA) is 98.2% when using area-wide test data. Cohen's kappa coefficient κ ranges from 0.609 to 0.758. Additionally, we demonstrate the transferability of a trained classifier to an adjacent study area (OA = 93.6%, κ = 0.452). As expected, when utilizing the classifier on geometrically incorrect and incomplete reference data, which were generated from old maps and orthophotos based on visual inspection, the OA decreases significantly to 65.1% (κ = 0.481). Finally, detection is verified through 38 borings that successfully confirm the existence of previously unknown buried nuclear materials in classified areas. These results demonstrate that the proposed methodology is applicable to the area-wide detection of unknown radioactive biomass burials in the ChEZ.
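Backward feature selection itself is generic; a greedy sketch with a hypothetical scoring function (standing in for the RF accuracy used in the paper) shows the mechanics of dropping features while the score does not degrade:

```python
def backward_feature_selection(features, score_fn, min_features=1):
    # Greedily drop the feature whose removal hurts the score least,
    # stopping as soon as every removal would degrade the score.
    selected = list(features)
    best = score_fn(selected)
    while len(selected) > min_features:
        trials = [(score_fn([f for f in selected if f != cand]), cand)
                  for cand in selected]
        trial_best, drop = max(trials)
        if trial_best < best:
            break
        best, selected = trial_best, [f for f in selected if f != drop]
    return selected, best

# Hypothetical scorer: only 'height' and 'density' carry signal,
# and each extra feature costs a small complexity penalty.
useful = {"height", "density"}
score = lambda feats: len(useful & set(feats)) - 0.01 * len(feats)
sel, s = backward_feature_selection(["height", "density", "noise1", "noise2"], score)
print(sorted(sel))  # → ['density', 'height']
```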

ACS Style

Sebastian Briechle; N. Molitor; P. Krzystek; G. Vosselman. Detection of radioactive waste sites in the Chornobyl exclusion zone using UAV-based lidar data and multispectral imagery. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 167, 345-362.

AMA Style

Sebastian Briechle, N. Molitor, P. Krzystek, G. Vosselman. Detection of radioactive waste sites in the Chornobyl exclusion zone using UAV-based lidar data and multispectral imagery. ISPRS Journal of Photogrammetry and Remote Sensing. 2020; 167: 345-362.

Chicago/Turabian Style

Sebastian Briechle; N. Molitor; P. Krzystek; G. Vosselman. 2020. "Detection of radioactive waste sites in the Chornobyl exclusion zone using UAV-based lidar data and multispectral imagery." ISPRS Journal of Photogrammetry and Remote Sensing 167: 345-362.

Research article
Published: 25 May 2020 in GIScience & Remote Sensing

Remote sensing images have long been recognized as useful for the detection of building damages, mainly due to their wide coverage, revisit capabilities and high spatial resolution. The majority of contributions aimed at identifying debris and rubble piles, as the main focus is to assess collapsed and partially collapsed structures. However, these approaches might not be optimal for the image classification of façade damages, where damages might appear in the form of spalling, cracks and collapse of small segments of the façade. A few studies focused their damage detection on the façades using only post-event images. Nonetheless, several studies achieved better performances in damage detection when considering multi-temporal image data. Hence, in this work a multi-temporal façade damage detection approach is tested. The first objective is to optimally merge pre- and post-event aerial oblique imagery within a supervised classification approach using convolutional neural networks to detect façade damages. The second objective relates to the fact that façades are normally depicted in several views in manned aerial photogrammetric surveys; hence, different procedures combining these multi-view image data are also proposed and embedded in the image classification approach. Six multi-temporal approaches are compared against three mono-temporal ones. The results indicate the superiority of the multi-temporal approaches (up to ~25% in F1-score) when compared to the mono-temporal ones. The best performing multi-temporal approach takes as input sextuples (three views per epoch, per façade) within a late fusion approach to perform the image classification of façade damages. However, the detection of small damages, such as smaller cracks or smaller areas of spalling, remains challenging in this approach, mainly due to the low resolution (~0.14 m ground sampling distance) of the dataset used.

ACS Style

Diogo Duarte; Francesco Nex; Norman Kerle; George Vosselman. Detection of seismic façade damages with multi-temporal oblique aerial imagery. GIScience & Remote Sensing 2020, 57, 670-686.

AMA Style

Diogo Duarte, Francesco Nex, Norman Kerle, George Vosselman. Detection of seismic façade damages with multi-temporal oblique aerial imagery. GIScience & Remote Sensing. 2020; 57 (5):670-686.

Chicago/Turabian Style

Diogo Duarte; Francesco Nex; Norman Kerle; George Vosselman. 2020. "Detection of seismic façade damages with multi-temporal oblique aerial imagery." GIScience & Remote Sensing 57, no. 5: 670-686.

Journal article
Published: 13 February 2020 in Automation in Construction

During an emergency inside large buildings such as hospitals and shopping malls, the availability of up-to-date information is critical. One common source of information is the 2D layout of buildings and emergency exits. For most buildings, this information is represented as tangled floor plans, which in most cases are outdated. One solution to update the data of buildings after each renovation is to recreate 3D models of buildings in a quick and automatic approach. These 3D models proactively provide crucial building information in a digital format for first responders to use in emergency cases. Thanks to advances in remote sensing, laser scanners can be used to quickly generate an accurate spatial representation of buildings. However, such devices provide point clouds, which are unstructured data. In this paper, we introduce a complete workflow that generates 3D models from point clouds of buildings and extracts fine-grained indoor navigation networks from those models, to support advanced path planning for disaster management and the navigation of different types of agents. The process extracts structural elements of buildings such as walls, slabs, ceilings and openings, and reconstructs their volumetric shapes. Additionally, the furnishing elements in the input point clouds are identified and reconstructed as obstacles. Stairs are also reconstructed to allow multistory navigation path planning. Our algorithm is fully 3D and can handle vertical and slanted structures. We test it on several real datasets, compare it to state-of-the-art approaches, and provide a process to check the consistency of the reconstruction, which in turn allows its result to be further improved.
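As an illustration of the structural-element extraction step, the orientation of a planar segment's normal already separates walls from slabs; the tolerance and the `classify_plane` helper below are hypothetical, not the paper's actual rules:

```python
import numpy as np

def classify_plane(normal, vertical_tol_deg=10.0):
    """Label a planar segment by its normal orientation.

    A near-horizontal normal suggests a wall; a near-vertical normal
    suggests a slab (floor/ceiling); anything in between is treated
    as slanted. Thresholds are illustrative only.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    tilt = np.degrees(np.arccos(abs(n[2])))  # angle between normal and z-axis
    if tilt > 90.0 - vertical_tol_deg:
        return "wall"
    if tilt < vertical_tol_deg:
        return "slab"
    return "slanted"

print(classify_plane([1, 0, 0]))      # -> wall
print(classify_plane([0, 0, 1]))      # -> slab
print(classify_plane([0, 0.7, 0.7]))  # -> slanted
```

The "slanted" branch matters here because, as the abstract notes, the algorithm is fully 3D and handles slanted structures rather than forcing every segment into a wall/slab dichotomy.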

ACS Style

Shayan Nikoohemat; Abdoulaye A. Diakité; Sisi Zlatanova; George Vosselman. Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management. Automation in Construction 2020, 113, 103109.

AMA Style

Shayan Nikoohemat, Abdoulaye A. Diakité, Sisi Zlatanova, George Vosselman. Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management. Automation in Construction. 2020; 113:103109.

Chicago/Turabian Style

Shayan Nikoohemat; Abdoulaye A. Diakité; Sisi Zlatanova; George Vosselman. 2020. "Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management." Automation in Construction 113: 103109.

Journal article
Published: 14 January 2020 in Remote Sensing

There exists a demand for effective land administration systems that can support the protection of unrecorded land rights, thereby helping to reduce poverty and support national development, in alignment with target 1.4 of the UN Sustainable Development Goals (SDGs). It is estimated that only 30% of the world’s population has documented land rights recorded within a formal land administration system. In response, we developed, adapted, applied, and tested innovative remote sensing methodologies to support land rights mapping, including (1) a unique ontological analysis approach using smart sketch maps (SmartSkeMa); (2) unmanned aerial vehicle (UAV) applications; and (3) automatic boundary extraction (ABE) techniques based on the acquired UAV images. To assess the applicability of the remote sensing methodologies, several aspects were studied: (1) user needs, (2) the proposed methodologies’ responses to those needs, and (3) the broader governance implications of scaling the suggested approaches. Kajiado, Kenya, was selected as the case location. Fieldwork and workshops yielded a combination of quantitative and qualitative results, taking into account both social and technical aspects. The results show that SmartSkeMa is a potentially versatile and community-responsive land data acquisition tool requiring little expertise to use, that UAVs have a high potential for creating up-to-date base maps able to support the current land administration system, and that automatic boundary extraction is an effective method to demarcate physical and visible boundaries compared to traditional methodologies and manual delineation for land tenure mapping activities.

ACS Style

Mila Koeva; Claudia Stöcker; Sophie Crommelinck; Serene Ho; Malumbo Chipofya; Jan Sahib; Rohan Bennett; Jaap Zevenbergen; George Vosselman; Christiaan Lemmen; Joep Crompvoets; Ine Buntinx; Gordon Wayumba; Robert Wayumba; Peter Ochieng Odwe; George Ted Osewe; Beatrice Chika; Valerie Pattyn. Innovative Remote Sensing Methodologies for Kenyan Land Tenure Mapping. Remote Sensing 2020, 12, 273.

AMA Style

Mila Koeva, Claudia Stöcker, Sophie Crommelinck, Serene Ho, Malumbo Chipofya, Jan Sahib, Rohan Bennett, Jaap Zevenbergen, George Vosselman, Christiaan Lemmen, Joep Crompvoets, Ine Buntinx, Gordon Wayumba, Robert Wayumba, Peter Ochieng Odwe, George Ted Osewe, Beatrice Chika, Valerie Pattyn. Innovative Remote Sensing Methodologies for Kenyan Land Tenure Mapping. Remote Sensing. 2020; 12 (2):273.

Chicago/Turabian Style

Mila Koeva; Claudia Stöcker; Sophie Crommelinck; Serene Ho; Malumbo Chipofya; Jan Sahib; Rohan Bennett; Jaap Zevenbergen; George Vosselman; Christiaan Lemmen; Joep Crompvoets; Ine Buntinx; Gordon Wayumba; Robert Wayumba; Peter Ochieng Odwe; George Ted Osewe; Beatrice Chika; Valerie Pattyn. 2020. "Innovative Remote Sensing Methodologies for Kenyan Land Tenure Mapping." Remote Sensing 12, no. 2: 273.

Journal article
Published: 29 November 2019 in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences

Indoor mapping techniques are highly important in many applications, such as human navigation and indoor modelling. As satellite positioning systems do not work indoors, several alternative navigational sensors and methods have been used to provide accurate indoor positioning for mapping purposes, such as inertial measurement units (IMUs) and simultaneous localisation and mapping (SLAM) algorithms. In this paper, we investigate the benefits that the integration of a low-cost microelectromechanical system (MEMS) IMU can bring to a feature-based SLAM algorithm. Specifically, we utilize IMU data to predict the pose of our backpack indoor mobile mapping system to improve the SLAM algorithm. The experimental results show that the proposed IMU integration method leads to a more robust data association between the measured points and the model planes. Notably, the number of points assigned to the model planes increases, and the root mean square error (RMSE) of the residuals, i.e. the distances between these measured points and the model planes, decreases significantly from 1.8 cm to 1.3 cm.
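The residual metric quoted above (RMSE of point-to-plane distances) can be sketched as follows; the plane parameterisation as a unit normal plus offset is an assumption for illustration:

```python
import numpy as np

def point_to_plane_rmse(points, plane):
    """RMSE of distances between measured points and a model plane.

    plane = (n, d) with unit normal n and offset d, so the signed
    distance of a point p is n.p + d. This mirrors the residual
    metric reported in the abstract (RMSE in metres).
    """
    n, d = np.asarray(plane[0], dtype=float), float(plane[1])
    pts = np.asarray(points, dtype=float)
    residuals = pts @ n + d              # signed point-to-plane distances
    return float(np.sqrt(np.mean(residuals ** 2)))

# plane z = 0, points with small vertical noise (metres)
pts = [[0, 0, 0.01], [1, 2, -0.02], [3, 1, 0.02]]
print(point_to_plane_rmse(pts, ([0, 0, 1], 0.0)))  # ~0.0173
```

In the paper's terms, a drop of this value from 1.8 cm to 1.3 cm indicates that the IMU-predicted poses let more points be consistently associated with their model planes.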

ACS Style

S. Karam; Ville Lehtola; G. Vosselman. INTEGRATING A LOW-COST MEMS IMU INTO A LASER-BASED SLAM FOR INDOOR MOBILE MAPPING. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2019, XLII-2/W17, 149-156.

AMA Style

S. Karam, Ville Lehtola, G. Vosselman. INTEGRATING A LOW-COST MEMS IMU INTO A LASER-BASED SLAM FOR INDOOR MOBILE MAPPING. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2019; XLII-2/W17:149-156.

Chicago/Turabian Style

S. Karam; Ville Lehtola; G. Vosselman. 2019. "INTEGRATING A LOW-COST MEMS IMU INTO A LASER-BASED SLAM FOR INDOOR MOBILE MAPPING." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17: 149-156.

Journal article
Published: 25 October 2019 in Remote Sensing

Cadastral boundaries are often demarcated by objects that are visible in remote sensing imagery. Indirect surveying relies on the delineation of visible parcel boundaries from such images. Despite advances in automated detection and localization of objects from images, indirect surveying is rarely automated and relies on manual on-screen delineation. We have previously introduced a boundary delineation workflow, comprising image segmentation, boundary classification and interactive delineation, which we applied to Unmanned Aerial Vehicle (UAV) data to delineate roads. In this study, we improve each of these steps. For image segmentation, we remove the need to reduce the image resolution and we limit over-segmentation by reducing the number of segment lines by 80% through filtering. For boundary classification, we show how Convolutional Neural Networks (CNNs) can be used for boundary line classification, thereby eliminating the previous need for Random Forest (RF) feature generation and achieving 71% accuracy. For interactive delineation, we develop additional and more intuitive delineation functionalities that cover more application cases. We test our approach on more varied and larger data sets by applying it to UAV and aerial imagery of 0.02–0.25 m resolution from Kenya, Rwanda and Ethiopia. We show that it is more effective in terms of clicks and time compared to manual delineation for parcels surrounded by visible boundaries. The strongest advantages are obtained for rural scenes delineated from aerial imagery, where the delineation effort per parcel requires 38% less time and 80% fewer clicks compared to manual delineation.
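The segment-line filtering step mentioned above can be illustrated with a minimal sketch. The score-threshold rule and the `filter_boundary_lines` helper are assumptions for illustration; in the paper the scores would come from the CNN boundary classifier:

```python
def filter_boundary_lines(scored_lines, min_score=0.5):
    """Keep only candidate segment lines whose boundary score passes a
    threshold -- a stand-in for the filtering that removed ~80% of the
    over-segmentation lines. scored_lines: list of (line_id, score)."""
    return [line for line, score in scored_lines if score >= min_score]

candidates = [("l1", 0.9), ("l2", 0.1), ("l3", 0.7),
              ("l4", 0.2), ("l5", 0.05)]
print(filter_boundary_lines(candidates))  # -> ['l1', 'l3']
```

Thinning the candidate set this way is what makes the subsequent interactive delineation cheaper in clicks: the user only chooses among lines that the classifier already considers plausible boundaries.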

ACS Style

Sophie Crommelinck; Mila Koeva; Michael Ying Yang; George Vosselman. Application of Deep Learning for Delineation of Visible Cadastral Boundaries from Remote Sensing Imagery. Remote Sensing 2019, 11, 2505.

AMA Style

Sophie Crommelinck, Mila Koeva, Michael Ying Yang, George Vosselman. Application of Deep Learning for Delineation of Visible Cadastral Boundaries from Remote Sensing Imagery. Remote Sensing. 2019; 11 (21):2505.

Chicago/Turabian Style

Sophie Crommelinck; Mila Koeva; Michael Ying Yang; George Vosselman. 2019. "Application of Deep Learning for Delineation of Visible Cadastral Boundaries from Remote Sensing Imagery." Remote Sensing 11, no. 21: 2505.

Journal article
Published: 18 October 2019 in Remote Sensing

Detecting topographic changes in an urban environment and keeping city-level point clouds up-to-date are important tasks for urban planning and monitoring. In practice, remote sensing data are often available only in different modalities for two epochs. Change detection between airborne laser scanning data and photogrammetric data is challenging due to the multi-modality of the input data and dense matching errors. This paper proposes a method to detect building changes between multimodal acquisitions. The multimodal inputs are converted and fed into a lightweight pseudo-Siamese convolutional neural network (PSI-CNN) for change detection. Different network configurations and fusion strategies are compared. Our experiments on a large urban data set demonstrate the effectiveness of the proposed method. Our change map achieves a recall rate of 86.17%, a precision rate of 68.16%, and an F1-score of 76.13%. The comparison between the Siamese architecture and the feed-forward architecture yields many interesting findings and suggestions for the design of networks for multimodal data processing.
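The reported F1-score is the harmonic mean of the stated precision and recall; this minimal check uses only the numbers quoted above:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

# -> 76.11; the reported 76.13 was presumably computed from unrounded counts
print(round(f1_score(68.16, 86.17), 2))
```

The gap between precision (68.16%) and recall (86.17%) is typical for change detection under dense-matching noise: the network finds most changed buildings but also flags some matching artifacts as changes.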

ACS Style

Zhenchao Zhang; George Vosselman; Markus Gerke; Claudio Persello; Devis Tuia; Michael Ying Yang. Detecting Building Changes between Airborne Laser Scanning and Photogrammetric Data. Remote Sensing 2019, 11, 2417.

AMA Style

Zhenchao Zhang, George Vosselman, Markus Gerke, Claudio Persello, Devis Tuia, Michael Ying Yang. Detecting Building Changes between Airborne Laser Scanning and Photogrammetric Data. Remote Sensing. 2019; 11 (20):2417.

Chicago/Turabian Style

Zhenchao Zhang; George Vosselman; Markus Gerke; Claudio Persello; Devis Tuia; Michael Ying Yang. 2019. "Detecting Building Changes between Airborne Laser Scanning and Photogrammetric Data." Remote Sensing 11, no. 20: 2417.

Preprint
Published: 30 September 2019

In recent years, the task of segmenting foreground objects from background in a video, i.e. video object segmentation (VOS), has received considerable attention. In this paper, we propose a single end-to-end trainable deep neural network, convolutional gated recurrent Mask-RCNN, for tackling the semi-supervised VOS task. We take advantage of both the instance segmentation network (Mask-RCNN) and the visual memory module (Conv-GRU) to tackle the VOS task. The instance segmentation network predicts masks for instances, while the visual memory module learns to selectively propagate information for multiple instances simultaneously, which handles appearance changes, variations in scale and pose, and occlusions between objects. After offline and online training under purely instance segmentation losses, our approach achieves satisfactory results without any post-processing or synthetic video data augmentation. Experimental results on the DAVIS 2016 and DAVIS 2017 datasets demonstrate the effectiveness of our method for the video object segmentation task.
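The gated memory update at the heart of a Conv-GRU can be sketched with plain matrix weights in place of convolutions; this is a simplification (biases omitted, no spatial structure), not the paper's module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Wr, Wh):
    """One step of a gated recurrent unit: the update gate z decides
    how much of the old memory h to overwrite, the reset gate r how
    much of it to consult when forming the candidate state."""
    z = sigmoid(Wz @ np.concatenate([x, h]))            # update gate
    r = sigmoid(Wr @ np.concatenate([x, h]))            # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate state
    return (1.0 - z) * h + z * h_tilde                  # gated memory update

rng = np.random.default_rng(0)
x, h = rng.normal(size=4), np.zeros(3)
Wz, Wr, Wh = (rng.normal(size=(3, 7)) for _ in range(3))
h_next = gru_step(x, h, Wz, Wr, Wh)
print(h_next.shape)  # -> (3,)
```

In the Conv-GRU used for VOS, the matrix products become convolutions over feature maps, so the memory is selectively updated per spatial location, which is what lets it track several instances through occlusions.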

ACS Style

Ye Lyu; George Vosselman; Gui-Song Xia; Michael Ying Yang. LIP: Learning Instance Propagation for Video Object Segmentation. 2019, 1.

AMA Style

Ye Lyu, George Vosselman, Gui-Song Xia, Michael Ying Yang. LIP: Learning Instance Propagation for Video Object Segmentation. 2019: 1.

Chicago/Turabian Style

Ye Lyu; George Vosselman; Gui-Song Xia; Michael Ying Yang. 2019. "LIP: Learning Instance Propagation for Video Object Segmentation." 1.

Journal article
Published: 08 June 2019 in ISPRS Journal of Photogrammetry and Remote Sensing

Road furniture recognition has become a prevalent issue in the past few years because of its great importance in smart cities and autonomous driving. Previous research has especially focused on pole-like road furniture, such as traffic signs and lamp posts. Published methods have mainly classified road furniture as individual objects. However, most road furniture consists of a combination of classes, such as a traffic sign mounted on a street light pole. To tackle this problem, we propose a framework to interpret road furniture at a more detailed level. Instead of being interpreted as single objects, mobile laser scanning data of road furniture is decomposed into elements individually labelled as poles and the objects attached to them, such as street lights, traffic signs and traffic lights. In our framework, we first detect road furniture from unorganised mobile laser scanning point clouds. The detected road furniture is then decomposed into poles and attachments (e.g. traffic signs). In the interpretation stage, we extract a set of features to classify the attachments, utilising a knowledge-driven method and four representative types of machine learning classifiers, namely random forest, support vector machine, Gaussian mixture model and naïve Bayes, to explore the optimal method. The designed features are the unary features of the attachments and the spatial relations between poles and their attachments. Two experimental test sites, the Enschede dataset and the Saunalahti dataset, were used; the Saunalahti dataset was collected in two different epochs. In the experimental results, the random forest classifier outperforms the other methods, with an overall accuracy higher than 80% in the Enschede test site and higher than 90% in both Saunalahti epochs. The designed features play an important role in the interpretation of road furniture. The results of the two epochs in the same area prove the high reliability of our framework and demonstrate that our method achieves good transferability, with an accuracy over 90% when the training data of one epoch is employed to test the data of the other epoch.
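A minimal sketch of the kind of features the framework above feeds to its classifiers, combining a unary attachment property with pole-attachment spatial relations. The specific features and the `attachment_features` helper are illustrative assumptions, not the paper's feature set:

```python
import math

def attachment_features(pole_top, attach_centroid, attach_size):
    """Toy feature vector for a pole attachment: its size (unary),
    its mounting height relative to the pole, and its horizontal
    offset from the pole axis (spatial relations).

    pole_top, attach_centroid: (x, y, z) tuples; the pole base is
    assumed at z = 0 for simplicity.
    """
    px, py, pz = pole_top
    ax, ay, az = attach_centroid
    height_ratio = az / pz if pz else 0.0   # relative mounting height
    offset = math.hypot(ax - px, ay - py)   # distance from the pole axis
    return [attach_size, height_ratio, offset]

# a traffic sign mounted 3 m up a 5 m pole, 0.4 m off the axis
print(attachment_features((0, 0, 5.0), (0.4, 0.0, 3.0), 0.5))  # -> [0.5, 0.6, 0.4]
```

Vectors of this kind, one per attachment, are what a random forest or SVM would consume; e.g. traffic lights tend to sit higher and further off-axis than street name signs, so even these three numbers carry class information.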

ACS Style

Fashuai Li; Matti Lehtomäki; Sander Oude Elberink; George Vosselman; Antero Kukko; Eetu Puttonen; Yuwei Chen; Juha Hyyppä. Semantic segmentation of road furniture in mobile laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing 2019, 154, 98-113.

AMA Style

Fashuai Li, Matti Lehtomäki, Sander Oude Elberink, George Vosselman, Antero Kukko, Eetu Puttonen, Yuwei Chen, Juha Hyyppä. Semantic segmentation of road furniture in mobile laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing. 2019; 154:98-113.

Chicago/Turabian Style

Fashuai Li; Matti Lehtomäki; Sander Oude Elberink; George Vosselman; Antero Kukko; Eetu Puttonen; Yuwei Chen; Juha Hyyppä. 2019. "Semantic segmentation of road furniture in mobile laser scanning data." ISPRS Journal of Photogrammetry and Remote Sensing 154: 98-113.