
Dong Chen
College of Civil Engineering, Nanjing Forestry University, Nanjing 210037, China



Short Biography

Dong Chen received a Bachelor's degree in Computer Science from Qingdao University of Science and Technology, Qingdao, China, a Master's degree in Cartography and Geographical Information Engineering from Xi'an University of Science and Technology, Xi'an, China, and a Ph.D. degree in Geographical Information Sciences from Beijing Normal University, Beijing, China. He is an Associate Professor at Nanjing Forestry University, Nanjing, China. He is also a Post-Doctoral Fellow with the Department of Geomatics Engineering, University of Calgary, Calgary, AB, Canada. His research interests include image and LiDAR-based segmentation and reconstruction, full-waveform LiDAR data processing, and related remote sensing applications in the field of forest ecosystems.


Feed

Journal article
Published: 10 August 2021 in Remote Sensing

Point cloud classification is a key technology for point cloud applications, and point cloud feature extraction is a key step towards achieving it. Although there are many point cloud feature extraction and classification methods, and the acquisition of colored point cloud data has become easier in recent years, most point cloud processing algorithms do not consider the color information associated with the point cloud or do not make full use of it. Therefore, we propose a voxel-based local feature descriptor based on the voxel-based local binary pattern (VLBP) and fuse point cloud RGB information with geometric structure features using a random forest classifier to build a colored point cloud classification algorithm. The proposed algorithm voxelizes the point cloud; divides the neighborhood of the center point into cubes (i.e., multiple adjacent sub-voxels); compares the gray information of the voxel center and adjacent sub-voxels; applies a global threshold over the voxel to convert it into a binary code; and uses a local difference sign–magnitude transform (LDSMT) to decompose the local difference of an entire voxel into two complementary components, sign and magnitude. Then, the VLBP feature of each point is extracted. To obtain more structural information about the point cloud, the proposed method extracts the normal vector of each point and the corresponding fast point feature histogram (FPFH) based on the normal vector. Finally, the geometric structure features (normal vector and FPFH) and color features (RGB and VLBP) of the point cloud are fused, and a random forest classifier is used to classify the colored laser point cloud. The experimental results show that the proposed algorithm achieves effective classification on point cloud data from different indoor and outdoor scenes, and the proposed VLBP features improve the accuracy of point cloud classification.
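
The thresholding and sign–magnitude steps described above can be sketched in a few lines. This is a minimal illustration under assumed inputs (a six-neighbor sub-voxel layout and hypothetical function names), not the paper's implementation:

```python
import numpy as np

def voxel_lbp(center_gray, neighbor_grays):
    """LBP-style code for one voxel neighborhood: each sub-voxel at least as
    bright as the center contributes a 1-bit, packed into a single integer.
    Also returns the LDSMT-style sign/magnitude split of the local differences."""
    diffs = np.asarray(neighbor_grays, dtype=float) - center_gray
    bits = (diffs >= 0).astype(int)
    code = int("".join(map(str, bits)), 2)
    signs, magnitudes = np.sign(diffs), np.abs(diffs)
    return code, signs, magnitudes

# six face-adjacent sub-voxels around a center of gray value 100
code, signs, mags = voxel_lbp(100, [120, 90, 100, 80, 130, 95])
```

The per-point codes would then be histogrammed and concatenated with the RGB, normal, and FPFH features before training the random forest.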

ACS Style

Yong Li; Yinzheng Luo; Xia Gu; Dong Chen; Fang Gao; Feng Shuang. Point Cloud Classification Algorithm Based on the Fusion of the Local Binary Pattern Features and Structural Features of Voxels. Remote Sensing 2021, 13, 3156.

AMA Style

Yong Li, Yinzheng Luo, Xia Gu, Dong Chen, Fang Gao, Feng Shuang. Point Cloud Classification Algorithm Based on the Fusion of the Local Binary Pattern Features and Structural Features of Voxels. Remote Sensing. 2021; 13(16):3156.

Chicago/Turabian Style

Yong Li; Yinzheng Luo; Xia Gu; Dong Chen; Fang Gao; Feng Shuang. 2021. "Point Cloud Classification Algorithm Based on the Fusion of the Local Binary Pattern Features and Structural Features of Voxels." Remote Sensing 13, no. 16: 3156.

Journal article
Published: 09 August 2021 in Remote Sensing

This paper proposes a building façade contouring method for LiDAR (Light Detection and Ranging) scans and photogrammetric point clouds. To this end, we calculate a confidence property at multiple scales for an individual point cloud to measure its quality. The confidence property is utilized in the definition of the gradient for each point. We encode each point's gradient structure tensor, whose eigenvalues reflect the gradient variations in the local neighborhood. The critical points representing the building façade and rooftop contours (where such rooftops exist) are then extracted by jointly analyzing dual thresholds of the gradient and the gradient structure tensor. To meet the requirements of compact representation, the initially obtained critical points are finally downsampled, thereby achieving a reasonable tradeoff between accurate geometry and abstract representation. Various experiments using representative buildings in the Semantic3D benchmark and other ubiquitous point clouds from the ALS DublinCity and Dutch AHN3 datasets, the MLS TerraMobilita/iQmulus 3D urban analysis benchmark, a UAV-based photogrammetric dataset, and GeoSLAM ZEB-HORIZON scans show that the proposed method generates building contours that are accurate, lightweight, and robust to ubiquitous point clouds. Two comparison experiments also demonstrate the superiority of the proposed method in terms of topological correctness, geometric accuracy, and representation compactness.
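
The per-point structure tensor analysis described above can be sketched as follows; the gradient vectors, neighborhood, and function name are illustrative assumptions, not the paper's code:

```python
import numpy as np

def structure_tensor_eigs(grads):
    """Accumulate the 3x3 gradient structure tensor T = sum(g g^T) over a
    point's neighborhood gradients and return its eigenvalues, ascending."""
    G = np.asarray(grads, dtype=float)  # (k, 3) gradients in the neighborhood
    T = G.T @ G                         # sum of outer products g g^T
    return np.linalg.eigvalsh(T)

# gradients varying along x only -> exactly one dominant eigenvalue
eigs = structure_tensor_eigs([[1, 0, 0], [2, 0, 0], [1.5, 0, 0]])
```

Roughly, near-zero eigenvalues everywhere indicate a flat region, while one or two dominant eigenvalues flag contour-like gradient variation, which is what the dual-threshold test exploits.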

ACS Style

Dong Chen; Jing Li; Shaoning Di; Jiju Peethambaran; Guiqiu Xiang; Lincheng Wan; Xianghong Li. Critical Points Extraction from Building Façades by Analyzing Gradient Structure Tensor. Remote Sensing 2021, 13, 3146.

AMA Style

Dong Chen, Jing Li, Shaoning Di, Jiju Peethambaran, Guiqiu Xiang, Lincheng Wan, Xianghong Li. Critical Points Extraction from Building Façades by Analyzing Gradient Structure Tensor. Remote Sensing. 2021; 13(16):3146.

Chicago/Turabian Style

Dong Chen; Jing Li; Shaoning Di; Jiju Peethambaran; Guiqiu Xiang; Lincheng Wan; Xianghong Li. 2021. "Critical Points Extraction from Building Façades by Analyzing Gradient Structure Tensor." Remote Sensing 13, no. 16: 3146.

Journal article
Published: 25 March 2021 in IEEE Transactions on Geoscience and Remote Sensing

Sliding-window-based low-rank matrix approximation (LRMA) is a technique widely used in hyperspectral image (HSI) denoising or completion. However, the uncertainty quantification of the restored HSI has not been addressed to date. Accurate uncertainty quantification of the denoised HSI facilitates applications such as multisource or multiscale data fusion, data assimilation, and product uncertainty quantification, since these applications require an accurate approach to describe the statistical distributions of the input data. Therefore, we propose a prior-free closed-form element-wise uncertainty quantification method for LRMA-based HSI restoration. Our closed-form algorithm overcomes the difficulty of handling uncertainty in HSI patch mixing caused by the sliding-window strategy used in the conventional LRMA process. The proposed approach only requires the uncertainty of the observed HSI and provides the uncertainty result relatively rapidly, with computational complexity similar to that of the LRMA technique. We conduct extensive experiments to validate the estimation accuracy of the proposed closed-form uncertainty approach. The method is robust to at least 10% random impulse noise at the cost of 10%-20% additional processing time compared to the LRMA. The experiments indicate that the proposed closed-form uncertainty quantification method is more applicable to real-world applications than the baseline Monte Carlo test, which is computationally expensive.
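
The core LRMA operation that the uncertainty propagation wraps around is a truncated SVD applied to each sliding-window patch matrix. A minimal sketch of that operation alone (illustrative; the paper's uncertainty propagation itself is not reproduced here):

```python
import numpy as np

def low_rank_approx(M, r):
    """Rank-r approximation of a patch matrix via truncated SVD -- the
    per-window restoration step inside sliding-window LRMA denoising."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# a rank-1 matrix is recovered exactly by its rank-1 approximation
M = np.outer([1.0, 2.0], [3.0, 4.0, 5.0])
err = np.linalg.norm(low_rank_approx(M, 1) - M)
```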

ACS Style

Jingwei Song; Shaobo Xia; Jun Wang; Mitesh Patel; Dong Chen. Uncertainty Quantification of Hyperspectral Image Denoising Frameworks Based on Sliding-Window Low-Rank Matrix Approximation. IEEE Transactions on Geoscience and Remote Sensing 2021, PP, 1-12.

AMA Style

Jingwei Song, Shaobo Xia, Jun Wang, Mitesh Patel, Dong Chen. Uncertainty Quantification of Hyperspectral Image Denoising Frameworks Based on Sliding-Window Low-Rank Matrix Approximation. IEEE Transactions on Geoscience and Remote Sensing. 2021; PP(99):1-12.

Chicago/Turabian Style

Jingwei Song; Shaobo Xia; Jun Wang; Mitesh Patel; Dong Chen. 2021. "Uncertainty Quantification of Hyperspectral Image Denoising Frameworks Based on Sliding-Window Low-Rank Matrix Approximation." IEEE Transactions on Geoscience and Remote Sensing PP, no. 99: 1-12.

Journal article
Published: 11 March 2021 in IEEE Geoscience and Remote Sensing Letters

As many LiDAR point cloud processing steps, such as reconstruction, are often time- and memory-consuming, dividing LiDAR point clouds into subregions is a common and necessary preprocessing step. However, existing data dividing methods rely on tedious manual work or regular grids and result in oversegmentation around cutting lines. In this letter, we propose a new gap-based data dividing method for various LiDAR point clouds that minimizes the intersections between cutting lines and objects. The basic idea is to find a set of optimal paths, consisting of gaps between objects, to serve as potential cutting lines. The experiments and comparisons on three data sets demonstrate that the proposed method substantially outperforms the baseline method in terms of visual inspection and cutting-line quality.
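
The gap-seeking idea can be illustrated in one dimension: place the cut inside the largest empty interval between projected points. This toy sketch is a deliberate simplification of the paper's optimal-path search, with an assumed function name:

```python
import numpy as np

def largest_gap_split(xs):
    """Pick a cutting coordinate in the middle of the largest empty interval
    (gap) along one axis -- the 1-D analogue of a gap-based cutting line."""
    xs = np.sort(np.asarray(xs, dtype=float))
    gaps = np.diff(xs)
    i = int(np.argmax(gaps))
    return (xs[i] + xs[i + 1]) / 2.0

# two clusters of x-coordinates; the cut falls inside the empty middle
cut = largest_gap_split([0.0, 0.2, 0.3, 2.0, 2.1])
```

In 2-D, the method instead chains such gaps into complete paths, so cutting lines avoid intersecting objects.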

ACS Style

Shaobo Xia; Sheng Nie; Pu Wang; Dong Chen; Sheng Xu; Cheng Wang. A Gap-Based Method for LiDAR Point Cloud Division. IEEE Geoscience and Remote Sensing Letters 2021, PP, 1-5.

AMA Style

Shaobo Xia, Sheng Nie, Pu Wang, Dong Chen, Sheng Xu, Cheng Wang. A Gap-Based Method for LiDAR Point Cloud Division. IEEE Geoscience and Remote Sensing Letters. 2021; PP(99):1-5.

Chicago/Turabian Style

Shaobo Xia; Sheng Nie; Pu Wang; Dong Chen; Sheng Xu; Cheng Wang. 2021. "A Gap-Based Method for LiDAR Point Cloud Division." IEEE Geoscience and Remote Sensing Letters PP, no. 99: 1-5.

Letter
Published: 20 January 2021 in Remote Sensing

Tree localization in point clouds of forest scenes is critical for forest inventory. Most existing methods proposed for TLS forest data are based on model fitting or point-wise features, which are time-consuming and sensitive to data incompleteness and complex tree structures. Furthermore, these methods often require extensive preprocessing such as ground filtering and noise removal. The fast and easy-to-use top-based methods widely applied to ALS point clouds are not applicable to localizing trees in TLS point clouds because of data incompleteness and complex canopy structures. The objective of this study is to make top-based methods applicable to TLS forest point clouds. To this end, a novel point cloud transformation is presented that enhances the visual salience of tree instances and adapts top-based methods to TLS forest scenes. The proposed method takes raw point clouds as input and needs no other preprocessing steps. The new method is tested on an international benchmark, and the experimental results demonstrate its necessity and effectiveness. Based on detailed analysis and tests, the proposed method also has the potential to benefit other object localization tasks in different scenes.
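
The paper's transformation is its own contribution and is not reproduced here. As an illustrative stand-in for the idea of making tree instances top-salient, a simple height reflection looks like this (function name and reflection rule are assumptions):

```python
import numpy as np

def invert_point_cloud(points):
    """Reflect heights about the scene maximum so the densely scanned lower
    trunk region becomes the locally highest, most salient part of each tree."""
    pts = np.asarray(points, dtype=float).copy()
    pts[:, 2] = pts[:, 2].max() - pts[:, 2]
    return pts

# a trunk point (z=1) ends up above a canopy point (z=12) after inversion
inv = invert_point_cloud([[0.0, 0.0, 1.0], [0.0, 0.0, 12.0]])
```

After such a transformation, fast top-based local-maximum detectors designed for ALS data could be run on TLS scans unchanged.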

ACS Style

Shaobo Xia; Dong Chen; Jiju Peethambaran; Pu Wang; Sheng Xu. Point Cloud Inversion: A Novel Approach for the Localization of Trees in Forests from TLS Data. Remote Sensing 2021, 13, 338.

AMA Style

Shaobo Xia, Dong Chen, Jiju Peethambaran, Pu Wang, Sheng Xu. Point Cloud Inversion: A Novel Approach for the Localization of Trees in Forests from TLS Data. Remote Sensing. 2021; 13(3):338.

Chicago/Turabian Style

Shaobo Xia; Dong Chen; Jiju Peethambaran; Pu Wang; Sheng Xu. 2021. "Point Cloud Inversion: A Novel Approach for the Localization of Trees in Forests from TLS Data." Remote Sensing 13, no. 3: 338.

Journal article
Published: 11 January 2021 in IEEE Transactions on Geoscience and Remote Sensing

Metro subway systems with underground tunnels form the backbone of urban transportation, and therefore accurate monitoring and maintenance of such systems are essential for the hassle-free daily commute of billions of people. Though 3-D models of tunnels are widely used for deformation monitoring, existing model-based tunnel monitoring systems rely on coarse geometric models and hence fail to capture complete tunnel health information. We present a two-stage algorithm to create high-fidelity geometric models of tunnel lining from Terrestrial Laser Scanning (TLS) point clouds. Tunnel geometry, defined at the detailed block entity level, is constructed through a data-driven block segmentation algorithm and a model-driven assembly technique. In our approach, the 3-D tunnel block segmentation problem is translated into a bolt and lining joint recognition problem on 2-D images unfolded from the 3-D scans. The segmented 3-D blocks are matched with a set of predefined 3-D templates from a primitive library via a constrained total least squares matching method, and the matched 3-D templates are assembled to create the final watertight tunnel model. The proposed tunnel modeling method has been comprehensively evaluated on the Changzhou, Nanjing, and Wuhan tunnel data sets in terms of outliers, missing data, point density, topological representation, robustness, and geometric accuracy. The experiments on the Nanjing and Changzhou metro tunnels show that the geometric model fitting incurs an error of only 7 mm, which is nearly consistent with the mean point density of 6 mm of these two data sets. Experimental results validate the advantages and potential of the proposed tunnel modeling method.

ACS Style

Zhen Cao; Dong Chen; Jiju Peethambaran; Zhenxin Zhang; Shaobo Xia; Liqiang Zhang. Tunnel Reconstruction With Block Level Precision by Combining Data-Driven Segmentation and Model-Driven Assembly. IEEE Transactions on Geoscience and Remote Sensing 2021, PP, 1-20.

AMA Style

Zhen Cao, Dong Chen, Jiju Peethambaran, Zhenxin Zhang, Shaobo Xia, Liqiang Zhang. Tunnel Reconstruction With Block Level Precision by Combining Data-Driven Segmentation and Model-Driven Assembly. IEEE Transactions on Geoscience and Remote Sensing. 2021; PP(99):1-20.

Chicago/Turabian Style

Zhen Cao; Dong Chen; Jiju Peethambaran; Zhenxin Zhang; Shaobo Xia; Liqiang Zhang. 2021. "Tunnel Reconstruction With Block Level Precision by Combining Data-Driven Segmentation and Model-Driven Assembly." IEEE Transactions on Geoscience and Remote Sensing PP, no. 99: 1-20.

Journal article
Published: 09 June 2020 in IEEE Transactions on Geoscience and Remote Sensing

Classification of airborne laser scanning (ALS) point clouds is needed for digital cities and 3-D modeling. To efficiently recognize objects in ALS point clouds, we propose a novel hierarchical aggregated deep feature representation method, which adequately employs the spatial association of multilevel structures and deep feature discrimination. In our method, a 3-D deep learning model is constructed to represent the discriminative feature of each point cluster in a hierarchical structure by decreasing the within-class distance and increasing the between-class distance. Our method aggregates the discriminative deep features at different levels into a hierarchical aggregated deep feature that considers both the spatial hierarchy and feature distinctiveness. Lastly, we build a multichannel 1-D convolutional neural network to classify the unknown points. Our tests demonstrate that the proposed hierarchical aggregated deep feature method enhances point cloud classification results, and comparisons with seven state-of-the-art methods verify its superior performance.

ACS Style

Zhenxin Zhang; Lan Sun; Ruofei Zhong; Dong Chen; Liqiang Zhang; Xiaojuan Li; Qiang Wang; Siyun Chen. Hierarchical Aggregated Deep Features for ALS Point Cloud Classification. IEEE Transactions on Geoscience and Remote Sensing 2020, 59, 1686-1699.

AMA Style

Zhenxin Zhang, Lan Sun, Ruofei Zhong, Dong Chen, Liqiang Zhang, Xiaojuan Li, Qiang Wang, Siyun Chen. Hierarchical Aggregated Deep Features for ALS Point Cloud Classification. IEEE Transactions on Geoscience and Remote Sensing. 2020; 59(2):1686-1699.

Chicago/Turabian Style

Zhenxin Zhang; Lan Sun; Ruofei Zhong; Dong Chen; Liqiang Zhang; Xiaojuan Li; Qiang Wang; Siyun Chen. 2020. "Hierarchical Aggregated Deep Features for ALS Point Cloud Classification." IEEE Transactions on Geoscience and Remote Sensing 59, no. 2: 1686-1699.

Journal article
Published: 08 June 2020 in IEEE Transactions on Geoscience and Remote Sensing

Airborne light detection and ranging (LiDAR) data are widely applied in building reconstruction, with studies reporting success on typical buildings. However, the reconstruction of curved buildings remains an open research problem. To this end, we propose a new framework for curved building reconstruction via assembling and deforming geometric primitives. The input LiDAR point clouds are first converted into contours from which individual buildings are identified. After recognizing geometric units (primitives) from building contours, we obtain initial models by matching basic geometric primitives to these units. To polish the assembled models, we employ a warping field for model refinement. Specifically, an embedded deformation (ED) graph is constructed by downsampling the initial model. Then, the point-to-model displacements are minimized by adjusting node parameters in the ED graph according to our objective function. The presented framework is validated on several highly curved buildings collected by various LiDAR systems in different cities. The experimental results, as well as an accuracy comparison, demonstrate the advantage and effectiveness of our method, which amounts to an efficient reconstruction scheme. Moreover, we show that the primitive-based framework reduces data storage to 10%-20% of that of classical mesh models.

ACS Style

Jingwei Song; Shaobo Xia; Jun Wang; Dong Chen. Curved Buildings Reconstruction From Airborne LiDAR Data by Matching and Deforming Geometric Primitives. IEEE Transactions on Geoscience and Remote Sensing 2020, 59, 1660-1674.

AMA Style

Jingwei Song, Shaobo Xia, Jun Wang, Dong Chen. Curved Buildings Reconstruction From Airborne LiDAR Data by Matching and Deforming Geometric Primitives. IEEE Transactions on Geoscience and Remote Sensing. 2020; 59(2):1660-1674.

Chicago/Turabian Style

Jingwei Song; Shaobo Xia; Jun Wang; Dong Chen. 2020. "Curved Buildings Reconstruction From Airborne LiDAR Data by Matching and Deforming Geometric Primitives." IEEE Transactions on Geoscience and Remote Sensing 59, no. 2: 1660-1674.

Review
Published: 31 January 2020 in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Geometric primitives that consist of a group of discrete points may be viewed as one kind of abstraction and representation of lidar data at the entity level. In recent years, many efforts from different scientific communities, such as photogrammetry, computer vision, and computer graphics, have been made toward geometric primitive detection, regularization, and in-depth applications. The most recent lidar-based surveys focus only on reconstruction, object segmentation, and recognition, as well as data processing techniques based on a specific platform such as mobile LiDAR. In this paper, however, lidar point clouds are understood from a new perspective, i.e., geometric primitives embedded in versatile objects in the physical world. We categorize geometric primitives into two classes: shape primitives, e.g., lines, surfaces, and volumetric shapes, and structure primitives, represented by skeletons and edges. Interpretations of geometric primitives from multiple disciplines convey their significance, the latest processing techniques regarding them, and their potential in the context of lidar point clouds. To this end, applications of these primitives are reviewed with an emphasis on object extraction and reconstruction to clearly show the significance of this survey. Next, we survey and compare methods for geometric primitive extraction, and then survey primitive regularization methods that add real-world geometric constraints to detected primitives. Finally, we summarize the problems and challenges and describe possible futures for primitive extraction methods that can achieve globally optimal results efficiently, even with disorganized, uneven, noisy, incomplete, and large-scale lidar point clouds.

ACS Style

Shaobo Xia; Dong Chen; Ruisheng Wang; Jonathan Li; Xinchang Zhang. Geometric Primitives in LiDAR Point Clouds: A Review. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2020, 13, 685-707.

AMA Style

Shaobo Xia, Dong Chen, Ruisheng Wang, Jonathan Li, Xinchang Zhang. Geometric Primitives in LiDAR Point Clouds: A Review. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2020; 13(99):685-707.

Chicago/Turabian Style

Shaobo Xia; Dong Chen; Ruisheng Wang; Jonathan Li; Xinchang Zhang. 2020. "Geometric Primitives in LiDAR Point Clouds: A Review." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13, no. 99: 685-707.

Journal article
Published: 01 January 2020 in IEEE Access

Large-scale point clouds scanned by light detection and ranging (lidar) sensors provide detailed geometric characteristics of indoor and outdoor scenes through 3D structural data colored by intensity/reflectance information. The semantic segmentation of large-scale point clouds is a crucial step for an in-depth understanding of complex scenes. Although a large number of point cloud semantic segmentation algorithms have been proposed in recent years, these methods are still far from satisfactory in terms of precision and accuracy on large-scale point clouds. For machine learning (ML) and deep learning (DL) methodologies, semantic segmentation is largely influenced by the quality of the training sets and of the methods themselves. Therefore, we construct a new point cloud dataset, the CSPC-Dataset (Complex Scene Point Cloud Dataset), for large-scale scene semantic segmentation. The CSPC-Dataset point clouds are acquired by a wearable laser mobile mapping robot. The dataset covers five complex urban and rural scenes and mainly includes six types of objects, i.e., ground, car, building, vegetation, bridge, and pole. It provides large-scale outdoor scenes whose advantages include more complete scenes, relatively uniform point density, diverse and complex objects, and high discrepancy between different scenes. Based on the CSPC-Dataset, we construct a new benchmark, which includes approximately 68 million points with explicit semantic labels. To extend the dataset to a wide range of applications, this paper provides the semantic segmentation results and comparative analysis of 7 baseline methods on the CSPC-Dataset. In the experiments, three groups of experiments are conducted for benchmarking, which offers an effective way to compare different point-labeling algorithms.
The labeling results show that the highest Intersection over Union (IoU) values of pole, ground, building, car, vegetation, and bridge across all benchmarks are 36.0%, 97.8%, 93.7%, 65.6%, 92.0%, and 69.6%, respectively.
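
For reference, the per-class IoU reported above is the ratio of the intersection to the union of predicted and ground-truth point sets for that class; a minimal sketch with hypothetical labels:

```python
import numpy as np

def per_class_iou(pred, gt, cls):
    """Intersection over Union of one class label between predicted and
    ground-truth per-point labels."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    inter = np.sum((pred == cls) & (gt == cls))
    union = np.sum((pred == cls) | (gt == cls))
    return inter / union if union else 0.0

# four points, class 2: 2 points agree, 3 points carry the label in either map
iou = per_class_iou([1, 1, 2, 2], [1, 2, 2, 2], 2)
```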

ACS Style

Guofeng Tong; Yong Li; Dong Chen; Qi Sun; Wei Cao; Guiqiu Xiang. CSPC-Dataset: New LiDAR Point Cloud Dataset and Benchmark for Large-Scale Scene Semantic Segmentation. IEEE Access 2020, 8, 87695-87718.

AMA Style

Guofeng Tong, Yong Li, Dong Chen, Qi Sun, Wei Cao, Guiqiu Xiang. CSPC-Dataset: New LiDAR Point Cloud Dataset and Benchmark for Large-Scale Scene Semantic Segmentation. IEEE Access. 2020; 8(99):87695-87718.

Chicago/Turabian Style

Guofeng Tong; Yong Li; Dong Chen; Qi Sun; Wei Cao; Guiqiu Xiang. 2020. "CSPC-Dataset: New LiDAR Point Cloud Dataset and Benchmark for Large-Scale Scene Semantic Segmentation." IEEE Access 8, no. 99: 87695-87718.

Journal article
Published: 01 January 2020 in Remote Sensing

In outdoor Light Detection and Ranging (lidar) point cloud classification, finding discriminative features for point cloud perception and scene understanding is one of the great challenges. The features derived from defect-laden (i.e., noisy, outlier-ridden, occluded, and irregular) raw outdoor LiDAR scans usually contain redundant and irrelevant information, which adversely affects the accuracy of point semantic labeling. Moreover, point cloud features from different views can express different attributes of the same point, and simply concatenating these features cannot guarantee the applicability and effectiveness of the fused features. To solve these problems and achieve outdoor point cloud classification with fewer training samples, we propose a novel joint learning framework for multi-view features and classifiers. The proposed framework uses label consistency and the local distribution consistency of multi-space constraints for multi-view point cloud feature extraction and classification. In the framework, manifold learning carries out subspace joint learning of multi-view features by introducing three kinds of constraints: local distribution consistency of feature space and position space, label consistency between multi-view predicted labels and ground truth, and label consistency among multi-view predicted labels. The proposed model can be trained well with fewer training points, and an iterative algorithm is used to solve the joint optimization of multi-view feature projection matrices and linear classifiers. Subsequently, the multi-view features are fused and used effectively for point cloud classification. We evaluate the proposed method on five different point cloud scenes, and the experimental results demonstrate that the classification performance of the proposed method is on par with or outperforms the compared algorithms.

ACS Style

Guofeng Tong; Yong Li; Dong Chen; Shaobo Xia; Jiju Peethambaran; Yuebin Wang. Multi-View Features Joint Learning with Label and Local Distribution Consistency for Point Cloud Classification. Remote Sensing 2020, 12, 135.

AMA Style

Guofeng Tong, Yong Li, Dong Chen, Shaobo Xia, Jiju Peethambaran, Yuebin Wang. Multi-View Features Joint Learning with Label and Local Distribution Consistency for Point Cloud Classification. Remote Sensing. 2020; 12(1):135.

Chicago/Turabian Style

Guofeng Tong; Yong Li; Dong Chen; Shaobo Xia; Jiju Peethambaran; Yuebin Wang. 2020. "Multi-View Features Joint Learning with Label and Local Distribution Consistency for Point Cloud Classification." Remote Sensing 12, no. 1: 135.

Journal article
Published: 29 November 2019 in Remote Sensing

Accurate and effective classification of lidar point clouds with discriminative feature expression is a challenging task for scene understanding. In order to improve the accuracy and robustness of point cloud classification based on single-point features, we propose a novel point set multi-level aggregation feature extraction and fusion method based on multi-scale max pooling and latent Dirichlet allocation (LDA). To this end, in the hierarchical point set feature extraction, point sets of different levels and sizes are first adaptively generated through multi-level clustering. Then, more effective sparse representation is implemented by locality-constrained linear coding (LLC) based on single-point features, which contributes to the extraction of discriminative individual point set features. Next, the local point set features are extracted by combining the max pooling method with the multi-scale pyramid structure constructed from the point coordinates within each point set. The global and local features of the point sets are effectively expressed by fusing the multi-scale max pooling features with the global features constructed by the point set LLC-LDA model. The point clouds are then classified using the point set multi-level aggregation features. Our experiments on two airborne laser scanning (ALS) scene point clouds, a mobile laser scanning (MLS) scene point cloud, and a terrestrial laser scanning (TLS) scene point cloud demonstrate the effectiveness of the proposed point set multi-level aggregation features for point cloud classification, and the proposed method outperforms the related and compared algorithms.
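
The channel-wise max pooling used for point set aggregation can be sketched as follows (illustrative feature values; the multi-scale pyramid and LLC coding are omitted):

```python
import numpy as np

def max_pool_features(point_feats):
    """Aggregate per-point feature vectors of one point set into a single
    set-level descriptor by channel-wise max pooling."""
    return np.asarray(point_feats, dtype=float).max(axis=0)

# three points with 2-D features -> one pooled 2-D set descriptor
pooled = max_pool_features([[0.1, 0.9], [0.5, 0.2], [0.3, 0.4]])
```

In the multi-scale variant, pooling is repeated over nested spatial cells of the pyramid and the pooled vectors are concatenated.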

ACS Style

Guofeng Tong; Yong Li; Weilong Zhang; Dong Chen; Jingchao Yang. Point Set Multi-Level Aggregation Feature Extraction Based on Multi-Scale Max Pooling and LDA for Point Cloud Classification. Remote Sensing 2019, 11, 2846.

AMA Style

Guofeng Tong, Yong Li, Weilong Zhang, Dong Chen, Jingchao Yang. Point Set Multi-Level Aggregation Feature Extraction Based on Multi-Scale Max Pooling and LDA for Point Cloud Classification. Remote Sensing. 2019; 11(23):2846.

Chicago/Turabian Style

Guofeng Tong; Yong Li; Weilong Zhang; Dong Chen; Jingchao Yang. 2019. "Point Set Multi-Level Aggregation Feature Extraction Based on Multi-Scale Max Pooling and LDA for Point Cloud Classification." Remote Sensing 11, no. 23: 2846.

Articles
Published: 28 August 2019 in Remote Sensing Letters

Modelling accurate ground surfaces in urban areas is important for surveying and mapping. Breaklines along roads are critical for both digital elevation models and high-precision maps. This article presents a new breakline-preserving ground interpolation method for point clouds acquired by mobile laser scanning (MLS). The proposed method needs only point coordinates as input. It first initializes unknown regions by nearest-neighbour interpolation; matched patches in known regions are then found along edges. Next, Poisson interpolation is utilized to improve the elevation accuracy. An edge-guided patch regularization method is applied before patch blending to remove noise and improve gradient accuracy. The proposed method was tested on two datasets acquired by different MLS systems, and this paper quantitatively evaluates the proposed method as well as an existing solution on two samples with complete points. The results demonstrate that the proposed method outperforms the existing method in terms of visual coherence and mean absolute differences. Moreover, the edge information contained in point clouds has proven useful for ground interpolation.

ACS Style

Shaobo Xia; Dong Chen; Ruisheng Wang. A breakline-preserving ground interpolation method for MLS data. Remote Sensing Letters 2019, 10, 1201-1210.

AMA Style

Shaobo Xia, Dong Chen, Ruisheng Wang. A breakline-preserving ground interpolation method for MLS data. Remote Sensing Letters. 2019; 10(12):1201-1210.

Chicago/Turabian Style

Shaobo Xia; Dong Chen; Ruisheng Wang. 2019. "A breakline-preserving ground interpolation method for MLS data." Remote Sensing Letters 10, no. 12: 1201-1210.

Journal article
Published: 19 July 2019 in Applied Sciences

The automatic modeling of as-built building interiors, known as indoor building reconstruction, is gaining increasing attention because of its widespread applications. With the development of sensors that acquire high-quality point clouds, a new modeling scheme called scan-to-BIM (building information modeling) has emerged as well. However, the traditional scan-to-BIM process is tedious and labor-intensive, and most existing automatic indoor building reconstruction solutions fit only specific data or lack detailed model representation. In this paper, we propose a layer-wise method, on the basis of 3D planar primitives, to create 2D floor plans and 3D building models. It can deal with different types of point clouds and retains many structural details such as protruding structures, complicated ceilings, and fine corners. The experimental results indicate the effectiveness of the proposed method and its robustness against noise and sparse data.

ACS Style

Lei Xie; Ruisheng Wang; Zutao Ming; Dong Chen. A Layer-Wise Strategy for Indoor As-Built Modeling Using Point Clouds. Applied Sciences 2019, 9, 2904.

AMA Style

Lei Xie, Ruisheng Wang, Zutao Ming, Dong Chen. A Layer-Wise Strategy for Indoor As-Built Modeling Using Point Clouds. Applied Sciences. 2019; 9(14):2904.

Chicago/Turabian Style

Lei Xie; Ruisheng Wang; Zutao Ming; Dong Chen. 2019. "A Layer-Wise Strategy for Indoor As-Built Modeling Using Point Clouds." Applied Sciences 9, no. 14: 2904.

Journal article
Published: 27 May 2019 in Remote Sensing

This paper presents a novel framework to achieve 3D semantic labeling of objects (e.g., trees, buildings, and vehicles) from airborne laser-scanning point clouds. To this end, we propose a framework which consists of hierarchical clustering and higher-order conditional random fields (CRF) labeling. In the hierarchical clustering, the raw point clouds are over-segmented into a set of fine-grained clusters by integrating point density clustering and the classic K-means clustering algorithm, followed by the proposed probability density clustering algorithm. Through this process, we not only obtain more homogeneous clusters of more uniform size with semantic consistency, but also implicitly maintain the topological relationships of each cluster’s neighborhood by turning the problem of topology maintenance into a clustering problem based on the proposed probability density clustering algorithm. Subsequently, the fine-grained clusters and their topological context are fed into the CRF labeling step, from which the fine-grained clusters’ semantic labels are learned and determined by solving a multi-label energy minimization formulation that simultaneously considers unary, pairwise, and higher-order potentials. Our experiments on classifying urban and residential scenes demonstrate that the proposed approach reaches 88.5% and 86.1% “mF1”, estimated by averaging the F1-scores over all classes. We show that the proposed method outperforms five other state-of-the-art methods. In addition, we demonstrate the effectiveness of the proposed energy terms using an “ablation study” strategy.
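As a minimal illustration of one over-segmentation ingredient, the classic K-means step can be sketched as follows. This is a toy assumption for exposition, not the paper's combined density/K-means algorithm:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Classic Lloyd's K-means on tuples of coordinates."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assignment step: each point joins its nearest center.
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Update step: each center moves to its cluster mean
        # (empty clusters keep their previous center).
        centers = [[sum(c) / len(cl) for c in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters
```

In the paper's setting, such clusters would additionally be constrained by point density so that cluster sizes adapt to the scene content.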

ACS Style

Yong Li; Dong Chen; Xiance Du; Shaobo Xia; Yuliang Wang; Sheng Xu; Qiang Yang. Higher-Order Conditional Random Fields-Based 3D Semantic Labeling of Airborne Laser-Scanning Point Clouds. Remote Sensing 2019, 11, 1248.

AMA Style

Yong Li, Dong Chen, Xiance Du, Shaobo Xia, Yuliang Wang, Sheng Xu, Qiang Yang. Higher-Order Conditional Random Fields-Based 3D Semantic Labeling of Airborne Laser-Scanning Point Clouds. Remote Sensing. 2019; 11(10):1248.

Chicago/Turabian Style

Yong Li; Dong Chen; Xiance Du; Shaobo Xia; Yuliang Wang; Sheng Xu; Qiang Yang. 2019. "Higher-Order Conditional Random Fields-Based 3D Semantic Labeling of Airborne Laser-Scanning Point Clouds." Remote Sensing 11, no. 10: 1248.

Journal article
Published: 02 May 2019 in IEEE Geoscience and Remote Sensing Letters

Due to errors in sensors and positioning, mismatches exist between different phases of mobile laser scanning point clouds, which impede point cloud applications such as change detection and deformation monitoring. To rectify such mismatches, we designed a 3-D deep feature construction method for point cloud registration. The proposed method combines two 3-D convolutional neural networks into a uniform deep learning model to extract 3-D deep features. First, corresponding points and noncorresponding points are used to train the deep learning model to minimize the distance between corresponding points' features and maximize the distance between features of noncorresponding points. Second, in the test phase, the 3-D deep feature of each keypoint is extracted by the trained deep learning model and used to determine the corresponding points with a k-dimensional tree and the random sample consensus (RANSAC) algorithm. Finally, a transformation matrix is calculated based on the corresponding points and applied to point cloud registration. The experimental results illustrate that the proposed method using 3-D deep features is more efficient at corresponding point search than three representative existing methods, and it also improves registration accuracy.
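The final alignment step, estimating a rigid transformation from RANSAC-selected correspondences, can be illustrated in 2D, where the least-squares rotation has a closed form. This is a simplified assumption for exposition; the paper works in 3D with learned features.

```python
import math

def rigid_transform_2d(src, dst):
    """Least-squares rotation angle and translation mapping src onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Cross- and dot-product sums of the centred pairs give the angle
    # (the 2D analogue of the SVD-based Kabsch solution).
    s_cross = s_dot = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay, bx, by = x - csx, y - csy, u - cdx, v - cdy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    # Translation aligns the rotated source centroid with the target centroid.
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)
```

Given three correspondences related by a 90-degree rotation plus a shift, the function recovers exactly that angle and shift.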

ACS Style

Zhenxin Zhang; Lan Sun; Ruofei Zhong; Dong Chen; Zhihua Xu; Cheng Wang; Cheng-Zhi Qin; Haili Sun; Roujing Li. 3-D Deep Feature Construction for Mobile Laser Scanning Point Cloud Registration. IEEE Geoscience and Remote Sensing Letters 2019, 16, 1904-1908.

AMA Style

Zhenxin Zhang, Lan Sun, Ruofei Zhong, Dong Chen, Zhihua Xu, Cheng Wang, Cheng-Zhi Qin, Haili Sun, Roujing Li. 3-D Deep Feature Construction for Mobile Laser Scanning Point Cloud Registration. IEEE Geoscience and Remote Sensing Letters. 2019; 16(12):1904-1908.

Chicago/Turabian Style

Zhenxin Zhang; Lan Sun; Ruofei Zhong; Dong Chen; Zhihua Xu; Cheng Wang; Cheng-Zhi Qin; Haili Sun; Roujing Li. 2019. "3-D Deep Feature Construction for Mobile Laser Scanning Point Cloud Registration." IEEE Geoscience and Remote Sensing Letters 16, no. 12: 1904-1908.

Journal article
Published: 09 February 2019 in Remote Sensing

Airborne laser scanning (ALS) point cloud classification is challenging due to factors including complex scene structure, varying point densities, surface morphology, and the number of ground objects. A point cloud classification method is presented in this paper, based on content-sensitive multilevel objects (point clusters), in consideration of the density distribution of ground objects. The space projection method is first used to convert the three-dimensional point cloud into a two-dimensional (2D) image. The image is then mapped to the 2D manifold space, and restricted centroidal Voronoi tessellation is built for initial segmentation of content-sensitive point clusters. Thus, the segmentation results take the entity content (density distribution) into account, and the initial classification unit is adapted to the density of ground objects. The normalized cut is then used to segment the initial point clusters to construct content-sensitive multilevel point clusters. Following this, the point-based hierarchical features of each point cluster are extracted, and the multilevel point-cluster feature is constructed by sparse coding and latent Dirichlet allocation models. Finally, the hierarchical classification framework is created based on multilevel point-cluster features, and the AdaBoost classifiers at each level are trained. The recognition results of different levels are combined to effectively improve the classification accuracy of the ALS point cloud in the testing process. Two scenes are used to test the method experimentally, and it is compared with three other state-of-the-art techniques.

ACS Style

Zongxia Xu; Zhenxin Zhang; Ruofei Zhong; Dong Chen; Taochun Sun; Xin Deng; Zhen Li; Cheng-Zhi Qin. Content-Sensitive Multilevel Point Cluster Construction for ALS Point Cloud Classification. Remote Sensing 2019, 11, 342.

AMA Style

Zongxia Xu, Zhenxin Zhang, Ruofei Zhong, Dong Chen, Taochun Sun, Xin Deng, Zhen Li, Cheng-Zhi Qin. Content-Sensitive Multilevel Point Cluster Construction for ALS Point Cloud Classification. Remote Sensing. 2019; 11(3):342.

Chicago/Turabian Style

Zongxia Xu; Zhenxin Zhang; Ruofei Zhong; Dong Chen; Taochun Sun; Xin Deng; Zhen Li; Cheng-Zhi Qin. 2019. "Content-Sensitive Multilevel Point Cluster Construction for ALS Point Cloud Classification." Remote Sensing 11, no. 3: 342.

Journal article
Published: 01 February 2019 in Remote Sensing

This paper presents a novel framework to extract metro tunnel cross sections (profiles) from terrestrial laser scanning point clouds. The framework consists of two steps: tunnel central axis extraction and cross section determination. In tunnel central axis extraction, we propose a slice-based method to obtain an initial central axis, which is further divided into linear and nonlinear circular segments by an enhanced Random Sample Consensus (RANSAC) tunnel axis segmentation algorithm. This algorithm transforms the problem of hybrid linear and nonlinear segment extraction into a sole segmentation of linear elements defined in the tangent space rather than the raw data space, significantly simplifying the tunnel axis segmentation. The extracted axis segments are then provided as input to the cross section determination step, which generates coarse cross-sectional points by intersecting a series of straight lines that rotate orthogonally around the tunnel axis with their locally fitted quadric surface, i.e., a cylindrical surface. These generated profile points are further refined and densified by solving a constrained nonlinear least squares problem. Our experiments on a Nanjing metro tunnel show that the cross-sectional fitting error is only 1.69 mm. Compared with the designed radius of the metro tunnel, the RMSE (Root Mean Square Error) of the extracted cross sections’ radii is only 1.60 mm. We also tested our algorithm on another metro tunnel in Shanghai; the RMSE of the radii is only 4.60 mm, superior to the 6.00 mm of a state-of-the-art method. Apart from the accurate geometry, our approach maintains the correct topology among cross sections, thereby guaranteeing the production of a geometric tunnel model without crack defects. Moreover, we show that our algorithm is insensitive to missing data and point density.
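The radius evaluation above can be illustrated with an algebraic (Kåsa) least-squares circle fit, which recovers a cross section's centre and radius from its 2D profile points. This sketch is an illustrative assumption, not the constrained nonlinear solver the paper uses.

```python
import math

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: returns (cx, cy, r)."""
    # Normal equations for x^2 + y^2 + D*x + E*y + F = 0, linear in (D, E, F).
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * rhs
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    z = [0.0] * 3
    for r in (2, 1, 0):
        z[r] = (b[r] - sum(A[r][j] * z[j] for j in range(r + 1, 3))) / A[r][r]
    D, E, F = z
    cx, cy = -D / 2, -E / 2
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)
```

Comparing each fitted radius against the designed radius and aggregating gives the RMSE figures reported in the abstract.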

ACS Style

Zhen Cao; Dong Chen; Yufeng Shi; Zhenxin Zhang; Fengxiang Jin; Ting Yun; Sheng Xu; Zhizhong Kang; Liqiang Zhang. A Flexible Architecture for Extracting Metro Tunnel Cross Sections from Terrestrial Laser Scanning Point Clouds. Remote Sensing 2019, 11, 297.

AMA Style

Zhen Cao, Dong Chen, Yufeng Shi, Zhenxin Zhang, Fengxiang Jin, Ting Yun, Sheng Xu, Zhizhong Kang, Liqiang Zhang. A Flexible Architecture for Extracting Metro Tunnel Cross Sections from Terrestrial Laser Scanning Point Clouds. Remote Sensing. 2019; 11(3):297.

Chicago/Turabian Style

Zhen Cao; Dong Chen; Yufeng Shi; Zhenxin Zhang; Fengxiang Jin; Ting Yun; Sheng Xu; Zhizhong Kang; Liqiang Zhang. 2019. "A Flexible Architecture for Extracting Metro Tunnel Cross Sections from Terrestrial Laser Scanning Point Clouds." Remote Sensing 11, no. 3: 297.

Journal article
Published: 02 November 2018 in Remote Sensing

Accurate acquisition of forest structural parameters is essential for the parameterization of forest growth models and the understanding of forest ecosystems, and is also crucial for forest inventories and sustainable forest management. In this study, simultaneously acquired airborne full-waveform (FWF) LiDAR and hyperspectral data were used to predict forest structural parameters in subtropical forests of southeast China. The pulse amplitude and waveform shape of the airborne FWF LiDAR data were calibrated using a physical process-driven and a voxel-based approach, respectively. Different suites of FWF LiDAR and hyperspectral metrics, i.e., point cloud (derived from LiDAR waveforms) metrics (DPC), full-waveform (geometric and radiometric features) metrics (FW) and hyperspectral (original reflectance bands, vegetation indices and statistical indices) metrics (HS), were extracted and assessed using correlation analysis and principal component analysis (PCA). The selected metrics of DPC, FW and HS were used to fit regression models individually and in combination to predict diameter at breast height (DBH), Lorey’s mean height (HL), stem number (N), basal area (G), volume (V) and above-ground biomass (AGB), and the capability of the predictive models and the synergetic effects of the metrics were assessed using leave-one-out cross validation.
The results showed that: among the metrics selected from the three groups divided by the PCA, twelve DPC, eight FW and ten HS metrics were highly correlated with the first and second principal components (r > 0.7); most of the metrics selected from DPC, FW and HS had weak relationships with each other (r < 0.7); the prediction of HL had a relatively high accuracy (Adjusted-R2 = 0.88, relative RMSE = 10.68%), followed by the prediction of AGB (Adjusted-R2 = 0.84, relative RMSE = 15.14%), whereas the prediction of V had a relatively low accuracy (Adjusted-R2 = 0.81, relative RMSE = 16.37%); and the models including only DPC were able to predict forest structural parameters with relatively high accuracies (Adjusted-R2 = 0.52–0.81, relative RMSE = 15.70–40.87%), whereas the use of DPC and FW together resulted in higher accuracies (Adjusted-R2 = 0.62–0.87, relative RMSE = 11.01–31.30%). Moreover, the integration of DPC, FW and HS further improved the accuracy of forest structural parameter prediction (Adjusted-R2 = 0.68–0.88, relative RMSE = 10.68–28.67%).
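The leave-one-out cross-validation used to assess the predictive models can be sketched with a simple one-predictor regression. This is an illustrative assumption; the paper fits multi-metric models.

```python
import math

def simple_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (b, a)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

def loocv_rel_rmse(xs, ys):
    """Leave-one-out CV: refit without each sample, predict it, and report
    the relative RMSE (percent of the mean response), as in the abstract."""
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        b, a = simple_fit(tx, ty)
        errs.append((a + b * xs[i] - ys[i]) ** 2)
    rmse = math.sqrt(sum(errs) / len(errs))
    return 100 * rmse / (sum(ys) / len(ys))
```

On perfectly linear data every held-out sample is predicted exactly, so the relative RMSE is zero; noisy data yields the kind of percentages reported above.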

ACS Style

Xin Shen; Lin Cao; Dong Chen; Yuan Sun; Guibin Wang; Honghua Ruan. Prediction of Forest Structural Parameters Using Airborne Full-Waveform LiDAR and Hyperspectral Data in Subtropical Forests. Remote Sensing 2018, 10, 1729.

AMA Style

Xin Shen, Lin Cao, Dong Chen, Yuan Sun, Guibin Wang, Honghua Ruan. Prediction of Forest Structural Parameters Using Airborne Full-Waveform LiDAR and Hyperspectral Data in Subtropical Forests. Remote Sensing. 2018; 10(11):1729.

Chicago/Turabian Style

Xin Shen; Lin Cao; Dong Chen; Yuan Sun; Guibin Wang; Honghua Ruan. 2018. "Prediction of Forest Structural Parameters Using Airborne Full-Waveform LiDAR and Hyperspectral Data in Subtropical Forests." Remote Sensing 10, no. 11: 1729.

Journal article
Published: 10 September 2018 in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

In this paper, we propose a hierarchical building detection framework based on a deep learning model, which focuses on accurately detecting buildings from remote sensing images. To this end, we first construct a generation model of multilevel training samples using the Gaussian pyramid technique to learn the features of building objects at different scales and spatial resolutions. Then, building region proposal networks are put forward to quickly extract candidate building regions, thereby increasing the efficiency of building object detection. Based on the candidate building regions, we establish the multilevel building detection model using convolutional neural networks (CNNs), from which the generic image features of each building region proposal are calculated. Finally, the obtained features are provided as inputs for training the CNN model, and the learned model is further applied to test images for the detection of unknown buildings. Various experiments using Datasets I and II (in Section V-A) show that the proposed framework increases the mean average precision of building detection by 3.63%, 3.85%, and 3.77% compared with the state-of-the-art method, i.e., Method IV. In addition, the proposed method is robust to buildings with different spatial textures and types.
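The Gaussian-pyramid idea behind the multilevel training samples can be sketched with a simplified pyramid that downsamples by 2x2 block averaging, a stand-in assumption for proper Gaussian smoothing before subsampling.

```python
def pyramid(img, levels):
    """Build a multi-level pyramid from a 2D image (list of rows).

    Each level halves both dimensions by averaging 2x2 blocks; dimensions
    are assumed even at every level. A real Gaussian pyramid would convolve
    with a Gaussian kernel before subsampling.
    """
    out = [img]
    for _ in range(levels - 1):
        img = [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
                 + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
                for c in range(len(img[0]) // 2)]
               for r in range(len(img) // 2)]
        out.append(img)
    return out
```

Sampling training patches from every level of such a pyramid exposes the detector to building objects at multiple scales and spatial resolutions.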

ACS Style

Yibo Liu; Zhenxin Zhang; Ruofei Zhong; Dong Chen; Yinghai Ke; Jiju Peethambaran; Chuqun Chen; Lan Sun. Multilevel Building Detection Framework in Remote Sensing Images Based on Convolutional Neural Networks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2018, 11, 3688-3700.

AMA Style

Yibo Liu, Zhenxin Zhang, Ruofei Zhong, Dong Chen, Yinghai Ke, Jiju Peethambaran, Chuqun Chen, Lan Sun. Multilevel Building Detection Framework in Remote Sensing Images Based on Convolutional Neural Networks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2018; 11(10):3688-3700.

Chicago/Turabian Style

Yibo Liu; Zhenxin Zhang; Ruofei Zhong; Dong Chen; Yinghai Ke; Jiju Peethambaran; Chuqun Chen; Lan Sun. 2018. "Multilevel Building Detection Framework in Remote Sensing Images Based on Convolutional Neural Networks." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11, no. 10: 3688-3700.