
Prof. Xiangyun Hu
School of Remote Sensing and Information Engineering, Wuhan University, 129 Luoyu Road, Wuhan, Hubei province, 430079, China

Basic Info

Basic Info is private.

Research Keywords & Expertise

Computer Vision
Feature Extraction
Machine Learning
Pattern Recognition
LiDAR data processing

Fingerprints

Feature Extraction
Computer Vision
Machine Learning

Honors and Awards

The user has no records in this section.


Career Timeline

The user has no records in this section.


Short Biography

The user biography is not available.

Feed

Journal article
Published: 19 May 2021 in ISPRS Journal of Photogrammetry and Remote Sensing

Automatic change detection from remotely sensed imagery is extremely important for many applications, including land use mapping. In recent years, a growing number of researchers have applied capable deep-learning methods to the research on change detection. The majority of deep learning-based change detection methods currently perform pixel-by-pixel classification at the original image scale, but they can hardly avoid the false changes caused by strong parallax effects and projected shadows, without considering the totality of changed objects/regions. In this study, we propose an object-level change detection framework to detect changed geographic entities (such as newly built buildings or changed artificial structures) by paying more attention to the overall characteristics and context association of changed object instances. The detected changed objects are represented as bounding boxes, which are simple, regular, and convenient to use in object feature extraction. In terms of data handling, a special data augmentation method for change detection called Alternative-Mosaic is proposed to effectively accelerate model training and improve model performance. For the model, we propose a one-stage change detection network called dual correlation attention-guided detector (DCA-Det) to detect the changed objects. In particular, we feed the dual-temporal images into a weight-shared backbone network to extract the change features of different scales. The change features on the same scale are further refined, and then the features between different scales are fused by the correlation attention-guided feature fusion neck. Finally, the change detection heads output the prediction results of the changed objects/regions of different scales. Experiments were conducted on public LEVIR building change detection and aerial imagery change detection (AICD) datasets. The quantitative evaluation and visualization results proved the superiority and robustness of our framework. 
Our DCA-Det can obtain state-of-the-art performance on object-level metrics (99.50% AP@IoU=0.50 and 79.72% AP@IoU=0.50:0.05:0.95) on the AICD-2012 dataset.
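The Alternative-Mosaic augmentation is described only at a high level in the abstract. A minimal sketch of a Mosaic-style augmentation for bi-temporal image pairs is shown below: four (t1, t2) patches are stitched into one 2×2 mosaic, with the same layout applied to both epochs so that change annotations stay spatially aligned. Images are represented as 2D lists, and all function names are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of a Mosaic-style augmentation for bi-temporal pairs.

def stitch_2x2(patches):
    """Stitch four equally sized 2D patches (lists of rows) into one mosaic."""
    (a, b), (c, d) = (patches[0], patches[1]), (patches[2], patches[3])
    top = [ra + rb for ra, rb in zip(a, b)]        # left|right, upper half
    bottom = [rc + rd for rc, rd in zip(c, d)]     # left|right, lower half
    return top + bottom

def mosaic_pair(pairs):
    """pairs: four (img_t1, img_t2) tuples -> one stitched (t1, t2) pair.

    The identical layout for both epochs is what keeps the change labels
    of the four source pairs valid on the mosaic.
    """
    t1 = stitch_2x2([p[0] for p in pairs])
    t2 = stitch_2x2([p[1] for p in pairs])
    return t1, t2
```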

ACS Style

Lin Zhang; Xiangyun Hu; Mi Zhang; Zhen Shu; Hao Zhou. Object-level change detection with a dual correlation attention-guided detector. ISPRS Journal of Photogrammetry and Remote Sensing 2021, 177, 147-160.

AMA Style

Lin Zhang, Xiangyun Hu, Mi Zhang, Zhen Shu, Hao Zhou. Object-level change detection with a dual correlation attention-guided detector. ISPRS Journal of Photogrammetry and Remote Sensing. 2021; 177:147-160.

Chicago/Turabian Style

Lin Zhang; Xiangyun Hu; Mi Zhang; Zhen Shu; Hao Zhou. 2021. "Object-level change detection with a dual correlation attention-guided detector." ISPRS Journal of Photogrammetry and Remote Sensing 177: 147-160.

Research article
Published: 10 March 2021 in Remote Sensing Letters

Automatic change detection is an important and difficult task in the field of remote sensing. In this study, a deep Siamese convolutional network based on the fusion of high- and low-level features is proposed for change detection in remote sensing images. Given that low-level features correspond to low-order ones (e.g., texture) that are sensitive to change and that high-level features can accurately reflect image category information (e.g., semantic information), we fuse these features to enhance the abstractness and robustness of the extracted features in the change detection framework. The whole system is end-to-end and does not require any pre- or post-processing. Experimental results on three datasets show that our method is superior to other advanced methods by adding a high- and low-level fusion framework.
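The fusion step the abstract describes can be reduced to a toy example: per-pixel low-level features (e.g., texture responses) and high-level features (e.g., semantic scores) are concatenated into one fused vector before the change decision. This is a deliberately minimal illustration of the idea, not the network's actual fusion layer.

```python
# Toy sketch of high-/low-level feature fusion by concatenation.

def fuse(low, high):
    """Concatenate corresponding low- and high-level feature vectors.

    low, high: lists of per-pixel feature vectors (lists of floats).
    Returns one fused vector per pixel.
    """
    return [lo + hi for lo, hi in zip(low, high)]
```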

ACS Style

Hao Zhou; Mi Zhang; Xiangyun Hu; Kun Li; Jing Sun. A Siamese convolutional neural network with high–low level feature fusion for change detection in remotely sensed images. Remote Sensing Letters 2021, 12, 387-396.

AMA Style

Hao Zhou, Mi Zhang, Xiangyun Hu, Kun Li, Jing Sun. A Siamese convolutional neural network with high–low level feature fusion for change detection in remotely sensed images. Remote Sensing Letters. 2021; 12 (4):387-396.

Chicago/Turabian Style

Hao Zhou; Mi Zhang; Xiangyun Hu; Kun Li; Jing Sun. 2021. "A Siamese convolutional neural network with high–low level feature fusion for change detection in remotely sensed images." Remote Sensing Letters 12, no. 4: 387-396.

Journal article
Published: 05 January 2021 in IEEE Geoscience and Remote Sensing Letters

Terrain scene clustering is a class of unsupervised methods for choosing suitable algorithms or parameters for airborne laser scanning (ALS) point cloud processing. Most existing point cloud clustering methods use hand-crafted features, such as viewpoint feature histogram (VFH), as the input of clustering algorithms. However, few studies on point cloud processing focused on terrain scene clustering via an unsupervised deep neural network. In the present study, we create a data set for terrain scene clustering in ALS point clouds. We also propose DPCC-Net, a deep point cloud clustering network via unsupervised deep learning that jointly learns the parameters of the network and the cluster task of extracted features. DPCC-Net iteratively groups the features extracted by the deep convolution neural network with the k-means algorithm and uses the clustering result as the pseudo label to update the parameters of the network. We apply the proposed DPCC-Net to unsupervised training on a large terrain scene data set. The clustering result of DPCC-Net outperforms those of other typical methods.
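The pseudo-labeling loop that DPCC-Net is described as using (features grouped by k-means, cluster indices reused as labels for the next training round) follows the DeepCluster pattern. The tiny pure-Python k-means below illustrates just that step; it is a sketch, not the paper's implementation.

```python
# Minimal sketch of k-means pseudo-labeling for DeepCluster-style training.

def kmeans(points, centers, iters=10):
    """Lloyd's algorithm on lists of coordinate lists."""
    for _ in range(iters):
        # assignment step: index of the nearest center for every point
        labels = [min(range(len(centers)),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centers[c])))
                  for pt in points]
        # update step: each center moves to the mean of its members
        for c in range(len(centers)):
            member = [pt for pt, l in zip(points, labels) if l == c]
            if member:
                centers[c] = [sum(col) / len(member) for col in zip(*member)]
    return labels, centers

def pseudo_labels(features, k):
    """Cluster extracted features; cluster ids act as pseudo labels."""
    centers = [list(features[i]) for i in range(k)]  # naive init: first k
    labels, _ = kmeans([list(f) for f in features], centers)
    return labels
```

In the full scheme, these labels would supervise the next pass of network training, and clustering would be repeated on the freshly extracted features.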

ACS Style

Jinming Zhang; Xiangyun Hu; Hengming Dai. Unsupervised Learning of ALS Point Clouds for 3-D Terrain Scene Clustering. IEEE Geoscience and Remote Sensing Letters 2021, PP, 1-5.

AMA Style

Jinming Zhang, Xiangyun Hu, Hengming Dai. Unsupervised Learning of ALS Point Clouds for 3-D Terrain Scene Clustering. IEEE Geoscience and Remote Sensing Letters. 2021; PP (99):1-5.

Chicago/Turabian Style

Jinming Zhang; Xiangyun Hu; Hengming Dai. 2021. "Unsupervised Learning of ALS Point Clouds for 3-D Terrain Scene Clustering." IEEE Geoscience and Remote Sensing Letters PP, no. 99: 1-5.

Journal article
Published: 01 March 2020 in Remote Sensing

Automatic extraction of region objects from high-resolution satellite imagery presents a great challenge, because there may be very large variations of the objects in terms of their size, texture, shape, and contextual complexity in the image. To handle these issues, we present a novel, deep-learning-based approach to interactively extract non-artificial region objects, such as water bodies, woodland, farmland, etc., from high-resolution satellite imagery. First, our algorithm transforms user-provided positive and negative clicks or scribbles into guidance maps, which consist of a relevance map modified from Euclidean distance maps, two geodesic distance maps (for positive and negative, respectively), and a sampling map. Then, feature maps are extracted by applying a VGG convolutional neural network pre-trained on the ImageNet dataset to the image X, and they are then upsampled to the resolution of X. Image X, guidance maps, and feature maps are integrated as the input tensor. We feed the proposed attention-guided, multi-scale segmentation neural network (AGMSSeg-Net) with the input tensor above to obtain the mask that assigns a binary label to each pixel. After a post-processing operation based on a fully connected Conditional Random Field (CRF), we extract the selected object boundary from the segmentation result. Experiments were conducted on two typical datasets with diverse region object types from complex scenes. The results demonstrate the effectiveness of the proposed method, and our approach outperforms existing methods for interactive image segmentation.
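The relevance map built from user clicks can be illustrated with a brute-force Euclidean distance map: each pixel stores its distance to the nearest positive click (the paper's geodesic variants would additionally follow image content). This sketch is for clarity only and is not the AGMSSeg-Net preprocessing code.

```python
# Illustrative guidance map: per-pixel distance to the nearest click.
import math

def distance_map(height, width, clicks):
    """clicks: list of (row, col) positive clicks -> 2D distance map."""
    return [[min(math.hypot(r - cr, c - cc) for cr, cc in clicks)
             for c in range(width)]
            for r in range(height)]
```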

ACS Style

Kun Li; Xiangyun Hu; Huiwei Jiang; Zhen Shu; Mi Zhang. Attention-Guided Multi-Scale Segmentation Neural Network for Interactive Extraction of Region Objects from High-Resolution Satellite Imagery. Remote Sensing 2020, 12, 789.

AMA Style

Kun Li, Xiangyun Hu, Huiwei Jiang, Zhen Shu, Mi Zhang. Attention-Guided Multi-Scale Segmentation Neural Network for Interactive Extraction of Region Objects from High-Resolution Satellite Imagery. Remote Sensing. 2020; 12 (5):789.

Chicago/Turabian Style

Kun Li; Xiangyun Hu; Huiwei Jiang; Zhen Shu; Mi Zhang. 2020. "Attention-Guided Multi-Scale Segmentation Neural Network for Interactive Extraction of Region Objects from High-Resolution Satellite Imagery." Remote Sensing 12, no. 5: 789.

Editorial
Published: 07 February 2020 in Remote Sensing

Building extraction from remote sensing data plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications

ACS Style

Mohammad Awrangjeb; Xiangyun Hu; Bisheng Yang; Jiaojiao Tian. Editorial for Special Issue: “Remote Sensing based Building Extraction”. Remote Sensing 2020, 12, 549.

AMA Style

Mohammad Awrangjeb, Xiangyun Hu, Bisheng Yang, Jiaojiao Tian. Editorial for Special Issue: “Remote Sensing based Building Extraction”. Remote Sensing. 2020; 12 (3):549.

Chicago/Turabian Style

Mohammad Awrangjeb; Xiangyun Hu; Bisheng Yang; Jiaojiao Tian. 2020. "Editorial for Special Issue: “Remote Sensing based Building Extraction”." Remote Sensing 12, no. 3: 549.

Journal article
Published: 03 February 2020 in Remote Sensing

In recent years, building change detection has made remarkable progress through the use of deep learning. The core problems of this technique are the need for additional data (e.g., LiDAR or semantic labels) and the difficulty in extracting sufficient features. In this paper, we propose an end-to-end network, called the pyramid feature-based attention-guided Siamese network (PGA-SiamNet), to solve these problems. The network is trained to capture possible changes using a convolutional neural network in a pyramid. It emphasizes the importance of correlation among the input feature pairs by introducing a global co-attention mechanism. Furthermore, we effectively improved the long-range dependencies of the features by utilizing various attention mechanisms and then aggregating the features of the low-level and co-attention level; this helps to obtain richer object information. Finally, we evaluated our method on the publicly available WHU building dataset and a new building dataset (EV-CD). The experiments demonstrate that the proposed method is effective for building change detection and outperforms the existing state-of-the-art methods on high-resolution remote sensing orthoimages in various metrics.

ACS Style

Huiwei Jiang; Xiangyun Hu; Kun Li; Jinming Zhang; Jinqi Gong; Mi Zhang. PGA-SiamNet: Pyramid Feature-Based Attention-Guided Siamese Network for Remote Sensing Orthoimagery Building Change Detection. Remote Sensing 2020, 12, 484.

AMA Style

Huiwei Jiang, Xiangyun Hu, Kun Li, Jinming Zhang, Jinqi Gong, Mi Zhang. PGA-SiamNet: Pyramid Feature-Based Attention-Guided Siamese Network for Remote Sensing Orthoimagery Building Change Detection. Remote Sensing. 2020; 12 (3):484.

Chicago/Turabian Style

Huiwei Jiang; Xiangyun Hu; Kun Li; Jinming Zhang; Jinqi Gong; Mi Zhang. 2020. "PGA-SiamNet: Pyramid Feature-Based Attention-Guided Siamese Network for Remote Sensing Orthoimagery Building Change Detection." Remote Sensing 12, no. 3: 484.

Journal article
Published: 03 January 2020 in Remote Sensing

It is difficult to extract a digital elevation model (DEM) from an airborne laser scanning (ALS) point cloud in a forest area because of the irregular and uneven distribution of ground and vegetation points. Machine learning, especially deep learning methods, has shown powerful feature extraction in accomplishing point cloud classification. However, most of the existing deep learning frameworks, such as PointNet, dynamic graph convolutional neural network (DGCNN), and SparseConvNet, cannot consider the particularity of ALS point clouds. For large-scene laser point clouds, the current data preprocessing methods are mostly based on random sampling, which is not suitable for DEM extraction tasks. In this study, we propose a novel data sampling algorithm for the data preparation of patch-based training and classification named T-Sampling. T-Sampling uses the set of the lowest points in a certain area as basic points with other points added to supplement it, which can guarantee the integrity of the terrain in the sampling area. In the learning part, we propose a new convolution model based on terrain named Tin-EdgeConv that fully considers the spatial relationship between ground and non-ground points when constructing a directed graph. We design a new network based on Tin-EdgeConv to extract local features and use PointNet architecture to extract global context information. Finally, we combine this information effectively with a designed attention fusion module. These aspects are important in achieving high classification accuracy. We evaluate the proposed method by using large-scale data from forest areas. Results show that our method is more accurate than existing algorithms.
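The core of T-Sampling as described (keep the lowest point per area as a terrain-preserving "basic point", then supplement with other points) can be sketched in a few lines. The cell size and the supplement rule below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of lowest-point-per-cell sampling in the spirit of T-Sampling.

def t_sampling(points, cell, target):
    """points: (x, y, z) tuples -> sample that keeps each cell's lowest point.

    The per-cell lowest points ("basic points") preserve the terrain surface;
    remaining points are appended until `target` samples are reached.
    """
    cells = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        if key not in cells or p[2] < cells[key][2]:
            cells[key] = p           # keep the lowest z seen in this cell
    basic = list(cells.values())
    rest = [p for p in points if p not in basic]
    return basic + rest[:max(0, target - len(basic))]
```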

ACS Style

Jinming Zhang; Xiangyun Hu; Hengming Dai; ShenRun Qu. DEM Extraction from ALS Point Clouds in Forest Areas via Graph Convolution Network. Remote Sensing 2020, 12, 178.

AMA Style

Jinming Zhang, Xiangyun Hu, Hengming Dai, ShenRun Qu. DEM Extraction from ALS Point Clouds in Forest Areas via Graph Convolution Network. Remote Sensing. 2020; 12 (1):178.

Chicago/Turabian Style

Jinming Zhang; Xiangyun Hu; Hengming Dai; ShenRun Qu. 2020. "DEM Extraction from ALS Point Clouds in Forest Areas via Graph Convolution Network." Remote Sensing 12, no. 1: 178.

Journal article
Published: 31 March 2019 in Sensors

The identification and monitoring of buildings from remotely sensed imagery are of considerable value for urbanization monitoring. Two outstanding issues in the detection of changes in buildings with composite structures and relief displacements are heterogeneous appearances and positional inconsistencies. In this paper, a novel patch-based matching approach is developed using densely connected conditional random field (CRF) optimization to detect building changes from bi-temporal aerial images. First, the bi-temporal aerial images are combined to obtain change information using an object-oriented technique, and then semantic segmentation based on a deep convolutional neural network is used to extract building areas. With the change information and extracted buildings, a graph-cuts-based segmentation algorithm is applied to generate the bi-temporal changed building proposals. Next, in the bi-temporal changed building proposals, corner and edge information are integrated for feature detection through a phase congruency (PC) model, and the structural feature descriptor, called the histogram of orientated PC, is used to perform patch-based roof matching. We determined the final change in buildings by gathering matched roof and bi-temporal changed building proposals using co-refinement based on CRF, which were further classified as “newly built,” “demolished”, or “changed”. Experiments were conducted with two typical datasets covering complex urban scenes with diverse building types. The results confirm the effectiveness and generality of the proposed algorithm, with more than 85% and 90% in overall accuracy and completeness, respectively.

ACS Style

Jinqi Gong; Xiangyun Hu; Shiyan Pang; Kun Li. Patch Matching and Dense CRF-Based Co-Refinement for Building Change Detection from Bi-Temporal Aerial Images. Sensors 2019, 19, 1557.

AMA Style

Jinqi Gong, Xiangyun Hu, Shiyan Pang, Kun Li. Patch Matching and Dense CRF-Based Co-Refinement for Building Change Detection from Bi-Temporal Aerial Images. Sensors. 2019; 19 (7):1557.

Chicago/Turabian Style

Jinqi Gong; Xiangyun Hu; Shiyan Pang; Kun Li. 2019. "Patch Matching and Dense CRF-Based Co-Refinement for Building Change Detection from Bi-Temporal Aerial Images." Sensors 19, no. 7: 1557.

Journal article
Published: 26 March 2019 in Remote Sensing

Thanks to the recent development of laser scanner hardware and the technology of dense image matching (DIM), the acquisition of three-dimensional (3D) point cloud data has become increasingly convenient. However, how to effectively combine 3D point cloud data and images to realize accurate building change detection is still a hotspot in the field of photogrammetry and remote sensing. Therefore, with the bi-temporal aerial images and point cloud data obtained by airborne laser scanner (ALS) or DIM as the data source, a novel building change detection method combining co-segmentation and superpixel-based graph cuts is proposed in this paper. In this method, the bi-temporal point cloud data are firstly combined to achieve a co-segmentation to obtain bi-temporal superpixels with the simple linear iterative clustering (SLIC) algorithm. Secondly, for each period of aerial images, semantic segmentation based on a deep convolutional neural network is used to extract building areas, and this is the basis for subsequent superpixel feature extraction. Again, with the bi-temporal superpixel as the processing unit, a graph-cuts-based building change detection algorithm is proposed to extract the changed buildings. In this step, the building change detection problem is modeled as two binary classifications, and acquisition of each period’s changed buildings is a binary classification, in which the changed building is regarded as foreground and the other area as background. Then, the graph cuts algorithm is used to obtain the optimal solution. Next, by combining the bi-temporal changed buildings and digital surface models (DSMs), these changed buildings are further classified as “newly built,” “taller,” “demolished”, and “lower”. Finally, two typical datasets composed of bi-temporal aerial images and point cloud data obtained by ALS or DIM are used to validate the proposed method, and the experiments demonstrate the effectiveness and generality of the proposed algorithm.

ACS Style

Shiyan Pang; Xiangyun Hu; Mi Zhang; Zhongliang Cai; Fengzhu Liu. Co-Segmentation and Superpixel-Based Graph Cuts for Building Change Detection from Bi-Temporal Digital Surface Models and Aerial Images. Remote Sensing 2019, 11, 729.

AMA Style

Shiyan Pang, Xiangyun Hu, Mi Zhang, Zhongliang Cai, Fengzhu Liu. Co-Segmentation and Superpixel-Based Graph Cuts for Building Change Detection from Bi-Temporal Digital Surface Models and Aerial Images. Remote Sensing. 2019; 11 (6):729.

Chicago/Turabian Style

Shiyan Pang; Xiangyun Hu; Mi Zhang; Zhongliang Cai; Fengzhu Liu. 2019. "Co-Segmentation and Superpixel-Based Graph Cuts for Building Change Detection from Bi-Temporal Digital Surface Models and Aerial Images." Remote Sensing 11, no. 6: 729.

Journal article
Published: 26 December 2018 in ISPRS International Journal of Geo-Information

Mapping changes in carbon emissions and carbon storage (CECS) with high precision at a small scale (urban street-block level) can improve governmental policy decisions with respect to the construction of low-carbon cities. In this study, a methodological framework for assessing the carbon budget and its spatiotemporal changes from 2015 to 2017 in Wuhan is proposed, which is able to monitor a large area. To estimate the carbon storage, a comprehensive coefficient model was adopted with carbon density factors and corresponding land cover types. Details regarding land cover were extracted from the Geographic National Census Data (GNCD), including forests, grasslands, croplands, and gardens. For the carbon emissions, an emission-factor model was first used and a spatialization operation was subsequently performed using the geographic location that was obtained from the GNCD. The carbon emissions that were identified in the study are from fossil-fuel consumption, industrial production processes, disposal of urban domestic refuse, and transportation. The final dynamic changes in the CECS, in addition to the net carbon emissions, were monitored and analyzed, yielding temporal and spatial maps with a high-precision at a small scale. The results showed that the carbon storage in Wuhan declined by 2.70% over the three years, whereas the carbon emissions initially increased by 0.2%, and subsequently decreased by 3.1% over this period. The trend in the net carbon emission changes was similar to that of the carbon emissions, demonstrating that the efficiency of carbon reduction was improved during this period. Precise spatiotemporal results at the street-block level can offer insights to governments that are engaged in urban carbon cycle decision making processes, improving their capacities to more effectively manage the spatial distribution of CECS.
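The comprehensive coefficient model for carbon storage reduces to a weighted sum: total storage is each land cover area multiplied by its carbon density, summed over land cover types. The density values below are placeholders for illustration, not the study's coefficients.

```python
# Worked example of a coefficient-based carbon storage model.

def carbon_storage(areas_ha, density_t_per_ha):
    """Sum area_i * density_i over land cover types present in both dicts."""
    return sum(areas_ha[k] * density_t_per_ha[k]
               for k in areas_ha if k in density_t_per_ha)

# Hypothetical land cover areas (ha) and carbon densities (t C / ha):
areas = {"forest": 120.0, "grassland": 40.0, "cropland": 80.0}
density = {"forest": 60.0, "grassland": 10.0, "cropland": 25.0}
total = carbon_storage(areas, density)  # 120*60 + 40*10 + 80*25 = 9600.0 t
```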

ACS Style

Yanan Liu; Xiangyun Hu; Hao Wu; Anqi Zhang; Jieting Feng; Jianya Gong. Spatiotemporal Analysis of Carbon Emissions and Carbon Storage Using National Geography Census Data in Wuhan, China. ISPRS International Journal of Geo-Information 2018, 8, 7.

AMA Style

Yanan Liu, Xiangyun Hu, Hao Wu, Anqi Zhang, Jieting Feng, Jianya Gong. Spatiotemporal Analysis of Carbon Emissions and Carbon Storage Using National Geography Census Data in Wuhan, China. ISPRS International Journal of Geo-Information. 2018; 8 (1):7.

Chicago/Turabian Style

Yanan Liu; Xiangyun Hu; Hao Wu; Anqi Zhang; Jieting Feng; Jianya Gong. 2018. "Spatiotemporal Analysis of Carbon Emissions and Carbon Storage Using National Geography Census Data in Wuhan, China." ISPRS International Journal of Geo-Information 8, no. 1: 7.

Journal article
Published: 14 June 2018 in Remote Sensing

Carbon sink estimation and ecological assessment of forests require accurate forest type mapping. The traditional survey method is time consuming and labor intensive, and the remote sensing method with high-resolution, multi-spectral commercial satellite images has high cost and low availability. In this study, we explore and evaluate the potential of freely-available multi-source imagery to identify forest types with an object-based random forest algorithm. These datasets included Sentinel-2A (S2), Sentinel-1A (S1) in dual polarization, one-arc-second Shuttle Radar Topographic Mission Digital Elevation (DEM) and multi-temporal Landsat-8 images (L8). We tested seven different sets of explanatory variables for classifying eight forest types in Wuhan, China. The results indicate that single-sensor (S2) or single-day data (L8) cannot obtain satisfactory results; the overall accuracy was 54.31% and 50.00%, respectively. Compared with the classification using only Sentinel-2 data, the overall accuracy increased by approximately 15.23% and 22.51%, respectively, by adding DEM and multi-temporal Landsat-8 imagery. The highest accuracy (82.78%) was achieved with fused imagery, the terrain and multi-temporal data contributing the most to forest type identification. These encouraging results demonstrate that freely-accessible multi-source remotely-sensed data have tremendous potential in forest type identification, which can effectively support monitoring and management of forest ecological resources at regional or global scales.

ACS Style

Yanan Liu; Weishu Gong; Xiangyun Hu; Jianya Gong. Forest Type Identification with Random Forest Using Sentinel-1A, Sentinel-2A, Multi-Temporal Landsat-8 and DEM Data. Remote Sensing 2018, 10, 946.

AMA Style

Yanan Liu, Weishu Gong, Xiangyun Hu, Jianya Gong. Forest Type Identification with Random Forest Using Sentinel-1A, Sentinel-2A, Multi-Temporal Landsat-8 and DEM Data. Remote Sensing. 2018; 10 (6):946.

Chicago/Turabian Style

Yanan Liu; Weishu Gong; Xiangyun Hu; Jianya Gong. 2018. "Forest Type Identification with Random Forest Using Sentinel-1A, Sentinel-2A, Multi-Temporal Landsat-8 and DEM Data." Remote Sensing 10, no. 6: 946.

Journal article
Published: 24 March 2018 in Sensors

In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as “newly built”, “taller”, “demolished”, and “lower” by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.

ACS Style

Shiyan Pang; Xiangyun Hu; Zhongliang Cai; Jinqi Gong; Mi Zhang. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images. Sensors 2018, 18, 966.

AMA Style

Shiyan Pang, Xiangyun Hu, Zhongliang Cai, Jinqi Gong, Mi Zhang. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images. Sensors. 2018; 18 (4):966.

Chicago/Turabian Style

Shiyan Pang; Xiangyun Hu; Zhongliang Cai; Jinqi Gong; Mi Zhang. 2018. "Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images." Sensors 18, no. 4: 966.

Journal article
Published: 19 May 2017 in Remote Sensing

Semantic image segmentation has recently witnessed considerable progress by training deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aides and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method in solving the linear objective energy function in a continuous domain to delineate visual objects and solve these problems. We present a novel embedded single stream optimization method based on the MR model to avoid approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as dual multi-scale manifold ranking (DMSMR) network, that combines the dilated, multi-scale strategies with the single stream MR optimization method in the deep learning architecture to further improve the performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aides in an end-to-end manner.
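The manifold ranking model underlying this work rests on the classic propagation iteration f ← αSf + (1−α)y over a normalized affinity matrix S, with labeled seeds in y. The 3-node chain graph below is a toy example of that iteration, not the paper's single-stream network formulation.

```python
# Sketch of the classic manifold ranking propagation on a toy graph.

def manifold_ranking(S, y, alpha=0.5, iters=200):
    """Iterate f <- alpha * S f + (1 - alpha) * y to (near) convergence."""
    f = list(y)
    n = len(y)
    for _ in range(iters):
        f = [alpha * sum(S[i][j] * f[j] for j in range(n))
             + (1 - alpha) * y[i] for i in range(n)]
    return f

# Row-normalized affinities of a chain graph 0-1-2; node 0 is the query.
S = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
y = [1.0, 0.0, 0.0]
f = manifold_ranking(S, y)  # scores decay with graph distance from node 0
```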

ACS Style

Mi Zhang; Xiangyun Hu; Like Zhao; Ye Lv; Min Luo; Shiyan Pang. Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images. Remote Sensing 2017, 9, 500.

AMA Style

Mi Zhang, Xiangyun Hu, Like Zhao, Ye Lv, Min Luo, Shiyan Pang. Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images. Remote Sensing. 2017; 9 (5):500.

Chicago/Turabian Style

Mi Zhang; Xiangyun Hu; Like Zhao; Ye Lv; Min Luo; Shiyan Pang. 2017. "Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images." Remote Sensing 9, no. 5: 500.

Preprint
Published: 11 April 2017

Semantic image segmentation has recently witnessed considerable progress by training deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aides and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method in solving the linear objective energy function in a continuous domain to delineate visual objects and solve these problems. We present a novel embedded single stream optimization method based on the manifold ranking (MR) model to avoid approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as dual multi-scale manifold ranking (DMSMR) network, that combines the dilated, multi-scale strategies with the single stream MR optimization method in the deep learning architecture to further improve the performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aides in an end-to-end manner.

ACS Style

Mi Zhang; Xiangyun Hu; Like Zhao; Ye Lv; Min Luo; Shiyan Pang. Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images. 2017, 1.

AMA Style

Mi Zhang, Xiangyun Hu, Like Zhao, Ye Lv, Min Luo, Shiyan Pang. Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images. 2017; 1.

Chicago/Turabian Style

Mi Zhang; Xiangyun Hu; Like Zhao; Ye Lv; Min Luo; Shiyan Pang. 2017. "Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images." 1.

Journal article
Published: 05 September 2016 in Remote Sensing

Airborne laser scanning (ALS) point cloud data are suitable for digital terrain model (DTM) extraction given its high accuracy in elevation. Existing filtering algorithms that eliminate non-ground points mostly depend on terrain feature assumptions or representations; these assumptions result in errors when the scene is complex. This paper proposes a new method for ground point extraction based on deep learning using deep convolutional neural networks (CNN). For every point with spatial context, the neighboring points within a window are extracted and transformed into an image. Then, the classification of a point can be treated as the classification of an image; the point-to-image transformation is carefully crafted by considering the height information in the neighborhood area. After being trained on approximately 17 million labeled ALS points, the deep CNN model can learn how a human operator recognizes a point as a ground point or not. The model performs better than typical existing algorithms in terms of error rate, indicating the significant potential of deep-learning-based methods in feature extraction from a point cloud.
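The point-to-image transformation can be illustrated for a single query point: the heights of neighbors falling in each cell of a window around it become a small "image" of relative elevations, which a CNN can then classify. The window size and the last-write aggregation below are simplifying assumptions, not the paper's crafted transformation.

```python
# Illustrative point-to-image transform for one ALS query point.

def point_to_image(query, neighbors, window=3, cell=1.0):
    """Map neighbor heights into a window x window grid of relative heights."""
    qx, qy, qz = query
    half = window // 2
    img = [[0.0] * window for _ in range(window)]
    for x, y, z in neighbors:
        col = int(round((x - qx) / cell)) + half
        row = int(round((y - qy) / cell)) + half
        if 0 <= row < window and 0 <= col < window:
            img[row][col] = z - qz  # height relative to the query point
    return img
```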

ACS Style

Xiangyun Hu; Yi Yuan. Deep-Learning-Based Classification for DTM Extraction from ALS Point Cloud. Remote Sensing 2016, 8, 730 .

AMA Style

Xiangyun Hu, Yi Yuan. Deep-Learning-Based Classification for DTM Extraction from ALS Point Cloud. Remote Sensing. 2016;8(9):730.

Chicago/Turabian Style

Xiangyun Hu; Yi Yuan. 2016. "Deep-Learning-Based Classification for DTM Extraction from ALS Point Cloud." Remote Sensing 8, no. 9: 730.

Journal article
Published: 05 May 2016 in Remote Sensing

Plane segmentation is an important step in feature extraction and 3D modeling from light detection and ranging (LiDAR) point clouds. The accuracy and speed of plane segmentation are difficult to balance, particularly when dealing with a massive point cloud containing millions of points. This study proposes a fast and easy-to-implement plane segmentation algorithm based on cross-line element growth (CLEG). The point cloud is converted into grid data, and the points are segmented into line segments with the Douglas-Peucker algorithm. Each point is then assigned to a cross-line element (CLE) obtained by segmenting the points in the cross directions. A CLE determines one plane, and this is the rationale of the algorithm. After a seed CLE is selected, CLE growth and point growth are combined to obtain the segmented facets. The CLEG algorithm is validated against popular methods, including RANSAC, 3D Hough transformation, principal component analysis (PCA), iterative PCA, and a state-of-the-art global optimization-based algorithm. Experiments indicate that CLEG runs much faster than the other algorithms while producing accurate segmentation, at a speed of 6 s per 3 million points.
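The key geometric fact behind CLEG, that a cross-line element determines one plane, can be stated in a few lines: two non-parallel line directions through a point fix the plane's normal via their cross product. A small sketch (the function name is ours, not from the paper):

```python
import numpy as np

def plane_from_cross_lines(d1, d2, point):
    """Recover the plane determined by a cross-line element (CLE).

    d1, d2 : (3,) direction vectors of the two line segments through a point
    point  : (3,) a point lying on both lines
    Returns (unit normal n, offset d) of the plane n . x + d = 0.
    """
    n = np.cross(d1, d2)          # normal is perpendicular to both directions
    n = n / np.linalg.norm(n)
    d = -np.dot(n, point)
    return n, d
```

Growing CLEs that agree on this plane (plus nearby points consistent with it) is what yields the segmented facets.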

ACS Style

Teng Wu; Xiangyun Hu; Lizhi Ye. Fast and Accurate Plane Segmentation of Airborne LiDAR Point Cloud Using Cross-Line Elements. Remote Sensing 2016, 8, 383.

AMA Style

Teng Wu, Xiangyun Hu, Lizhi Ye. Fast and Accurate Plane Segmentation of Airborne LiDAR Point Cloud Using Cross-Line Elements. Remote Sensing. 2016;8(5):383.

Chicago/Turabian Style

Teng Wu; Xiangyun Hu; Lizhi Ye. 2016. "Fast and Accurate Plane Segmentation of Airborne LiDAR Point Cloud Using Cross-Line Elements." Remote Sensing 8, no. 5: 383.

Journal article
Published: 27 July 2015 in ISPRS International Journal of Geo-Information

Web 2.0 enables two-way interaction between servers and clients. GPS receivers have become available to more citizens and are commonly found in vehicles and smartphones, enabling individuals to record their trajectory data, share them on the Internet, and edit them online. OpenStreetMap (OSM) thus makes it possible for citizens to contribute to the acquisition of geographic information. This paper studies the use of OSM data to find newly mapped or newly built roads that do not exist in a reference road map and to create an updated version of that map. For this purpose, we propose a progressive buffering method that determines an optimal buffer radius for detecting new roads in the OSM data. The detected new roads are then merged into the reference road map geometrically, topologically, and semantically. Experiments with OSM data and reference road maps over an 8494 km² area of the city of Wuhan, China, and five of its 5 km × 5 km subareas demonstrate the feasibility and effectiveness of the method. The OSM data add 11.96%, or a total of 2008.6 km, of new roads to the reference road maps, with an average precision of 96.49% and an average recall of 97.63%.
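The buffer test underlying the method can be sketched as follows: an OSM sample point is a candidate new-road point when its distance to every reference road segment exceeds the buffer radius; a progressive search would evaluate increasing radii and keep the one where the detected ratio stabilizes. This simplified 2-D sketch uses our own helper names and omits the geometric/topological/semantic merging steps:

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Euclidean distance from 2-D point p to segment a-b."""
    p, a, b = np.asarray(p, float), np.asarray(a, float), np.asarray(b, float)
    ap, ab = p - a, b - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def new_road_ratio(osm_pts, ref_segs, radius):
    """Fraction of OSM sample points farther than `radius` from every
    reference road segment, i.e. candidate new-road points.
    """
    new = 0
    for p in osm_pts:
        d = min(point_segment_dist(p, a, b) for a, b in ref_segs)
        if d > radius:
            new += 1
    return new / len(osm_pts)
```

Calling `new_road_ratio` repeatedly with growing radii mimics the progressive part of the method: too small a radius flags digitization noise as new roads, while too large a radius swallows genuinely new ones.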

ACS Style

Changyong Liu; Lian Xiong; Xiangyun Hu; Jie Shan. A Progressive Buffering Method for Road Map Update Using OpenStreetMap Data. ISPRS International Journal of Geo-Information 2015, 4, 1246-1264.

AMA Style

Changyong Liu, Lian Xiong, Xiangyun Hu, Jie Shan. A Progressive Buffering Method for Road Map Update Using OpenStreetMap Data. ISPRS International Journal of Geo-Information. 2015;4(3):1246-1264.

Chicago/Turabian Style

Changyong Liu; Lian Xiong; Xiangyun Hu; Jie Shan. 2015. "A Progressive Buffering Method for Road Map Update Using OpenStreetMap Data." ISPRS International Journal of Geo-Information 4, no. 3: 1246-1264.

Journal article
Published: 09 December 2014 in Remote Sensing

Intelligent seamline selection for image mosaicking is an area of active research in massive data processing, computer vision, photogrammetry, and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs), visual transitions in mosaics are mainly caused by differences in positioning accuracy, image tone, and the relief displacement of high ground objects between overlapping DOMs. Of these three factors, relief displacement, which prevents seamless mosaicking, is the most difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for automatically selecting seamlines that avoid high ground objects. This paper proposes a new automatic seamline selection algorithm using a digital surface model (DSM). The main idea is to guide a seamline toward low areas on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM), is derived and introduced; the OESM accurately reflects the elevation information for each DOM unit. Through morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. A cost function is then defined on the basis of several measurements, and Dijkstra's algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. Experiments with three different datasets indicate that the proposed method meets the requirements of seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects and require less computation time.
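The least-cost path step can be illustrated with a plain grid Dijkstra in which each cell's cost is its OESM elevation, so the path is pulled toward low areas. This sketch uses 4-connectivity and a single cost term, whereas the paper's cost function combines several measurements:

```python
import heapq

def seamline(cost, start, goal):
    """Least-cost 4-connected path across a cost grid via Dijkstra.

    cost  : list of row lists (e.g. OESM elevations); lower is preferable
    start, goal : (row, col) cells
    Returns the path as a list of (row, col) tuples from start to goal.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        r, c = u
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = u
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With elevation as cost, the returned path detours around high cells (buildings), which is exactly the behavior the OESM-guided seamline is designed to produce.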

ACS Style

Qi Chen; Mingwei Sun; Xiangyun Hu; Zuxun Zhang. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model. Remote Sensing 2014, 6, 12334-12359.

AMA Style

Qi Chen, Mingwei Sun, Xiangyun Hu, Zuxun Zhang. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model. Remote Sensing. 2014;6(12):12334-12359.

Chicago/Turabian Style

Qi Chen; Mingwei Sun; Xiangyun Hu; Zuxun Zhang. 2014. "Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model." Remote Sensing 6, no. 12: 12334-12359.

Journal article
Published: 06 November 2014 in Remote Sensing

Building change detection is useful for land management, disaster assessment, illegal-building identification, urban growth monitoring, and geographic information database updating. This study proposes an automatic method that applies object-based analysis to multi-temporal point cloud data to detect building changes, identifying the areas that have changed and obtaining from-to information. The data are first preprocessed to generate two sets of digital surface models (DSMs), digital elevation models, and normalized DSMs from registered old and new point cloud data. On the basis of the differential DSM, candidate changed-building objects are then identified from the points in smooth areas by using a connected component analysis technique. The random sample consensus (RANSAC) fitting algorithm is used to distinguish truly changed buildings from trees, and the changed building objects are classified as “newly built”, “taller”, “demolished”, or “lower” through rule-based analysis. A test dataset covering many buildings of different types in an 8.5 km² area is used for the experiment. In this dataset, the method correctly detects 97.8% of buildings larger than 50 m², with an accuracy of 91.2%. Furthermore, to reduce the workload of subsequent manual checking, a confidence index for each changed object is computed on the basis of object features.
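The candidate-extraction step can be sketched as thresholding the differential DSM and grouping cells by connected component analysis; the small-component filter below merely stands in for the paper's RANSAC and rule-based stages, and all thresholds are illustrative:

```python
def changed_regions(diff, thresh=2.0, min_cells=3):
    """Candidate changed-building regions in a differential DSM grid.

    diff : list of row lists of height differences (new DSM - old DSM)
    Cells with |difference| >= thresh are grouped into 4-connected
    components (iterative flood fill); tiny components are discarded.
    Returns a list of components, each a list of (row, col) cells.
    """
    rows, cols = len(diff), len(diff[0])
    seen = set()
    regions = []
    for r in range(rows):
        for c in range(cols):
            if abs(diff[r][c]) >= thresh and (r, c) not in seen:
                stack, comp = [(r, c)], []
                seen.add((r, c))
                while stack:
                    cr, cc = stack.pop()
                    comp.append((cr, cc))
                    for nr, nc in ((cr + 1, cc), (cr - 1, cc),
                                   (cr, cc + 1), (cr, cc - 1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and (nr, nc) not in seen
                                and abs(diff[nr][nc]) >= thresh):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                if len(comp) >= min_cells:
                    regions.append(comp)
    return regions
```

The sign of the mean difference within each surviving component is what would later drive the “newly built”/“taller” versus “demolished”/“lower” labeling.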

ACS Style

Shiyan Pang; Xiangyun Hu; Zizheng Wang; Yihui Lu. Object-Based Analysis of Airborne LiDAR Data for Building Change Detection. Remote Sensing 2014, 6, 10733-10749.

AMA Style

Shiyan Pang, Xiangyun Hu, Zizheng Wang, Yihui Lu. Object-Based Analysis of Airborne LiDAR Data for Building Change Detection. Remote Sensing. 2014;6(11):10733-10749.

Chicago/Turabian Style

Shiyan Pang; Xiangyun Hu; Zizheng Wang; Yihui Lu. 2014. "Object-Based Analysis of Airborne LiDAR Data for Building Change Detection." Remote Sensing 6, no. 11: 10733-10749.

Journal article
Published: 17 October 2014 in Sensors

A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to carrier-phase-based differential Global Navigation Satellite Systems (CDGNSS) when satellite-based navigation is unavailable. In this paper, the image/INS integration is modeled as a tightly coupled iterative extended Kalman filter (IEKF). Tight coupling ensures that the integrated system remains reliable even when few known feature points (i.e., fewer than three) are observed in the images. A new global observability analysis of this tightly coupled integration is presented to guarantee that the system is observable under the necessary conditions. The conclusions of the analysis are verified by simulations and field tests. The field tests also indicate that high-precision integrated solutions for position (centimeter level) and attitude (half-degree level) can be achieved in a global reference frame.
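The iterative EKF at the heart of the integration re-linearizes the measurement model about each successive estimate, which matters when the camera measurement function is strongly nonlinear. A generic single-update sketch, not the paper's full image/INS measurement model:

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, iters=5):
    """One iterated EKF (IEKF) measurement update.

    x, P  : prior state estimate and covariance
    z     : measurement vector
    h     : measurement function h(x)
    H_jac : function returning the Jacobian of h at a given state
    R     : measurement noise covariance
    The Jacobian is re-evaluated at each new iterate, tightening the
    update for nonlinear h compared with a single EKF step.
    """
    x0 = x.copy()
    xi = x.copy()
    for _ in range(iters):
        H = H_jac(xi)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # IEKF form: innovation referenced back to the prior x0.
        xi = x0 + K @ (z - h(xi) - H @ (x0 - xi))
    P_new = (np.eye(len(x0)) - K @ H) @ P
    return xi, P_new
```

With a single EKF step the linearization error of h at the prior would bias the estimate; iterating drives the update toward the maximum a posteriori solution, which is why it suits tightly coupled image/INS fusion.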

ACS Style

Weiping Jiang; Li Wang; Xiaoji Niu; Quan Zhang; Hui Zhang; Min Tang; Xiangyun Hu. High-Precision Image Aided Inertial Navigation with Known Features: Observability Analysis and Performance Evaluation. Sensors 2014, 14, 19371-19401.

AMA Style

Weiping Jiang, Li Wang, Xiaoji Niu, Quan Zhang, Hui Zhang, Min Tang, Xiangyun Hu. High-Precision Image Aided Inertial Navigation with Known Features: Observability Analysis and Performance Evaluation. Sensors. 2014;14(10):19371-19401.

Chicago/Turabian Style

Weiping Jiang; Li Wang; Xiaoji Niu; Quan Zhang; Hui Zhang; Min Tang; Xiangyun Hu. 2014. "High-Precision Image Aided Inertial Navigation with Known Features: Observability Analysis and Performance Evaluation." Sensors 14, no. 10: 19371-19401.