Yue Wu
Key Lab of Intelligent Perception and Image Understanding of Ministry of Education of China, Xidian University, Xi'an, Shaanxi, China


Feed

Journal article
Published: 06 August 2021 in IEEE Transactions on Knowledge and Data Engineering

Graph Neural Networks (GNNs) extend deep neural networks to graph domains. Recently, Message Passing Neural Networks (MPNNs) have been proposed to generalize several existing graph neural networks into a unified framework. For graph representation learning, MPNNs first generate discriminative node representations using the message passing function and then read from the node representation space to generate a graph representation using the readout function. In this paper, we analyze the representation capacity of MPNNs for aggregating graph information and observe that existing approaches ignore the self-loop in graph representation learning, which limits their representation capacity. To alleviate this issue, we introduce a simple yet effective propagation-enhanced extension, Self-Connected Neural Message Passing (SC-NMP), which aggregates the node representations of the current step and the graph representation of the previous step. To further improve the information flow, we also propose a Densely Self-Connected Neural Message Passing variant that connects each layer to every other layer in a feed-forward fashion. Extensive experiments on various benchmark datasets demonstrate the effectiveness of both extensions, which achieve superior performance on graph classification and regression tasks.
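The self-connected update described in the abstract (the current step's node states aggregated with the previous step's graph state) can be sketched as follows; the mean readout, tanh nonlinearity, and weight shapes are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sc_nmp(A, X, Wh, Wg, steps=3):
    """Sketch of Self-Connected Neural Message Passing.

    A:  (n, n) adjacency matrix, X: (n, d) initial node features,
    Wh: (d, d) message-passing weights, Wg: (d, d) self-connection weights.
    The graph state g of the previous step is fed back into the readout,
    playing the role of the self-loop for graph representation.
    """
    H = X
    g = H.mean(axis=0)                         # initial readout of node states
    for _ in range(steps):
        H = np.tanh(A @ H @ Wh)                # message passing over neighbors
        g = np.tanh(H.mean(axis=0) + g @ Wg)   # current nodes + previous graph state
    return g
```

The returned vector g would then feed a downstream classifier or regressor for the graph-level task.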

ACS Style

Xiaolong Fan; Maoguo Gong; Yue Wu; A. K. Qin; Yu Xie. Propagation Enhanced Neural Message Passing for Graph Representation Learning. IEEE Transactions on Knowledge and Data Engineering 2021, PP, 1-1.

AMA Style

Xiaolong Fan, Maoguo Gong, Yue Wu, A. K. Qin, Yu Xie. Propagation Enhanced Neural Message Passing for Graph Representation Learning. IEEE Transactions on Knowledge and Data Engineering. 2021; PP (99):1-1.

Chicago/Turabian Style

Xiaolong Fan; Maoguo Gong; Yue Wu; A. K. Qin; Yu Xie. 2021. "Propagation Enhanced Neural Message Passing for Graph Representation Learning." IEEE Transactions on Knowledge and Data Engineering PP, no. 99: 1-1.

Journal article
Published: 09 June 2021 in IEEE Transactions on Neural Networks and Learning Systems

As a unified framework for graph neural networks, the message passing-based neural network (MPNN) has attracted considerable research interest and has been applied successfully in a number of domains in recent years. However, because of over-smoothing and vanishing gradients, deep MPNNs are still difficult to train. To alleviate these issues, we first introduce a deep hierarchical layer aggregation (DHLA) strategy, which uses block-based layer aggregation to combine representations from different layers and transfers the output of each block to the subsequent block, so that deeper MPNNs can be trained easily. Additionally, to stabilize the training process, we develop a novel normalization strategy, neighbor normalization (NeighborNorm), which normalizes the neighbors of each node to further address the training issues of deep MPNNs. Our analysis reveals that NeighborNorm smooths the gradient of the loss function, i.e., adding NeighborNorm makes the optimization landscape much easier to navigate. Experimental results on two typical graph pattern-recognition tasks, node classification and graph classification, demonstrate the necessity and effectiveness of the proposed strategies for graph message-passing neural networks.
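The neighbor-normalization idea can be made concrete with a small sketch; reading NeighborNorm as standardizing each node's aggregated neighbor message is our interpretation of the abstract, not the paper's exact definition:

```python
import numpy as np

def neighbor_norm(H, A, eps=1e-5):
    """Sketch of NeighborNorm: aggregate each node's neighbors, then
    standardize the aggregated message across its feature dimensions so
    message scales stay comparable across nodes of different degree.
    H: (n, d) node features, A: (n, n) binary adjacency matrix."""
    out = np.zeros_like(H)
    for i in range(H.shape[0]):
        nbrs = H[A[i] > 0]                          # features of node i's neighbors
        if nbrs.shape[0] == 0:
            continue                                # isolated node: no message
        m = nbrs.mean(axis=0)                       # neighbor aggregation
        out[i] = (m - m.mean()) / (m.std() + eps)   # normalize the message
    return out
```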

ACS Style

Xiaolong Fan; Maoguo Gong; Zedong Tang; Yue Wu. Deep Neural Message Passing With Hierarchical Layer Aggregation and Neighbor Normalization. IEEE Transactions on Neural Networks and Learning Systems 2021, PP, 1-13.

AMA Style

Xiaolong Fan, Maoguo Gong, Zedong Tang, Yue Wu. Deep Neural Message Passing With Hierarchical Layer Aggregation and Neighbor Normalization. IEEE Transactions on Neural Networks and Learning Systems. 2021; PP (99):1-13.

Chicago/Turabian Style

Xiaolong Fan; Maoguo Gong; Zedong Tang; Yue Wu. 2021. "Deep Neural Message Passing With Hierarchical Layer Aggregation and Neighbor Normalization." IEEE Transactions on Neural Networks and Learning Systems PP, no. 99: 1-13.

Journal article
Published: 11 May 2021 in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

In this paper, we propose an effective method for remote sensing image registration. Point features are robust for remote sensing images with low quality, small overlapping areas, and local deformation. We therefore extract point features from the images and convert the registration problem into a feature point matching problem. A correspondence set constructed solely from feature similarity often contains many false correspondences (outliers), so our key idea is to remove the mismatches in the initial correspondence set and obtain a stable correspondence set through a two-step strategy. The initial correspondence set is constructed according to the similarity of the feature points and their descriptors. First, we build a mathematical model on the observation that correct matches within the same physical region have similar feature distances and consistent local topology, and we obtain an approximately correct solution in linear time. Then, we design a strategy that increases the number of inliers and raises the precision through a global constraint computed from the solution of the previous step. Experiments on a variety of remote sensing image datasets demonstrate that our method is more robust and accurate than state-of-the-art methods.

ACS Style

Yue Wu; Zhenglei Xiao; Shaodi Liu; Qiguang Miao; Wenping Ma; Maoguo Gong; Fei Xie; Yang Zhang. A Two-Step Method for Remote Sensing Images Registration Based on Local and Global Constraints. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, PP, 1-1.

AMA Style

Yue Wu, Zhenglei Xiao, Shaodi Liu, Qiguang Miao, Wenping Ma, Maoguo Gong, Fei Xie, Yang Zhang. A Two-Step Method for Remote Sensing Images Registration Based on Local and Global Constraints. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2021; PP (99):1-1.

Chicago/Turabian Style

Yue Wu; Zhenglei Xiao; Shaodi Liu; Qiguang Miao; Wenping Ma; Maoguo Gong; Fei Xie; Yang Zhang. 2021. "A Two-Step Method for Remote Sensing Images Registration Based on Local and Global Constraints." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing PP, no. 99: 1-1.

Journal article
Published: 10 May 2021 in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Change detection in heterogeneous remote sensing images is a challenging problem because direct comparison in the original observation spaces is difficult and most methods rely on a set of manually labeled samples. In this paper, a spatially self-paced convolutional network (SSPCN) is constructed for change detection in an unsupervised way. Self-paced learning (SPL) is incorporated into convolutional networks to dynamically select reliable samples and learn the representation of the relations between the two heterogeneous images. In the proposed method, the pseudo labels are initialized by a classification-based method, and each sample is assigned a weight reflecting its easiness. SPL then learns from the easy samples first and gradually takes more complex samples into account. During training, the sample weights are dynamically updated based on the network parameters. Finally, a binary change map is acquired from the trained convolutional network. The proposed SSPCN has three main advantages over traditional methods. First, it is robust to noisy samples because SSPCN brings only reliable samples into training. Second, the samples have different learning rates, dynamically adjusted from the current sample weights during iterations, which lets them converge to better values. Finally, we take the spatial information among the samples into account to further enhance robustness. Experimental results on four pairs of heterogeneous remote sensing images confirm the effectiveness of the proposed technique.
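The easy-first weighting at the core of self-paced learning can be sketched with the classic hard SPL regularizer; the paper's actual weighting, which also folds in spatial information, is more elaborate:

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weighting: a sample joins training only while its
    current loss is below the age parameter lambda; raising lambda over
    the iterations gradually admits harder samples."""
    return (losses < lam).astype(float)

# Illustrative losses for four samples, from easy to hard.
losses = np.array([0.1, 0.4, 0.9, 2.0])
easy_only = self_paced_weights(losses, lam=0.5)   # early training: easy samples only
everyone = self_paced_weights(losses, lam=3.0)    # late training: all samples admitted
```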

ACS Style

Hao Li; Maoguo Gong; Mingyang Zhang; Yue Wu. Spatially Self-Paced Convolutional Networks for Change Detection in Heterogeneous Images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, 14, 4966-4979.

AMA Style

Hao Li, Maoguo Gong, Mingyang Zhang, Yue Wu. Spatially Self-Paced Convolutional Networks for Change Detection in Heterogeneous Images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2021; 14 (99):4966-4979.

Chicago/Turabian Style

Hao Li; Maoguo Gong; Mingyang Zhang; Yue Wu. 2021. "Spatially Self-Paced Convolutional Networks for Change Detection in Heterogeneous Images." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, no. 99: 4966-4979.

Journal article
Published: 18 February 2021 in IEEE Transactions on Neural Networks and Learning Systems

Change detection based on heterogeneous images, such as optical images and synthetic aperture radar images, is a challenging problem because of their huge appearance differences. To combat this problem, we propose an unsupervised change detection method that consists only of a convolutional autoencoder (CAE) for feature extraction and a commonality autoencoder for commonality exploration. The CAE eliminates a large part of the redundancies in the two heterogeneous images and obtains more consistent feature representations. The proposed commonality autoencoder discovers common features of ground objects between the two heterogeneous images by transforming one heterogeneous image representation into the other. Unchanged regions with the same ground objects share many more common features than changed regions; the number of common features therefore indicates changed and unchanged regions, from which a difference map can be calculated. Finally, the change detection result is generated by applying a segmentation algorithm to the difference map. In our method, the network parameters of the commonality autoencoder are learned from the relevance of unchanged regions instead of from labels. Our experimental results on five real data sets demonstrate the promising performance of the proposed framework compared with several existing approaches.
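The difference-map step (counting common features after one image's representation has been mapped into the other's space) might look like the following sketch; the agreement threshold and channel-count normalization are illustrative assumptions:

```python
import numpy as np

def difference_map(feat_x, feat_y2x, tau=0.1):
    """feat_x:   (h, w, c) features of image X.
    feat_y2x: (h, w, c) features of image Y after a commonality
    autoencoder has mapped them into X's representation space.
    Channels that agree within tau count as common features; few common
    features at a pixel indicates change."""
    common = np.abs(feat_x - feat_y2x) < tau   # per-channel agreement
    n_common = common.sum(axis=-1)             # common-feature count per pixel
    return 1.0 - n_common / feat_x.shape[-1]   # in [0, 1]; high = likely changed
```

A segmentation or thresholding step over this map would then yield the binary change result.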

ACS Style

Yue Wu; Jiaheng Li; Yongzhe Yuan; A. K. Qin; Qi-Guang Miao; Mao-Guo Gong. Commonality Autoencoder: Learning Common Features for Change Detection From Heterogeneous Images. IEEE Transactions on Neural Networks and Learning Systems 2021, PP, 1-14.

AMA Style

Yue Wu, Jiaheng Li, Yongzhe Yuan, A. K. Qin, Qi-Guang Miao, Mao-Guo Gong. Commonality Autoencoder: Learning Common Features for Change Detection From Heterogeneous Images. IEEE Transactions on Neural Networks and Learning Systems. 2021; PP (99):1-14.

Chicago/Turabian Style

Yue Wu; Jiaheng Li; Yongzhe Yuan; A. K. Qin; Qi-Guang Miao; Mao-Guo Gong. 2021. "Commonality Autoencoder: Learning Common Features for Change Detection From Heterogeneous Images." IEEE Transactions on Neural Networks and Learning Systems PP, no. 99: 1-14.

Journal article
Published: 15 January 2021 in IEEE Transactions on Cybernetics

The search ability of population-based search algorithms strongly relies on the coordinate system in which they are implemented. However, the coordinate systems widely used in existing multifactorial optimization (MFO) algorithms are fixed and may not suit function landscapes with different modalities, rotations, and dimensions; as a result, intertask knowledge transfer may be inefficient. This article therefore proposes a novel intertask knowledge transfer strategy for MFO implemented in an adaptive coordinate system established on a common subspace of the two search spaces. A proper coordinate system can, to some extent, identify common modality in a proper subspace. To seek this intermediate subspace, we introduce the geodesic flow that starts from one subspace and reaches the other in unit time. A low-dimensional intermediate subspace is drawn from a uniform distribution defined on the geodesic flow, and the corresponding coordinate system is derived. The intertask trial generation method first projects individuals onto the low-dimensional subspace, which reveals important invariant features of the multiple function landscapes. Since the intermediate subspace is generated from the major eigenvectors of the tasks' spaces, the model is intrinsically regularized by neglecting the minor and small eigenvalues. The transfer strategy can therefore alleviate the influence of noise introduced by redundant dimensions. The proposed method exhibits promising performance in the experiments.
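A crude way to sample an intermediate coordinate system between two task subspaces is to interpolate their bases and re-orthonormalize. This is a simplification of the geodesic-flow construction (which follows the geodesic on the Grassmann manifold via principal angles) and is shown only to make the idea concrete:

```python
import numpy as np

def task_subspace(pop, k):
    """Basis (d x k) of a task's subspace from the major eigenvectors of
    its subpopulation's covariance."""
    Xc = pop - pop.mean(axis=0)
    U, _, _ = np.linalg.svd(Xc.T @ Xc)
    return U[:, :k]

def intermediate_basis(B1, B2, t):
    """Orthonormal basis 'between' B1 (t=0) and B2 (t=1); in the transfer
    strategy, t would be drawn uniformly from (0, 1)."""
    Q, _ = np.linalg.qr((1.0 - t) * B1 + t * B2)
    return Q[:, :B1.shape[1]]
```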

ACS Style

Zedong Tang; Maoguo Gong; Yue Wu; A. K. Qin; Kay Chen Tan. A Multifactorial Optimization Framework Based on Adaptive Intertask Coordinate System. IEEE Transactions on Cybernetics 2021, PP, 1-14.

AMA Style

Zedong Tang, Maoguo Gong, Yue Wu, A. K. Qin, Kay Chen Tan. A Multifactorial Optimization Framework Based on Adaptive Intertask Coordinate System. IEEE Transactions on Cybernetics. 2021; PP (99):1-14.

Chicago/Turabian Style

Zedong Tang; Maoguo Gong; Yue Wu; A. K. Qin; Kay Chen Tan. 2021. "A Multifactorial Optimization Framework Based on Adaptive Intertask Coordinate System." IEEE Transactions on Cybernetics PP, no. 99: 1-14.

Journal article
Published: 30 December 2020 in Remote Sensing

Recently, with the popularity of space-borne earth observation satellites, the resolution of high-resolution panchromatic (PAN) and multispectral (MS) remote sensing images has been increasing year by year, and multiresolution remote sensing classification has become a research hotspot. In this paper, from a deep learning perspective, we design a dual-branch interactive spatial-channel collaborative attention enhancement network (SCCA-net) for multiresolution classification. It combines sample enhancement and feature enhancement to improve classification accuracy. For sample enhancement, we propose an adaptive neighbourhood transfer sampling strategy (ANTSS). Unlike the traditional pixel-centric sampling strategy with an orthogonal sampling angle, our algorithm allows each patch to adaptively shift its neighbourhood range by finding the homogeneous region of the pixel to be classified, and it adaptively adjusts the sampling angle according to the texture distribution of that homogeneous region to capture neighbourhood information more conducive to classification. For feature enhancement, we design a local spatial attention module (LSA-module) for PAN data to exploit its spatial resolution advantage and a global channel attention module (GCA-module) for MS data to improve its multi-channel representation. This not only highlights the spatial resolution advantage of PAN data and the multi-channel advantage of MS data, but also increases the discriminability of features through the interaction between the two modules. Quantitative and qualitative experimental results verify the robustness and effectiveness of the method.
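A global channel attention module of the kind described can be sketched in squeeze-and-excitation style; the bottleneck size, ReLU/sigmoid choices, and weight names below are assumptions rather than the paper's exact architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_channel_attention(feat, W1, W2):
    """feat: (h, w, c) MS feature map; W1: (c, c//r), W2: (c//r, c).
    Squeeze each channel to a scalar by global average pooling, compute
    per-channel weights through a small bottleneck, then rescale the
    channels by those weights."""
    z = feat.mean(axis=(0, 1))                  # squeeze: (c,) channel descriptor
    s = sigmoid(np.maximum(z @ W1, 0.0) @ W2)   # excitation: weights in (0, 1)
    return feat * s                             # channel-wise reweighting
```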

ACS Style

Wenping Ma; Jiliang Zhao; Hao Zhu; Jianchao Shen; Licheng Jiao; Yue Wu; Biao Hou. A Spatial-Channel Collaborative Attention Network for Enhancement of Multiresolution Classification. Remote Sensing 2020, 13, 106.

AMA Style

Wenping Ma, Jiliang Zhao, Hao Zhu, Jianchao Shen, Licheng Jiao, Yue Wu, Biao Hou. A Spatial-Channel Collaborative Attention Network for Enhancement of Multiresolution Classification. Remote Sensing. 2020; 13 (1):106.

Chicago/Turabian Style

Wenping Ma; Jiliang Zhao; Hao Zhu; Jianchao Shen; Licheng Jiao; Yue Wu; Biao Hou. 2020. "A Spatial-Channel Collaborative Attention Network for Enhancement of Multiresolution Classification." Remote Sensing 13, no. 1: 106.

Review
Published: 29 December 2020 in International Journal of Automation and Computing

In recent years, computational intelligence has been widely used in many fields and has achieved remarkable performance. Evolutionary computation and deep learning are important branches of computational intelligence, and many methods based on them have achieved good performance in remote sensing image registration. This paper surveys the application of computational intelligence to remote sensing image registration from these two directions. For registration based on evolutionary computation, the principles of evolutionary algorithms and swarm intelligence algorithms are elaborated and their application is discussed. The application of deep learning to remote sensing image registration is also discussed. Finally, the development status and future of remote sensing image registration are summarized and its prospects are examined.

ACS Style

Yue Wu; Jun-Wei Liu; Chen-Zhuo Zhu; Zhuang-Fei Bai; Qi-Guang Miao; Wen-Ping Ma; Mao-Guo Gong. Computational Intelligence in Remote Sensing Image Registration: A survey. International Journal of Automation and Computing 2020, 18, 1-17.

AMA Style

Yue Wu, Jun-Wei Liu, Chen-Zhuo Zhu, Zhuang-Fei Bai, Qi-Guang Miao, Wen-Ping Ma, Mao-Guo Gong. Computational Intelligence in Remote Sensing Image Registration: A survey. International Journal of Automation and Computing. 2020; 18 (1):1-17.

Chicago/Turabian Style

Yue Wu; Jun-Wei Liu; Chen-Zhuo Zhu; Zhuang-Fei Bai; Qi-Guang Miao; Wen-Ping Ma; Mao-Guo Gong. 2020. "Computational Intelligence in Remote Sensing Image Registration: A survey." International Journal of Automation and Computing 18, no. 1: 1-17.

Journal article
Published: 11 September 2020 in IEEE Transactions on Evolutionary Computation

This paper proposes a novel and computationally efficient explicit inter-task information transfer strategy between optimization tasks based on subspace alignment. In evolutionary multitasking, tasks may have biases embedded in their function landscapes and decision spaces, which often raises the threat of predominantly negative transfer. However, the complementary information among different tasks, when properly harnessed, can enhance the solving of complicated problems. In this paper, we distill this insight into an inter-task knowledge transfer strategy implemented in low-dimensional subspaces via a learnable alignment matrix. Specifically, to unveil the significant features of the function landscapes, task-specific low-dimensional subspaces are established from the distribution information of the subpopulation possessed by each task. Next, the alignment matrix between pairwise subspaces is learned by minimizing the discrepancy between the subspaces. Given the aligned subspaces, obtained by applying the alignment matrix to the subspaces' basis vectors, individuals from different tasks are projected into the aligned subspaces and reproduce therein. Moreover, since this method considers only the leading eigenvectors, it is intrinsically regularized and insensitive to noise. Comprehensive experiments are conducted on synthetic and practical benchmark problems to assess the efficacy of the proposed method. According to the experimental results, the proposed method exhibits superior performance compared with existing evolutionary multitask optimization algorithms.
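For orthonormal bases, the alignment matrix that minimizes the subspace discrepancy has a well-known closed form in the subspace-alignment literature; the sketch below assumes the paper's learned matrix plays the same role, which is our reading rather than the paper's exact procedure:

```python
import numpy as np

def alignment_matrix(Bs, Bt):
    """Closed-form minimizer of ||Bs M - Bt||_F when Bs (d x k) has
    orthonormal columns: M = Bs^T Bt."""
    return Bs.T @ Bt

def transfer(individuals, Bs, Bt):
    """Project source-task individuals into their own subspace, align it
    to the target subspace, and map back into the target decision space."""
    M = alignment_matrix(Bs, Bt)
    return individuals @ Bs @ M @ Bt.T
```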

ACS Style

Zedong Tang; Maoguo Gong; Yue Wu; Wenfeng Liu; Yu Xie. Regularized Evolutionary Multitask Optimization: Learning to Intertask Transfer in Aligned Subspace. IEEE Transactions on Evolutionary Computation 2020, 25, 262-276.

AMA Style

Zedong Tang, Maoguo Gong, Yue Wu, Wenfeng Liu, Yu Xie. Regularized Evolutionary Multitask Optimization: Learning to Intertask Transfer in Aligned Subspace. IEEE Transactions on Evolutionary Computation. 2020; 25 (2):262-276.

Chicago/Turabian Style

Zedong Tang; Maoguo Gong; Yue Wu; Wenfeng Liu; Yu Xie. 2020. "Regularized Evolutionary Multitask Optimization: Learning to Intertask Transfer in Aligned Subspace." IEEE Transactions on Evolutionary Computation 25, no. 2: 262-276.

Journal article
Published: 30 June 2020 in Remote Sensing

Adversarial training has demonstrated advanced capabilities for image generation models. In this paper, we propose a deep neural network, named the classified adversarial network (CAN), for multi-spectral image change detection. This network is based on generative adversarial networks (GANs). The generator captures the distribution of the bitemporal multi-spectral image data and transforms it into change detection results, which are input into the discriminator as fake data to train it; the results obtained by pre-classification are input into the discriminator as real data. Adversarial training helps the generator learn the transformation from a bitemporal image to a change map. Once the generator is well trained, it can generate the final result: the bitemporal multi-spectral images are input into the generator, and the final change detection results are obtained from it. The proposed method is completely unsupervised; we only need to input the preprocessed data obtained from pre-classification and training sample selection. Through adversarial training, the generator can better learn the relationship between the bitemporal multi-spectral image data and the corresponding labels. Finally, the well-trained generator can be applied to the raw bitemporal multi-spectral images to obtain the final change map (CM). The effectiveness and robustness of the proposed method were verified by experimental results on real high-resolution multi-spectral image data sets.

ACS Style

Yue Wu; Zhuangfei Bai; Qiguang Miao; Wenping Ma; Yuelei Yang; Maoguo Gong. A Classified Adversarial Network for Multi-Spectral Remote Sensing Image Change Detection. Remote Sensing 2020, 12, 2098.

AMA Style

Yue Wu, Zhuangfei Bai, Qiguang Miao, Wenping Ma, Yuelei Yang, Maoguo Gong. A Classified Adversarial Network for Multi-Spectral Remote Sensing Image Change Detection. Remote Sensing. 2020; 12 (13):2098.

Chicago/Turabian Style

Yue Wu; Zhuangfei Bai; Qiguang Miao; Wenping Ma; Yuelei Yang; Maoguo Gong. 2020. "A Classified Adversarial Network for Multi-Spectral Remote Sensing Image Change Detection." Remote Sensing 12, no. 13: 2098.

Journal article
Published: 09 June 2020 in Remote Sensing

Traditional change detection (CD) methods operate on the raw image domain or hand-crafted features, which are less robust to inconsistencies (e.g., brightness and noise distribution) between bitemporal satellite images. Recently, deep learning techniques have reported compelling performance in robust feature learning. However, generating accurate semantic supervision that reveals real change information in satellite images remains challenging, especially with manual annotation. To solve this problem, we propose a novel self-supervised representation learning method based on temporal prediction for remote sensing image CD. The main idea of our algorithm is to transform two satellite images into more consistent feature representations through a self-supervised mechanism, without semantic supervision or additional computation. From the transformed feature representations, a better difference image (DI) can be obtained, which reduces the error that the DI propagates to the final detection result. In the self-supervised mechanism, the network is asked to identify which temporal image sample patches come from, namely, temporal prediction. By designing the network for the temporal prediction task to imitate the discriminator of generative adversarial networks, distribution-aware feature representations are automatically captured and a result with strong robustness can be acquired. Experimental results on real remote sensing data sets show the effectiveness and superiority of our method, improving the detection precision by 0.94–35.49%.

ACS Style

Huihui Dong; Wenping Ma; Yue Wu; Jun Zhang; Licheng Jiao. Self-Supervised Representation Learning for Remote Sensing Image Change Detection Based on Temporal Prediction. Remote Sensing 2020, 12, 1868.

AMA Style

Huihui Dong, Wenping Ma, Yue Wu, Jun Zhang, Licheng Jiao. Self-Supervised Representation Learning for Remote Sensing Image Change Detection Based on Temporal Prediction. Remote Sensing. 2020; 12 (11):1868.

Chicago/Turabian Style

Huihui Dong; Wenping Ma; Yue Wu; Jun Zhang; Licheng Jiao. 2020. "Self-Supervised Representation Learning for Remote Sensing Image Change Detection Based on Temporal Prediction." Remote Sensing 12, no. 11: 1868.

Journal article
Published: 21 January 2020 in IEEE Access

The robustness and accuracy of the feature descriptor are two essential factors in image registration. Existing feature descriptors can extract important image features, but it may be difficult to find enough correct correspondences for sophisticated images, and these descriptors often require domain expertise and human intervention. The aim of this paper is to utilise Genetic Programming (GP) to automatically evolve feature descriptors that adapt to various images, including remote sensing images and optical images. A novel GP-based method (GPFD) is proposed to extract feature vectors and evolve image descriptors for image registration without supervision. The proposed method uses a set of simple arithmetic operators and first-order statistics to construct feature descriptors that reduce noise interference. The performance of the proposed method is evaluated against five methods: SIFT, SURF, RIFT, GLPM, and GP. The results demonstrate that the feature descriptors evolved by GPFD are robust to complex geometric transformations, illumination differences, and noise.

ACS Style

Yue Wu; Qingxiu Su; Wenping Ma; Shaodi Liu; Qiguang Miao. Learning Robust Feature Descriptor for Image Registration With Genetic Programming. IEEE Access 2020, 8, 39389-39402.

AMA Style

Yue Wu, Qingxiu Su, Wenping Ma, Shaodi Liu, Qiguang Miao. Learning Robust Feature Descriptor for Image Registration With Genetic Programming. IEEE Access. 2020; 8 (99):39389-39402.

Chicago/Turabian Style

Yue Wu; Qingxiu Su; Wenping Ma; Shaodi Liu; Qiguang Miao. 2020. "Learning Robust Feature Descriptor for Image Registration With Genetic Programming." IEEE Access 8, no. 99: 39389-39402.

Journal article
Published: 10 January 2020 in Remote Sensing

With the increasing resolution of optical remote sensing images, ship detection in such images has attracted a lot of research interest. Current ship detection methods usually adopt a coarse-to-fine detection strategy that first extracts low-level, hand-crafted features and then performs multi-step training. This strategy suffers from complex computation, false detections on land, and difficulty in detecting small ships. To address these problems, a sea-land separation algorithm that combines gradient information and gray information is applied to avoid false alarms on land, a feature pyramid network (FPN) is used to detect small ships, and a multi-scale detection strategy is proposed to achieve ship detection at different degrees of refinement. A feature extraction structure then fuses different hierarchical features to improve the representation ability of the features. Finally, we propose a new coarse-to-fine ship detection network (CF-SDN) that directly achieves an end-to-end mapping from image pixels to bounding boxes with confidences. A coarse-to-fine detection strategy is applied to improve the classification ability of the network. Experimental results on an optical remote sensing image set indicate that the proposed method outperforms other excellent detection algorithms and achieves good detection performance on images containing small-sized ships and dense ships near ports.

ACS Style

Yue Wu; Wenping Ma; Maoguo Gong; Zhuangfei Bai; Wei Zhao; Qiongqiong Guo; Xiaobo Chen; Qiguang Miao. A Coarse-to-Fine Network for Ship Detection in Optical Remote Sensing Images. Remote Sensing 2020, 12, 246.

AMA Style

Yue Wu, Wenping Ma, Maoguo Gong, Zhuangfei Bai, Wei Zhao, Qiongqiong Guo, Xiaobo Chen, Qiguang Miao. A Coarse-to-Fine Network for Ship Detection in Optical Remote Sensing Images. Remote Sensing. 2020; 12 (2):246.

Chicago/Turabian Style

Yue Wu; Wenping Ma; Maoguo Gong; Zhuangfei Bai; Wei Zhao; Qiongqiong Guo; Xiaobo Chen; Qiguang Miao. 2020. "A Coarse-to-Fine Network for Ship Detection in Optical Remote Sensing Images." Remote Sensing 12, no. 2: 246.

Journal article
Published: 02 January 2020 in Remote Sensing

Because hyperspectral images contain many unlabeled samples and the cost of manual labeling is high, this paper adopts a semi-supervised learning method to make full use of the unlabeled samples. Hyperspectral images also contain rich spectral information, and convolutional neural networks have great ability in representation learning. This paper proposes a novel semi-supervised framework for hyperspectral image classification (HSIc) that uses self-training to gradually assign highly confident pseudo labels to unlabeled samples by clustering, and employs spatial constraints to regulate the self-training process. The spatial constraints exploit the spatial consistency within the image to correct and re-assign mistakenly classified pseudo labels. Through self-training, the number of high-confidence sample points gradually increases, and they are added to the corresponding semantic classes, which gradually strengthens the semantic constraints. At the same time, the increase in high-confidence pseudo labels also contributes to regional consistency within the hyperspectral images, which highlights the role of the spatial constraints and improves the HSIc efficiency. Extensive experiments in HSIc demonstrate the effectiveness, robustness, and high accuracy of our approach.
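The confidence-gated pseudo-labeling step at the heart of such self-training can be sketched as follows; the threshold value and the -1 sentinel are illustrative choices, and the paper's spatial correction is omitted:

```python
import numpy as np

def assign_pseudo_labels(probs, threshold=0.9):
    """probs: (n, k) per-sample class probabilities (e.g., from clustering
    or a classifier).  A sample receives a pseudo label only when its most
    likely class is confident enough; -1 marks samples left unlabeled for
    later rounds of self-training."""
    labels = probs.argmax(axis=1)
    labels[probs.max(axis=1) < threshold] = -1
    return labels
```

As the classifier improves across rounds, more samples clear the threshold, which is the gradual growth of high-confidence points the abstract describes.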

ACS Style

Yue Wu; Guifeng Mu; Can Qin; Qiguang Miao; Wenping Ma; Xiangrong Zhang. Semi-Supervised Hyperspectral Image Classification via Spatial-Regulated Self-Training. Remote Sensing 2020, 12, 159.

AMA Style

Yue Wu, Guifeng Mu, Can Qin, Qiguang Miao, Wenping Ma, Xiangrong Zhang. Semi-Supervised Hyperspectral Image Classification via Spatial-Regulated Self-Training. Remote Sensing. 2020; 12 (1):159.

Chicago/Turabian Style

Yue Wu; Guifeng Mu; Can Qin; Qiguang Miao; Wenping Ma; Xiangrong Zhang. 2020. "Semi-Supervised Hyperspectral Image Classification via Spatial-Regulated Self-Training." Remote Sensing 12, no. 1: 159.

Journal article
Published: 01 June 2019 in Remote Sensing

Hyperspectral image (HSI) classification has recently been attracting growing attention. HSIs contain abundant spectral and spatial information, and how to fuse these two types of information remains an open problem. In this paper, we propose a Double-Branch Multi-Attention mechanism network (DBMA) for HSI classification. The network has two branches that extract spectral and spatial features respectively, which reduces the interference between the two types of features. Furthermore, in line with the different characteristics of the two branches, a different attention mechanism is applied in each branch, ensuring that more discriminative spectral and spatial features are extracted. The extracted features are then fused for classification. Extensive experimental results on three hyperspectral datasets show that the proposed method outperforms the state-of-the-art methods.
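A toy NumPy sketch of the double-branch idea: one branch re-weights spectral channels, the other re-weights spatial positions, and the two descriptors are concatenated for classification. The softmax-based attention used here is an assumed simplification of the paper's attention modules.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat):
    """Spectral branch: weight each channel by its global response."""
    w = softmax(feat.mean(axis=(0, 1)))                  # (C,)
    return feat * w

def spatial_attention(feat):
    """Spatial branch: weight each position by its channel-averaged response."""
    H, W, C = feat.shape
    w = softmax(feat.mean(axis=-1).reshape(-1)).reshape(H, W, 1)
    return feat * w

def dbma_fuse(patch):
    """Run both branches on an (H, W, C) patch and concatenate the
    pooled descriptors - the fusion step before classification."""
    spec = channel_attention(patch).mean(axis=(0, 1))    # (C,)
    spat = spatial_attention(patch).mean(axis=(0, 1))    # (C,)
    return np.concatenate([spec, spat])
```

Keeping the two branches separate until the final concatenation is what limits interference between spectral and spatial features.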

ACS Style

Wenping Ma; Qifan Yang; Yue Wu; Wei Zhao; Xiangrong Zhang. Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification. Remote Sensing 2019, 11, 1307.

AMA Style

Wenping Ma, Qifan Yang, Yue Wu, Wei Zhao, Xiangrong Zhang. Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification. Remote Sensing. 2019; 11 (11):1307.

Chicago/Turabian Style

Wenping Ma; Qifan Yang; Yue Wu; Wei Zhao; Xiangrong Zhang. 2019. "Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification." Remote Sensing 11, no. 11: 1307.

Journal article
Published: 27 March 2019 in Remote Sensing

Object detection in optical remote sensing images remains a challenging task because of the complexity of the images: the appearance of geospatial objects is diverse and complex, and their spatial structure is insufficiently understood. In this paper, we propose a novel multi-model decision fusion framework that takes contextual information and multi-region features into account to address these problems. First, a contextual information fusion sub-network fuses local contextual features with object-object relationship features, handling the diversity and complexity of geospatial object appearance. Second, a part-based multi-region fusion sub-network merges multiple parts of an object to capture more of its spatial structure, addressing the insufficient understanding of that structure. Finally, a decision fusion over all sub-networks improves the stability and robustness of the model and achieves better detection performance. Experimental results on a publicly available ten-class dataset show that the proposed method is effective for geospatial object detection.
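The final decision-fusion step can be illustrated with a short NumPy sketch that averages (optionally weighted) per-class scores from several sub-networks and picks the winning class; the weighting scheme is an assumption for illustration, not the paper's exact rule.

```python
import numpy as np

def decision_fusion(score_maps, weights=None):
    """Fuse per-class detection scores from several sub-networks by a
    (possibly weighted) average, then pick the winning class.
    score_maps: list of (C,) score vectors, one per sub-network."""
    fused = np.average(np.stack(score_maps), axis=0, weights=weights)
    return fused, int(fused.argmax())
```

Averaging over independently trained sub-networks (context, multi-region, baseline) is what gives the fused decision its stability: a single sub-network's outlier score is damped by the others.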

ACS Style

Wenping Ma; Qiongqiong Guo; Yue Wu; Wei Zhao; Xiangrong Zhang; Licheng Jiao. A Novel Multi-Model Decision Fusion Network for Object Detection in Remote Sensing Images. Remote Sensing 2019, 11, 737.

AMA Style

Wenping Ma, Qiongqiong Guo, Yue Wu, Wei Zhao, Xiangrong Zhang, Licheng Jiao. A Novel Multi-Model Decision Fusion Network for Object Detection in Remote Sensing Images. Remote Sensing. 2019; 11 (7):737.

Chicago/Turabian Style

Wenping Ma; Qiongqiong Guo; Yue Wu; Wei Zhao; Xiangrong Zhang; Licheng Jiao. 2019. "A Novel Multi-Model Decision Fusion Network for Object Detection in Remote Sensing Images." Remote Sensing 11, no. 7: 737.

Journal article
Published: 14 March 2019 in Remote Sensing

Homogeneous image change detection is well developed, and many methods have been proposed. Change detection between heterogeneous images, however, is challenging because the images lie in different domains, so they cannot be compared directly. In this paper, a method for change detection between heterogeneous synthetic aperture radar (SAR) and optical images is proposed, based on a pixel-level mapping method and a deep capsule network. The proposed mapping transforms an image from one feature space to another, after which the images can be compared directly in the shared transformed space. In the mapping process, some image blocks in unchanged areas are selected; these blocks make up only a small part of the image. Weighting parameters are then obtained by computing the Euclidean distances between the pixel to be transformed and the pixels in these blocks, and the Euclidean distance computed from the weighted coordinates is taken as the pixel gray value in the other feature space. The second image is transformed in the same manner. In the transformed feature space the images are compared and the two different images are fused. The two experimental images are fed into a deep capsule network, with the image fusion result serving as the training labels. Training samples are selected according to the ratio between the label of the center pixel and the labels of its neighboring pixels. The capsule network improves the detection result and suppresses noise. Experiments on remote sensing datasets show that the proposed method achieves a satisfactory performance.
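A simplified NumPy sketch of the pixel-level mapping idea: paired reference pixels drawn from unchanged blocks anchor the two domains, and distance-based weights transfer a pixel's gray value into the other domain. The inverse-distance weighting below is an assumed simplification of the paper's weighted-coordinate scheme.

```python
import numpy as np

def map_pixels(src, ref_src, ref_dst, eps=1e-6):
    """Map each pixel of `src` into the target domain using paired
    reference pixels (ref_src[i] <-> ref_dst[i]) taken from unchanged
    areas. A pixel close to ref_src[i] is mapped close to ref_dst[i]."""
    flat = src.reshape(-1, 1)                       # (N, 1)
    d = np.abs(flat - ref_src.reshape(1, -1))       # (N, R) distances
    w = 1.0 / (d + eps)                             # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)               # normalize per pixel
    return (w @ ref_dst).reshape(src.shape)
```

After both images are mapped this way, an ordinary difference image can be formed in the shared space and handed to the capsule network.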

ACS Style

Wenping Ma; Yunta Xiong; Yue Wu; Hui Yang; Xiangrong Zhang; Licheng Jiao. Change Detection in Remote Sensing Images Based on Image Mapping and a Deep Capsule Network. Remote Sensing 2019, 11, 626.

AMA Style

Wenping Ma, Yunta Xiong, Yue Wu, Hui Yang, Xiangrong Zhang, Licheng Jiao. Change Detection in Remote Sensing Images Based on Image Mapping and a Deep Capsule Network. Remote Sensing. 2019; 11 (6):626.

Chicago/Turabian Style

Wenping Ma; Yunta Xiong; Yue Wu; Hui Yang; Xiangrong Zhang; Licheng Jiao. 2019. "Change Detection in Remote Sensing Images Based on Image Mapping and a Deep Capsule Network." Remote Sensing 11, no. 6: 626.

Journal article
Published: 21 February 2019 in IEEE Transactions on Geoscience and Remote Sensing

Automatic remote sensing image registration has made great progress. However, developing a registration method that is both robust and accurate remains challenging because of noise and imaging differences between images. For such images, one-step registration methods struggle to guarantee accuracy and robustness at the same time. To address this issue, we adopt an effective coarse-to-fine strategy and develop a new two-step registration method based on deep and local features. The first step computes the approximate spatial relationship with a convolutional neural network; it makes full use of deep features for matching and produces stable results. In the second step, a matching strategy that considers this spatial relationship is applied to a local-feature-based method; this step uses features with more accurate locations to refine the result of the previous step. A variety of homologous and multimodal remote sensing images, including optical, synthetic aperture radar, and general map images, are used to evaluate the proposed method. The comparison experiments demonstrate that our method markedly increases both the number and the ratio of correct correspondences, and is highly robust and accurate.
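The coarse-to-fine strategy can be illustrated with a translation-only NumPy sketch: a coarse shift is estimated on downsampled images and then refined in a small window at full resolution. This stands in for the paper's CNN-plus-local-feature pipeline, which handles far more general transformations; the search radii are assumed values.

```python
import numpy as np

def best_shift(ref, mov, search):
    """Exhaustive translation search minimizing sum-of-squared differences."""
    best, shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = ((ref - shifted) ** 2).sum()
            if err < best:
                best, shift = err, (dy, dx)
    return shift

def coarse_to_fine_register(ref, mov, coarse_search=4, fine_search=1):
    # Step 1: coarse estimate on 2x-downsampled images (the "deep" step).
    cy, cx = best_shift(ref[::2, ::2], mov[::2, ::2], coarse_search)
    cy, cx = cy * 2, cx * 2
    mov_c = np.roll(np.roll(mov, cy, axis=0), cx, axis=1)
    # Step 2: local refinement at full resolution (the "local feature" step).
    fy, fx = best_shift(ref, mov_c, fine_search)
    return cy + fy, cx + fx
```

The point of the two steps is the same as in the paper: the coarse estimate shrinks the search space so the fine stage only needs to look in a small, reliable neighborhood.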

ACS Style

Wenping Ma; Jun Zhang; Yue Wu; Licheng Jiao; Hao Zhu; Wei Zhao. A Novel Two-Step Registration Method for Remote Sensing Images Based on Deep and Local Features. IEEE Transactions on Geoscience and Remote Sensing 2019, 57, 4834-4843.

AMA Style

Wenping Ma, Jun Zhang, Yue Wu, Licheng Jiao, Hao Zhu, Wei Zhao. A Novel Two-Step Registration Method for Remote Sensing Images Based on Deep and Local Features. IEEE Transactions on Geoscience and Remote Sensing. 2019; 57 (7):4834-4843.

Chicago/Turabian Style

Wenping Ma; Jun Zhang; Yue Wu; Licheng Jiao; Hao Zhu; Wei Zhao. 2019. "A Novel Two-Step Registration Method for Remote Sensing Images Based on Deep and Local Features." IEEE Transactions on Geoscience and Remote Sensing 57, no. 7: 4834-4843.

Journal article
Published: 12 January 2019 in Remote Sensing

In this paper, a novel change detection approach based on multi-grained cascade forest (gcForest) and multi-scale fusion for synthetic aperture radar (SAR) images is proposed. It detects the changed and unchanged areas of the images by using the well-trained gcForest. Most existing change detection methods need to select the appropriate size of the image block. However, a single size of image block only provides a part of the local information, and gcForest cannot achieve a good effect on the image representation learning ability. Therefore, the proposed approach chooses different sizes of image blocks as the input of gcForest, which can learn more image characteristics and reduce the influence of the local information of the image on the classification result as well. In addition, in order to improve the detection accuracy of those pixels whose gray value changes abruptly, the proposed approach combines gradient information of the difference image with the probability map obtained from the well-trained gcForest. Therefore, the image edge information can be enhanced and the accuracy of edge detection can be improved by extracting the image gradient information. Experiments on four data sets indicate that the proposed approach outperforms other state-of-the-art algorithms.
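The multi-scale fusion and gradient-enhancement step can be sketched as follows; the equal-weight averaging over scales and the blending weight `alpha` are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def multi_scale_fusion(prob_maps, grad, alpha=0.2):
    """Fuse change-probability maps produced by gcForest classifiers
    trained on different block sizes, then blend in the normalized
    gradient of the difference image to sharpen abrupt gray-value
    changes at edges.
    prob_maps: (S, H, W) stack, one map per block size.
    grad: (H, W) gradient magnitude of the difference image."""
    fused = np.mean(prob_maps, axis=0)                       # average over scales
    g = (grad - grad.min()) / (np.ptp(grad) + 1e-12)         # normalize to [0, 1]
    return np.clip((1 - alpha) * fused + alpha * g, 0.0, 1.0)
```

Averaging over block sizes dampens the bias of any single receptive field, while the gradient term restores edge detail that block-wise classification tends to blur.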

ACS Style

Wenping Ma; Hui Yang; Yue Wu; Yunta Xiong; Tao Hu; Licheng Jiao; Biao Hou. Change Detection Based on Multi-Grained Cascade Forest and Multi-Scale Fusion for SAR Images. Remote Sensing 2019, 11, 142.

AMA Style

Wenping Ma, Hui Yang, Yue Wu, Yunta Xiong, Tao Hu, Licheng Jiao, Biao Hou. Change Detection Based on Multi-Grained Cascade Forest and Multi-Scale Fusion for SAR Images. Remote Sensing. 2019; 11 (2):142.

Chicago/Turabian Style

Wenping Ma; Hui Yang; Yue Wu; Yunta Xiong; Tao Hu; Licheng Jiao; Biao Hou. 2019. "Change Detection Based on Multi-Grained Cascade Forest and Multi-Scale Fusion for SAR Images." Remote Sensing 11, no. 2: 142.

Journal article
Published: 24 December 2018 in IEEE Access

In this paper, we present a novel CNN-based model for change detection in synthetic aperture radar (SAR) images. Since the change detection task takes image pairs as input, we first explore multiple neural network architectures specifically adapted to the task: there are several ways in which patch pairs can be processed by a network, and information sharing can efficiently learn the semantic difference between changed and unchanged pixels. We then design a "Siamese samples" convolutional neural network, which treats the patches of a pair as indiscriminate samples for descriptor extraction and then joins their outputs. During training, the two patch features are extracted by the same network instead of by separate sub-networks, while a joining neuron measures the distance between the two feature vectors. Because "pseudo-labels" with high accuracy are difficult to obtain, we modify the joint classifier based on fuzzy c-means (JFCM) into a joint-similarity classifier (JSC) for preclassification to obtain coarse pseudo labels, and discard sample selection. The preclassification labels, though of low accuracy, are used to fine-tune the network, from which a significantly improved change detection result can be obtained. The proposed architecture provides a better trade-off between speed and accuracy than its counterparts (Siamese, pseudo-Siamese, and 2-channel networks [1]). Experiments on several real SAR data sets demonstrate the state-of-the-art performance of the proposed method compared with advanced change detection methods.
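The "Siamese samples" idea, reduced to a toy NumPy sketch: both patches pass through the same shared-weight mapping, and a joining neuron scores change as the distance between the two descriptors. The single tanh projection stands in for the paper's convolutional layers.

```python
import numpy as np

def shared_descriptor(patch, W):
    """Both patches of a pair pass through the SAME projection
    (shared weights) - the 'Siamese samples' idea, here reduced to
    one tanh layer for illustration."""
    return np.tanh(W @ patch.ravel())

def change_score(patch_a, patch_b, W):
    """Joining neuron: Euclidean distance between the two shared
    descriptors; a large distance suggests a changed pixel."""
    return np.linalg.norm(shared_descriptor(patch_a, W)
                          - shared_descriptor(patch_b, W))
```

Because a single set of weights `W` serves both inputs, the two patches are literally indiscriminate samples to the network, which is what distinguishes this design from a pseudo-Siamese model with separate sub-networks.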

ACS Style

Huihui Dong; Wenping Ma; Yue Wu; Maoguo Gong; Licheng Jiao. Local Descriptor Learning for Change Detection in Synthetic Aperture Radar Images via Convolutional Neural Networks. IEEE Access 2018, 7, 15389-15403.

AMA Style

Huihui Dong, Wenping Ma, Yue Wu, Maoguo Gong, Licheng Jiao. Local Descriptor Learning for Change Detection in Synthetic Aperture Radar Images via Convolutional Neural Networks. IEEE Access. 2018; 7 (99):15389-15403.

Chicago/Turabian Style

Huihui Dong; Wenping Ma; Yue Wu; Maoguo Gong; Licheng Jiao. 2018. "Local Descriptor Learning for Change Detection in Synthetic Aperture Radar Images via Convolutional Neural Networks." IEEE Access 7, no. 99: 15389-15403.