Exposure to air pollution has been suggested to be associated with an increased risk of women's health disorders. However, it remains unknown to what extent changes in ambient air pollution affect gynecological cancer. In our case–control study, a logistic regression model was combined with restricted cubic splines to examine the association between short-term exposure to air pollution and gynecological cancer events, using the clinical data of 35,989 women in Beijing from December 2008 to December 2017. We assessed each woman's exposure to air pollutants using the monitor located nearest to her residence and workplace, adjusting for age, occupation, ambient temperature, and ambient humidity. Adjusted odds ratios (ORs) were examined to evaluate gynecological cancer risk in six time windows (Phase 1–Phase 6) of exposure to air pollutants (PM2.5, CO, O3, and SO2), and the highest ORs were found in Phase 4 (240 days). Within Phase 4, higher adjusted ORs were associated with increased concentrations of each pollutant. For instance, the adjusted OR of gynecological cancer risk for a 1.0 mg m−3 increase in CO exposure was 1.010 (95% CI: 0.881–1.139) below 0.8 mg m−3, 1.032 (95% CI: 0.871–1.194) at 0.8–1.0 mg m−3, 1.059 (95% CI: 0.973–1.145) at 1.0–1.4 mg m−3, and 1.120 (95% CI: 0.993–1.246) above 1.4 mg m−3. The ORs calculated at different air pollution levels enabled us to identify a nonlinear association between women's exposure to these pollutants and gynecological cancer risk. This study suggests that the gynecological risks associated with air pollution should be considered in public health preventive measures and policymaking to minimize the harmful effects of air pollution.
Qiwei Yu; Liqiang Zhang; Kun Hou; Jingwen Li; Suhong Liu; Ke Huang; Yang Cheng. Relationship between Air Pollutant Exposure and Gynecologic Cancer Risk. International Journal of Environmental Research and Public Health 2021, 18(10), 5353.
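The adjusted ORs and 95% CIs reported above follow from exponentiating logistic-regression coefficients; a minimal sketch of that conversion (the coefficient and standard error below are illustrative, not the study's fitted values):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient
    and its standard error: OR = exp(beta), CI = exp(beta +/- z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient for a 1.0 mg/m^3 increase in CO exposure
or_, lo, hi = odds_ratio_ci(beta=0.113, se=0.058)
```

A coefficient of zero maps to an OR of exactly 1.0 (no association), which is why CIs spanning 1 are read as statistically inconclusive.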
Land-use mapping (LUM) using high spatial resolution remote sensing images (HSR-RSIs) is a challenging and crucial technology. However, due to the characteristics of HSR-RSIs, such as varying image acquisition conditions and massive, detailed information, performing LUM faces unique scientific challenges. With the emergence of new deep learning (DL) algorithms in recent years, DL-based LUM methods have achieved major breakthroughs, which offers novel opportunities for the development of LUM for HSR-RSIs. This paper aims to provide a thorough review of recent achievements in this field. Existing high spatial resolution datasets for semantic segmentation and single object segmentation research are presented first. Next, we introduce several basic DL approaches that are frequently adopted for LUM. After briefly introducing traditional LUM methods, we review DL-based LUM methods comprehensively, highlighting the contributions of researchers in the field of LUM for HSR-RSIs. These DL-based approaches are summarized according to two criteria: whether the learning is supervised, semi-supervised, or unsupervised, and whether the method is pixel-based or object-based. We then briefly review the fundamentals and development of semantic segmentation and single object segmentation. Finally, quantitative results on the ISPRS Vaihingen and ISPRS Potsdam datasets are given for several representative models such as FCN and U-Net, followed by a comparison and discussion of the results.
Ning Zang; Yun Cao; Yuebin Wang; Bo Huang; Liqiang Zhang; P. Takis Mathiopoulos. Land-Use Mapping for High-Spatial Resolution Remote Sensing Image Via Deep Learning: A Review. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, 14, 5372-5391.
Poverty alleviation is one of the greatest challenges faced by low-income and middle-income countries. China, which had the largest rural poverty-stricken population, has made tremendous efforts in alleviating poverty, especially since the implementation of the targeted poverty alleviation (TPA) policy in 2014. Yet the success of the policy remains unknown, because the official statistics are not available in a timely manner and are in some cases questionable. This study combines deep learning with multiple satellite datasets to estimate county-level economic development from 2008 to 2019 and assess the effect of the TPA policy on 592 national poverty-stricken counties (NPCs) at the country, provincial, and county levels. Per capita gross domestic product (GDP) is used to measure the affluence level. From 2014 through 2019, the 592 NPCs experienced an average growth rate of per capita GDP of 7.6%±0.4%, higher than the average growth rate of 310 adjacent non-NPC counties (7.3%±0.4%) and of the whole country (6.3%). This indicates an overall success of the TPA policy so far. We also identify 42 counties with weak recent growth and show that the average affluence level of the NPCs in 2019 was still much lower than the national or provincial averages. The inexpensive, timely, and accurate method proposed here can be applied to other low-income and middle-income countries for affluence assessment.
Liqiang Zhang; Yanxiao Jiang; Yang Li; Alicia J Zhou; Jing Cao; Suhong Liu; Yuebin Wang; Zhiqiang Xiao. Assessment of county-level poverty alleviation progress by deep learning and satellite observations. 2021, 1.
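The reported per capita GDP growth rates can be read as compound average annual growth rates; a minimal sketch with hypothetical county figures (not data from the study):

```python
def avg_annual_growth(start_gdp, end_gdp, years):
    """Compound average annual growth rate of per capita GDP."""
    return (end_gdp / start_gdp) ** (1.0 / years) - 1.0

# Hypothetical county: per capita GDP rises from 20,000 to 31,000 yuan over 6 years
rate = avg_annual_growth(20000, 31000, 6)  # roughly 7.6% per year
```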
Metro subway systems with underground tunnels form the backbone of urban transportation; therefore, accurate monitoring and maintenance of such subway systems are essential for the hassle-free daily commute of billions of people. Though 3-D models of tunnels are widely used for deformation monitoring, existing model-based tunnel monitoring systems rely on coarse geometric models and hence fail to capture complete tunnel health information. We present a two-stage algorithm to create high-fidelity geometric models of tunnel lining from Terrestrial Laser Scanning (TLS) point clouds. Tunnel geometry, defined at the detailed block entity level, is constructed through a data-driven block segmentation algorithm and a model-driven assembly technique. In our approach, the 3-D tunnel block segmentation problem is translated into a bolt and lining joint recognition problem on 2-D images unfolded from the 3-D scans. The segmented 3-D blocks are matched with a set of predefined 3-D templates from a primitive library via a constrained total least squares matching method, and the matched 3-D templates are assembled to create the final watertight tunnel model. The proposed tunnel modeling method has been comprehensively evaluated on the Changzhou, Nanjing, and Wuhan tunnel datasets in terms of outliers, missing data, point density, topological representation, robustness, and geometric accuracy. The experiments on the Nanjing and Changzhou metro tunnels show that the geometric model fitting incurs an error of only 7 mm, almost consistent with the 6 mm mean point density of these two datasets. The experimental results validate the advantages and potential of the proposed tunnel modeling method.
Zhen Cao; Dong Chen; Jiju Peethambaran; Zhenxin Zhang; Shaobo Xia; Liqiang Zhang. Tunnel Reconstruction With Block Level Precision by Combining Data-Driven Segmentation and Model-Driven Assembly. IEEE Transactions on Geoscience and Remote Sensing 2021, PP, 1-20.
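The translation of 3-D block segmentation into 2-D recognition rests on unrolling the roughly cylindrical tunnel lining into a flat image; a minimal sketch of such an unfolding, assuming the tunnel axis lies along x and the lining radius is known (the paper's actual projection may differ):

```python
import math

def unfold_point(x, y, z, radius):
    """Map a 3-D lining point to 2-D 'unrolled' coordinates:
    u runs along the tunnel axis, v is arc length around the ring."""
    theta = math.atan2(y, z)      # angular position on the lining ring
    return x, radius * theta      # (axial coordinate, arc-length coordinate)

# A crown point directly above the axis unrolls to v = 0
u, v = unfold_point(x=5.0, y=0.0, z=2.75, radius=2.75)
```

Bolts and lining joints then appear as repeating patterns in the (u, v) image, where 2-D recognition is far cheaper than working on the raw 3-D scan.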
To better understand scene images in the field of remote sensing, multi-label annotation of scene images is necessary. Moreover, to enhance the performance of deep learning models on semantic scene understanding tasks, it is vital to train them on large-scale annotated data. However, most existing datasets are annotated with a single label, which cannot describe complex remote sensing images well because scene images may contain multiple land cover classes. Few multi-label high spatial resolution remote sensing datasets have been developed to train deep learning models for multi-label tasks such as scene classification and image retrieval. To address this issue, in this paper we construct a multi-label high spatial resolution remote sensing dataset named MLRSNet for semantic scene understanding with deep learning from the overhead perspective. It is composed of high-resolution optical satellite and aerial images. MLRSNet contains a total of 109,161 samples within 46 scene categories, and each image has at least one of 60 predefined labels. We have designed visual recognition tasks, including multi-label image classification and image retrieval, in which a wide variety of deep learning approaches are evaluated on MLRSNet. The experimental results demonstrate that MLRSNet is a significant benchmark for future research and complements current widely used datasets such as ImageNet, filling gaps in multi-label image research. Furthermore, we will continue to expand MLRSNet. MLRSNet and all related materials have been made publicly available at https://data.mendeley.com/datasets/7j9bv9vwsx/1 and https://github.com/cugbrs/MLRSNet.git.
Xiaoman Qi; Panpan Zhu; Yuebin Wang; Liqiang Zhang; Junhuan Peng; Mengfan Wu; Jialong Chen; Xudong Zhao; Ning Zang; P. Takis Mathiopoulos. MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 169, 337-350.
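Multi-label predictions on a dataset like MLRSNet are commonly scored with example-based metrics; a minimal sketch of per-sample F1 over label sets (the label names are illustrative, not MLRSNet's 60 predefined labels):

```python
def sample_f1(true_labels, pred_labels):
    """Example-based F1 for one multi-label sample: harmonic mean of
    precision and recall over the predicted and ground-truth label sets."""
    tp = len(set(true_labels) & set(pred_labels))
    if tp == 0:
        return 0.0
    precision = tp / len(pred_labels)
    recall = tp / len(true_labels)
    return 2 * precision * recall / (precision + recall)

score = sample_f1({"airport", "runway", "cars"}, {"airport", "cars", "trees"})
```

Averaging this score over all test images gives the example-based F1 often reported for multi-label classification and retrieval benchmarks.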
Subspace learning (SL) plays an essential role in hyperspectral image (HSI) classification since it provides an effective way to reduce the redundant information in the image pixels of HSIs. Previous SL works aim to improve the accuracy of HSI recognition: given a large number of labeled samples, related methods can train the parameters of the proposed solutions to obtain better representations of HSI pixels. However, in real applications the data instances may not be sufficient to learn a precise model for HSI classification. Moreover, it is well known that labeling HSI images takes much time, labor, and human expertise. To avoid these problems, a novel SL method with a probabilistic assumption, called SL with the conditional random field (SLCRF), is developed. In SLCRF, a 3-D convolutional autoencoder (3DCAE) is first introduced to remove the redundant information in HSI pixels. In addition, relationships among adjacent pixels are constructed using spectral-spatial information. The conditional random field (CRF) framework can then be constructed and embedded into the HSI SL procedure in a semisupervised manner. Through the linearized alternating direction method with adaptive penalty (LADMAP), the objective function of SLCRF is optimized using a defined iterative algorithm. The proposed method is comprehensively evaluated on challenging public HSI datasets and achieves state-of-the-art performance on them.
Yun Cao; Jie Mei; Yuebin Wang; Liqiang Zhang; Junhuan Peng; Bing Zhang; Lihua Li; Yibo Zheng. SLCRF: Subspace Learning With Conditional Random Field for Hyperspectral Image Classification. IEEE Transactions on Geoscience and Remote Sensing 2020, 59(5), 4203-4217.
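The CRF framework combines per-pixel (unary) costs with pairwise terms over adjacent pixels; a minimal sketch of such an energy with a simple Potts pairwise penalty (a toy simplification, not SLCRF's spectral-spatial formulation):

```python
def crf_energy(labels, unary, pairs, w=1.0):
    """Energy of a labeling under a simple pairwise CRF: the sum of
    unary costs plus a Potts penalty w for each adjacent pixel pair
    whose labels disagree. Lower energy = more plausible labeling."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(w for i, j in pairs if labels[i] != labels[j])
    return e

# 3 pixels in a row, 2 classes; unary[i][c] = cost of class c at pixel i
unary = [[0.1, 0.9], [0.8, 0.2], [0.1, 0.9]]
pairs = [(0, 1), (1, 2)]
energy = crf_energy([0, 1, 0], unary, pairs)
```

The pairwise term is what encodes the spatial smoothness prior: flipping the middle pixel to class 1 lowers its unary cost but pays two disagreement penalties.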
With the increasing needs of and rapid developments in digital/smart cities, an effective method for managing the many complex, massive buildings involved in creating a three-dimensional (3D) photorealistic city has become crucial for high-quality “digital/smart city” construction. To this end, this paper proposes a data model called “SCSG-OSM”, which combines spatial constructive solid geometry (SCSG) and the object-based spatial model (OSM). SCSG-OSM assumes an object can be represented by four element types: point, line, face, and body. The SCSG applies an extension of the dimensionally extended nine-intersection model (DE-9IM) to describe spatial topological relationships. The OSM consists of nine definitions that define the points, lines, faces, and bodies of an object, and applies point sets instead of node sets and face sets instead of surface sets to model buildings. Two datasets, depicting Denver, Colorado, USA, and Zurich, Switzerland, are employed to assess the average storage space and time consumption when modelling 3D buildings with the proposed data model. The experimental results demonstrate that the proposed SCSG-OSM saves storage space and reduces computational time compared with the CSG-BR, TIN, POLYGON, Patch, SSM, and CityGML models. Therefore, the method proposed in this paper is a simple and effective approach to the 3D photorealistic visualization of a large city.
Guoqing Zhou; Tao Yue; Yu Huang; Bo Song; Kunshan Chen; Hongchang He; Jinsheng Ni; Liqiang Zhang; Qiuyu Pan. Study of an SCSG-OSM for the Creation of an Urban Three-Dimensional Building. IEEE Access 2020, 8, 126266-126283.
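A minimal sketch of the point/line/face/body element hierarchy, with a body's geometry reached through its faces' point sets rather than node and surface sets; the class names and structure here are illustrative, not the paper's nine formal definitions:

```python
from dataclasses import dataclass, field

@dataclass
class Face:
    points: frozenset          # IDs of the points bounding this face

@dataclass
class Body:
    faces: list = field(default_factory=list)

    def point_set(self):
        """All point IDs of the body, collected via its faces' point sets;
        shared corner points are stored once, which saves storage space."""
        ids = set()
        for f in self.faces:
            ids |= f.points
        return ids

# A box-like body whose top and side faces share two corner points
top = Face(frozenset({1, 2, 3, 4}))
side = Face(frozenset({1, 2, 5, 6}))
box = Body([top, side])
```

Because adjacent faces reference the same point IDs instead of duplicating coordinates, a large city model stores each shared vertex once.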
Classification of airborne laser scanning (ALS) point clouds is needed in digital cities and 3-D modeling. To efficiently recognize objects in ALS point clouds, we propose a novel hierarchical aggregated deep feature representation method, which adequately exploits the spatial association of multilevel structures and the discrimination of deep features. In our method, a 3-D deep learning model is constructed to represent the discriminative feature of each point cluster in a hierarchical structure by decreasing the within-class distance and increasing the between-class distance. Our method aggregates the discriminative deep features at different levels into a hierarchical aggregated deep feature that considers both the spatial hierarchy and feature distinctiveness. Lastly, we build a multichannel 1-D convolutional neural network to classify the unknown points. Our tests demonstrate that the proposed method enhances point cloud classification results, and comparisons with seven state-of-the-art methods verify its superior performance.
Zhenxin Zhang; Lan Sun; Ruofei Zhong; Dong Chen; Liqiang Zhang; Xiaojuan Li; Qiang Wang; Siyun Chen. Hierarchical Aggregated Deep Features for ALS Point Cloud Classification. IEEE Transactions on Geoscience and Remote Sensing 2020, 59(2), 1686-1699.
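The within-class and between-class distances that the feature learning manipulates can be computed directly; a minimal sketch on 2-D toy features (real deep features are much higher-dimensional, and the class names are illustrative):

```python
import math

def centroid(points):
    """Componentwise mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(len(points[0]))]

def within_class_distance(points):
    """Mean distance of class members to their own centroid."""
    c = centroid(points)
    return sum(math.dist(p, c) for p in points) / len(points)

def between_class_distance(class_a, class_b):
    """Distance between the centroids of two classes."""
    return math.dist(centroid(class_a), centroid(class_b))

# Toy deep features for two ALS object classes
roofs = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2)]
trees = [(3.0, 3.1), (3.2, 2.9)]
```

A discriminative feature space is one where the first quantity is small and the second is large, which is exactly what the training objective pushes toward.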
Accurate and efficient extraction of road markings plays an important role in road transportation engineering, automotive vision, and automated driving. In this article, we propose a dense feature pyramid network (DFPN)-based deep learning model that considers the particularity and complexity of road markings. The DFPN concatenates its shallow feature channels with deep feature channels so that the shallow, high-resolution feature maps with abundant image detail can utilize the deep features; thus, the DFPN can learn hierarchical, detailed deep features. The designed deep learning model is trained end to end for road marking instance extraction from mobile laser scanning (MLS) point clouds. We also introduce the focal loss function into the optimization of the road marking segmentation part, to pay more attention to hard-classified samples against a large extent of background. In the experiments, our method achieves better results than state-of-the-art methods on instance segmentation of road markings, which illustrates the advantage of the proposed method.
Siyun Chen; Zhenxin Zhang; Ruofei Zhong; Liqiang Zhang; Hao Ma; Lirong Liu. A Dense Feature Pyramid Network-Based Deep Learning Model for Road Marking Instance Segmentation Using MLS Point Clouds. IEEE Transactions on Geoscience and Remote Sensing 2020, 59(1), 784-800.
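The focal loss referenced above down-weights well-classified samples via a modulating factor (1 − p_t)^γ, so that rare road-marking points are not swamped by easy background; a minimal sketch of its binary form with the common α-balancing (default α and γ values are the usual ones, not necessarily the paper's):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one sample: p is the predicted probability
    of class 1 and y the ground-truth label. The (1 - pt)^gamma factor
    shrinks the loss of easy, confident predictions toward zero."""
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    a = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
    return -a * (1.0 - pt) ** gamma * math.log(pt)

easy = focal_loss(0.95, 1)   # confident correct prediction -> tiny loss
hard = focal_loss(0.30, 1)   # poorly classified marking -> larger loss
```

With γ = 0 and α = 0.5 this reduces (up to scale) to ordinary cross-entropy; raising γ focuses the gradient ever harder on misclassified points.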
Training with a small number of labeled samples can save considerable manpower and material resources, especially as the number of high spatial resolution remote sensing images (HSR-RSIs) increases considerably. However, many deep models face the problem of overfitting when trained on few labeled samples, which can degrade HSR-RSI retrieval accuracy. Aiming at more accurate HSR-RSI retrieval with small training samples, we develop a deep metric learning approach with generative adversarial network regularization (DML-GANR). DML-GANR starts from a high-level feature extraction (HFE) module, comprising convolutional layers and fully connected (FC) layers, to extract high-level features. Each FC layer is constructed by deep metric learning (DML) to maximize interclass variations and minimize intraclass variations. A generative adversarial network (GAN) is adopted to mitigate overfitting and validate the quality of the extracted high-level features. DML-GANR is optimized through a customized approach to obtain the optimal parameters. The experimental results on three datasets demonstrate the superior performance of DML-GANR over state-of-the-art techniques in HSR-RSI retrieval.
Yun Cao; Yuebin Wang; Junhuan Peng; Liqiang Zhang; Linlin Xu; Kai Yan; Lihua Li. DML-GANR: Deep Metric Learning With Generative Adversarial Network Regularization for High Spatial Resolution Remote Sensing Image Retrieval. IEEE Transactions on Geoscience and Remote Sensing 2020, 58(12), 8888-8904.
Urban road extraction has wide applications in public transportation systems and unmanned vehicle navigation. High-resolution remote sensing images contain background clutter, and roads have large appearance differences and complex connectivity, which makes road extraction a very challenging task. In this article, we propose a novel end-to-end deep learning model for road area extraction from remote sensing images. Road features are learned at three levels, which removes the distraction of the background and enhances feature representation. A direction-aware attention block is introduced into the deep learning model to preserve road topologies. We compare our method with other related methods on public remote sensing datasets. The experimental results show the superiority of our method in terms of road extraction and connectivity preservation.
Xingang Li; Yuebin Wang; Liqiang Zhang; Suhong Liu; Jie Mei; Yang Li. Topology-Enhanced Urban Road Extraction via a Geographic Feature-Enhanced Network. IEEE Transactions on Geoscience and Remote Sensing 2020, 58(12), 8819-8830.
Due to the large intraclass variances and complicated object distribution, recognizing objects with complex appearances and arbitrary orientations has been an active research topic and a challenging task in remote sensing fields. In this article, we formulate object recognition as a high-level feature-learning problem, and a novel supervised method is proposed to learn high-level feature representations from high-resolution remote sensing images for object recognition. Our method simultaneously and coherently achieves high-level feature learning and classifier training, which improves the recognition performance. Two constraints that enforce the label consistencies of group images and label consistencies of single images are introduced in a deep learning framework to obtain the high-level feature space. The high-level feature and a multiclass linear classifier are finally learned by an effective optimization algorithm. Experimental results demonstrate the superior performance of the proposed method over many state-of-the-art techniques in object recognition.
Yuebin Wang; Xun Zhou; Honglei Yang; Liqiang Zhang; Suhong Liu; Faqiang Wang; Xingang Li; P. Takis Mathiopoulos. Supervised High-Level Feature Learning With Label Consistencies for Object Recognition. IEEE Transactions on Geoscience and Remote Sensing 2020, 58(7), 4501-4516.
Multilabel remote sensing (RS) image annotation is a challenging and time-consuming task that requires a considerable amount of expert knowledge. Most existing RS image annotation methods are based on handcrafted features and require multistage processes that are not sufficiently efficient and effective. An RS image can be assigned with a single label at the scene level to depict the overall understanding of the scene and with multiple labels at the object level to represent the major components. The multiple labels can be used as supervised information for annotation, whereas the single label can be used as additional information to exploit the scene-level similarity relationships. By exploiting the dual-level semantic concepts, we propose an end-to-end deep learning framework for object-level multilabel annotation of RS images. The proposed framework consists of a shared convolutional neural network for discriminative feature learning, a classification branch for multilabel annotation and an embedding branch for preserving the scene-level similarity relationships. In the classification branch, an attention mechanism is introduced to generate attention-aware features, and skip-layer connections are incorporated to combine information from multiple layers. The philosophy of the embedding branch is that images with the same scene-level semantic concepts should have similar visual representations. The proposed method adopts the binary cross-entropy loss for classification and the triplet loss for image embedding learning. The evaluations on three multilabel RS image data sets demonstrate the effectiveness and superiority of the proposed method in comparison with the state-of-the-art methods.
Panpan Zhu; Yumin Tan; Liqiang Zhang; Yuebin Wang; Jie Mei; Hao Liu; Mengfan Wu. Deep Learning for Multilabel Remote Sensing Image Annotation With Dual-Level Semantic Concepts. IEEE Transactions on Geoscience and Remote Sensing 2020, 58(6), 4047-4060.
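The triplet loss used for the embedding branch enforces that an anchor image sits closer to a same-scene (positive) image than to a different-scene (negative) one by a margin; a minimal sketch on 2-D toy embeddings (real embeddings are high-dimensional, and the margin value is illustrative):

```python
import math

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge form of the triplet loss: zero once the negative is at
    least `margin` farther from the anchor than the positive is."""
    d_pos = math.dist(anchor, positive)
    d_neg = math.dist(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

a, p, n = (0.0, 0.0), (0.1, 0.0), (1.0, 1.0)
loss = triplet_loss(a, p, n)   # negative already far enough away -> loss is 0
```

During training this is summed over mined triplets, pulling images with the same scene-level concept together, which is precisely the "similar visual representations" philosophy of the embedding branch.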
Li Liu; Yuebin Wang; Junhuan Peng; Liqiang Zhang; Bing Zhang; Yun Cao. Latent Relationship Guided Stacked Sparse Autoencoder for Hyperspectral Imagery Classification. IEEE Transactions on Geoscience and Remote Sensing 2020, 58(5), 3711-3725.
Image hashing has been widely used in image retrieval tasks. Many existing methods generate hashing codes from image feature representations, but they rarely consider richer information, such as the clustering information contained in the image set and the uncertain relationships between images and tags, simultaneously. In this paper, we develop Weighted Generative Adversarial Networks (WeGAN) to transfer the clustering information of images into the construction of hashing codes. WeGAN consists of three modules: 1) a hashing learning process that transfers knowledge of the image set into the hashing codes of single images; 2) a module that uses the hashing codes to generate image content, tag representations, and their joint information, which reflects the correlation between an image and its tags; and 3) a discriminator that distinguishes the generated data from the original source, from which three loss functions are formulated. Different weights are assigned to these loss functions to deal with the uncertainties between images and tags. By introducing the image set into the hashing of images with different tags, WeGAN naturally provides clustering information, which is useful for the self-supervision of multi-tag image hashing. The generated hashing codes can dynamically handle the uncertain relationships between images and tags. Experiments on three challenging datasets show that WeGAN outperforms state-of-the-art methods.
Yuebin Wang; Liqiang Zhang; Feiping Nie; Xingang Li; Zhijun Chen; Faqiang Wang. WeGAN: Deep Image Hashing With Weighted Generative Adversarial Networks. IEEE Transactions on Multimedia 2019, 22(6), 1458-1469.
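Hashing-based retrieval as described above reduces to binarizing features and comparing codes by Hamming distance; a minimal sketch with the common sign-based binarization (WeGAN's learned codes are produced differently, so this is only the retrieval-side mechanics):

```python
def hash_code(features):
    """Binarize a real-valued feature vector into a hash code (sign rule)."""
    return tuple(1 if f > 0 else 0 for f in features)

def hamming(code_a, code_b):
    """Number of differing bits; small distance suggests similar images."""
    return sum(a != b for a, b in zip(code_a, code_b))

img_a = hash_code([0.7, -0.2, 0.9, -0.5])
img_b = hash_code([0.6, -0.1, 0.8, 0.4])
d = hamming(img_a, img_b)   # these two codes differ in exactly one bit
```

Hamming distance on short binary codes is what makes hashing retrieval fast: it is a bitwise XOR and popcount rather than a float-vector comparison.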
Liqiang Zhang; Weiwei Liu; Kun Hou; Jintai Lin; Chenghu Zhou; Xiaohua Tong; Ziye Wang; Yuebin Wang; Yanxiao Jiang; Ziwei Wang; Yibo Zheng; Yonglian Lan; Suhong Liu; Ruijing Ni; Mengyao Liu; Panpan Zhu. Air pollution-induced missed abortion risk for pregnancies. Nature Sustainability 2019, 2, 1011–1017.
Clinical experience suggests increased incidences of neonatal jaundice when air quality worsens, yet no studies have quantified this relationship. Here we report investigations in 25,782 newborns showing an increase in newborns’ bilirubin levels, an indicator of neonatal jaundice risk, of 0.076 (95% CI: 0.027–0.125), 0.029 (0.014–0.044) and 0.009 (95% CI: 0.002–0.016) mg/dL per μg/m3 for PM2.5 exposure in the concentration ranges of 10–35, 35–75 and 75–200 μg/m3, respectively. The response is 0.094 (0.077–0.111) and 0.161 (0.07–0.252) mg/dL per μg/m3 for SO2 exposure at 10–15 and above 15 μg/m3, respectively, and 0.351 (0.314–0.388) mg/dL per mg/m3 for CO exposure. Bilirubin levels increase linearly with exposure time between 0 and 48 h. A positive relationship between maternal exposure and newborn bilirubin levels is also quantified. The jaundice–pollution relationship is not affected by top-of-atmosphere incident solar irradiance or atmospheric visibility. Improving air quality may therefore be key to lowering neonatal jaundice risk. Air pollution has become a major health risk in China. Here Zhang et al. report that maternal and neonatal exposure to particulate matter increases the risk of neonatal jaundice, based on a study of 25,782 newborns born in China between 2014 and 2017.
Liqiang Zhang; Weiwei Liu; Kun Hou; Jintai Lin; Changqing Song; Chenghu Zhou; Bo Huang; Xiaohua Tong; Jinfeng Wang; William Rhine; Ying Jiao; Ziwei Wang; Ruijing Ni; Mengyao Liu; Ziye Wang; Yuebin Wang; Xingang Li; Suhong Liu; Yanhong Wang. Air pollution exposure associates with increased risk of neonatal jaundice. Nature Communications 2019, 10, 1–9.
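The PM2.5 dose-response figures quoted in the abstract above are piecewise slopes by concentration range. A minimal sketch of how such a piecewise response could be evaluated, assuming (for illustration only; this is not necessarily the paper's exact model) that effects accumulate linearly within each range above the 10 μg/m3 lower bound:

```python
# Central estimates of bilirubin increase per ug/m3 PM2.5, by range
# (slopes taken from the abstract; mg/dL per ug/m3).
PM25_SLOPES = [
    (10, 35, 0.076),
    (35, 75, 0.029),
    (75, 200, 0.009),
]

def bilirubin_increase(pm25):
    """Cumulative bilirubin increase (mg/dL) for a given PM2.5 level,
    summing each range's slope over the portion of the range exceeded."""
    total = 0.0
    for low, high, slope in PM25_SLOPES:
        if pm25 > low:
            total += slope * (min(pm25, high) - low)
    return total

bilirubin_increase(50)  # 25 * 0.076 + 15 * 0.029 = 2.335 mg/dL
```

The declining slopes across ranges are what make the exposure-response relationship nonlinear: each additional μg/m3 matters less at already-high concentrations.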
Standard 3D convolution operations usually require larger amounts of memory and computation than 2D convolution operations, which increases the difficulty of developing deep neural networks for many 3D vision tasks. In this paper, we investigate the possibility of applying depthwise separable convolutions in the 3D scenario and introduce the 3D depthwise convolution. A 3D depthwise convolution splits a single standard 3D convolution into two separate steps, which drastically reduces the number of parameters in 3D convolutions, by more than one order of magnitude. We experiment with 3D depthwise convolution on popular CNN architectures and also compare it with a similar structure called pseudo-3D convolution. The results demonstrate that, with 3D depthwise convolutions, 3D vision tasks such as classification and reconstruction can be carried out with more lightweight neural networks while still delivering comparable performance.
Rongtian Ye; Fangyu Liu; Liqiang Zhang. 3D Depthwise Convolution: Reducing Model Parameters in 3D Vision Tasks. Knowledge-Based and Intelligent Information and Engineering Systems 2019, 186–199.
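The parameter reduction claimed in the abstract above follows directly from the standard parameter-count formulas for convolutions. A quick sketch of the arithmetic (ignoring bias terms, with example channel counts chosen for illustration):

```python
def conv3d_params(c_in, c_out, k):
    """Parameters of a standard 3D convolution: one k*k*k filter
    per (input channel, output channel) pair."""
    return c_in * c_out * k ** 3

def depthwise_separable3d_params(c_in, c_out, k):
    """Depthwise step (one k*k*k filter per input channel) plus a
    1x1x1 pointwise step that mixes channels."""
    return c_in * k ** 3 + c_in * c_out

std = conv3d_params(64, 128, 3)                 # 221,184 parameters
sep = depthwise_separable3d_params(64, 128, 3)  # 9,920 parameters
ratio = std / sep                               # roughly 22x fewer
```

With a 3x3x3 kernel and 64-to-128 channels, the split yields roughly a 22-fold reduction, consistent with the "more than one order of magnitude" figure.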
Panoramic images have a wide range of applications in many fields owing to their ability to perceive all-round information. Object detection based on panoramic images has certain advantages for environment perception due to the characteristics of panoramic images, e.g., a larger field of view. In recent years, deep learning methods have achieved remarkable results in image classification and object detection, but their performance depends on large amounts of training data; a good training dataset is therefore a prerequisite for achieving better recognition results. To this end, we construct a benchmark named Pano-RSOD for panoramic road scene object detection. Pano-RSOD contains vehicles, pedestrians, traffic signs and guiding arrows, labelled by bounding boxes in the images. Different from traditional object detection datasets, Pano-RSOD contains more objects per panoramic image, and its high-resolution images offer 360-degree environmental perception, more annotations, more small objects and diverse road scenes. State-of-the-art deep learning algorithms trained on Pano-RSOD demonstrate that it is a useful benchmark, providing a better panoramic-image training dataset for object detection tasks, especially for small and deformed objects.
Yong Li; Guofeng Tong; Huashuai Gao; Yuebin Wang; Liqiang Zhang; Huairong Chen. Pano-RSOD: A Dataset and Benchmark for Panoramic Road Scene Object Detection. Electronics 2019, 8, 329.
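Detectors trained on a bounding-box benchmark like the one described above are conventionally scored by intersection-over-union (IoU) between predicted and ground-truth boxes; this is standard detection practice, not a detail specific to Pano-RSOD. A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175, about 0.143
```

Small objects, which Pano-RSOD emphasizes, are particularly punishing under IoU: a few pixels of localization error on a tiny box can drop the score below the usual 0.5 match threshold.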
This paper presents a novel framework to extract metro tunnel cross sections (profiles) from Terrestrial Laser Scanning point clouds. The framework consists of two steps: tunnel central axis extraction and cross section determination. In tunnel central axis extraction, we propose a slice-based method to obtain an initial central axis, which is further divided into linear and nonlinear circular segments by an enhanced Random Sample Consensus (RANSAC) tunnel axis segmentation algorithm. This algorithm transforms the problem of hybrid linear and nonlinear segment extraction into a segmentation of linear elements defined in the tangent space rather than the raw data space, significantly simplifying the tunnel axis segmentation. The extracted axis segments are then provided as input to the cross section determination step, which generates coarse cross-sectional points by intersecting a series of straight lines, rotating orthogonally around the tunnel axis, with their locally fitted quadric surface, i.e., a cylindrical surface. These generated profile points are further refined and densified by solving a constrained nonlinear least squares problem. Our experiments on a Nanjing metro tunnel show that the cross-sectional fitting error is only 1.69 mm. Compared with the designed radius of the metro tunnel, the RMSE (Root Mean Square Error) of the extracted cross sections’ radii is only 1.60 mm. We also test our algorithm on another metro tunnel in Shanghai; the RMSE of the radii is only 4.60 mm, which is superior to the 6.00 mm of a state-of-the-art method. Apart from the accurate geometry, our approach maintains the correct topology among cross sections, thereby guaranteeing the production of a geometric tunnel model without crack defects. Moreover, we show that our algorithm is insensitive to missing data and point density.
Zhen Cao; Dong Chen; Yufeng Shi; Zhenxin Zhang; Fengxiang Jin; Ting Yun; Sheng Xu; Zhizhong Kang; Liqiang Zhang. A Flexible Architecture for Extracting Metro Tunnel Cross Sections from Terrestrial Laser Scanning Point Clouds. Remote Sensing 2019, 11, 297.
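The radius RMSE quoted in the abstract above compares each extracted cross section's radius against the tunnel's single designed radius. A minimal sketch of that accuracy metric (the radii values below are hypothetical, chosen only to illustrate the computation):

```python
import math

def radius_rmse(extracted_radii, design_radius):
    """Root mean square error of extracted cross-section radii
    against the tunnel's designed radius, in the same units."""
    squared_errors = [(r - design_radius) ** 2 for r in extracted_radii]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical extracted radii (metres) around a hypothetical 2.75 m design radius:
rmse_m = radius_rmse([2.7512, 2.7488, 2.7505, 2.7496], 2.75)
```

An RMSE of a few millimetres on metre-scale radii, as reported for the Nanjing and Shanghai tunnels, indicates sub-0.2% radial error.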