Optical remote sensing imagery is commonly used to monitor the spatial and temporal distribution patterns of inland waters. Its usage, however, is limited by cloud contamination, which results in low-quality images or missing values. Selecting cloud-free scenes or combining multi-temporal images to produce a cloud-free composite can partially overcome this problem, at the cost of monitoring frequency. Predicting the spectral values of cloudy areas from their spectral characteristics is a possible solution; however, this is not appropriate for water because it changes rapidly. Reconstructing cloud-covered water areas from historical water-distribution data performs well, but such methods are typically suitable only for lakes and reservoirs, not for vast and complex terrain. This paper proposes a category-based approach to reconstruct the water distribution in cloud-contaminated images using a spatiotemporal dependence model. The proposed method predicts the class label (water or land) of a cloudy pixel from the labels of neighboring pixels and those at the same position in images acquired on other dates, according to historical spatiotemporal water-distribution data. The method was evaluated through eight experiments in different study regions using Landsat and Sentinel-2 images. The results demonstrated that the proposed method could yield high-quality cloud-free classification maps and provide good water-extraction accuracy and consistency in most hydrological conditions, with an overall accuracy of up to 98%. The accuracy and practicality of the method render it promising for a wide range of future research and monitoring efforts.
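The core prediction step described above can be sketched as follows. The 8-neighbourhood, the weights `w_s`/`w_t`, and the voting rule are illustrative assumptions for this sketch, not the paper's exact spatiotemporal dependence model:

```python
import numpy as np

def predict_cloudy_pixel(maps, t, r, c, w_s=0.5, w_t=0.5):
    """Predict water (1) or land (0) for a cloudy pixel (coded -1) at (t, r, c).

    Combines votes from cloud-free spatial neighbours in the same image with
    votes from the same position in images acquired on other dates.
    The weights w_s/w_t are illustrative, not taken from the paper.
    """
    T, H, W = maps.shape
    # Spatial term: labels of valid 8-neighbours in the image at date t.
    nbrs = [maps[t, i, j]
            for i in range(max(r - 1, 0), min(r + 2, H))
            for j in range(max(c - 1, 0), min(c + 2, W))
            if (i, j) != (r, c) and maps[t, i, j] >= 0]
    # Temporal term: labels at (r, c) in cloud-free images on other dates.
    hist = [maps[k, r, c] for k in range(T) if k != t and maps[k, r, c] >= 0]
    if not nbrs and not hist:
        return 0  # no evidence available
    score = 0.0
    if nbrs:
        score += w_s * np.mean(nbrs)
    if hist:
        score += w_t * np.mean(hist)
    # Classify as water if the weighted vote exceeds half the active weight.
    return 1 if score >= 0.5 * (w_s * bool(nbrs) + w_t * bool(hist)) else 0
```

For example, a cloudy pixel surrounded by water in space and time would be labelled water.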
Xinyan Li; Feng Ling; Xiaobin Cai; Yong Ge; Xiaodong Li; Zhixiang Yin; Cheng Shang; Xiaofeng Jia; Yun Du. Mapping water bodies under cloud cover using remotely sensed optical images and a spatiotemporal dependence model. International Journal of Applied Earth Observation and Geoinformation 2021, 103, 102470.
AMA Style: Xinyan Li, Feng Ling, Xiaobin Cai, Yong Ge, Xiaodong Li, Zhixiang Yin, Cheng Shang, Xiaofeng Jia, Yun Du. Mapping water bodies under cloud cover using remotely sensed optical images and a spatiotemporal dependence model. International Journal of Applied Earth Observation and Geoinformation. 2021; 103:102470.
Chicago/Turabian Style: Xinyan Li; Feng Ling; Xiaobin Cai; Yong Ge; Xiaodong Li; Zhixiang Yin; Cheng Shang; Xiaofeng Jia; Yun Du. 2021. "Mapping water bodies under cloud cover using remotely sensed optical images and a spatiotemporal dependence model." International Journal of Applied Earth Observation and Geoinformation 103: 102470.
Super-resolution mapping (SRM) is an effective technology for solving the problem of mixed pixels because it can be used to generate fine-resolution land cover maps from coarse-resolution remote sensing images. Current methods based on deep neural networks (DNNs) have been successfully applied to SRM, as they can learn complex spatial patterns from training data. However, they lack the ability to learn structural information between adjacent land cover classes, which is vital in the reconstruction of spatial distribution. In this article, an SRM method based on graph convolutional networks (GCNs), named SRMGCN, is proposed to improve SRM results by capturing structural information on the graph. In SRMGCN, a supervised inductive learning strategy with mini-graphs as input is considered, which is an extension of the GCN framework. Furthermore, two operations, adjacency-matrix construction and an information propagation rule, are designed to help reconstruct detailed information of geographical objects. Experiments on three datasets with different spatial resolutions demonstrate the qualitative and quantitative superiority of SRMGCN over three other popular SRM methods.
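As a generic illustration of how a graph convolution propagates structural information between neighbouring nodes, a standard symmetric-normalised GCN layer can be sketched as below; SRMGCN's actual adjacency construction and propagation rule differ and are defined in the article:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution propagation step (Kipf and Welling style):
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, f) node features, W: (f, g) weights.
    Shown only as a baseline GCN rule, not SRMGCN's exact propagation rule.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU
```

Each node's output is thus a degree-normalised average over itself and its neighbours, which is what lets the network encode structure between adjacent classes.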
Xining Zhang; Yong Ge; Feng Ling; Jin Chen; Yuehong Chen; Yuanxin Jia. Graph Convolutional Networks-based Super-resolution Land Cover Mapping. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, PP, 1-1.
AMA Style: Xining Zhang, Yong Ge, Feng Ling, Jin Chen, Yuehong Chen, Yuanxin Jia. Graph Convolutional Networks-based Super-resolution Land Cover Mapping. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2021; PP (99):1-1.
Chicago/Turabian Style: Xining Zhang; Yong Ge; Feng Ling; Jin Chen; Yuehong Chen; Yuanxin Jia. 2021. "Graph Convolutional Networks-based Super-resolution Land Cover Mapping." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing PP, no. 99: 1-1.
The surface urban heat island (SUHI) effect poses a significant threat to the urban environment and public health. This paper utilized the Local Climate Zone (LCZ) classification and land surface temperature (LST) data to analyze the seasonal dynamics of SUHI in Wuhan based on the Google Earth Engine platform. In addition, the SUHI intensity derived from the traditional urban–rural dichotomy was also calculated for comparison. Seasonal SUHI analysis showed that (1) both LCZ classification and the urban–rural dichotomy confirmed that Wuhan’s SUHI effect was the strongest in summer, followed by spring, autumn and winter; (2) the maximum SUHI intensity derived from LCZ classification reached 6.53 °C, which indicated that the SUHI effect was very significant in Wuhan; (3) LCZ 8 (i.e., large low-rise) had the maximum LST value and LCZ G (i.e., water) had the minimum LST value in all seasons; (4) the LST values of compact high-rise/midrise/low-rise (i.e., LCZ 1–3) were higher than those of open high-rise/midrise/low-rise (i.e., LCZ 4–6) in all seasons, which indicated that building density had a positive correlation with LST; (5) the LST values of dense trees (i.e., LCZ A) were less than those of scattered trees (i.e., LCZ B) in all seasons, which indicated that vegetation density had a negative correlation with LST. This paper provides some useful information for urban planning and contributes to the healthy and sustainable development of Wuhan.
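The SUHI intensity comparisons above reduce to mean-LST differences between classes. A minimal sketch, in which the class codes and the choice of a natural reference class (e.g. LCZ D) are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def suhi_intensity(lst, lcz, urban_classes, reference_class):
    """SUHI intensity as the mean-LST difference between a set of built LCZ
    classes and a natural reference class. The class codes used here are
    illustrative placeholders, not the paper's exact scheme."""
    urban = np.isin(lcz, urban_classes)   # pixels in the built classes
    ref = lcz == reference_class          # pixels in the reference class
    return float(lst[urban].mean() - lst[ref].mean())
```

The traditional urban–rural dichotomy is the special case where the two masks are simply "urban" and "rural" pixels.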
Lingfei Shi; Feng Ling; Giles Foody; Zhen Yang; Xixi Liu; Yun Du. Seasonal SUHI Analysis Using Local Climate Zone Classification: A Case Study of Wuhan, China. International Journal of Environmental Research and Public Health 2021, 18, 7242.
AMA Style: Lingfei Shi, Feng Ling, Giles Foody, Zhen Yang, Xixi Liu, Yun Du. Seasonal SUHI Analysis Using Local Climate Zone Classification: A Case Study of Wuhan, China. International Journal of Environmental Research and Public Health. 2021; 18 (14):7242.
Chicago/Turabian Style: Lingfei Shi; Feng Ling; Giles Foody; Zhen Yang; Xixi Liu; Yun Du. 2021. "Seasonal SUHI Analysis Using Local Climate Zone Classification: A Case Study of Wuhan, China." International Journal of Environmental Research and Public Health 18, no. 14: 7242.
The turbulent heat flux (THF) over leads is an important parameter for climate change monitoring in the Arctic region. THF over leads is often calculated from satellite-derived ice surface temperature (IST) products, in which mixed pixels containing both ice and open water along lead boundaries reduce the accuracy of calculated THF. To address this problem, this paper proposes a deep residual convolutional neural network (CNN)-based framework to estimate THF over leads at the subpixel scale (DeepSTHF) based on remotely sensed images. The proposed DeepSTHF provides an IST image and the corresponding lead map with a finer spatial resolution than the input IST image so that the subpixel-scale THF can be estimated from them. The proposed approach is verified using simulated and real Moderate Resolution Imaging Spectroradiometer images and compared with the conventional cubic interpolation and pixel-based methods. The results demonstrate that the proposed CNN-based method can effectively estimate subpixel-scale information from the coarse data and performs well in producing fine-spatial-resolution IST images and lead maps, thereby providing more accurate and reliable THF over leads.
Zhixiang Yin; Xiaodong Li; Yong Ge; Cheng Shang; Xinyan Li; Yun Du; Feng Ling. Estimating subpixel turbulent heat flux over leads from MODIS thermal infrared imagery with deep learning. The Cryosphere 2021, 15, 2835-2856.
AMA Style: Zhixiang Yin, Xiaodong Li, Yong Ge, Cheng Shang, Xinyan Li, Yun Du, Feng Ling. Estimating subpixel turbulent heat flux over leads from MODIS thermal infrared imagery with deep learning. The Cryosphere. 2021; 15 (6):2835-2856.
Chicago/Turabian Style: Zhixiang Yin; Xiaodong Li; Yong Ge; Cheng Shang; Xinyan Li; Yun Du; Feng Ling. 2021. "Estimating subpixel turbulent heat flux over leads from MODIS thermal infrared imagery with deep learning." The Cryosphere 15, no. 6: 2835-2856.
The monitoring of impervious surfaces in urban areas using remote sensing with fine spatial and temporal resolutions is crucial for tracking urban development and environmental changes. Spatiotemporal super-resolution mapping (STSRM) fuses fine-spatial-coarse-temporal remote sensing data with coarse-spatial-fine-temporal data, allowing for urban impervious surface mapping at both fine spatial and fine temporal resolutions. STSRM involves two main steps: unmixing the coarse-spatial-fine-temporal remote sensing data into class fraction images, and downscaling the fraction images to sub-pixel land cover maps. Yet, challenges exist in each step when applying STSRM to impervious surface mapping. First, impervious surfaces have high spectral variability (i.e., high intra-class and low inter-class variability), which impedes the accurate extraction of sub-pixel scale impervious surface fractions. Second, downscaling the fraction images to sub-pixel land cover maps is an ill-posed problem and introduces substantial uncertainty and error into the predictions. This paper proposes a new Spatiotemporal Continuous Impervious Surface Mapping (STCISM) method to deal with these challenges in fusing Landsat and Google Earth imagery. The STCISM used Multiple Endmember Spectral Mixture Analysis and Fisher Discriminant Analysis to minimize the within-class variability and maximize the between-class variability, reducing the spectral unmixing uncertainty. In addition, the STCISM adopted a new temporal consistency check model that incorporates temporal contextual information to reduce the uncertainty in the time-series impervious surface prediction maps. Unlike the traditional temporal consistency check model, which assumes that impervious-to-pervious conversion is unlikely to happen, the new model allows bidirectional conversion between pervious and impervious surfaces. The temporal consistency check was used as a post-processing step to correct errors in the prediction maps. The proposed STCISM method was used to predict time-series impervious surface maps at the 5 m spatial resolution of Google Earth imagery at the Landsat temporal frequency. The results showed that the proposed STCISM outperformed both the STSRM model without the temporal consistency check and the STSRM model using a temporal consistency check based on the unidirectional pervious-to-impervious conversion rule.
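The bidirectional temporal consistency check can be illustrated with a minimal per-pixel sketch that corrects isolated single-date label flips in either direction; the actual STCISM model is more elaborate than this:

```python
def temporal_consistency_check(labels):
    """Correct isolated single-date flips in a per-pixel label sequence
    (0 = pervious, 1 = impervious). A date whose label differs from two
    identical temporal neighbours is treated as an error; conversions in
    BOTH directions are corrected, unlike a unidirectional
    pervious-to-impervious rule. Illustrative sketch only."""
    fixed = list(labels)
    for t in range(1, len(fixed) - 1):
        if fixed[t - 1] == fixed[t + 1] != fixed[t]:
            fixed[t] = fixed[t - 1]  # snap the outlier to its neighbours
    return fixed
```

A genuine, persistent change (e.g. 0, 0, 1, 1) is left untouched, since only one-date outliers are corrected.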
Rui Chen; Xiaodong Li; Yihang Zhang; Pu Zhou; Yalan Wang; Lingfei Shi; Lai Jiang; Feng Ling; Yun Du. Spatiotemporal Continuous Impervious Surface Mapping by Fusion of Landsat Time Series Data and Google Earth Imagery. Remote Sensing 2021, 13, 2409.
AMA Style: Rui Chen, Xiaodong Li, Yihang Zhang, Pu Zhou, Yalan Wang, Lingfei Shi, Lai Jiang, Feng Ling, Yun Du. Spatiotemporal Continuous Impervious Surface Mapping by Fusion of Landsat Time Series Data and Google Earth Imagery. Remote Sensing. 2021; 13 (12):2409.
Chicago/Turabian Style: Rui Chen; Xiaodong Li; Yihang Zhang; Pu Zhou; Yalan Wang; Lingfei Shi; Lai Jiang; Feng Ling; Yun Du. 2021. "Spatiotemporal Continuous Impervious Surface Mapping by Fusion of Landsat Time Series Data and Google Earth Imagery." Remote Sensing 13, no. 12: 2409.
This article provides an example of the ways in which remote sensing, Earth observation, and machine learning can be deployed to provide the most up-to-date quantitative portrait of the South Asian ‘Brick Belt’, with a view to understanding the prevalence of modern slavery and exploitative labour. This analysis represents the first of its kind in estimating the spatiotemporal patterns of Bull’s Trench Kilns across the Brick Belt, as well as their connections with various UN Sustainable Development Goals (SDGs). With a principal focus on Sustainable Development Goal Target 8.7, regarding effective measures to end modern slavery by 2030, the article provides additional evidence on the intersections that exist between SDG 8.7 and those relating to urbanisation (SDGs 11, 12), environmental degradation and pollution (SDGs 3, 14, 15), and climate change (SDG 13). Our findings are then used to make a series of pragmatic suggestions for mitigating the most extreme SDG risks associated with brick production in ways that can improve human lives and human freedom.
Doreen S. Boyd; Bertrand Perrat; Xiaodong Li; Bethany Jackson; Todd Landman; Feng Ling; Kevin Bales; Austin Choi-Fitzpatrick; James Goulding; Stuart Marsh; Giles M. Foody. Informing action for United Nations SDG target 8.7 and interdependent SDGs: Examining modern slavery from space. Humanities and Social Sciences Communications 2021, 8, 1-14.
AMA Style: Doreen S. Boyd, Bertrand Perrat, Xiaodong Li, Bethany Jackson, Todd Landman, Feng Ling, Kevin Bales, Austin Choi-Fitzpatrick, James Goulding, Stuart Marsh, Giles M. Foody. Informing action for United Nations SDG target 8.7 and interdependent SDGs: Examining modern slavery from space. Humanities and Social Sciences Communications. 2021; 8 (1):1-14.
Chicago/Turabian Style: Doreen S. Boyd; Bertrand Perrat; Xiaodong Li; Bethany Jackson; Todd Landman; Feng Ling; Kevin Bales; Austin Choi-Fitzpatrick; James Goulding; Stuart Marsh; Giles M. Foody. 2021. "Informing action for United Nations SDG target 8.7 and interdependent SDGs: Examining modern slavery from space." Humanities and Social Sciences Communications 8, no. 1: 1-14.
Information on forest disturbance is crucial for tropical forest management and global carbon cycle analysis. The long-term collection of data from the Landsat missions provides some of the most valuable information for understanding the processes of global tropical forest disturbance. However, there are substantial uncertainties in the estimation of non-mechanized, small-scale (i.e., small area) clearings in tropical forests with Landsat series images. Because the appearance of small-scale openings in a tropical tree canopy is often ephemeral due to fast-growing vegetation, and because clouds are frequent in tropical regions, it is challenging for Landsat images to capture the logging signal. Moreover, the spatial resolution of Landsat images is typically too coarse to represent spatial details of small-scale clearings. In this paper, by fusing all available Landsat and Sentinel-2 images, we propose a method to improve the tracking of small-scale tropical forest disturbance history with both fine spatial and temporal resolutions. First, yearly composited Landsat and Sentinel-2 self-referenced normalized burn ratio (rNBR) vegetation index images were calculated from all available Landsat-7/8 and Sentinel-2 scenes during 2016–2019. Second, a deep-learning based downscaling method was used to predict fine resolution (10 m) rNBR images from the annual coarse resolution (30 m) Landsat rNBR images. Third, given the baseline Landsat forest map in 2015, the generated fine-resolution Landsat rNBR images and original Sentinel-2 rNBR images were fused to produce the 10 m forest disturbance map for the period 2016–2019. Data comparison and evaluation demonstrated that the deep-learning based downscaling method can produce fine-resolution Landsat rNBR images and forest disturbance maps that contain substantial spatial detail. In addition, by fusing the downscaled fine-resolution Landsat rNBR images and the original Sentinel-2 rNBR images, it was possible to produce state-of-the-art forest disturbance maps, with overall accuracy (OA) values of more than 87% and 96% for the small and large study areas, respectively, detecting 11% to 21% more disturbed area than either the Sentinel-2 or Landsat-7/8 time series alone. We found that 1.42% of the disturbed areas identified during 2016–2019 experienced multiple forest disturbances. The method has great potential to enhance work undertaken in relation to major policies such as the reducing emissions from deforestation and forest degradation (REDD+) programmes.
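The rNBR index underlying this workflow can be sketched as below. The self-referencing here subtracts a scene-wide median NBR; the paper's exact referencing window (e.g. a local neighbourhood) may differ, so treat this as an illustrative assumption:

```python
import numpy as np

def rnbr(nir, swir2):
    """Self-referenced normalised burn ratio.

    NBR = (NIR - SWIR2) / (NIR + SWIR2); subtracting a reference NBR
    (here the scene-wide median) makes values comparable across dates
    and sensors. Scene-level referencing is a simplifying assumption."""
    nbr = (nir - swir2) / (nir + swir2)
    return nbr - np.median(nbr)
```

Strongly negative rNBR values then flag canopy loss (disturbance) relative to the surrounding intact forest.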
Yihang Zhang; Feng Ling; Xia Wang; Giles M. Foody; Doreen S. Boyd; Xiaodong Li; Yun Du; Peter M. Atkinson. Tracking small-scale tropical forest disturbances: Fusing the Landsat and Sentinel-2 data record. Remote Sensing of Environment 2021, 261, 112470.
AMA Style: Yihang Zhang, Feng Ling, Xia Wang, Giles M. Foody, Doreen S. Boyd, Xiaodong Li, Yun Du, Peter M. Atkinson. Tracking small-scale tropical forest disturbances: Fusing the Landsat and Sentinel-2 data record. Remote Sensing of Environment. 2021; 261:112470.
Chicago/Turabian Style: Yihang Zhang; Feng Ling; Xia Wang; Giles M. Foody; Doreen S. Boyd; Xiaodong Li; Yun Du; Peter M. Atkinson. 2021. "Tracking small-scale tropical forest disturbances: Fusing the Landsat and Sentinel-2 data record." Remote Sensing of Environment 261: 112470.
Urban heat island (UHI), one of the most widely studied urban climate issues, has in recent years been investigated using the local climate zone (LCZ) classification scheme. Increasing effort has been devoted to improving LCZ mapping accuracy, and exploiting multi-source images in LCZ mapping has become a prevalent trend. To this end, this paper utilized multi-source freely available datasets, namely Sentinel-2 multispectral instrument (MSI), Sentinel-1 synthetic aperture radar (SAR), Luojia1-01 nighttime light (NTL), and OpenStreetMap (OSM) data, to produce a 10 m LCZ classification result using the Google Earth Engine (GEE) platform. Additionally, datasets derived from the Sentinel-2 MSI data, such as spectral index (SI) and gray-level co-occurrence matrix (GLCM) datasets, were also exploited in LCZ classification. Different dataset combinations were designed to evaluate each dataset's particular contribution to LCZ classification. It was found that: (1) the synergistic use of Sentinel-2 MSI and Sentinel-1 SAR data can improve the accuracy of LCZ classification; (2) the multi-seasonal information of Sentinel data also makes a good contribution to LCZ classification; (3) the OSM, GLCM, SI, and NTL datasets make some positive contribution to LCZ classification when individually added to the seasonal Sentinel-1 and Sentinel-2 datasets; (4) combining as many datasets as possible is not necessarily the right way to improve LCZ classification accuracy. With the help of GEE, this study provides the potential to generate more accurate LCZ mapping on a large scale, which is significant for urban development.
Lingfei Shi; Feng Ling. Local Climate Zone Mapping Using Multi-Source Free Available Datasets on Google Earth Engine Platform. Land 2021, 10, 454.
AMA Style: Lingfei Shi, Feng Ling. Local Climate Zone Mapping Using Multi-Source Free Available Datasets on Google Earth Engine Platform. Land. 2021; 10 (5):454.
Chicago/Turabian Style: Lingfei Shi; Feng Ling. 2021. "Local Climate Zone Mapping Using Multi-Source Free Available Datasets on Google Earth Engine Platform." Land 10, no. 5: 454.
Topography and soil factors are known to play crucial roles in the species composition of plant communities in subtropical evergreen-deciduous broadleaved mixed forests. In this study, we used a systematic quantitative approach to classify plant community types in the subtropical forests of Hubei Province (central China), and then quantified the relative contribution of drivers responsible for variation in species composition and diversity. We classified the subtropical forests in the study area into 12 community types. Of these, species diversity indices of three communities were significantly higher than those of others. In each community type, species richness, abundance, basal area and importance values of evergreen and deciduous species were different. In most community types, deciduous species richness was higher than that of evergreen species. Linear regression analysis showed that the dominant factors that affect species composition in each community type are elevation, slope, aspect, soil nitrogen content, and soil phosphorus content. Furthermore, structural equation modeling analysis (SEM) showed that the majority of variance in species composition of plant communities can be explained by elevation, aspect, soil water content, litterfall, total nitrogen, and total phosphorus. Thus, the major factors that affect evergreen and deciduous species distribution across the 12 community types in subtropical evergreen-deciduous broadleaved mixed forests include elevation, slope and aspect, soil total nitrogen content, soil total phosphorus content, soil available nitrogen content and soil available phosphorus content.
Qichi Yang; Hehe Zhang; Lihui Wang; Feng Ling; Zhengxiang Wang; Tingting Li; Jinliang Huang. Topography and soil content contribute to plant community composition and structure in subtropical evergreen-deciduous broadleaved mixed forests. Plant Diversity 2021, 43, 264-274.
AMA Style: Qichi Yang, Hehe Zhang, Lihui Wang, Feng Ling, Zhengxiang Wang, Tingting Li, Jinliang Huang. Topography and soil content contribute to plant community composition and structure in subtropical evergreen-deciduous broadleaved mixed forests. Plant Diversity. 2021; 43 (4):264-274.
Chicago/Turabian Style: Qichi Yang; Hehe Zhang; Lihui Wang; Feng Ling; Zhengxiang Wang; Tingting Li; Jinliang Huang. 2021. "Topography and soil content contribute to plant community composition and structure in subtropical evergreen-deciduous broadleaved mixed forests." Plant Diversity 43, no. 4: 264-274.
The spatiotemporal reflectance fusion method is used to blend high-temporal, low-spatial resolution images with their low-temporal, high-spatial resolution counterparts that were previously acquired by various satellite sensors. Recently, a wide variety of learning-based solutions have been developed, but challenges remain. These solutions usually require two sets of data acquired before and after the prediction time, making them unsuitable for near-real-time prediction. They are always trained band by band and thus do not consider the spectral correlation. Moreover, high-resolution temporal changes are difficult to reconstruct accurately with the network structures used, which lowers the accuracy of the fusion result. To address these problems, this study proposes a novel spatiotemporal adaptive reflectance fusion model using a generative adversarial network (GASTFN). In GASTFN, an end-to-end network, comprising a generative and a discriminative network, is trained simultaneously for all spectral bands. The proposed model can be applied to the one-pair case, considers the spectral correlation of each band, and improves the process of producing super-resolution imagery by applying the discriminative network to image reflectance values rather than to temporal changes in reflectance. The proposed model has been verified with two actual satellite data sets acquired in heterogeneous landscapes and areas with abrupt changes, in comparison with state-of-the-art methods. The results show that GASTFN can generate the most accurate fusion images, with more detailed textures, more realistic spatial shapes, and higher accuracy, demonstrating that GASTFN is effective for predicting near-real-time changes in image reflectance and preserves the most valuable spatial information.
Cheng Shang; Xinyan Li; Zhixiang Yin; Xiaodong Li; Lihui Wang; Yihang Zhang; Yun Du; Feng Ling. Spatiotemporal Reflectance Fusion Using a Generative Adversarial Network. IEEE Transactions on Geoscience and Remote Sensing 2021, PP, 1-15.
AMA Style: Cheng Shang, Xinyan Li, Zhixiang Yin, Xiaodong Li, Lihui Wang, Yihang Zhang, Yun Du, Feng Ling. Spatiotemporal Reflectance Fusion Using a Generative Adversarial Network. IEEE Transactions on Geoscience and Remote Sensing. 2021; PP (99):1-15.
Chicago/Turabian Style: Cheng Shang; Xinyan Li; Zhixiang Yin; Xiaodong Li; Lihui Wang; Yihang Zhang; Yun Du; Feng Ling. 2021. "Spatiotemporal Reflectance Fusion Using a Generative Adversarial Network." IEEE Transactions on Geoscience and Remote Sensing PP, no. 99: 1-15.
Turbulent heat flux (THF) over leads is an important variable used for monitoring climate change in the Arctic. Presently, THF over leads is often calculated from satellite imagery. The accuracy of the estimated THF is low for mixed pixels that consist of ice and leads, because the mixed pixels along lead boundaries will lower the accuracy of the surface temperature measured over leads and the corresponding lead map. To address this problem, a deep residual convolutional neural network (CNN)-based framework is proposed to estimate THF over leads at the subpixel scale (DeepSTHF) with remotely sensed imagery. The DeepSTHF allows the production of a sea surface temperature (SST) image and a corresponding lead map with a finer spatial resolution than the input SST image using two CNNs, so that the subpixel scale THF can be estimated from them. The proposed approach is assessed using simulated and real MODIS imagery and compared against the conventional bicubic interpolation and pixel-based methods. The results demonstrate that the proposed CNN-based method can effectively estimate subpixel-scale information from the coarse data and performs well in producing fine spatial resolution SST images and lead maps, thereby allowing researchers to obtain more accurate and reliable THF over leads.
Zhixiang Yin; Xiaodong Li; Yong Ge; Cheng Shang; Xinyan Li; Yun Du; Feng Ling. Estimating subpixel turbulent heat flux over leads from MODIS thermal infrared imagery with deep learning. 2021, 2021, 1-29.
AMA Style: Zhixiang Yin, Xiaodong Li, Yong Ge, Cheng Shang, Xinyan Li, Yun Du, Feng Ling. Estimating subpixel turbulent heat flux over leads from MODIS thermal infrared imagery with deep learning. 2021; 2021:1-29.
Chicago/Turabian Style: Zhixiang Yin; Xiaodong Li; Yong Ge; Cheng Shang; Xinyan Li; Yun Du; Feng Ling. 2021. "Estimating subpixel turbulent heat flux over leads from MODIS thermal infrared imagery with deep learning." 2021: 1-29.
Optical earth observation satellite sensors often provide a coarse spatial resolution (CR) multispectral (MS) image together with a fine spatial resolution (FR) panchromatic (PAN) image. Pansharpening is a technique applied to such satellite sensor images to generate an FR MS image by injecting spatial detail taken from the FR PAN image while simultaneously preserving the spectral information of the MS image. Pansharpening methods are mostly applied on a per-pixel basis and use the PAN image to extract spatial detail. However, many land cover objects in FR satellite sensor images are represented not as independent pixels, but as many spatially aggregated pixels that carry important semantic information. In this article, an object-based pansharpening approach, termed object-based area-to-point regression kriging (OATPRK), is proposed. OATPRK aims to fuse the MS and PAN images at the object-based scale and, thus, takes advantage of both the unified spectral information within the CR MS images and the spatial detail of the FR PAN image. OATPRK is composed of three stages: image segmentation, object-based regression, and residual downscaling. Three data sets acquired from IKONOS and Worldview-2 and 11 benchmark pansharpening algorithms were used to provide a comprehensive assessment of the proposed OATPRK approach. In both the synthetic and real experiments, OATPRK produced the best pan-sharpened results in terms of visual and quantitative assessment. OATPRK is a new conceptual method that advances the pixel-level geostatistical pansharpening approach to the object level and provides more accurate pan-sharpened MS images.
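The object-based regression stage can be illustrated with a minimal sketch that fits a per-object linear model between coarse PAN and MS values and applies it to the fine PAN pixels. The segmentation inputs and the omission of the kriging-based residual-downscaling stage (central to the full OATPRK method) are simplifying assumptions:

```python
import numpy as np

def object_regression_predict(pan_cr, ms_cr, obj_cr, pan_fr, obj_fr):
    """Per-object regression stage of an OATPRK-style pipeline (sketch).

    For each segmented object, fit MS ~ a*PAN + b over the object's coarse
    pixels, then apply the coefficients to the object's fine PAN pixels.
    Residual downscaling by area-to-point kriging is omitted here."""
    ms_fr = np.zeros_like(pan_fr, dtype=float)
    for obj in np.unique(obj_cr):
        x = pan_cr[obj_cr == obj]          # coarse PAN values in the object
        y = ms_cr[obj_cr == obj]           # corresponding coarse MS values
        a, b = np.polyfit(x, y, 1)         # least-squares line per object
        mask = obj_fr == obj
        ms_fr[mask] = a * pan_fr[mask] + b # predict fine MS from fine PAN
    return ms_fr
```

Fitting per object rather than per pixel is what lets the method exploit unified spectral behaviour within each land cover object.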
Yihang Zhang; Peter M. Atkinson; Feng Ling; Giles M. Foody; Qunming Wang; Yong Ge; Xiaodong Li; Yun Du. Object-Based Area-to-Point Regression Kriging for Pansharpening. IEEE Transactions on Geoscience and Remote Sensing 2020, PP, 1-16.
AMA Style: Yihang Zhang, Peter M. Atkinson, Feng Ling, Giles M. Foody, Qunming Wang, Yong Ge, Xiaodong Li, Yun Du. Object-Based Area-to-Point Regression Kriging for Pansharpening. IEEE Transactions on Geoscience and Remote Sensing. 2020; PP (99):1-16.
Chicago/Turabian Style: Yihang Zhang; Peter M. Atkinson; Feng Ling; Giles M. Foody; Qunming Wang; Yong Ge; Xiaodong Li; Yun Du. 2020. "Object-Based Area-to-Point Regression Kriging for Pansharpening." IEEE Transactions on Geoscience and Remote Sensing PP, no. 99: 1-16.
Google Earth Engine (GEE) provides a convenient platform for applications based on optical satellite imagery of large areas. With such data sets, cloud detection is often a necessary prerequisite step. Recently, deep learning-based cloud detection methods have shown their potential, but they can only be applied locally, leading to long data download times and storage problems. This letter proposes a method to perform cloud detection directly on Landsat-8 imagery in GEE based on deep learning (DeepGEE-CD). A deep convolutional neural network (DCNN) was first trained locally, and the trained DCNN was then deployed in the JavaScript client of GEE. An experiment was undertaken to validate the proposed method with a set of Landsat-8 images, and the results show that DeepGEE-CD outperformed the widely used function of mask (Fmask) algorithm. The proposed DeepGEE-CD approach can accurately detect cloud in Landsat-8 imagery without downloading it, making it a promising method for routine cloud detection of Landsat-8 imagery in GEE.
Zhixiang Yin; Feng Ling; Giles M. Foody; Xinyan Li; Yun Du. Cloud detection in Landsat-8 imagery in Google Earth Engine based on a deep convolutional neural network. Remote Sensing Letters 2020, 11, 1181-1190.
AMA Style: Zhixiang Yin, Feng Ling, Giles M. Foody, Xinyan Li, Yun Du. Cloud detection in Landsat-8 imagery in Google Earth Engine based on a deep convolutional neural network. Remote Sensing Letters. 2020; 11 (12):1181-1190.
Chicago/Turabian Style: Zhixiang Yin; Feng Ling; Giles M. Foody; Xinyan Li; Yun Du. 2020. "Cloud detection in Landsat-8 imagery in Google Earth Engine based on a deep convolutional neural network." Remote Sensing Letters 11, no. 12: 1181-1190.
Superresolution mapping (SRM) is a commonly used method to cope with the problem of mixed pixels when predicting the spatial distribution within low-resolution pixels. Central to the popular SRM method is the spatial pattern model, which is utilized to represent the land cover spatial distribution within mixed pixels. The use of an inappropriate spatial pattern model limits such SRM analyses. Alternative approaches, such as deep-learning-based algorithms, which learn the spatial pattern from training data through a convolutional neural network, have been shown to have considerable potential. Deep learning methods, however, are limited by issues such as the way the fraction images are utilized. Here, a novel SRM model based on a generative adversarial network (GAN), GAN-SRM, is proposed that uses an end-to-end network to address the main limitations of existing SRM methods. The potential of the proposed GAN-SRM model was assessed using four land cover subsets and compared to hard classification and several popular SRM methods. The experimental results show that of the set of methods explored, the GAN-SRM model was able to generate the most accurate high-resolution land cover maps.
Cheng Shang; Xiaodong Li; Giles M. Foody; Yun Du; Feng Ling. Superresolution Land Cover Mapping Using a Generative Adversarial Network. IEEE Geoscience and Remote Sensing Letters 2020, PP, 1-5.
Information on the temporal variation of the surface water area of reservoirs is fundamental for water resource management and is often monitored by satellite remote sensing. Moderate Resolution Imaging Spectroradiometer (MODIS) imagery is an attractive data source for the routine monitoring of reservoirs; however, accuracy is often limited by its coarse spatial resolution and by cloud contamination. Methods have been proposed to solve these two problems independently, but it remains challenging to address both simultaneously. To overcome this, this paper proposes a new approach that aims to monitor reservoir surface water area variations accurately and in a timely manner from daily MODIS images by exploring sub-pixel scale information. The proposed approach used estimates of reservoir water areas obtained from cloud-free and relatively fine spatial resolution Landsat images, together with water fraction images derived by spectral unmixing of coarse MODIS imagery, as reference data. For each MODIS pixel, these reference reservoir water areas and their corresponding pixel water fractions were used to construct a linear regression equation, which in turn may be applied to predict the time series of reservoir water areas from daily MODIS water fraction images. The proposed approach was assessed with 21 reservoirs, where the correlation coefficients between reservoir water areas predicted by the common pixel-based analysis method and altimetry water levels were all less than 0.5. With the proposed sub-pixel analysis method, the resultant correlation coefficients were much improved: eleven values were larger than 0.5, including six larger than 0.8, with the highest at 0.94. The results show that the proposed sub-pixel analysis method is superior to the pixel-based analysis method. The proposed method makes it possible to estimate the whole reservoir water area directly from, potentially, an individual cloud-free MODIS pixel, and is a promising way to improve the accuracy and usability of MODIS images for monitoring reservoir surface water area variations.
Feng Ling; Xinyan Li; Giles M. Foody; Doreen Boyd; Yong Ge; Xiaodong Li; Yun Du. Monitoring surface water area variations of reservoirs using daily MODIS images by exploring sub-pixel information. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 168, 141-152.
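The per-pixel regression at the heart of the approach above can be sketched with synthetic numbers: pair one MODIS pixel's water fractions on the cloud-free Landsat reference dates with the reservoir areas on those same dates, fit a linear model, and apply it to a new daily fraction. The fraction and area values below are illustrative, not data from the study.

```python
import numpy as np

# Reference dates: pixel water fraction (from unmixed MODIS) and the
# reservoir water area (from cloud-free Landsat) on the same dates.
fractions = np.array([0.20, 0.45, 0.60, 0.80])  # pixel water fractions
areas_km2 = np.array([12.0, 18.5, 22.0, 27.5])  # reservoir areas (km^2)

# Least-squares fit: area = a * fraction + b for this pixel.
a, b = np.polyfit(fractions, areas_km2, 1)

# Predict the whole-reservoir area from one daily MODIS fraction value.
new_fraction = 0.50
predicted_area = a * new_fraction + b
```

In the paper one such regression is built per MODIS pixel, so any single cloud-free pixel on a given day can yield a whole-reservoir area estimate.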
Due to the tradeoff between spatial and temporal resolutions commonly encountered in remote sensing, no single satellite sensor can provide fine spatial resolution land surface temperature (LST) products with frequent coverage. This situation greatly limits applications that require LST data with fine spatiotemporal resolution. Here, a deep learning-based spatiotemporal temperature fusion network (STTFN) method for the generation of fine spatiotemporal resolution LST products is proposed. In STTFN, a multiscale fusion convolutional neural network is employed to build the complex nonlinear relationship between input and output LSTs. Thus, unlike other LST spatiotemporal fusion approaches, STTFN is able to learn potentially complicated relationships from training data without manually designed mathematical rules, making it more flexible and intelligent than other methods. In addition, two target fine spatial resolution LST images are predicted and then integrated by a spatiotemporal-consistency (STC)-weighting function to take advantage of the STC of LST data. A set of analyses using two real LST data sets obtained from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS) was undertaken to evaluate the ability of STTFN to generate fine spatiotemporal resolution LST products. The results show that, compared with three classic fusion methods [the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), the spatiotemporal integrated temperature fusion model (STITFM), and the two-stream convolutional neural network for spatiotemporal image fusion (StfNet)], the proposed network produced the most accurate outputs [average root mean square error (RMSE) < 1.40 °C and average structural similarity (SSIM) > 0.971].
Zhixiang Yin; Penghai Wu; Giles M. Foody; Yanlan Wu; Zihan Liu; Yun Du; Feng Ling. Spatiotemporal Fusion of Land Surface Temperature Based on a Convolutional Neural Network. IEEE Transactions on Geoscience and Remote Sensing 2020, 59, 1808-1822.
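The final fusion step described above, integrating two candidate fine-resolution LST predictions with spatiotemporal-consistency weights, can be sketched as an inverse-error weighted average. The prediction and error maps below are illustrative stand-ins; the actual STC weighting function in STTFN may differ in form.

```python
import numpy as np

# Two candidate fine-resolution LST predictions for the target date,
# e.g. one propagated from an earlier image pair and one from a later pair.
pred_forward = np.array([[300.1, 301.5], [299.8, 302.0]])   # Kelvin
pred_backward = np.array([[300.5, 301.1], [300.2, 301.6]])

# Assumed per-pixel consistency errors (stand-ins for the STC measure):
# the smaller the error, the larger the weight that prediction receives.
err_forward = np.array([[0.4, 0.2], [0.3, 0.5]])
err_backward = np.array([[0.2, 0.4], [0.3, 0.1]])

w_forward = (1.0 / err_forward) / (1.0 / err_forward + 1.0 / err_backward)
w_backward = 1.0 - w_forward

# Per-pixel convex combination of the two predictions.
fused = w_forward * pred_forward + w_backward * pred_backward
```

Because the weights are positive and sum to one, the fused value always lies between the two candidate predictions at each pixel.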
The generation of land cover maps with both fine spatial and temporal resolution would aid the monitoring of change on the Earth’s surface. Spatio-temporal sub-pixel land cover mapping (STSPM) uses a few fine spatial resolution (FR) maps and a time series of coarse spatial resolution (CR) remote sensing images as input to generate FR land cover maps at the temporal frequency of the CR data set. Traditional STSPM selects spatially adjacent FR pixels within a local window as neighborhoods to model the land cover spatial dependence, which can be a source of error and uncertainty in the maps generated by the analysis. This paper proposes a new STSPM that uses FR remote sensing images that pre- and/or post-date the CR image as ancillary data to enhance the quality of the FR map outputs. Spectrally similar pixels within the locality of a target FR pixel in the ancillary data are likely to represent the same land cover class, and hence such same-class pixels can provide spatial information to aid the analysis. Experimental results showed that the proposed STSPM predicted land cover maps more accurately than two comparative state-of-the-art STSPM algorithms.
Xiaodong Li; Rui Chen; Giles M. Foody; Lihui Wang; Xiaohong Yang; Yun Du; Feng Ling. Spatio-Temporal Sub-Pixel Land Cover Mapping of Remote Sensing Imagery Using Spatial Distribution Information From Same-Class Pixels. Remote Sensing 2020, 12, 503.
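The use of spectrally similar neighbours described above can be sketched as a simple window search: within a local window of the ancillary FR image, pixels whose spectral distance to the target pixel falls below a threshold are treated as likely same-class pixels. The single-band image, window size, and threshold below are illustrative assumptions, not the paper's exact similarity measure.

```python
import numpy as np

def similar_neighbours(image, row, col, half_window=1, threshold=0.1):
    """Return window offsets whose spectral distance to the centre pixel
    falls below the threshold (excluding the centre itself)."""
    centre = image[row, col]
    hits = []
    for dr in range(-half_window, half_window + 1):
        for dc in range(-half_window, half_window + 1):
            if dr == 0 and dc == 0:
                continue  # skip the target pixel itself
            r, c = row + dr, col + dc
            if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
                if abs(image[r, c] - centre) < threshold:
                    hits.append((dr, dc))
    return hits

# Toy single-band reflectance image: a dark (e.g. water) patch top-left
# and a bright (e.g. bare land) patch elsewhere.
img = np.array([[0.10, 0.12, 0.90],
                [0.11, 0.10, 0.85],
                [0.88, 0.92, 0.95]])

print(similar_neighbours(img, 0, 0))  # [(0, 1), (1, 0), (1, 1)]
```

Only the spectrally similar offsets then contribute to the spatial-dependence term for the target pixel, rather than every pixel in the window.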
Spatio-temporal image fusion methods have become a popular means to produce remotely sensed data sets that have both fine spatial and temporal resolution. Accurate prediction of reflectance change is difficult, especially when the change is caused by both phenological change and land cover class changes. Although several spatio-temporal fusion methods such as the Flexible Spatiotemporal DAta Fusion (FSDAF) directly derive land cover phenological change information (such as endmember change) at different dates, the direct derivation of land cover class change information is challenging. In this paper, an enhanced FSDAF that incorporates sub-pixel class fraction change information (SFSDAF) is proposed. By directly deriving the sub-pixel land cover class fraction change information the proposed method allows accurate prediction even for heterogeneous regions that undergo a land cover class change. In particular, SFSDAF directly derives fine spatial resolution endmember change and class fraction change at the date of the observed image pair and the date of prediction, which can help identify image reflectance change resulting from different sources. SFSDAF predicts a fine resolution image at the time of acquisition of coarse resolution images using only one prior coarse and fine resolution image pair, and accommodates variations in reflectance due to both natural fluctuations in class spectral response (e.g. due to phenology) and land cover class change. The method is illustrated using degraded and real images and compared against three established spatio-temporal methods. The results show that the SFSDAF produced the least blurred images and the most accurate predictions of fine resolution reflectance values, especially for regions of heterogeneous landscape and regions that undergo some land cover class change. Consequently, the SFSDAF has considerable potential in monitoring Earth surface dynamics.
Xiaodong Li; Giles M. Foody; Doreen Boyd; Yong Ge; Yihang Zhang; Yun Du; Feng Ling. SFSDAF: An enhanced FSDAF that incorporates sub-pixel class fraction change information for spatio-temporal image fusion. Remote Sensing of Environment 2019, 237, 111537.
A strong relationship between night-time light (NTL) data and the areal extent of urbanized regions has been observed frequently. As urban regions have an important vertical dimension, it is hypothesized that the strength of the relationship with NTL can be increased by consideration of the volume rather than simply the area of urbanized land. Relationships between NTL and the area and volume of urbanized land were determined for a set of towns and cities in the UK, the conterminous states of the USA and countries of the European Union. Strong relationships between NTL and the area urbanized were observed, with correlation coefficients ranging from 0.9282 to 0.9446. Higher correlation coefficients were observed for the relationship between NTL and urban building volume, ranging from 0.9548 to 0.9604. The difference in the correlations obtained with volume and with area was statistically significant at the 95% level of confidence. Studies using NTL data may be strengthened by consideration of the volume rather than just the area of urbanized land.
Lingfei Shi; Giles M Foody; Doreen Boyd; Renoy Girindran; Lihui Wang; Yun Du; Feng Ling. Night-time lights are more strongly related to urban building volume than to urban area. Remote Sensing Letters 2019, 11, 29-36.
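The area versus volume comparison above comes down to Pearson correlation coefficients computed across a set of cities. A minimal sketch with synthetic values follows; the numbers are invented purely so that, as the paper reports, the volume correlation exceeds the area correlation.

```python
import numpy as np

# Synthetic per-city values: summed NTL brightness, urbanized area,
# and total building volume (all illustrative, not from the study).
ntl = np.array([120.0, 340.0, 560.0, 800.0, 1020.0])
area_km2 = np.array([50.0, 150.0, 210.0, 330.0, 400.0])
volume_km3 = np.array([0.6, 1.7, 2.9, 4.1, 5.3])

# Pearson correlation of NTL with each urban measure.
r_area = np.corrcoef(ntl, area_km2)[0, 1]
r_volume = np.corrcoef(ntl, volume_km3)[0, 1]
```

The paper's contribution is the empirical finding that `r_volume > r_area` holds for real UK, US and EU data, with the difference significant at the 95% confidence level.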
Super-resolution mapping (SRM) is used to obtain fine-scale land cover maps from coarse remote sensing images. Spatial attraction, geostatistics, and using prior geographic information are conventional approaches used to derive fine-scale land cover maps. As the convolutional neural network (CNN) has been shown to be effective in capturing the spatial characteristics of geographic objects and extrapolating calibrated methods to other study areas, it may be a useful approach to overcome limitations of current SRM methods. In this paper, a new SRM method based on the CNN (SRMCNN) is proposed and tested. Specifically, an encoder-decoder CNN is used to model the nonlinear relationship between coarse remote sensing images and fine-scale land cover maps. Two real-image experiments were conducted to analyze the effectiveness of the proposed method. The results demonstrate that the overall accuracy of the proposed SRMCNN method was 3% to 5% higher than that of two existing SRM methods. Moreover, the proposed SRMCNN method was validated by visualizing output features and analyzing the performance of different geographic objects.
Yuanxin Jia; Yong Ge; Yuehong Chen; Sanping Li; Gerard B.M. Heuvelink; Feng Ling. Super-Resolution Land Cover Mapping Based on the Convolutional Neural Network. Remote Sensing 2019, 11, 1815.