In the past few years, multitask learning (MTL) has been widely used to solve several tasks within a single model. MTL enables each task to achieve high performance while greatly reducing computational resource overhead. In this work, we designed a collaborative network that simultaneously performs super-resolution semantic segmentation and super-resolution image reconstruction. The algorithm can produce high-resolution semantic segmentation and super-resolution reconstruction results from relatively low-resolution input images when high-resolution data are unavailable or computing resources are limited. The framework consists of three parts: the semantic segmentation branch (SSB), the super-resolution branch (SRB), and the structural affinity block (SAB). Specifically, the SSB, SRB, and SAB are responsible for super-resolution semantic segmentation, image super-resolution reconstruction, and associating features between the two branches, respectively. Our proposed method is simple and efficient, and each branch can be replaced with most state-of-the-art models. The International Society for Photogrammetry and Remote Sensing (ISPRS) segmentation benchmarks were used to evaluate our models. In particular, for super-resolution semantic segmentation on the Potsdam dataset, the Intersection over Union (IoU) dropped by only 1.8% when the resolution of the input image was reduced by a factor of two. The experimental results showed that our framework obtains more accurate semantic segmentation and super-resolution reconstruction results than single-task models.
Qian Zhang; Guang Yang; Guixu Zhang. Collaborative Network for Super-Resolution and Semantic Segmentation of Remote Sensing Images. IEEE Transactions on Geoscience and Remote Sensing 2021, PP, 1-12.
Single image dehazing is a challenging ill-posed problem that has drawn significant attention in the last few years. Recently, convolutional neural networks have achieved great success in image dehazing. However, it is still difficult for these increasingly complex models to recover accurate details from the hazy image. In this paper, we pay attention to the feature extraction and utilization of the input image itself. To achieve this, we propose a Multi-scale Topological Network (MSTN) to fully explore the features at different scales. Meanwhile, we design a Multi-scale Feature Fusion Module (MFFM) and an Adaptive Feature Selection Module (AFSM) to achieve the selection and fusion of features at different scales, so as to achieve progressive image dehazing. This topological network provides a large number of search paths, which enables the network to extract abundant image features and gives it strong fault tolerance and robustness. In addition, AFSM and MFFM can adaptively select important features and ignore interference information when fusing representations at different scales. Extensive experiments are conducted to demonstrate the superiority of our method compared with state-of-the-art methods.
Qiaosi Yi; Juncheng Li; Faming Fang; Aiwen Jiang; Guixu Zhang. Efficient and Accurate Multi-scale Topological Network for Single Image Dehazing. IEEE Transactions on Multimedia 2021, PP, 1-1.
It has been demonstrated that the blurring process reduces the high-frequency information of the original sharp image, so the main challenge for image deblurring is to reconstruct high-frequency information from the blurry image. In this paper, we propose a novel image deblurring framework that focuses on the reconstruction of high-frequency information, which consists of two main subnetworks: a high-frequency reconstruction subnetwork (HFRSN) and a multi-scale grid subnetwork (MSGSN). The HFRSN is built to reconstruct latent high-frequency information from blurry images at multiple scales. The MSGSN performs deblurring with high-frequency guidance at different scales simultaneously. In addition, to make better use of high-frequency information for restoring sharp images, we designed a high-frequency information aggregation (HFAG) module and a high-frequency information attention (HFAT) module in the MSGSN. The HFAG module is designed to fuse high-frequency features and image features at the feature extraction stage, and the HFAT module is built to enhance the feature reconstruction stage. Extensive experiments on different datasets show the effectiveness and efficiency of our method.
Yang Liu; Faming Fang; Tingting Wang; Juncheng Li; Yun Sheng; Guixu Zhang. Multi-scale Grid Network for Image Deblurring with High-frequency Guidance. IEEE Transactions on Multimedia 2021, PP, 1-1.
Panorama creation is still challenging in consumer-level photography because of varying conditions of image capturing. A long-standing problem is the presence of artifacts caused by structurally inconsistent image transitions. Since it is difficult to achieve perfect alignment in unconstrained shooting environments, especially with parallax and object movements, image composition becomes a crucial step to produce artifact-free stitching results. Current energy-based seam-cutting image composition approaches are limited by hand-crafted features, which are not discriminative and adaptive enough to robustly create structurally consistent image transitions. In this paper, we present the first end-to-end deep learning framework, named Edge Guided Composition Network (EGCNet), for the composition stage in image stitching. We cast the whole composition stage as an image blending problem, and aim to regress the blending weights to seamlessly produce the stitched image. To better preserve structural consistency, we exploit perceptual edges to guide the network with an additional geometric prior. Specifically, we introduce a perceptual edge branch to integrate edge features into the model and propose two edge-aware losses for edge guidance. Meanwhile, we gathered a general-purpose dataset for image stitching training and evaluation (named RISD). Extensive experiments demonstrate that our EGCNet produces plausible results in less running time, and outperforms traditional methods, especially under circumstances of parallax and object motion.
Qinyan Dai; Faming Fang; Juncheng Li; Guixu Zhang; Aimin Zhou. Edge-guided Composition Network for Image Stitching. Pattern Recognition 2021, 118, 108019.
Semantic segmentation is a fundamental task in remote sensing image processing. It provides pixel-level classification, which is important for many applications, such as building extraction and land use mapping. The development of convolutional neural networks has considerably improved the performance of semantic segmentation. Most semantic segmentation networks adopt the encoder-decoder structure. Bilinear interpolation is the ordinary upsampling method in the decoder, but it considers only each pixel's own features and fills the interpolated positions with fixed position-based blends of them. This over-simple and data-independent bilinear upsampling may lead to suboptimal results. In this work, we propose an upsampling method based on local relations to replace bilinear interpolation. Upsampling is performed by correlating the local relationships of feature maps of adjacent stages, which can better integrate local and global information. We also design a fusion module based on local similarity. With ResNet101 as the backbone of the segmentation network, our proposed method improves the average F₁ score and overall accuracy on the Vaihingen dataset by 2.69% and 1.31%, respectively. Our proposed method also has fewer parameters and less inference time.
Baokai Lin; Guang Yang; Qian Zhang; Guixu Zhang. Semantic Segmentation Network Using Local Relationship Upsampling for Remote Sensing Images. IEEE Geoscience and Remote Sensing Letters 2021, PP, 1-5.
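For context, the data-independent bilinear upsampling that this paper replaces can be sketched in a few lines of plain Python. The 2x factor and the align-corners-false coordinate mapping are illustrative choices, not taken from the paper; note that the interpolation weights depend only on pixel positions, never on the feature values, which is exactly the data independence the abstract criticizes.

```python
import math

def bilinear_upsample_2x(img):
    """2x bilinear upsampling of a 2D grid (align_corners=False convention).

    The interpolation weights below are functions of position only,
    never of the pixel values themselves.
    """
    h, w = len(img), len(img[0])

    def sample(r, c):
        # replicate border pixels outside the grid
        return img[min(max(r, 0), h - 1)][min(max(c, 0), w - 1)]

    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(2 * h):
        for j in range(2 * w):
            # map each output coordinate back to a fractional input coordinate
            y = (i + 0.5) / 2 - 0.5
            x = (j + 0.5) / 2 - 0.5
            y0, x0 = math.floor(y), math.floor(x)
            dy, dx = y - y0, x - x0
            out[i][j] = (sample(y0, x0) * (1 - dy) * (1 - dx)
                         + sample(y0, x0 + 1) * (1 - dy) * dx
                         + sample(y0 + 1, x0) * dy * (1 - dx)
                         + sample(y0 + 1, x0 + 1) * dy * dx)
    return out
```

Upsampling a sharp vertical edge such as [[0, 1], [0, 1]] produces rows of fixed position-based blends regardless of image content, which is why a learned, locally adaptive upsampling can do better.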
Convolutional neural networks have been proven to be of great benefit for single-image super-resolution (SISR). However, previous works do not make full use of multi-scale features and ignore the inter-scale correlation between different upsampling factors, resulting in sub-optimal performance. Instead of blindly increasing the depth of the network, we are committed to mining image features and learning the inter-scale correlation between different upsampling factors. To achieve this, we propose a Multi-scale Dense Cross Network (MDCN), which achieves great performance with fewer parameters and less execution time. MDCN consists of multi-scale dense cross blocks (MDCBs), a hierarchical feature distillation block (HFDB), and a dynamic reconstruction block (DRB). Among them, the MDCB aims to detect multi-scale features and maximize the use of image feature flow at different scales, the HFDB focuses on adaptively recalibrating channel-wise feature responses to achieve feature distillation, and the DRB attempts to reconstruct SR images with different upsampling factors in a single model. It is worth noting that all these modules can run independently, which means they can be selectively plugged into any CNN model to improve its performance. Extensive experiments show that MDCN achieves competitive results in SISR, especially in the reconstruction task with multiple upsampling factors. The code is provided at https://github.com/MIVRC/MDCN-PyTorch.
Juncheng Li; Faming Fang; Jiaqian Li; Kangfu Mei; Guixu Zhang. MDCN: Multi-Scale Dense Cross Network for Image Super-Resolution. IEEE Transactions on Circuits and Systems for Video Technology 2020, 31(7), 2547-2561.
Low-light image enhancement based on deep convolutional neural networks (CNNs) has shown prominent performance in recent years. However, it is still a challenging task since underexposed regions and details are often imperceptible. Moreover, these deep models are often accompanied by complex structures and a heavy computational burden, which hinder their application on mobile devices. To address these problems, in this paper, we propose a lightweight and efficient Luminance-aware Pyramid Network (LPNet) that reconstructs normal-light images in a coarse-to-fine strategy. The architecture comprises two coarse feature extraction branches and a luminance-aware refinement branch, with an auxiliary subnet learning the luminance map of the input and target images. In addition, we propose a multi-scale contrast feature block (MSCFB) that involves channel-split and channel-shuffle strategies and a contrast attention mechanism. The MSCFB is the basic component of our network and strikes an excellent tradeoff between image quality and model size. In this way, the proposed method can not only brighten low-light images with rich details and high contrast, but also greatly improve execution speed. Extensive experiments demonstrate that our LPNet outperforms state-of-the-art methods by a large margin.
Jiaqian Li; Juncheng Li; Faming Fang; Fang Li; Guixu Zhang. Luminance-aware Pyramid Network for Low-light Image Enhancement. IEEE Transactions on Multimedia 2020, PP, 1-1.
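The channel-split and channel-shuffle strategies mentioned for the MSCFB follow the ShuffleNet-style idea of mixing information across channel groups. A minimal sketch on a flat list of channel indices (the group count here is illustrative, not the paper's configuration):

```python
def channel_split(channels, groups):
    """Split a flat channel list into equal-sized groups."""
    per = len(channels) // groups
    return [channels[g * per:(g + 1) * per] for g in range(groups)]

def channel_shuffle(channels, groups):
    """Interleave the groups so later layers see a mix of all of them:
    [a0, a1, a2, b0, b1, b2] -> [a0, b0, a1, b1, a2, b2]."""
    per = len(channels) // groups
    return [channels[g * per + c] for c in range(per) for g in range(groups)]
```

Splitting keeps each branch cheap (fewer channels per convolution), while shuffling restores cross-group information flow at essentially zero cost, which is how such blocks trade model size against quality.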
Image denoising is a challenging inverse problem due to complex scenes and information loss. Recently, various methods have been considered to solve this problem by building a well-designed convolutional neural network (CNN) or introducing hand-designed image priors. Different from previous works, we investigate a new framework for image denoising, which integrates edge detection, edge guidance, and image denoising into an end-to-end CNN model. To achieve this goal, we propose a multilevel edge features guided network (MLEFGN). First, we build an edge reconstruction network (Edge-Net) to directly predict clear edges from the noisy image. Then, the Edge-Net is embedded as part of the model to provide edge priors, and a dual-path network is applied to extract the image and edge features, respectively. Finally, we introduce a multilevel edge features guidance mechanism for image denoising. To the best of our knowledge, the Edge-Net is the first CNN model specially designed to reconstruct image edges from a noisy image, and it shows good accuracy and robustness on natural images. Extensive experiments clearly illustrate that our MLEFGN achieves favorable performance against other methods, and extensive ablation studies demonstrate the effectiveness of the proposed Edge-Net and MLEFGN. The code is available at https://github.com/MIVRC/MLEFGN-PyTorch.
Faming Fang; Juncheng Li; Yiting Yuan; Tieyong Zeng; Guixu Zhang. Multilevel Edge Features Guided Network for Image Denoising. IEEE Transactions on Neural Networks and Learning Systems 2020, PP, 1-15.
Deep learning methods have been used to extract buildings from remote sensing images and have achieved state-of-the-art performance. Most previous work has emphasized the multi-scale fusion of features or the enlargement of receptive fields to capture global features, rather than focusing on low-level details such as edges. In this work, we propose a novel end-to-end edge-aware network, the EANet, and an edge-aware loss for extracting accurate buildings from aerial images. Specifically, the architecture is composed of an image segmentation network and an edge perception network that, respectively, take charge of building prediction and edge investigation. The International Society for Photogrammetry and Remote Sensing (ISPRS) Potsdam segmentation benchmark and the Wuhan University (WHU) building benchmark were used to evaluate our approach, which achieves intersection-over-union scores of 90.19% and 93.33%, respectively, reaching top performance without additional datasets, data augmentation, or post-processing. The EANet is effective in extracting buildings from aerial images, which shows that the quality of image segmentation can be improved by focusing on edge details.
Guang Yang; Qian Zhang; Guixu Zhang. EANet: Edge-Aware Network for the Extraction of Buildings from Aerial Images. Remote Sensing 2020, 12(13), 2161.
Image stitching aims to generate a natural, seamless, high-resolution panoramic image free of distortions or artifacts as fast as possible. In this article, we propose a new seam-cutting strategy based on superpixels for unmanned aerial vehicle (UAV) image stitching. Explicitly, we decompose the issue into three steps: image registration, seam cutting, and image blending. First, we employ adaptive as-natural-as-possible (AANAP) warps for registration, obtaining two aligned images in the same coordinate system. Then, we propose a novel superpixel-based energy function that integrates color difference, gradient difference, and texture complexity information to search for a perceptually optimal seam located in continuous areas of high similarity. We apply the graph cut algorithm to solve the problem and thereby conceal artifacts in the overlapping area. Finally, we utilize a superpixel-based color blending approach to eliminate visible seams and achieve natural color transitions. Experimental results demonstrate that our method can effectively and efficiently realize seamless stitching, and is superior to several state-of-the-art methods in UAV image stitching.
Yiting Yuan; Faming Fang; Guixu Zhang. Superpixel-Based Seamless Image Stitching for UAV Images. IEEE Transactions on Geoscience and Remote Sensing 2020, 59(2), 1565-1576.
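A full superpixel graph-cut formulation is beyond a short snippet, but the core idea of seam cutting, choosing a path of minimum accumulated difference through the overlap region so the transition between images is invisible, can be illustrated with a toy dynamic-programming seam on a per-pixel cost map. This is a much-simplified stand-in for the paper's superpixel-based graph-cut energy, not its actual algorithm:

```python
def min_cost_seam(diff):
    """Top-to-bottom seam of minimum accumulated cost (dynamic programming).

    diff[i][j] is the difference between the two overlapping images at
    pixel (i, j); the returned list gives one column index per row.
    """
    h, w = len(diff), len(diff[0])
    cost = [row[:] for row in diff]
    for i in range(1, h):
        for j in range(w):
            # a seam may move at most one column per row
            best = cost[i - 1][j]
            if j > 0:
                best = min(best, cost[i - 1][j - 1])
            if j < w - 1:
                best = min(best, cost[i - 1][j + 1])
            cost[i][j] += best
    # backtrack from the cheapest bottom cell
    j = min(range(w), key=lambda c: cost[h - 1][c])
    seam = [j]
    for i in range(h - 1, 0, -1):
        j = min((c for c in (j - 1, j, j + 1) if 0 <= c < w),
                key=lambda c: cost[i - 1][c])
        seam.append(j)
    return seam[::-1]
```

The paper's energy additionally weighs gradient difference and texture complexity, works on superpixels rather than pixels, and uses graph cut, which handles arbitrary seam topology that this one-column-per-row DP cannot.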
In this paper, we propose a novel Retinex-based fractional-order variational model for severely low-light images. The proposed method is more flexible in controlling the extent of regularization than existing integer-order regularization methods. Specifically, we perform the decomposition directly in the image domain and apply fractional-order gradient total variation regularization to both the reflectance component and the illumination component to obtain more appropriate estimates. The merits of the proposed method are as follows: 1) small-magnitude details are maintained in the estimated reflectance; 2) illumination components are effectively removed from the estimated reflectance; and 3) the estimated illumination is more likely to be piecewise smooth. We compare the proposed method with other closely related Retinex-based methods. Experimental results demonstrate the effectiveness of the proposed method.
Zhihao Gu; Fang Li; Faming Fang; Guixu Zhang. A Novel Retinex-Based Fractional-Order Variational Model for Images With Severely Low Light. IEEE Transactions on Image Processing 2019, 29, 3239-3253.
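As a rough illustration of the class of model involved (the abstract does not give the paper's actual energy, so the term forms, norms, and weights λ₁, λ₂ below are hypothetical), a fractional-order Retinex decomposition minimizes something of the shape:

```latex
% Schematic only: the fidelity/regularization terms and the weights
% \lambda_1, \lambda_2 are illustrative, not taken from the paper.
\min_{R,\,L}\;
  \|\nabla^{\alpha} R\|_{1}
  + \lambda_{1}\,\|\nabla^{\beta} L\|_{2}^{2}
  + \lambda_{2}\,\|R \cdot L - I\|_{2}^{2}
```

Here $I$ is the observed low-light image and $R \cdot L$ its Retinex decomposition into reflectance and illumination. The fractional orders $\alpha, \beta$ are what integer-order TV models fix at 1 or 2; letting them vary continuously is the extra flexibility in "controlling the extent of regularization" that the abstract claims.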
In this paper, we investigate the challenging task of removing haze from a single natural image. Analysis of the haze formation model shows that the atmospheric veil has much less relevance to chrominance than to luminance, which motivates us to neglect haze in the chrominance channels and concentrate on the luminance channel during dehazing. Moreover, an experimental study illustrates that the YUV color space is the most suitable for image dehazing. Accordingly, a variational model is proposed in the Y channel of the YUV color space by combining a reformulation of the haze model with two effective priors. As we mainly focus on the Y channel, most of the chrominance information of the image is preserved after dehazing. A numerical procedure based on the alternating direction method of multipliers (ADMM) scheme is presented to obtain the optimal solution. Extensive experimental results on real-world hazy images and a synthetic dataset clearly demonstrate that our method can unveil details and recover vivid color information, and that it is competitive among existing dehazing algorithms. Further experiments show that our model can also be applied to image enhancement.
Faming Fang; Tingting Wang; Yang Wang; Tieyong Zeng; Guixu Zhang. Variational Single Image Dehazing for Enhanced Visualization. IEEE Transactions on Multimedia 2019, 22(10), 2537-2550.
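Working only in the Y channel requires converting between RGB and YUV and back, so that the untouched U and V channels carry the chrominance through unchanged. A minimal sketch using the standard BT.601 definitions (the abstract does not state which constants the paper uses, so these are an assumption):

```python
def rgb_to_yuv(r, g, b):
    """BT.601 RGB -> YUV; Y is the luminance channel that gets dehazed."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Invert the transform, so chrominance (U, V) passes through untouched."""
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b
```

Dehazing then amounts to replacing y with its restored estimate before converting back: since u and v are unchanged, the original chrominance is preserved exactly, which is the property the abstract highlights.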
Interference between the grid of the camera sensor and that of the screen causes moiré patterns to appear on photographs captured from a screen, significantly affecting people's ability to review images. We propose a novel method to remove such a screen moiré pattern from a single image. We characterize the degraded image as a composition of two layers: the latent layer and the moiré pattern layer. Because the screen moiré pattern is global and content-independent, we regard it as a group of sublayers, and we find that each sublayer after a shear transformation has a low-rank property. Combined with the piecewise-constant feature of the latent layer, a convex model is proposed to solve the demoiréing problem. Experiments on synthetic and real data demonstrate its feasibility and efficiency.
Faming Fang; Tingting Wang; Shuyan Wu; Guixu Zhang. Removing moiré patterns from single images. Information Sciences 2019, 514, 56-70.
Pan-sharpening is a method of integrating low-resolution multispectral images with corresponding high-resolution panchromatic images to obtain multispectral images with high spectral and spatial resolution. A novel variational model for pan-sharpening is proposed in this paper. The model is mainly based on three hypotheses: 1) the pan-sharpened image can be linearly represented by the corresponding panchromatic image; 2) the low-resolution multispectral image is down-sampled from the high-resolution multispectral image through a down-sampling operator; and 3) the satellite image has a low-rank property. Three energy components corresponding to these assumptions are integrated into a variational framework to obtain a total energy function. We adopt the alternating direction method of multipliers (ADMM) to optimize the total energy function. The experimental results show that the proposed method performs better than other mainstream methods in preserving spectral and spatial information.
Yingxia Chen; Tingting Wang; Faming Fang; Guixu Zhang. A pan-sharpening method based on the ADMM algorithm. Frontiers of Earth Science 2019, 13(3), 656-667.
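The three hypotheses map naturally onto three energy terms. As a schematic only (the exact term forms, the coefficients $a_k$, and the weights $\lambda_i$ are illustrative assumptions, since the abstract does not state the paper's energy), the total function has the shape:

```latex
% Schematic only: one term per hypothesis, with hypothetical weights.
\min_{X}\;
  \lambda_{1} \sum_{k} \| X_{k} - a_{k} P \|_{2}^{2}   % 1) linear relation to the PAN image
  + \lambda_{2} \| D X - M \|_{2}^{2}                  % 2) down-sampling consistency
  + \lambda_{3} \| X \|_{*}                            % 3) low-rank prior
```

Here $X$ is the pan-sharpened image with bands $X_k$, $P$ the panchromatic image, $D$ the down-sampling operator, $M$ the observed low-resolution multispectral image, and $\|\cdot\|_{*}$ the nuclear norm encoding low rank. ADMM is a natural fit because it splits the sum into subproblems, each of which typically admits a cheap closed-form update.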
In this paper, we propose a new area-based convexity measure. We assume that the convexity evaluation of an arbitrary planar shape is related to the total influence of the dents of the shape, and discover that attributes of the dents, such as their position, area, and depth with respect to the Geometric Center of the Convex Hull (GCCH) of the shape, determine the dent influence. We consider the convex hull of the shape to consist of infinitely small patches, to each of which we assign a weight indicating the patch influence. We can simply integrate the patch weights over any region within the convex hull to calculate its total influence. We define this operation as Distance Weighted Area Integration, where the weight is associated with the Euclidean distance from the patch to the GCCH. Our new measure is a distance-weighted generalization of the most commonly used convexity measure, making this conventional measure fully replaceable for the first time. Experiments demonstrate the advantages of the new convexity measure over existing ones.
Rui Li; Xiayan Shi; Yun Sheng; Guixu Zhang. A new area-based convexity measure with distance weighted area integration for planar shapes. Computer Aided Geometric Design 2019, 71, 176-189.
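The "most commonly used convexity measure" that the paper generalizes is the ratio of a shape's area to the area of its convex hull (the distance-weighted version replaces the uniform patch weight with one depending on distance to the GCCH). A self-contained sketch of the classic, unweighted measure, using a monotone-chain hull and the shoelace formula:

```python
def shoelace_area(poly):
    """Unsigned area of a simple polygon given as (x, y) vertex tuples."""
    n, s = len(poly), 0.0
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def convex_hull(points):
    """Andrew's monotone-chain convex hull (collinear points dropped)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half(seq):
        h = []
        for p in seq:
            # pop while the last turn is clockwise or collinear
            while len(h) >= 2 and (
                (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])
            ) <= 0:
                h.pop()
            h.append(p)
        return h

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def convexity(poly):
    """Classic area-based convexity: Area(S) / Area(ConvexHull(S)), in (0, 1]."""
    return shoelace_area(poly) / shoelace_area(convex_hull(poly))
```

For a convex shape the measure is exactly 1; an L-shaped hexagon such as (0,0),(2,0),(2,1),(1,1),(1,2),(0,2) scores 3/3.5 = 6/7, since the dent removes area from the hull. The paper's point is that this measure weighs every dent patch equally, regardless of where the dent sits relative to the GCCH.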
Image colorization refers to a computer-assisted process that adds colors to grayscale images. It is a challenging task since there is usually no one-to-one correspondence between color and local texture. In this paper, we tackle this issue by exploiting weighted nonlocal self-similarity and local consistency constraints at the resolution of superpixels. Given a grayscale target image, we first select a color source image containing segments similar to the target image and extract multi-level features of each superpixel in both images after superpixel segmentation. Then a set of color candidates for each target superpixel is selected by adopting a top-down feature matching scheme with confidence assignment. Finally, we propose a variational approach to determine the most appropriate color for each target superpixel from the color candidates. Experiments demonstrate the effectiveness of the proposed method and show its superiority over other state-of-the-art methods. Furthermore, our method can be easily extended to color transfer between two color images.
Faming Fang; Tingting Wang; Tieyong Zeng; Guixu Zhang. A Superpixel-Based Variational Model for Image Colorization. IEEE Transactions on Visualization and Computer Graphics 2019, 26(10), 2931-2943.
Faming Fang; Tingting Wang; Yingying Fang; Guixu Zhang. Fast Color Blending for Seamless Image Stitching. IEEE Geoscience and Remote Sensing Letters 2019, 16(7), 1115-1119.
Chaomin Shen; Mixue Yu; Chenxiao Zhao; Yaxin Peng; Guixu Zhang. Parallel Hashing Using Representative Points in Hyperoctants. Proceedings of the 27th ACM International Conference on Information and Knowledge Management 2018, 813-822.
Recent studies have shown that deep neural networks can significantly improve the quality of single-image super-resolution. Current research tends to use deeper convolutional neural networks to enhance performance. However, blindly increasing the depth of the network cannot improve it effectively. Worse still, as the depth of the network increases, more problems occur in the training process and more training tricks are needed. In this paper, we propose a novel multi-scale residual network (MSRN) to fully exploit the image features, which outperforms most state-of-the-art methods. Based on the residual block, we introduce convolution kernels of different sizes to adaptively detect image features at different scales. Meanwhile, we let these features interact with each other to obtain the most effective image information; we call this structure the Multi-scale Residual Block (MSRB). Furthermore, the outputs of each MSRB are used as hierarchical features for global feature fusion. Finally, all these features are sent to the reconstruction module for recovering the high-quality image.
Juncheng Li; Faming Fang; Kangfu Mei; Guixu Zhang. Multi-scale Residual Network for Image Super-Resolution. Lecture Notes in Computer Science 2018, 527-542.
Pansharpening is a process of acquiring a multispectral image with high spatial resolution by fusing a low-resolution multispectral image with a corresponding high-resolution panchromatic image. In this paper, a new pansharpening method based on Bayesian theory is proposed. The algorithm is mainly based on three assumptions: 1) the geometric information contained in the pan-sharpened image is consistent with that contained in the panchromatic image; 2) the pan-sharpened image and the original multispectral image should share the same spectral information; and 3) in each channel of the pan-sharpened image, neighbouring pixels away from edges are similar. We build our posterior probability model according to these assumptions and solve it using the alternating direction method of multipliers. Experiments at reduced and full resolution show that the proposed method outperforms other state-of-the-art pansharpening methods. Besides, we verify that the new algorithm is effective in preserving spectral and spatial information with high reliability. Further experiments also show that the proposed method can be successfully extended to hyperspectral image fusion.
Tingting Wang; Faming Fang; Fang Li; Guixu Zhang. High-Quality Bayesian Pansharpening. IEEE Transactions on Image Processing 2018, 28(1), 227-239.