Image intensifiers are widely used internationally as advanced military night-vision devices, offering better imaging performance in low-light-level conditions than CMOS/CCD sensors. The intensified CMOS (ICMOS) was developed to meet the demand for digital output from image intensifiers. To make the ICMOS capable of color imaging in low-light-level conditions, a color imaging ICMOS based on a liquid-crystal tunable filter was developed. Because of the time-division color imaging scheme, motion artifacts may be introduced when a moving target is in the scene. To solve this problem, a deformable kernel prediction neural network (DKPNN) is proposed for joint denoising and motion artifact removal, together with a data generation method that produces images with color-channel motion artifacts for training the DKPNN. The results show that, compared with other denoising methods, the proposed DKPNN performs better on both generated and real noisy data, making it well suited to color ICMOS denoising and motion artifact removal. This work offers a new exploration of low-light-level color imaging schemes.
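As a rough illustration of the kernel-prediction idea behind networks like the DKPNN, the sketch below applies per-pixel predicted filter kernels to a noisy image. This is a minimal NumPy sketch with illustrative array names and sizes, not the paper's implementation (the DKPNN additionally predicts deformable sampling offsets); the uniform kernels simply stand in for network output.

```python
import numpy as np

def apply_predicted_kernels(image, kernels):
    """Apply per-pixel predicted filter kernels (kernel-prediction denoising).

    image:   (H, W) noisy single-channel image
    kernels: (H, W, K, K) per-pixel weights, each assumed to sum to 1
    """
    H, W, K, _ = kernels.shape
    pad = K // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            out[y, x] = np.sum(padded[y:y + K, x:x + K] * kernels[y, x])
    return out

# Stand-in for network output: uniform 3x3 kernels (per-pixel box filtering),
# which already reduces zero-mean noise on a flat scene.
rng = np.random.default_rng(0)
clean = np.ones((8, 8))
noisy = clean + 0.1 * rng.standard_normal((8, 8))
kernels = np.full((8, 8, 3, 3), 1.0 / 9.0)
denoised = apply_predicted_kernels(noisy, kernels)
```

In a learned setting, `kernels` would come from a network conditioned on the noisy input, so the filter can adapt to edges and motion at each pixel.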
Zhenghao Han; Li Li; Weiqi Jin; Xia Wang; Gangcheng Jiao; Xuan Liu; Hailin Wang. Denoising and Motion Artifact Removal Using Deformable Kernel Prediction Neural Network for Color-Intensified CMOS. Sensors 2021, 21, 3891.
RGBN cameras, which capture visible and near-infrared (NIR) light simultaneously, produce better color image quality in low-light-level conditions. However, they introduce additional color bias caused by the mixing of visible and NIR information. The color correction matrix model widely used in commercial color digital cameras cannot handle the complicated mapping between biased color and ground-truth color. Convolutional neural networks (CNNs) are good at fitting such complicated relationships, but they require a large quantity of training image pairs from different scenes. Achieving satisfactory training results demands large amounts of manually captured data, even when data augmentation is applied, at significant cost in time and effort. Hence, a method is proposed for generating training pairs consistent with target RGBN camera parameters from an open-access RGB-NIR dataset. The method is verified by training an RGBN color restoration CNN with the generated data. The results show that the CNN model trained with the generated data achieves satisfactory RGBN color restoration performance across different RGBN sensors.
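The pair-generation idea can be sketched as mixing registered NIR into the visible channels according to assumed per-channel NIR sensitivities of the target sensor. The leak coefficients and function name below are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def simulate_rgbn_capture(rgb, nir, nir_leak=(0.30, 0.25, 0.35)):
    """Create a color-biased training input from a registered RGB-NIR pair.

    rgb:      (H, W, 3) ground-truth visible image in [0, 1] (training target)
    nir:      (H, W) registered NIR image in [0, 1]
    nir_leak: assumed per-channel NIR sensitivity of the target RGBN sensor
    """
    leak = np.asarray(nir_leak).reshape(1, 1, 3)
    biased = np.clip(rgb + leak * nir[..., None], 0.0, 1.0)
    return biased  # training input with NIR-induced color bias

# One generated training pair from a flat test scene:
rgb = np.full((4, 4, 3), 0.2)
nir = np.full((4, 4), 0.5)
biased = simulate_rgbn_capture(rgb, nir)
```

A restoration CNN would then be trained to map `biased` back to `rgb`, with the leak coefficients chosen to match the spectral response of the specific RGBN sensor being targeted.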
Zhenghao Han; Li Li; Weiqi Jin; Xia Wang; Gangcheng Jiao; Hailin Wang. Convolutional Neural Network Training for RGBN Camera Color Restoration Using Generated Image Pairs. IEEE Photonics Journal 2020, 12, 1-15.
Zhenghao Han; Weiqi Jin; Li Li; Xia Wang; Xiaofeng Bai; Hailin Wang. Nonlinear Regression Color Correction Method for RGBN Cameras. IEEE Access 2020, 8, 25914-25926.
The dynamic range of night vision scenes is typically very large. Owing to the limited dynamic range of traditional low-light-level imaging technology, captured images are often partially overexposed or underexposed. Multi-exposure fusion is the most effective method for overcoming the dynamic range limitations of sensors. Recently, deep learning has achieved tremendous progress in many fields, yet only a few breakthroughs have been reported on high-dynamic-range image fusion with deep learning, and several problems have been reported in conjunction with commonly used deep-learning methods. In this study, a high-dynamic-range image fusion algorithm is proposed based on a decomposition convolutional neural network and weighted sparse representation. Image decomposition solves the problem of acquiring training samples for network training, which improves the classification accuracy of the network. Additionally, the decomposition structure reduces the workload of each layer and improves the efficiency and quality of the image fusion outcome.
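A decomposition-based fusion pipeline of this general kind can be sketched with classical operations: split each exposure into a base (low-frequency) and detail layer, blend the bases with well-exposedness weights, and keep the strongest detail response at each pixel. This is a simplified stand-in for intuition only, not the paper's CNN and weighted sparse representation method; the Gaussian well-exposedness weight is an assumption.

```python
import numpy as np

def box_blur(img, r=1):
    """Low-pass base layer: simple (2r+1) x (2r+1) box filter."""
    H, W = img.shape
    padded = np.pad(img, r, mode="edge")
    out = np.zeros((H, W), dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + H, dx:dx + W]
    return out / (k * k)

def fuse_exposures(images, r=1, sigma=0.2):
    """Two-scale fusion: blend base layers by well-exposedness,
    keep the strongest detail response at each pixel."""
    bases = [box_blur(im, r) for im in images]
    details = np.stack([im - b for im, b in zip(images, bases)])
    # Well-exposedness weight: pixels near mid-gray count more.
    weights = np.stack([np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2))
                        for im in images])
    base = np.sum(weights * np.stack(bases), axis=0) / (weights.sum(axis=0) + 1e-8)
    idx = np.argmax(np.abs(details), axis=0)
    detail = np.take_along_axis(details, idx[None], axis=0)[0]
    return np.clip(base + detail, 0.0, 1.0)

# Under- and over-exposed captures of a horizontal gradient scene:
scene = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
under = np.clip(scene * 0.5, 0.0, 1.0)
over = np.clip(scene * 1.5, 0.0, 1.0)
fused = fuse_exposures([under, over])
```

The decomposition step is what makes the learning problem tractable in the paper's setting: each scale carries simpler statistics than the full image, so training samples are easier to obtain per layer.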
Guo Chen; Li Li; Wei Qi Jin; Shuo Li. High-Dynamic Range, Night Vision, Image-Fusion Algorithm Based on a Decomposition Convolution Neural Network. IEEE Access 2019, 7, 169762-169772.
Generally, the dynamic range of night vision scenes is large. Owing to the limited dynamic range of traditional low-light imaging technology, captured images are often partially overexposed or underexposed. Multi-exposure fusion is the most effective way to overcome the dynamic range limitation of sensors, and multi-frame low dynamic range (LDR) image fusion is a key consideration. However, existing fusion methods suffer from problems such as image detail blurring and image aliasing. This paper proposes an image multi-scale decomposition method based on a gradient domain guided filter (GDGF), which better extracts image details. The fusion algorithm adopts different fusion strategies at different scales: the low-frequency layer uses a new weighted sparse representation (wSR) method, which eliminates image boundary problems and more fully retains image edges.
Guo Chen; Li Li; Weiqi Jin; Su Qiu; Hui Guo. Weighted Sparse Representation and Gradient Domain Guided Filter Pyramid Image Fusion Based on Low-Light-Level Dual-Channel Camera. IEEE Photonics Journal 2019, 11, 1-15.
We introduce and verify a single-channel time-division filtering low-light-level (LLL) color night vision system (3LCNV). The imaging scheme, comprising a tunable liquid crystal filter, a third-generation GaAsP image intensifier, and a CMOS camera, achieves LLL color imaging while preserving sensitivity. The image enhancement and color reconstruction pipeline, designed for LLL night vision, combines anti-overexposure white balance, color correction matrix (CCM) color correction, and color image denoising to improve color visibility and reduce color difference and image noise. The proposed night vision system extends the minimum working illuminance to 10⁻⁴ lx and achieves natural, clear color LLL imaging, improving night-time observation.
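The reconstruction flow (time-division captures → white balance → CCM) can be sketched as below. The gains and matrix are placeholders standing in for calibrated values, not parameters from the paper.

```python
import numpy as np

def reconstruct_color(frames, wb_gains, ccm):
    """Reconstruct one color frame from three time-division captures.

    frames:   sequence of three (H, W) grayscale captures taken behind
              the R, G, and B states of the tunable filter
    wb_gains: per-channel white-balance gains (placeholders below)
    ccm:      3x3 color correction matrix mapping camera RGB to output RGB
    """
    rgb = np.stack(frames, axis=-1) * np.asarray(wb_gains)
    corrected = rgb @ np.asarray(ccm).T
    return np.clip(corrected, 0.0, 1.0)

# Placeholder gains and an identity CCM; a real system would use
# values calibrated for the intensifier and filter responses.
r = np.full((2, 2), 0.4)
g = np.full((2, 2), 0.3)
b = np.full((2, 2), 0.25)
rgb_out = reconstruct_color([r, g, b], wb_gains=[1.0, 1.0, 2.0], ccm=np.eye(3))
```

Because the three captures are sequential, any scene motion between them shows up as color-channel misalignment, which is exactly the artifact the DKPNN work above targets.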
Tao Yuan; Zhenghao Han; Li Li; Weiqi Jin; Xia Wang; Hailin Wang; Xiaofeng Bai. Tunable-liquid-crystal-filter-based low-light-level color night vision system and its image processing method. Applied Optics 2019, 58, 4947-4955.
Most imaging devices lose image information during acquisition because of their low dynamic range (LDR). Existing high dynamic range (HDR) imaging techniques trade off temporal or spatial resolution, resulting in potential motion blur or image misalignment. Current HDR methods based on the fusion of multi-frame LDR images can suffer from blurring of fine details, image aliasing, and image boundary effects. This study developed a dual-channel camera (DCC) to achieve HDR imaging, which eliminates image motion blur and registration problems. Considering the output characteristics of the camera, we propose a weighted sparse representation multi-scale transform fusion algorithm, which fully preserves the original image information while eliminating image aliasing and boundary problems in the fused image, resulting in high-quality HDR imaging.
Guo Chen; Li Li; Weiqi Jin; Jin Zhu; Feng Shi. Weighted sparse representation multi-scale transform fusion algorithm for high dynamic range imaging with a low-light dual-channel camera. Optics Express 2019, 27, 10564-10579.
Many existing image contrast enhancement methods focus on detail enhancement, noise suppression, and high-contrast suppression. Traditional methods ignore the characteristics of the display, or consider the display only as a whole. However, owing to the limited dynamic range of most display devices on the market, the difference between two adjacent grayscales of the display often falls below the just-noticeable difference of the human visual system, so many image details are invisible on the display. To solve this problem, we present a preprocessing method for image contrast enhancement that combines the characteristics of the human eye and the display, enhancing the image by examining the local histogram. When the processed image is displayed, the algorithm preserves as much image information as possible, and image details are not lost to the limits of the display device. The algorithm also performs well for noise suppression and high-contrast suppression, and it can serve both as a standalone enhancement method and as a correction step for images enhanced by other methods before display.
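In the same spirit of limiting enhancement to what a display can actually render, a minimal contrast-limited histogram equalization looks like this. It is a generic sketch, not the paper's display-and-HVS-specific algorithm; the clip limit is an arbitrary assumption rather than a value derived from display characteristics.

```python
import numpy as np

def clipped_hist_equalization(img, clip=0.02, levels=256):
    """Contrast-limited histogram equalization on an 8-bit image.

    Clipping the histogram before building the tone mapping limits noise
    amplification and over-enhancement of already-high-contrast regions.
    """
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    limit = clip * img.size
    excess = np.maximum(hist - limit, 0).sum()
    clipped = np.minimum(hist, limit) + excess / levels  # redistribute excess
    cdf = np.cumsum(clipped)
    lut = np.round((levels - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

# A low-contrast ramp occupying only part of the grayscale range:
img = np.tile(np.arange(64, 128, dtype=np.uint8), (32, 1))
enhanced = clipped_hist_equalization(img)
```

A display-aware variant would choose the clip limit and mapping so that each output grayscale step stays above the just-noticeable difference of the target display.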
Guo Chen; Li Li; Weiqi Jin; Mingcong Liu; Feng Shi. Image contrast enhancement method based on display and human visual system characteristics. Applied Optics 2019, 58, 1813-1823.
Because the optical surfaces inside a simultaneous polarization imaging system and the vignetting of the edge field change the polarization state of incident light, the polarization information reconstructed by the system can be inaccurate. We propose a point-by-point instrument matrix calibration method that accounts for the edge field, using an integrating sphere plus a rotating polarizer as a polarization-controlled light source, and we carry out a non-contact detection experiment on the tilt angle of flat glass using a simultaneous imaging polarimeter with a double separate Wollaston prism. The experimental results indicate that the degree of linear polarization, angle of polarization, and angle of incidence can be reconstructed more accurately by the system calibrated with the point-by-point instrument matrix. This improves the detection accuracy of the polarization state of transparent medium surfaces and lays a theoretical foundation for research on polarization imaging and its application to quantitative detection.
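The role of a calibrated instrument matrix can be sketched for a single pixel: measured intensities relate linearly to the Stokes vector through the matrix, and inverting it recovers the degree and angle of linear polarization. The ideal four-angle analyzer matrix below is an illustrative assumption; point-by-point calibration would replace it with a separately measured matrix at each pixel.

```python
import numpy as np

def reconstruct_stokes(intensities, A):
    """Recover the linear Stokes vector at one pixel from N measurements.

    intensities: (N,) measured intensities
    A:           (N, 3) instrument matrix for this pixel; point-by-point
                 calibration yields a separately measured A per pixel
    """
    S = np.linalg.pinv(A) @ intensities          # [S0, S1, S2]
    dolp = np.hypot(S[1], S[2]) / S[0]           # degree of linear polarization
    aop = 0.5 * np.arctan2(S[2], S[1])           # angle of polarization (rad)
    return S, dolp, aop

# Ideal (uncalibrated) matrix for analyzers at 0, 45, 90, 135 degrees:
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
A = 0.5 * np.stack([np.ones_like(angles),
                    np.cos(2 * angles),
                    np.sin(2 * angles)], axis=1)
# Fully horizontally polarized light, S = [1, 1, 0]:
I = A @ np.array([1.0, 1.0, 0.0])
S, dolp, aop = reconstruct_stokes(I, A)
```

Edge-field effects and internal optical surfaces perturb the rows of `A` away from this ideal form, which is why a per-pixel calibrated matrix reconstructs the polarization state more accurately than a single global one.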
Xiaotian Lu; Jie Yang; Li Li; Weiqi Jin; Hengze Wu; Man Xu. Point by point calibration method for simultaneous polarization imaging system based on large field polarization imaging theory. Optik 2018, 180, 1027-1035.