Feature matching is a fundamental technique in remote sensing image processing. This article proposes a new formulation of affine covariant feature matching for remote sensing images, in which features are matched by matching two sets of triplets. Compared with previous works, the formulation exploits the whole feature frame, rather than only the 2-D location, to reject outliers. In addition, we develop a new latent variable model that combines the feature frame with the SIFT ratio values to improve the convergence speed and success rate in challenging cases. We evaluate our model on three challenging datasets in both qualitative and quantitative experiments. We also study the robustness to outliers, since remote sensing images are typically affected by mismatches. The results demonstrate that the proposed method provides excellent matching performance with satisfactory runtime and shows good robustness to outliers.
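To illustrate the geometric idea behind matching triplets rather than single points: a 2-D affine transform has six parameters and is therefore fully determined by one triplet of point correspondences. The following minimal sketch (hypothetical code with an illustrative name, not the authors' implementation) recovers the affine parameters from three matched points:

```python
def affine_from_triplet(src, dst):
    """Solve (u, v) = (a*x + b*y + c, d*x + e*y + f) from 3 correspondences."""
    def det3(M):
        return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
              - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
              + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
    def solve3(A, rhs):
        # Cramer's rule on a 3x3 linear system.
        D = det3(A)
        sol = []
        for j in range(3):
            M = [row[:] for row in A]
            for i in range(3):
                M[i][j] = rhs[i]
            sol.append(det3(M) / D)
        return sol
    A = [[x, y, 1.0] for (x, y) in src]
    abc = solve3(A, [u for (u, _) in dst])   # parameters of the u-equation
    def_ = solve3(A, [v for (_, v) in dst])  # parameters of the v-equation
    return abc + def_
```

Matching two sets of such triplets thus constrains the full affine frame, which is what lets the frame (not just the location) vote against outliers.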
Liang Shen; Jiahua Zhu; Chongyi Fan; Xiaotao Huang; Tian Jin. A Novel Affine Covariant Feature Mismatch Removal for Feature Matching. IEEE Transactions on Geoscience and Remote Sensing 2021, PP, 1-13.
AMA Style: Liang Shen, Jiahua Zhu, Chongyi Fan, Xiaotao Huang, Tian Jin. A Novel Affine Covariant Feature Mismatch Removal for Feature Matching. IEEE Transactions on Geoscience and Remote Sensing. 2021; PP(99):1-13.
Chicago/Turabian Style: Liang Shen; Jiahua Zhu; Chongyi Fan; Xiaotao Huang; Tian Jin. 2021. "A Novel Affine Covariant Feature Mismatch Removal for Feature Matching." IEEE Transactions on Geoscience and Remote Sensing PP, no. 99: 1-13.
Radar-based non-contact vital signs monitoring has great value in through-wall detection applications. This paper presents the theoretical and experimental study of through-wall respiration and heartbeat pattern extraction from multiple subjects. To detect the vital signs of multiple subjects, we employ a low-frequency ultra-wideband (UWB) multiple-input multiple-output (MIMO) imaging radar and derive the relationship between radar images and vibrations caused by human cardiopulmonary movements. The derivation indicates that MIMO radar imaging with the stepped-frequency continuous-wave (SFCW) improves the signal-to-noise ratio (SNR) critically by the factor of radar channel number times frequency number compared with continuous-wave (CW) Doppler radars. We also apply the three-dimensional (3-D) higher-order cumulant (HOC) to locate multiple subjects and extract the phase sequence of the radar images as the vital signs signal. To monitor the cardiopulmonary activities, we further exploit the VMD algorithm with a proposed grouping criterion to adaptively separate the respiration and heartbeat patterns. A series of experiments have validated the localization and detection of multiple subjects behind a wall. The VMD algorithm is suitable for separating the weaker heartbeat pattern from the stronger respiration pattern by the grouping criterion. Moreover, the continuous monitoring of heart rate (HR) by the MIMO radar in real scenarios shows a strong consistency with the reference electrocardiogram (ECG).
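The phase-to-displacement relationship behind extracting "the phase sequence of the radar images as the vital signs signal" is the standard one for radar vital-sign sensing: a chest displacement d shifts the round-trip echo phase by 4*pi*d/wavelength. A minimal sketch (hypothetical helper name, not the paper's code) that unwraps the slow-time phase of a complex image pixel and converts it to displacement:

```python
import cmath
import math

def displacement_from_phase(pixels, wavelength):
    """Convert the slow-time phase of a complex radar-image pixel sequence
    into displacement (metres), using d = wavelength * phase / (4*pi)."""
    phases = [cmath.phase(p) for p in pixels]
    # Unwrap: wrap each consecutive difference into [-pi, pi], then accumulate.
    unwrapped = [phases[0]]
    for prev, cur in zip(phases, phases[1:]):
        step = cur - prev
        step -= 2 * math.pi * round(step / (2 * math.pi))
        unwrapped.append(unwrapped[-1] + step)
    return [wavelength * (ph - unwrapped[0]) / (4 * math.pi) for ph in unwrapped]
```

The respiration and heartbeat components would then be separated from this displacement sequence (in the paper, by VMD with the grouping criterion).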
Zhi Li; Tian Jin; Yongpeng Dai; Yongkun Song. Through-Wall Multi-Subject Localization and Vital Signs Monitoring Using UWB MIMO Imaging Radar. Remote Sensing 2021, 13, 2905.
AMA Style: Zhi Li, Tian Jin, Yongpeng Dai, Yongkun Song. Through-Wall Multi-Subject Localization and Vital Signs Monitoring Using UWB MIMO Imaging Radar. Remote Sensing. 2021; 13(15):2905.
Chicago/Turabian Style: Zhi Li; Tian Jin; Yongpeng Dai; Yongkun Song. 2021. "Through-Wall Multi-Subject Localization and Vital Signs Monitoring Using UWB MIMO Imaging Radar." Remote Sensing 13, no. 15: 2905.
Autofocusing of multiple-input multiple-output (MIMO) penetrating radar is a recently developed method for focusing images in unknown environments. However, the ergodic search in the subfocusing process greatly increases the amount of computation, which limits the practical application of this method. To solve the problem, we introduce an image-domain-filter-based method. All compensation and correction are performed in the image domain, which avoids calculating the position of the refraction point and thus saves considerable computation time. In this letter, we show that the image filter can autofocus well in both ground-penetrating and wall-penetrating scenarios with unknown parameters. Both simulation and measurement experiments show that the method completes the compensation precisely and quickly and offers better focusing quality.
Zhuo Xu; Tian Jin. An Image-Domain Filter for Refraction Effects Compensation of Penetrating MIMO Imagery. IEEE Geoscience and Remote Sensing Letters 2021, PP, 1-5.
AMA Style: Zhuo Xu, Tian Jin. An Image-Domain Filter for Refraction Effects Compensation of Penetrating MIMO Imagery. IEEE Geoscience and Remote Sensing Letters. 2021; PP(99):1-5.
Chicago/Turabian Style: Zhuo Xu; Tian Jin. 2021. "An Image-Domain Filter for Refraction Effects Compensation of Penetrating MIMO Imagery." IEEE Geoscience and Remote Sensing Letters PP, no. 99: 1-5.
In this article, we present a content-sensitive superpixel generation method with an edge penalty and a contraction-expansion search strategy (EPCES) for synthetic aperture radar (SAR) images. Specifically, the edge information is obtained by our previously proposed ratio-based edge detector with a recurrent guidance filter, which has been proven robust to speckle noise and capable of detecting weak edges in low-contrast areas. A content-sensitive superpixel seed initialization method is proposed with respect to the heterogeneous nature of SAR imagery, owing to which EPCES can generate exactly the number of superpixels set by the user while preserving fine details. In EPCES, a new dissimilarity with an edge penalty is defined to generate superpixels with better edge adherence. Rather than adopting conventional clustering based on local k-means, we propose the contraction-expansion search strategy (CES), which explicitly utilizes the continuity information contained in neighboring pixels and enforces the connectivity of the superpixels without any postprocessing step. With the aid of the CES, the proposed method attains superpixels with low computational cost and high edge adherence. Experimental results on both synthetic and real-world SAR images verify that the proposed method consistently performs favorably against several state-of-the-art methods in terms of both quality and efficiency.
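The general shape of an edge-penalized dissimilarity combines an intensity term, a spatial term, and a penalty for edge response crossed between seed and pixel. The weights and exact terms below are illustrative, not the paper's definition:

```python
import math

def dissimilarity(pixel, seed, edge_on_path, lam_spatial=0.5, lam_edge=2.0):
    """Edge-penalized dissimilarity between a pixel and a superpixel seed.
    pixel, seed  : (intensity, x, y) tuples
    edge_on_path : accumulated edge response crossed from seed to pixel,
                   so assignments that straddle an edge are penalized."""
    d_intensity = abs(pixel[0] - seed[0])
    d_spatial = math.hypot(pixel[1] - seed[1], pixel[2] - seed[2])
    return d_intensity + lam_spatial * d_spatial + lam_edge * edge_on_path
```

With edge_on_path = 0 the measure degenerates to a SLIC-style intensity-plus-distance term; a nonzero edge term steers superpixel boundaries onto detected edges.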
Wenbo Jing; Tian Jin; Deliang Xiang. Content-Sensitive Superpixel Generation for SAR Images With Edge Penalty and Contraction-Expansion Search Strategy. IEEE Transactions on Geoscience and Remote Sensing 2021, PP, 1-15.
AMA Style: Wenbo Jing, Tian Jin, Deliang Xiang. Content-Sensitive Superpixel Generation for SAR Images With Edge Penalty and Contraction-Expansion Search Strategy. IEEE Transactions on Geoscience and Remote Sensing. 2021; PP(99):1-15.
Chicago/Turabian Style: Wenbo Jing; Tian Jin; Deliang Xiang. 2021. "Content-Sensitive Superpixel Generation for SAR Images With Edge Penalty and Contraction-Expansion Search Strategy." IEEE Transactions on Geoscience and Remote Sensing PP, no. 99: 1-15.
Human pose reconstruction is a fundamental research topic in computer vision. However, existing pose reconstruction methods suffer from wall occlusion, a problem that traditional optical sensors cannot solve. This article studies a novel human target pose reconstruction framework using low-frequency ultra-wideband (UWB) multiple-input multiple-output (MIMO) radar and a convolutional neural network (CNN) to detect targets behind walls. In the proposed framework, we first use UWB MIMO radar to capture human body information. Then, target detection and tracking are used to lock the target position, and the back-projection algorithm is adopted to construct three-dimensional (3D) images. Finally, we take the processed 3D image as input and reconstruct the 3D pose of the human target via the designed 3D CNN model. Field detection experiments and comparison results show that the proposed framework achieves pose reconstruction of human targets behind a wall, which indicates that our research can make up for the shortcomings of optical sensors and significantly expands the applications of UWB MIMO radar systems.
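The back-projection step mentioned above is, at its core, delay-and-sum: for every image pixel, pick the echo sample whose round-trip delay matches that pixel and sum across channels. A simplified 2-D sketch (illustrative names and parameters, not the authors' 3-D implementation):

```python
import math

def back_projection(echoes, channels, pixels, c=3e8, fs=2e9):
    """Delay-and-sum imaging.
    echoes[k]   : fast-time samples of channel k
    channels[k] : ((tx_x, tx_y), (rx_x, rx_y)) antenna positions of channel k
    pixels      : list of (x, y) image positions
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    image = []
    for p in pixels:
        acc = 0.0
        for k, (tx, rx) in enumerate(channels):
            tau = (dist(tx, p) + dist(rx, p)) / c   # round-trip delay
            n = int(round(tau * fs))                # nearest fast-time sample
            if 0 <= n < len(echoes[k]):
                acc += echoes[k][n]
        image.append(acc)
    return image
```

Pixels at the true target position accumulate coherently across channels, while other pixels pick up misaligned samples.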
Yongkun Song; Tian Jin; Yongpeng Dai; Yongping Song; Xiaolong Zhou. Through-Wall Human Pose Reconstruction via UWB MIMO Radar and 3D CNN. Remote Sensing 2021, 13, 241.
AMA Style: Yongkun Song, Tian Jin, Yongpeng Dai, Yongping Song, Xiaolong Zhou. Through-Wall Human Pose Reconstruction via UWB MIMO Radar and 3D CNN. Remote Sensing. 2021; 13(2):241.
Chicago/Turabian Style: Yongkun Song; Tian Jin; Yongpeng Dai; Yongping Song; Xiaolong Zhou. 2021. "Through-Wall Human Pose Reconstruction via UWB MIMO Radar and 3D CNN." Remote Sensing 13, no. 2: 241.
Bi-frequency (high- and low-frequency) synthetic aperture radar (SAR) images cannot be directly compared because of their distinct statistical properties. To diminish this statistical difference, we translate the bi-frequency SAR images into one another, and propose a cycle-consistent conditional adversarial network to achieve this goal. The cycle-consistency criterion of CycleGAN and the conditional generative adversarial network of Pix2Pix are integrated to construct the cycle-consistent conditional adversarial network. Experiments on Ku-band and P-band SAR images validate that our method outperforms CycleGAN and Pix2Pix.
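In standard notation, integrating Pix2Pix's conditional adversarial loss with CycleGAN's cycle-consistency term gives an objective of the following form (a schematic rendering under the usual definitions of these losses, not the paper's exact equation):

```latex
\mathcal{L}(G, F, D_X, D_Y)
  = \mathcal{L}_{\mathrm{cGAN}}(G, D_Y)
  + \mathcal{L}_{\mathrm{cGAN}}(F, D_X)
  + \lambda \, \mathcal{L}_{\mathrm{cyc}}(G, F),
\qquad
\mathcal{L}_{\mathrm{cyc}}(G, F)
  = \mathbb{E}_x\big[\lVert F(G(x)) - x \rVert_1\big]
  + \mathbb{E}_y\big[\lVert G(F(y)) - y \rVert_1\big]
```

Here G translates one frequency band to the other, F translates back, and the cycle term ties the two conditional generators together.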
Daquan He; Tian Jin; Yongkun Song; Chen Wu. Translation between High- and Low-frequency SAR Images using Cycle-Consistent Conditional Adversarial Network. Journal of Physics: Conference Series 2021, 1757, 012025.
AMA Style: Daquan He, Tian Jin, Yongkun Song, Chen Wu. Translation between High- and Low-frequency SAR Images using Cycle-Consistent Conditional Adversarial Network. Journal of Physics: Conference Series. 2021; 1757(1):012025.
Chicago/Turabian Style: Daquan He; Tian Jin; Yongkun Song; Chen Wu. 2021. "Translation between High- and Low-frequency SAR Images using Cycle-Consistent Conditional Adversarial Network." Journal of Physics: Conference Series 1757, no. 1: 012025.
Limited by the total aperture length, the number of antenna units, and their topology, radar images always suffer from sidelobes/grating lobes, which severely degrade image quality. In this article, a convolutional neural network (CNN)-based radar image enhancement method is proposed. The CNN is trained using the original radar images as input samples and their corresponding ideal radar images, free of sidelobes/grating lobes, as labels. A well-trained CNN can suppress the sidelobes/grating lobes in radar images. The structure of the CNN, the generation of the samples and labels, the training procedure, and other implementation details are illustrated in this article. The proposed method is applied to suppress sidelobes/grating lobes in both simulated and real recorded radar images. Compared with other existing methods, it achieves better suppression performance and better robustness.
Yongpeng Dai; Tian Jin; Haoran Li; Yongkun Song; Jun Hu. Imaging Enhancement via CNN in MIMO Virtual Array-Based Radar. IEEE Transactions on Geoscience and Remote Sensing 2020, 59, 7449-7458.
AMA Style: Yongpeng Dai, Tian Jin, Haoran Li, Yongkun Song, Jun Hu. Imaging Enhancement via CNN in MIMO Virtual Array-Based Radar. IEEE Transactions on Geoscience and Remote Sensing. 2020; 59(9):7449-7458.
Chicago/Turabian Style: Yongpeng Dai; Tian Jin; Haoran Li; Yongkun Song; Jun Hu. 2020. "Imaging Enhancement via CNN in MIMO Virtual Array-Based Radar." IEEE Transactions on Geoscience and Remote Sensing 59, no. 9: 7449-7458.
Radar images suffer from the impact of sidelobes. Several sidelobe-suppression methods, including a convolutional neural network (CNN)-based one, have been proposed. However, the point spread function (PSF) in radar images is sometimes spatially variant, which degrades the performance of a conventional CNN. Convolutional kernels detect motifs with distinctive features and are invariant to the local position of those motifs; this makes CNNs widely used in image processing tasks such as image recognition, handwriting recognition, image super-resolution, and semantic segmentation, and they also perform well in radar image enhancement. However, this position invariance can be a drawback for radar image enhancement when the features of motifs (i.e., the PSF in the radar imaging field) vary with position. In this paper, we propose a spatial-variant convolutional neural network (SV-CNN) with spatial-variant convolution kernels (SV-CK) aimed at this problem; it should also perform well in other settings with spatially variant features. Its function is illustrated through the application of enhancing radar images. After being trained on radar images with position-codings as samples, the SV-CNN can enhance radar images; because it reads the local position information contained in the position-coding, it performs better than a conventional CNN. The advantages of the proposed SV-CNN are verified using both simulated and real radar images.
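The position-coding idea can be illustrated with a tiny helper that appends normalized coordinate channels to an image, so that subsequent convolutions can condition their response on local position. This sketch is in the spirit of coordinate channels, not the paper's exact coding scheme, and the helper name is hypothetical:

```python
def add_position_coding(image):
    """Stack the image (2-D list) with two extra channels holding
    normalized x and y coordinates in [-1, 1], channels-first."""
    h, w = len(image), len(image[0])
    xs = [[2.0 * j / (w - 1) - 1.0 if w > 1 else 0.0 for j in range(w)]
          for _ in range(h)]
    ys = [[2.0 * i / (h - 1) - 1.0 if h > 1 else 0.0 for _ in range(w)]
          for i in range(h)]
    return [image, xs, ys]
```

A kernel sliding over the stacked input sees the coordinates alongside the intensities, so its effective response can vary with position, which is the behavior needed when the PSF changes across the scene.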
Yongpeng Dai; Tian Jin; Yongkun Song; Shilong Sun; Chen Wu. Convolutional Neural Network with Spatial-Variant Convolution Kernel. Remote Sensing 2020, 12, 2811.
AMA Style: Yongpeng Dai, Tian Jin, Yongkun Song, Shilong Sun, Chen Wu. Convolutional Neural Network with Spatial-Variant Convolution Kernel. Remote Sensing. 2020; 12(17):2811.
Chicago/Turabian Style: Yongpeng Dai; Tian Jin; Yongkun Song; Shilong Sun; Chen Wu. 2020. "Convolutional Neural Network with Spatial-Variant Convolution Kernel." Remote Sensing 12, no. 17: 2811.
Most existing superpixel generation methods are based on local iterative clustering. However, such methods have the following shortcomings: 1) they require several iterations, and the number of iterations is difficult to determine; and 2) the generated superpixels lack explicit connectivity without a postprocessing step. To overcome these limitations, we propose edge-aware superpixel generation with one-iteration merging (ESOM) for synthetic aperture radar (SAR) imagery. Specifically, we introduce a ratio-based edge detector with a Gaussian-shaped window to extract edge information, and an edge-aware dissimilarity is defined. Then, a new merging method, termed one-iteration merging, is proposed, which leverages the continuity of adjacent pixels and ensures the connectivity of the superpixels. Furthermore, instead of iterative clustering, the merging is completed in a single iteration without the need to determine an iteration count, and is hence computationally efficient. Experiments on two real SAR images demonstrate that the proposed method yields substantially better performance than several state-of-the-art methods.
Wenbo Jing; Tian Jin; Deliang Xiang. Edge-Aware Superpixel Generation for SAR Imagery With One Iteration Merging. IEEE Geoscience and Remote Sensing Letters 2020, 18, 1600-1604.
AMA Style: Wenbo Jing, Tian Jin, Deliang Xiang. Edge-Aware Superpixel Generation for SAR Imagery With One Iteration Merging. IEEE Geoscience and Remote Sensing Letters. 2020; 18(9):1600-1604.
Chicago/Turabian Style: Wenbo Jing; Tian Jin; Deliang Xiang. 2020. "Edge-Aware Superpixel Generation for SAR Imagery With One Iteration Merging." IEEE Geoscience and Remote Sensing Letters 18, no. 9: 1600-1604.
While traditional edge detectors concentrate on modifying the shape of the window function, we consider the edge detection problem from a new perspective and propose an effective recurrent guidance filter in this letter. The proposed filter is elaborately designed for edge detection tasks and aims to remove non-edge information, including speckle noise and detailed texture, while preserving edge information. We first filter the image with the proposed filter to obtain a filtered image. Then, using our previously proposed edge detector with a Gaussian-shaped window and applying a postprocessing step, the edge response is extracted from the filtered image. Both objective and subjective experimental results on simulated and real synthetic aperture radar (SAR) images demonstrate that the edge detector based on the recurrent guidance filter outperforms state-of-the-art edge detectors.
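Ratio-based detectors of this family respond to the ratio of mean intensities on the two sides of a window, which makes them robust to the multiplicative speckle of SAR data (an additive-difference detector is not). A minimal sketch for vertical edges, using a plain rectangular half-window rather than the Gaussian-shaped one the letter builds on:

```python
def ratio_edge_horizontal(image, half=2, eps=1e-12):
    """Ratio-of-averages response along each row: compare the mean intensity
    of the left and right half-windows; a response near 1 marks an edge."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(half, w - half):
            left = sum(image[i][j - half:j]) / half
            right = sum(image[i][j + 1:j + 1 + half]) / half
            r = min(left, right) / (max(left, right) + eps)  # ratio in (0, 1]
            out[i][j] = 1.0 - r
    return out
```

Because the response depends only on the intensity ratio, a uniformly bright speckled region and a uniformly dark one both score near zero, while a genuine step in mean backscatter scores high.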
Wenbo Jing; Tian Jin; Deliang Xiang. SAR Image Edge Detection With Recurrent Guidance Filter. IEEE Geoscience and Remote Sensing Letters 2020, 18, 1064-1068.
AMA Style: Wenbo Jing, Tian Jin, Deliang Xiang. SAR Image Edge Detection With Recurrent Guidance Filter. IEEE Geoscience and Remote Sensing Letters. 2020; 18(6):1064-1068.
Chicago/Turabian Style: Wenbo Jing; Tian Jin; Deliang Xiang. 2020. "SAR Image Edge Detection With Recurrent Guidance Filter." IEEE Geoscience and Remote Sensing Letters 18, no. 6: 1064-1068.
Long-time coherent integration can effectively improve the radar detection of maneuvering targets. Nevertheless, the Doppler ambiguity and frequency migration caused by high speed and acceleration severely degrade detection performance. In this regard, a novel coherent integration algorithm is proposed, particularly for maneuvering targets with Doppler ambiguity. Specifically, the acceleration is first estimated by a scaled non-uniform fast Fourier transform (SNuFFT). Then, the scaled periodic discrete Fourier transform (SPDFT), which periodically extends the observable Doppler scope of the discrete Fourier transform (DFT), is proposed to estimate the unambiguous Doppler frequency. Finally, grating lobes are significantly suppressed via a product operation, and coherent integration is achieved after phase compensation. To alleviate the computational burden and eliminate the brute-force search procedure, an efficient implementation based on the chirp-z transform (CZT) is also derived. Analysis shows that the proposed algorithm achieves a good balance between computational complexity and anti-noise performance. Extensive simulations and real measured radar data verify the proposed algorithm.
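The chirp-z transform generalizes the DFT by evaluating the z-transform along an arbitrary spiral z_k = a * w^(-k), which is what allows a search-free, finely scaled Doppler evaluation. A direct-sum sketch for reference (O(NM); a practical implementation would use the fast Bluestein form):

```python
import cmath

def czt(x, m, w, a):
    """Chirp-z transform by direct summation:
    X_k = sum_n x[n] * z_k**(-n), on the spiral z_k = a * w**(-k), k = 0..m-1."""
    out = []
    for k in range(m):
        zk = a * w ** (-k)
        out.append(sum(xn * zk ** (-n) for n, xn in enumerate(x)))
    return out
```

Choosing a = 1 and w = exp(-2j*pi/N) with m = N recovers the ordinary DFT; other choices of w zoom the evaluation onto a narrow, finely sampled Doppler band.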
Ke Jin; Gongquan Li; Tao Lai; Tian Jin; Yongjun Zhao. A Novel Long-Time Coherent Integration Algorithm for Doppler-Ambiguous Radar Maneuvering Target Detection. IEEE Sensors Journal 2020, 20, 9394-9407.
AMA Style: Ke Jin, Gongquan Li, Tao Lai, Tian Jin, Yongjun Zhao. A Novel Long-Time Coherent Integration Algorithm for Doppler-Ambiguous Radar Maneuvering Target Detection. IEEE Sensors Journal. 2020; 20(16):9394-9407.
Chicago/Turabian Style: Ke Jin; Gongquan Li; Tao Lai; Tian Jin; Yongjun Zhao. 2020. "A Novel Long-Time Coherent Integration Algorithm for Doppler-Ambiguous Radar Maneuvering Target Detection." IEEE Sensors Journal 20, no. 16: 9394-9407.
The micro-Doppler effect is a useful signature for classifying various human behaviours. However, most micro-Doppler research assumes that only a single moving target exists during the observation and cannot separate micro-motion features from multiple movers; when more than one target is present, performance deteriorates heavily. To address this issue, the authors design a new three-dimensional (3D) model, range–velocity–time points, to separate and describe the micro-motions of multiple movers measured by ultra-wideband radar. These 3D points contain range, velocity, and time information simultaneously. By dividing the points in the 3D space instead of the Doppler domain alone, the micro-Doppler signatures of each target can be separated effectively. Multi-people motion simulation results verify the effectiveness of the authors' method.
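Dividing points in the 3-D range–velocity–time space amounts to clustering by distance, so each mover's micro-motion points fall into one group even when their Doppler histories overlap. A brute-force single-linkage sketch (illustrative only, not the authors' exact separation rule):

```python
import math

def cluster_points(points, radius):
    """Single-linkage grouping of 3-D points: two points share a cluster
    whenever a chain of neighbours closer than `radius` connects them."""
    labels = [-1] * len(points)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:              # flood-fill the connected component
            a = stack.pop()
            for b in range(len(points)):
                if labels[b] == -1 and math.dist(points[a], points[b]) < radius:
                    labels[b] = current
                    stack.append(b)
        current += 1
    return labels
```

In practice the three axes would first be normalized, since range, velocity, and time have incommensurate units.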
Hao Du; Tian Jin; Meng Li; Yongping Song; Yongpeng Dai. Detection of multi‐people micro‐motions based on range–velocity–time points. Electronics Letters 2019, 55, 1247-1249.
AMA Style: Hao Du, Tian Jin, Meng Li, Yongping Song, Yongpeng Dai. Detection of multi‐people micro‐motions based on range–velocity–time points. Electronics Letters. 2019; 55(23):1247-1249.
Chicago/Turabian Style: Hao Du; Tian Jin; Meng Li; Yongping Song; Yongpeng Dai. 2019. "Detection of multi‐people micro‐motions based on range–velocity–time points." Electronics Letters 55, no. 23: 1247-1249.
The deployment of deep neural networks in real-world radar-based human activity classification is largely hindered by both the high computational cost and the large number of training samples required. In this study, the authors propose a method to simultaneously reduce the computational burden and the number of labelled training samples. Different from previous transfer learning methods that simply prune fully-connected layers and modify the weights of the convolutional layers, they enforce filter-level sparsity in the transfer learning from ImageNet to the micro-Doppler measurements. Through this sparsity-driven transfer learning, unimportant convolutional filters can be identified and pruned, yielding a light but effective transfer-learned net. The experiments demonstrate that the sparsity-driven transfer-learned VGG-19 Net not only outperforms convolutional neural networks trained from scratch by nearly 10% in accuracy but also gives an 11× reduction in the number of parameters and a 10× reduction in computing operations compared with the original VGG-19 Net.
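Filter-level pruning after sparsity-driven training typically amounts to ranking each convolutional filter by the magnitude of its weights and dropping the weakest; the sparsity penalty is what drives unimportant filters toward zero in the first place. A minimal sketch of that ranking step (hypothetical helper, not tied to any deep learning framework):

```python
def select_filters(filters, keep_ratio=0.5):
    """Rank filters (given as flattened weight lists) by L1 norm and return
    the sorted indices of the strongest ones; near-zero filters are pruned."""
    norms = sorted(((sum(abs(w) for w in f), i) for i, f in enumerate(filters)),
                   reverse=True)
    keep = max(1, int(len(filters) * keep_ratio))
    return sorted(i for _, i in norms[:keep])
```

Removing a filter also removes the corresponding input channel of the next layer, which is where the compound savings in parameters and operations come from.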
Hao Du; Tian Jin; Yongping Song; Yongpeng Dai; Meng Li. Efficient human activity classification via sparsity‐driven transfer learning. IET Radar, Sonar & Navigation 2019, 13, 1741-1746.
AMA Style: Hao Du, Tian Jin, Yongping Song, Yongpeng Dai, Meng Li. Efficient human activity classification via sparsity‐driven transfer learning. IET Radar, Sonar & Navigation. 2019; 13(10):1741-1746.
Chicago/Turabian Style: Hao Du; Tian Jin; Yongping Song; Yongpeng Dai; Meng Li. 2019. "Efficient human activity classification via sparsity‐driven transfer learning." IET Radar, Sonar & Navigation 13, no. 10: 1741-1746.
Deep neural networks have shown promise in radar-based human activity analysis. Different from existing deep learning models that take either micro-Doppler spectrograms or range profiles as input, the proposed method processes micro-motion signatures in a 3-D way. In this letter, we first transform radar echoes into range-Doppler (RD) time points and then directly process the point sets via a designed 3-D network called the RD PointNet. In fact, our point model is a discrete representation of the motion trajectory. Through this quantitative model, we can use the 3-D network to simultaneously capture human motion profiles and temporal variations. Motion capture simulations and ultra-wideband radar measurements show that the proposed framework achieves superior classification accuracy and noise robustness compared with image-based methods.
Hao Du; Tian Jin; Yongping Song; Yongpeng Dai; Meng Li. A Three-Dimensional Deep Learning Framework for Human Behavior Analysis Using Range-Doppler Time Points. IEEE Geoscience and Remote Sensing Letters 2019, 17, 611-615.
AMA Style: Hao Du, Tian Jin, Yongping Song, Yongpeng Dai, Meng Li. A Three-Dimensional Deep Learning Framework for Human Behavior Analysis Using Range-Doppler Time Points. IEEE Geoscience and Remote Sensing Letters. 2019; 17(4):611-615.
Chicago/Turabian Style: Hao Du; Tian Jin; Yongping Song; Yongpeng Dai; Meng Li. 2019. "A Three-Dimensional Deep Learning Framework for Human Behavior Analysis Using Range-Doppler Time Points." IEEE Geoscience and Remote Sensing Letters 17, no. 4: 611-615.
In recent years, sparsity-driven regularization and compressed sensing (CS)-based radar imaging methods have attracted significant attention. This paper provides an introduction to the fundamental concepts of this area. In addition, we describe both sparsity-driven regularization and CS-based radar imaging methods, along with other approaches, in a unified mathematical framework. This provides readers with a systematic overview of radar imaging theories and methods from a clear mathematical viewpoint. The methods presented in this paper include minimum variance unbiased estimation, least squares (LS) estimation, Bayesian maximum a posteriori (MAP) estimation, matched filtering, regularization, and CS reconstruction. The characteristics of these methods and their connections are also analyzed. Sparsity-driven regularization and CS-based radar imaging methods represent an active research area; there are still many unsolved or open problems, such as the sampling scheme, computational complexity, sparse representation, influence of clutter, and model error compensation. We summarize the challenges as well as recent advances related to these issues.
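Many of the sparsity-driven reconstructions surveyed here reduce to solving min_x 0.5*||Ax - y||^2 + lam*||x||_1, for which iterative soft thresholding (ISTA) is the textbook proximal-gradient solver. A small dense-matrix sketch for illustration only; real CS imaging uses structured measurement operators and accelerated variants:

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return v - t if v > t else v + t if v < -t else 0.0

def ista(A, y, lam, step, iters=100):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps.
    `step` must be at most 1 / largest eigenvalue of A^T A to converge."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Gradient of the smooth term: A^T (A x - y).
        resid = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        grad = [sum(A[i][j] * resid[i] for i in range(m)) for j in range(n)]
        # Gradient step followed by the l1 proximal (shrinkage) step.
        x = [soft(x[j] - step * grad[j], step * lam) for j in range(n)]
    return x
```

The shrinkage step is exactly where the sparsity prior enters: small coefficients are driven to zero, matching the MAP interpretation with a Laplacian prior discussed in the survey.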
Jungang Yang; Tian Jin; Chao Xiao; Xiaotao Huang. Compressed Sensing Radar Imaging: Fundamentals, Challenges, and Advances. Sensors 2019, 19, 3100.
AMA Style: Jungang Yang, Tian Jin, Chao Xiao, Xiaotao Huang. Compressed Sensing Radar Imaging: Fundamentals, Challenges, and Advances. Sensors. 2019; 19(14):3100.
Chicago/Turabian Style: Jungang Yang; Tian Jin; Chao Xiao; Xiaotao Huang. 2019. "Compressed Sensing Radar Imaging: Fundamentals, Challenges, and Advances." Sensors 19, no. 14: 3100.
The movements of the human body and limbs result in unique micro-Doppler signatures, which can be exploited for classifying human activities. In this work, the authors propose a Convolutional Gated Recurrent Units Neural Network (CNN-GRU) to classify human activities of varying duration based on micro-Doppler spectrograms. Unlike conventional deep learning approaches, which often treat the micro-Doppler spectrogram the same way as a natural image, the authors extract local features of micro-Doppler signatures via convolutional layers and encode temporal information with gated recurrent units. Through this unified framework, the temporal evolution of body motions within a short time can be better utilised. It avoids the resolution limitation caused by the fixed-size time window of input data and can identify human activities of duration shorter than the time window length. The experiment shows that the CNN-GRU model is capable of recognising and temporally localising the activity sequence contained in a spectrogram.
Hao Du; Tian Jin; Yongping Song; Yongpeng Dai. DeepActivity: a micro‐Doppler spectrogram‐based net for human behaviour recognition in bio‐radar. The Journal of Engineering 2019, 2019, 6147-6151.
AMA Style: Hao Du, Tian Jin, Yongping Song, Yongpeng Dai. DeepActivity: a micro‐Doppler spectrogram‐based net for human behaviour recognition in bio‐radar. The Journal of Engineering. 2019; 2019(19):6147-6151.
Chicago/Turabian Style: Hao Du; Tian Jin; Yongping Song; Yongpeng Dai. 2019. "DeepActivity: a micro‐Doppler spectrogram‐based net for human behaviour recognition in bio‐radar." The Journal of Engineering 2019, no. 19: 6147-6151.
In this paper, a novel linear method for shape reconstruction is proposed based on the generalized multiple measurement vectors (GMMV) model. Finite difference frequency domain (FDFD) is applied to discretize Maxwell's equations, and the contrast sources are solved iteratively by exploiting their joint sparsity as a regularized constraint. A cross validation (CV) technique is used to terminate the iterations, such that the otherwise required estimation of the noise level is circumvented. The validity is demonstrated with transverse magnetic (TM) experimental data, and it is observed that, in terms of focusing performance, the GMMV-based linear method outperforms the extensively used linear sampling method (LSM).
Shilong Sun; Bert Jan Kooij; Alexander G. Yarovoy; Tian Jin. A Linear Method for Shape Reconstruction based on the Generalized Multiple Measurement Vectors Model. 2019, 1.
AMA Style: Shilong Sun, Bert Jan Kooij, Alexander G. Yarovoy, Tian Jin. A Linear Method for Shape Reconstruction based on the Generalized Multiple Measurement Vectors Model. 2019; ():1.
Chicago/Turabian Style: Shilong Sun; Bert Jan Kooij; Alexander G. Yarovoy; Tian Jin. 2019. "A Linear Method for Shape Reconstruction based on the Generalized Multiple Measurement Vectors Model." , no. : 1.
This paper focuses on the time-variant radio frequency interference (RFI) that ultra-wideband (UWB) through-wall radar (TWR) faces, and presents an iterative dual sparse recovery (IDSR) framework to combat it. The framework consists of two stages: 1) RFI estimation and detection and 2) IDSR of the scattered echoes from objects and the RFI signals. In the first stage, an overlapped short-time Fourier transform is employed to construct and update the discrete frequency Doppler spectrum (DFDS). Then, a minimum statistic operation is conducted on the DFDS to estimate RFI signals, followed by detection via a 1-D cell-averaging constant false alarm rate detector to determine whether RFI signals exist. In the second stage, a dual sparse model of the collected signals is set up, based on the fast-time frequency sparsity of RFI signals due to their narrow bands and the Doppler frequency sparsity of the scattered echoes from objects due to their limited moving velocities. The alternating direction method of multipliers (ADMM) is introduced to iteratively and alternately recover the RFI signals and the scattered echoes from objects. Specifically, the iterative hard thresholding (IHT) method is used to complete the two sparse recovery operations. The two sparse dictionaries involved are simple, made up only of inverse discrete Fourier transform bases, and independent of the received signals. The IDSR framework relieves the mutual influence of the RFI signals and the scattered echoes from objects, and thus can reconstruct both types of signals as completely as possible. Field experiments using a UWB TWR were carried out to verify the proposed method.
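The 1-D cell-averaging CFAR detector used in the first stage compares each cell against a threshold scaled from the mean of surrounding training cells, skipping a guard band around the cell under test. A minimal sketch with illustrative parameter values; in practice the scale factor is set from the desired false-alarm rate:

```python
def ca_cfar(x, guard=2, train=8, scale=3.0):
    """1-D cell-averaging CFAR: a cell is declared a detection when it
    exceeds scale * (mean of the training cells outside the guard band)."""
    n = len(x)
    half = guard + train
    hits = []
    for i in range(half, n - half):
        # Training cells on each side, excluding the guard cells and cell i.
        cells = x[i - half:i - guard] + x[i + guard + 1:i + half + 1]
        threshold = scale * sum(cells) / len(cells)
        if x[i] > threshold:
            hits.append(i)
    return hits
```

Because the threshold adapts to the local noise estimate, the false-alarm rate stays roughly constant even when the interference floor drifts over time.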
Yongping Song; Jun Hu; Tian Jin; Zhi Li; Ning Chu; Zhimin Zhou. Estimation and Mitigation of Time-Variant RFI Based on Iterative Dual Sparse Recovery in Ultra-Wide Band Through-Wall Radar. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2019, 12, 3398-3411.
AMA Style: Yongping Song, Jun Hu, Tian Jin, Zhi Li, Ning Chu, Zhimin Zhou. Estimation and Mitigation of Time-Variant RFI Based on Iterative Dual Sparse Recovery in Ultra-Wide Band Through-Wall Radar. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2019; 12(9):3398-3411.
Chicago/Turabian Style: Yongping Song; Jun Hu; Tian Jin; Zhi Li; Ning Chu; Zhimin Zhou. 2019. "Estimation and Mitigation of Time-Variant RFI Based on Iterative Dual Sparse Recovery in Ultra-Wide Band Through-Wall Radar." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 12, no. 9: 3398-3411.
The automatic detection and recognition of human activities are valuable for physical security, gaming, and intelligent interfaces. Compared with an optical recognition system, radar is more robust to variations in lighting conditions and occlusions; centimeter-wave ultra-wideband radar can even track human motion when the target is fully occluded. In this work, we propose a neural network architecture, namely the segmented convolutional gated recurrent neural network (SCGRNN), to recognize human activities based on micro-Doppler spectrograms measured by ultra-wideband radar. Unlike most existing approaches, which treat micro-Doppler spectrograms the same way as natural images, we extract segmented features of spectrograms via convolution and encode the feature maps along the time axis with gated recurrent units. By taking advantage of regularities in both the time and Doppler frequency domains in this way, our model can detect activities of arbitrary length. The experiments show that our method outperforms existing models in fine temporal resolution, noise robustness, and generalization performance. The radar system can thus recognize human behavior when visible light is blocked by opaque objects.
Hao Du; Tian Jin; Yuan He; Yongping Song; Yongpeng Dai. Segmented convolutional gated recurrent neural networks for human activity recognition in ultra-wideband radar. Neurocomputing 2019, 396, 451-464.
AMA Style: Hao Du, Tian Jin, Yuan He, Yongping Song, Yongpeng Dai. Segmented convolutional gated recurrent neural networks for human activity recognition in ultra-wideband radar. Neurocomputing. 2019; 396:451-464.
Chicago/Turabian Style: Hao Du; Tian Jin; Yuan He; Yongping Song; Yongpeng Dai. 2019. "Segmented convolutional gated recurrent neural networks for human activity recognition in ultra-wideband radar." Neurocomputing 396: 451-464.
High-resolution three-dimensional (3D) images can be acquired by planar multiple-input multiple-output (MIMO) array radar, making subsequent tasks such as detection and tracking easier. However, for portability and to reduce the cost of the radar system, MIMO radar arrays adopt sparse layouts with a limited number of antennas, so the imaging performance of a MIMO radar system is limited. In this paper, the 3D back-projection imaging algorithm is verified with experimental results from a planar MIMO array imaging the human body, and an enhanced radar imaging method is proposed. The Lucy-Richardson (LR) deconvolution algorithm, normally used for optical images, is applied to radar images. Since the LR algorithm can amplify the noise level in a noise-contaminated system, a regularization method based on a total variation constraint is further incorporated into the LR algorithm to suppress the ill-posedness. The proposed method shows a higher image signal-to-noise ratio, a faster rate of convergence, higher structural similarity, and a smaller relative error compared with similar methods. It also reduces the loss of image information after enhancement and improves radar image quality, with fewer grating lobes and clearer human limbs. The proposed method is verified by simulation and real data measurement.
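Richardson-Lucy deconvolution iterates a multiplicative correction: blur the current estimate with the PSF, compare with the observation, and redistribute the ratio back through the flipped PSF. A plain 1-D sketch without the paper's total-variation regularizer (hypothetical helper names):

```python
def richardson_lucy(observed, psf, iters=50, eps=1e-12):
    """1-D Richardson-Lucy: est <- est * correlate(psf, observed / blur(est))."""
    n, k = len(observed), len(psf)
    half = k // 2
    def correlate(signal, kernel):
        # out[i] = sum_j kernel[j] * signal[i + j - half], zero-padded.
        return [sum(kernel[j] * signal[i + j - half]
                    for j in range(k) if 0 <= i + j - half < n)
                for i in range(n)]
    flipped = psf[::-1]
    est = [1.0] * n
    for _ in range(iters):
        blurred = correlate(est, flipped)      # convolution with the psf
        ratio = [o / (b + eps) for o, b in zip(observed, blurred)]
        est = [e * c for e, c in zip(est, correlate(ratio, psf))]
    return est
```

The multiplicative update is what amplifies noise when the data are contaminated, which motivates adding a total-variation constraint as in the paper.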
Dizhi Zhao; Tian Jin; Yongpeng Dai; Yongping Song; XiangChenYang Su. A Three-Dimensional Enhanced Imaging Method on Human Body for Ultra-Wideband Multiple-Input Multiple-Output Radar. Electronics 2018, 7, 101.
AMA Style: Dizhi Zhao, Tian Jin, Yongpeng Dai, Yongping Song, XiangChenYang Su. A Three-Dimensional Enhanced Imaging Method on Human Body for Ultra-Wideband Multiple-Input Multiple-Output Radar. Electronics. 2018; 7(7):101.
Chicago/Turabian Style: Dizhi Zhao; Tian Jin; Yongpeng Dai; Yongping Song; XiangChenYang Su. 2018. "A Three-Dimensional Enhanced Imaging Method on Human Body for Ultra-Wideband Multiple-Input Multiple-Output Radar." Electronics 7, no. 7: 101.