Lin Zhang
School of Software Engineering, Tongji University, Shanghai, China

Feed

Journal article
Published: 02 March 2021 in IEEE Transactions on Image Processing

Haze-free images are the prerequisites of many vision systems and algorithms, and thus single image dehazing is of paramount importance in computer vision. In this field, prior-based methods have achieved initial success. However, they often introduce annoying artifacts to outputs because their priors can hardly fit all situations. By contrast, learning-based methods can generate more natural results. Nonetheless, due to the lack of paired foggy and clear outdoor images of the same scenes as training samples, their haze removal abilities are limited. In this work, we attempt to merge the merits of prior-based and learning-based approaches by dividing the dehazing task into two sub-tasks, i.e., visibility restoration and realness improvement. Specifically, we propose a two-stage weakly supervised dehazing framework, RefineDNet. In the first stage, RefineDNet adopts the dark channel prior to restore visibility. Then, in the second stage, it refines the preliminary dehazing results of the first stage to improve realness via adversarial learning with unpaired foggy and clear images. To obtain higher-quality results, we also propose an effective perceptual fusion strategy to blend different dehazing outputs. Extensive experiments corroborate that RefineDNet with perceptual fusion has an outstanding haze removal capability and can also produce visually pleasing results. Even implemented with basic backbone networks, RefineDNet can outperform supervised dehazing approaches as well as other state-of-the-art methods on indoor and outdoor datasets. To make our results reproducible, relevant code and data are available at https://github.com/xiaofeng94/RefineDNet-for-dehazing.
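The first stage above leans on the dark channel prior, the classic observation that in haze-free outdoor patches the per-pixel minimum over colour channels and a local window is near zero. As a minimal numpy sketch of that prior alone, not RefineDNet's actual implementation, with the patch size and top-pixel fraction as illustrative defaults:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over the RGB channels, followed by
    a minimum filter over a local patch. img: H x W x 3 floats in [0, 1]."""
    mins = img.min(axis=2)                       # min over colour channels
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_atmospheric_light(img, dark, top=0.001):
    """Average the colours of the brightest fraction of dark-channel pixels,
    a common way to estimate the global atmospheric light A."""
    n = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)
```

With `A` in hand, prior-based pipelines typically estimate transmission as `t = 1 - omega * dark_channel(img / A)` (omega around 0.95) and invert the haze model; it is a preliminary result of this kind that the second, adversarial stage refines.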

ACS Style

Shiyu Zhao; Lin Zhang; Ying Shen; Yicong Zhou. RefineDNet: A Weakly Supervised Refinement Framework for Single Image Dehazing. IEEE Transactions on Image Processing 2021, 30, 3391-3404.

AMA Style

Shiyu Zhao, Lin Zhang, Ying Shen, Yicong Zhou. RefineDNet: A Weakly Supervised Refinement Framework for Single Image Dehazing. IEEE Transactions on Image Processing. 2021;30:3391-3404.

Chicago/Turabian Style

Shiyu Zhao; Lin Zhang; Ying Shen; Yicong Zhou. 2021. "RefineDNet: A Weakly Supervised Refinement Framework for Single Image Dehazing." IEEE Transactions on Image Processing 30: 3391-3404.

Review
Published: 13 January 2021 in Symmetry

The parking assist system is an essential application of the car’s active collision avoidance system in low-speed and complex urban environments, and it has been a hot research topic in recent years. Parking space detection is an important step of the parking assistance system, and its research objects are the symmetrically structured parking spaces found in parking lots. By analyzing and investigating parking space information measured by the sensors, reliable detection of suitable parking spaces can be realized. First, this article discusses the main problems in the process of detecting parking spaces, illustrating the research significance and the current research status of parking space detection methods. It then introduces several families of parking space detection methods, including free-space-based methods, parking-space-marking-based methods, user-interface-based methods, and infrastructure-based methods, all of which serve the selection of a parking space. Lastly, this article summarizes the parking space detection methods and points out clear directions for future research.

ACS Style

Yong Ma; Yangguo Liu; Lin Zhang; Yuanlong Cao; Shihui Guo; Hanxi Li. Research Review on Parking Space Detection Method. Symmetry 2021, 13, 128.

AMA Style

Yong Ma, Yangguo Liu, Lin Zhang, Yuanlong Cao, Shihui Guo, Hanxi Li. Research Review on Parking Space Detection Method. Symmetry. 2021;13(1):128.

Chicago/Turabian Style

Yong Ma; Yangguo Liu; Lin Zhang; Yuanlong Cao; Shihui Guo; Hanxi Li. 2021. "Research Review on Parking Space Detection Method." Symmetry 13, no. 1: 128.

Journal article
Published: 04 December 2020 in Applied Sciences

Depression is a global mental health problem, the worst cases of which can lead to self-injury or suicide. An automatic depression detection system is of great help in facilitating clinical diagnosis and early intervention of depression. In this work, we propose a new automatic depression detection method utilizing speech signals and linguistic content from patient interviews. Specifically, the proposed method consists of three components: a Bidirectional Long Short-Term Memory (BiLSTM) network with an attention layer that handles the linguistic content, a One-Dimensional Convolutional Neural Network (1D CNN) that handles the speech signals, and a fully connected network that integrates the outputs of the previous two models to assess the depressive state. Evaluated on two publicly available datasets, our method achieves state-of-the-art performance compared with existing methods. Moreover, because our method utilizes audio and text features simultaneously, it is more robust to misleading information provided by patients. In conclusion, our method can automatically evaluate the depression state without requiring an expert to conduct a psychological evaluation on site, and it greatly improves both detection accuracy and efficiency.

ACS Style

Lin Lin; Xuri Chen; Ying Shen; Lin Zhang. Towards Automatic Depression Detection: A BiLSTM/1D CNN-Based Model. Applied Sciences 2020, 10, 8701.

AMA Style

Lin Lin, Xuri Chen, Ying Shen, Lin Zhang. Towards Automatic Depression Detection: A BiLSTM/1D CNN-Based Model. Applied Sciences. 2020;10(23):8701.

Chicago/Turabian Style

Lin Lin; Xuri Chen; Ying Shen; Lin Zhang. 2020. "Towards Automatic Depression Detection: A BiLSTM/1D CNN-Based Model." Applied Sciences 10, no. 23: 8701.

Research article
Published: 23 November 2020 in Mathematical Problems in Engineering

The quality of acquired images can be severely degraded by improper exposure. Thus, in many vision-related industries, such as imaging sensor manufacturing and video surveillance, an approach that can routinely and accurately evaluate the exposure levels of images is urgently needed. Taking an image as input, such a method is expected to output a scalar value representing the overall perceptual exposure level of the examined image, ranging from extremely underexposed to extremely overexposed. However, studies focusing on image exposure level assessment (IELA) are quite sporadic. It should be noted that blind NR-IQA (no-reference image quality assessment) algorithms and metrics used to measure the quality of contrast-distorted images cannot be used for IELA. The root reason is that, although these algorithms can quantify the quality distortion of images, they do not know whether the distortion is due to underexposure or overexposure. This paper aims to resolve the issue of IELA to some extent and contributes in two aspects. First, an Image Exposure Database (IEpsD) is constructed to facilitate the study of IELA. IEpsD comprises 24,500 images with various exposure levels, and for each image a subjective exposure score is provided, representing its perceptual exposure level. Second, as IELA can be naturally formulated as a regression problem, we thoroughly evaluate the performance of modern deep CNN architectures on this specific task. Our evaluation results can serve as a baseline when other researchers develop more sophisticated IELA approaches. To help other researchers reproduce our results, we have released the dataset and the relevant source code at https://cslinzhang.github.io/imgExpo/.
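The scalar output described above, a single value running from extremely underexposed to extremely overexposed, can be illustrated with a deliberately naive luminance baseline. This is a hypothetical toy, not one of the CNN regressors evaluated in the paper:

```python
import numpy as np

def naive_exposure_score(img):
    """Map mean luminance to a scalar in [-1, 1]:
    -1 ~ extremely underexposed, 0 ~ mid-grey, +1 ~ extremely overexposed.
    img: H x W x 3 floats in [0, 1]."""
    luma = img @ np.array([0.299, 0.587, 0.114])   # Rec. 601 luma weights
    return float(2.0 * luma.mean() - 1.0)
```

A baseline of this kind ignores local over/underexposed regions and perceptual effects, which is exactly the gap the learned IELA regressors aim to close.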

ACS Style

Lin Zhang; Xilin Yang; Xiao Liu; Shengjie Zhao; Yong Ma. Towards Automatic Image Exposure Level Assessment. Mathematical Problems in Engineering 2020, 2020, 1-14.

AMA Style

Lin Zhang, Xilin Yang, Xiao Liu, Shengjie Zhao, Yong Ma. Towards Automatic Image Exposure Level Assessment. Mathematical Problems in Engineering. 2020;2020:1-14.

Chicago/Turabian Style

Lin Zhang; Xilin Yang; Xiao Liu; Shengjie Zhao; Yong Ma. 2020. "Towards Automatic Image Exposure Level Assessment." Mathematical Problems in Engineering 2020: 1-14.

Journal article
Published: 26 August 2020 in IEEE Transactions on Multimedia

In this paper, we propose a multifeature learning method to jointly learn compact multifeature codes (LCMFCs) for palmprint recognition with a single training sample per palm. Unlike most existing hand-crafted methods that extract single-type features from raw pixels, we first form the multi-type data vectors such as the direction-data and texture-data to completely sample the multiple information of a palmprint image. Then, we learn the discriminative multifeatures from multi-type data vectors by maximizing the inter-palm distance and minimizing the energy loss between the learned codes and the original data. Moreover, our LCMFC method adaptively learns the optimal weights of multi-type features to jointly learn the compact multifeature codes. Finally, we cluster the nonoverlapping blockwise histograms of the compact multifeature codes into a feature vector for palmprint representation. Extensive experimental results on six benchmark palmprint databases are presented to show the effectiveness of the proposed method.

ACS Style

Lunke Fei; Bob Zhang; Lin Zhang; Wei Jia; Jie Wen; Jigang Wu. Learning Compact Multifeature Codes for Palmprint Recognition From a Single Training Image per Palm. IEEE Transactions on Multimedia 2020, 23, 2930-2942.

AMA Style

Lunke Fei, Bob Zhang, Lin Zhang, Wei Jia, Jie Wen, Jigang Wu. Learning Compact Multifeature Codes for Palmprint Recognition From a Single Training Image per Palm. IEEE Transactions on Multimedia. 2020;23:2930-2942.

Chicago/Turabian Style

Lunke Fei; Bob Zhang; Lin Zhang; Wei Jia; Jie Wen; Jigang Wu. 2020. "Learning Compact Multifeature Codes for Palmprint Recognition From a Single Training Image per Palm." IEEE Transactions on Multimedia 23: 2930-2942.

Research article
Published: 02 July 2020 in Mathematical Problems in Engineering

We investigate how to correct the exposure of underexposed images. The bottleneck of previous methods mainly lies in their naturalness and robustness when dealing with images of various exposure levels: facing well-exposed or extremely underexposed images, they may produce over- or under-enhanced outputs. In this paper, we propose a novel retinex-based approach, namely LiAR (short for lightness-aware restorer). The term “lightness-aware” means that the estimated illumination is not only a component to be adjusted but also a measure reflecting the brightness of the scene, which determines the degree of adjustment. In this way, underexposed images can be restored adaptively according to their own brightness. Given an image, LiAR first estimates its illumination map using a specially designed loss function that ensures the result’s color consistency and texture richness. Then adaptive correction is performed to get a properly exposed output. LiAR is based on internal optimization of the single test image and does not need any prior training, implying that it can adapt itself to different settings per image. Additionally, LiAR can be easily extended to the video case due to its simplicity and stability. Experiments demonstrate that, facing images and videos with various exposure levels, LiAR can achieve robust and real-time correction with high contrast and naturalness. The relevant code and collected data are publicly available at https://cslinzhang.github.io/LiAR-Homepage/.
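The retinex decomposition I = R * L that LiAR builds on can be sketched in a few lines. Everything here is a simplifying assumption for illustration: the box-blurred max-channel illumination estimate, the gamma schedule, and all constants stand in for the paper's loss-based illumination estimation and adaptive correction:

```python
import numpy as np

def box_blur(x, k=5):
    """Simple box blur over a k x k window (edge-padded)."""
    pad = k // 2
    p = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out / (k * k)

def lightness_aware_correct(img, eps=1e-6):
    """Retinex-style correction: I = R * L, so R = I / L; then re-light with
    the illumination raised to a gamma chosen from the scene brightness,
    so darker scenes get stronger boosts (the 'lightness-aware' idea)."""
    L = box_blur(img.max(axis=2))            # rough illumination estimate
    R = img / (L[..., None] + eps)           # reflectance
    gamma = np.clip(L.mean(), 0.2, 1.0)      # dark scene -> small gamma -> boost
    return np.clip(R * (L[..., None] ** gamma), 0.0, 1.0)
```

Note how the same estimated illumination plays both roles named in the abstract: it is the component being adjusted (via the gamma) and, through its mean, the measure of scene brightness that sets the degree of adjustment.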

ACS Style

Lin Zhang; Anqi Zhu; Ying Shen; Shengjie Zhao; Huijuan Zhang. Revisit Retinex Theory: Towards a Lightness-Aware Restorer for Underexposed Images. Mathematical Problems in Engineering 2020, 2020, 1-11.

AMA Style

Lin Zhang, Anqi Zhu, Ying Shen, Shengjie Zhao, Huijuan Zhang. Revisit Retinex Theory: Towards a Lightness-Aware Restorer for Underexposed Images. Mathematical Problems in Engineering. 2020;2020:1-11.

Chicago/Turabian Style

Lin Zhang; Anqi Zhu; Ying Shen; Shengjie Zhao; Huijuan Zhang. 2020. "Revisit Retinex Theory: Towards a Lightness-Aware Restorer for Underexposed Images." Mathematical Problems in Engineering 2020: 1-11.

Journal article
Published: 22 May 2020 in IEEE Transactions on Image Processing

On benchmark images, modern dehazing methods are able to achieve very comparable results whose differences are too subtle for people to qualitatively judge. Thus, it is imperative to adopt quantitative evaluation on a vast number of hazy images. However, existing quantitative evaluation schemes are not convincing due to a lack of appropriate datasets and poor correlations between metrics and human perceptions. In this work, we attempt to address these issues, and we make two contributions. First, we establish two benchmark datasets, i.e., the BEnchmark Dataset for Dehazing Evaluation (BeDDE) and the EXtension of the BeDDE (exBeDDE), which had been lacking for a long period of time. The BeDDE is used to evaluate dehazing methods via full reference image quality assessment (FR-IQA) metrics. It provides hazy images, clear references, haze level labels, and manually labeled masks that indicate the regions of interest (ROIs) in image pairs. The exBeDDE is used to assess the performance of dehazing evaluation metrics. It provides extra dehazed images and subjective scores from people. To the best of our knowledge, the BeDDE is the first dehazing dataset whose image pairs were collected in natural outdoor scenes without any simulation. Second, we provide a new insight that dehazing involves two separate aspects, i.e., visibility restoration and realness restoration, which should be evaluated independently; thus, to characterize them, we establish two criteria, i.e., the visibility index (VI) and the realness index (RI), respectively. The effectiveness of the criteria is verified through extensive experiments. Furthermore, 14 representative dehazing methods are evaluated as baselines using our criteria on BeDDE. Our datasets and relevant code are available at https://github.com/xiaofeng94/BeDDE-for-defogging.

ACS Style

Shiyu Zhao; Lin Zhang; Shuaiyi Huang; Ying Shen; Shengjie Zhao. Dehazing Evaluation: Real-World Benchmark Datasets, Criteria, and Baselines. IEEE Transactions on Image Processing 2020, 29, 6947-6962.

AMA Style

Shiyu Zhao, Lin Zhang, Shuaiyi Huang, Ying Shen, Shengjie Zhao. Dehazing Evaluation: Real-World Benchmark Datasets, Criteria, and Baselines. IEEE Transactions on Image Processing. 2020;29:6947-6962.

Chicago/Turabian Style

Shiyu Zhao; Lin Zhang; Shuaiyi Huang; Ying Shen; Shengjie Zhao. 2020. "Dehazing Evaluation: Real-World Benchmark Datasets, Criteria, and Baselines." IEEE Transactions on Image Processing 29: 6947-6962.

Journal article
Published: 18 February 2020 in Mathematical Problems in Engineering

Single image super-resolution (SISR) has been a very attractive research topic in recent years. Breakthroughs in SISR have been achieved thanks to deep learning and generative adversarial networks (GANs). However, the generated images still suffer from undesired artifacts. In this paper, we propose a new method named GMGAN for SISR tasks. In this method, to generate images more in line with the human visual system (HVS), we design a quality loss by integrating an image quality assessment (IQA) metric named gradient magnitude similarity deviation (GMSD). To our knowledge, this is the first time an IQA metric has been truly integrated into SISR. Moreover, to overcome the instability of the original GAN, we use a variant of GANs named improved training of Wasserstein GANs (WGAN-GP). Besides GMGAN, we highlight the importance of training datasets. Experiments show that GMGAN with the quality loss and WGAN-GP can generate visually appealing results and set a new state of the art. In addition, a large quantity of high-quality training images with rich textures can benefit the results.
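GMSD itself (Xue et al., 2014) is a closed-form full-reference metric, so the ingredient behind the quality loss can be sketched directly. This standalone numpy version, assuming grayscale inputs in [0, 255] and the paper's Prewitt kernels and constant, is illustrative rather than the differentiable loss layer used during GMGAN training:

```python
import numpy as np

# 3x3 Prewitt kernels, as used in the original GMSD formulation
PREWITT_X = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0
PREWITT_Y = PREWITT_X.T

def _filter3(x, k):
    """Valid-mode 3x3 correlation, hand-rolled with numpy slicing."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * x[di:di + h - 2, dj:dj + w - 2]
    return out

def gmsd(ref, dist, c=170.0):
    """Gradient magnitude similarity deviation between two grayscale
    images in [0, 255]; 0 means identical gradient structure."""
    gm = []
    for img in (ref, dist):
        gx = _filter3(img, PREWITT_X)
        gy = _filter3(img, PREWITT_Y)
        gm.append(np.sqrt(gx ** 2 + gy ** 2))
    gms = (2 * gm[0] * gm[1] + c) / (gm[0] ** 2 + gm[1] ** 2 + c)
    return float(gms.std())
```

Because GMSD is the standard deviation of a per-pixel similarity map, it penalizes local structural degradations that a plain per-pixel loss averages away, which is the motivation for folding it into the SISR objective.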

ACS Style

Xining Zhu; Lin Zhang; Xiao Liu; Ying Shen; Shengjie Zhao. GAN-Based Image Super-Resolution with a Novel Quality Loss. Mathematical Problems in Engineering 2020, 2020, 1-12.

AMA Style

Xining Zhu, Lin Zhang, Xiao Liu, Ying Shen, Shengjie Zhao. GAN-Based Image Super-Resolution with a Novel Quality Loss. Mathematical Problems in Engineering. 2020;2020:1-12.

Chicago/Turabian Style

Xining Zhu; Lin Zhang; Xiao Liu; Ying Shen; Shengjie Zhao. 2020. "GAN-Based Image Super-Resolution with a Novel Quality Loss." Mathematical Problems in Engineering 2020: 1-12.

Conference paper
Published: 02 June 2019 in Transactions on Petri Nets and Other Models of Concurrency XV

Depth estimation from a single image is of paramount importance in various vision tasks, such as obstacle detection, robot navigation, 3D reconstruction, etc. However, how to get an accurate depth map with clear details and a fine resolution remains an unresolved issue. As an attempt to solve this problem, we propose a novel CNN-based approach, namely MSCN_NS, which involves multi-scale sub-pixel convolutions and a neighborhood smoothness constraint. Specifically, MSCN_NS makes use of sub-pixel convolutions which fuse multi-scale features from different branches of the network to retrieve a high resolution depth map with fine details of the scene. Furthermore, MSCN_NS incorporates a neighborhood smoothness regularization term to make sure that spatially closer pixels with similar features would have close depth values. The effectiveness and efficiency of MSCN_NS have been corroborated through extensive experiments conducted on benchmark datasets.

ACS Style

Shiyu Zhao; Lin Zhang; Ying Shen; Yongning Zhu. A CNN-Based Depth Estimation Approach with Multi-scale Sub-pixel Convolutions and a Smoothness Constraint. Transactions on Petri Nets and Other Models of Concurrency XV 2019, 365-380.

AMA Style

Shiyu Zhao, Lin Zhang, Ying Shen, Yongning Zhu. A CNN-Based Depth Estimation Approach with Multi-scale Sub-pixel Convolutions and a Smoothness Constraint. Transactions on Petri Nets and Other Models of Concurrency XV. 2019:365-380.

Chicago/Turabian Style

Shiyu Zhao; Lin Zhang; Ying Shen; Yongning Zhu. 2019. "A CNN-Based Depth Estimation Approach with Multi-scale Sub-pixel Convolutions and a Smoothness Constraint." Transactions on Petri Nets and Other Models of Concurrency XV: 365-380.

Research article
Published: 07 February 2019 in Mathematical Problems in Engineering

Biometrics-based personal authentication has been found to be an effective method for recognizing, with high confidence, a person’s identity. With the emergence of reliable and inexpensive 3D scanners, recent years have witnessed a growing interest in developing 3D biometrics systems, for which matching algorithms are crucial. In this paper, we focus on investigating identification methods for two specific 3D biometric identifiers, the 3D ear and the 3D palmprint. Specifically, we propose a Multi-Dictionary based Collaborative Representation (MDCR) framework for classification, which can reduce the negative effects caused by some local regions. With MDCR, a range map is partitioned into overlapping blocks and, from each block, a feature vector is extracted. At the dictionary construction stage, the feature vectors from blocks at the same locations in the gallery samples form one dictionary, so multiple dictionaries are obtained. Given a probe sample, by coding each of its feature vectors over the corresponding dictionary, multiple class labels are obtained, and a simple majority-based voting scheme then makes the final decision. In addition, a novel patch-wise, statistics-based feature extraction scheme is proposed, combining the range image’s local surface type information and local dominant orientation information. The effectiveness of the proposed approach has been corroborated by extensive experiments conducted on two large-scale and widely used benchmark datasets, the UND Collection J2 3D ear dataset and the PolyU 3D palmprint dataset. To make the results reproducible, we have publicly released the source code.
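The coding-and-voting pipeline described above can be sketched with plain ridge-regression collaborative coding. The dictionary layout and toy features below are assumptions for illustration, not the paper's patch-wise, statistics-based features:

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Collaborative representation classification: ridge-code y over the
    whole dictionary D (columns = gallery feature vectors), then pick the
    class whose own atoms reconstruct y with the smallest residual."""
    d = D.shape[1]
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ y)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]

def mdcr_classify(dicts, labels, feats, lam=1e-3):
    """Multi-dictionary CRC: one vote per block dictionary, majority wins."""
    votes = [crc_classify(D, labels, f, lam) for D, f in zip(dicts, feats)]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[int(np.argmax(counts))]
```

Coding each block against its own dictionary keeps a corrupted local region from dominating the decision: it costs at most one vote, which the majority from the remaining blocks can outweigh.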

ACS Style

Lin Zhang; Xining Zhu; Lida Li. Multi-Dictionary Based Collaborative Representation: With Applications to 3D Ear and 3D Palmprint Identification. Mathematical Problems in Engineering 2019, 2019, 1-13.

AMA Style

Lin Zhang, Xining Zhu, Lida Li. Multi-Dictionary Based Collaborative Representation: With Applications to 3D Ear and 3D Palmprint Identification. Mathematical Problems in Engineering. 2019;2019:1-13.

Chicago/Turabian Style

Lin Zhang; Xining Zhu; Lida Li. 2019. "Multi-Dictionary Based Collaborative Representation: With Applications to 3D Ear and 3D Palmprint Identification." Mathematical Problems in Engineering 2019: 1-13.

Journal article
Published: 23 January 2019 in IEEE Access

Depth estimation from a monocular image is of paramount importance in various vision tasks, such as obstacle detection, robot navigation, and 3D reconstruction. However, how to get an accurate depth map with clear details and a fine resolution remains an unresolved issue. As an attempt to solve this problem, we exploit image super-resolution concepts and techniques for monocular depth estimation and propose a novel CNN-based approach, namely MSCNNS, which involves multi-scale sub-pixel convolutions and a neighborhood smoothness constraint. Specifically, MSCNNS makes use of sub-pixel convolutions with multi-scale fusions to retrieve a high resolution depth map with fine details of the scene. Different from previous multi-scale fusion strategies, these multi-scale features come from supervised scale branches of the network. Furthermore, MSCNNS incorporates a neighborhood smoothness regularization term to make sure that spatially closer pixels with similar features would have close depth values. The effectiveness and efficiency of MSCNNS have been corroborated through extensive experiments conducted on benchmark datasets.

ACS Style

Shiyu Zhao; Lin Zhang; Ying Shen; Shengjie Zhao; Huijuan Zhang. Super-Resolution for Monocular Depth Estimation With Multi-Scale Sub-Pixel Convolutions and a Smoothness Constraint. IEEE Access 2019, 7, 16323-16335.

AMA Style

Shiyu Zhao, Lin Zhang, Ying Shen, Shengjie Zhao, Huijuan Zhang. Super-Resolution for Monocular Depth Estimation With Multi-Scale Sub-Pixel Convolutions and a Smoothness Constraint. IEEE Access. 2019;7:16323-16335.

Chicago/Turabian Style

Shiyu Zhao; Lin Zhang; Ying Shen; Shengjie Zhao; Huijuan Zhang. 2019. "Super-Resolution for Monocular Depth Estimation With Multi-Scale Sub-Pixel Convolutions and a Smoothness Constraint." IEEE Access 7: 16323-16335.

Journal article
Published: 18 July 2018 in IEEE Transactions on Image Processing

In the automobile industry, recent years have witnessed a growing interest in developing self-parking systems. For such systems, how to accurately and efficiently detect and localize the parking slots defined by regular line segments near the vehicle is a key and still unresolved issue. In fact, various unfavorable factors, such as the diversity of ground materials, changes in illumination conditions, and unpredictable shadows cast by nearby trees, make vision-based parking-slot detection much harder than it looks. In this paper, we attempt to solve this issue to some extent, and our contributions are twofold. First, we propose a novel deep convolutional neural network (DCNN)-based parking-slot detection approach, namely DeepPS, which takes the surround-view image as input. DeepPS has two key steps: identifying all the marking points on the input image, and classifying the local image patterns formed by pairs of marking points. We formulate both as learning problems, which can be solved naturally by modern DCNN models. Second, to facilitate the study of vision-based parking-slot detection, a large-scale labeled dataset is established. This dataset is the largest in this field, comprising 12,165 surround-view images collected from typical indoor and outdoor parking sites. For each image, the marking points and parking slots are carefully labeled. The efficacy and efficiency of DeepPS have been corroborated on our collected dataset. To make our results fully reproducible, all the relevant source code and the dataset have been made publicly available at https://cslinzhang.github.io/deepps/.

ACS Style

Lin Zhang; Junhao Huang; Xiyuan Li; Lu Xiong. Vision-Based Parking-Slot Detection: A DCNN-Based Approach and a Large-Scale Benchmark Dataset. IEEE Transactions on Image Processing 2018, 27, 5350-5364.

AMA Style

Lin Zhang, Junhao Huang, Xiyuan Li, Lu Xiong. Vision-Based Parking-Slot Detection: A DCNN-Based Approach and a Large-Scale Benchmark Dataset. IEEE Transactions on Image Processing. 2018;27(11):5350-5364.

Chicago/Turabian Style

Lin Zhang; Junhao Huang; Xiyuan Li; Lu Xiong. 2018. "Vision-Based Parking-Slot Detection: A DCNN-Based Approach and a Large-Scale Benchmark Dataset." IEEE Transactions on Image Processing 27, no. 11: 5350-5364.

Journal article
Published: 21 March 2018 in Symmetry

Among biometric identifiers, the palmprint and the palmvein have received significant attention due to their stability, uniqueness, and non-intrusiveness. In this paper, we investigate the problem of palmprint/palmvein recognition and propose a Deep Convolutional Neural Network (DCNN) based scheme, namely PalmRCNN (short for palmprint/palmvein recognition using CNNs). The effectiveness and efficiency of PalmRCNN have been verified through extensive experiments conducted on benchmark datasets. In addition, though substantial effort has been devoted to palmvein recognition, it is still quite difficult for researchers to know the potential discriminating capability of the contactless palmvein. One of the root reasons is that a large-scale, publicly available dataset comprising high-quality, contactless palmvein images is still lacking. To this end, a user-friendly acquisition device for collecting high-quality contactless palmvein images is first designed and developed in this work. Then, a large-scale palmvein image dataset is established, comprising 12,000 images acquired from 600 different palms in two separate collection sessions. The collected dataset is now publicly available.

ACS Style

Lin Zhang; Zaixi Cheng; Ying Shen; Dongqing Wang. Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset. Symmetry 2018, 10, 78.

AMA Style

Lin Zhang, Zaixi Cheng, Ying Shen, Dongqing Wang. Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset. Symmetry. 2018;10(4):78.

Chicago/Turabian Style

Lin Zhang; Zaixi Cheng; Ying Shen; Dongqing Wang. 2018. "Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset." Symmetry 10, no. 4: 78.

Journal article
Published: 13 March 2018 in Symmetry

Recent years have witnessed a growing interest in developing automatic parking systems in the field of intelligent vehicles. However, how to effectively and efficiently locate parking-slots using a vision-based system is still an unresolved issue. Even more seriously, there is no publicly available labeled benchmark dataset for tuning and testing parking-slot detection algorithms. In this paper, we attempt to fill the above-mentioned research gaps to some extent, and our contributions are twofold. Firstly, to facilitate the study of vision-based parking-slot detection, a large-scale parking-slot image database is established. This database comprises 8600 surround-view images collected from typical indoor and outdoor parking sites. For each image in this database, the marking-points and parking-slots are carefully labeled. Such a database can serve as a benchmark to design and validate parking-slot detection algorithms. Secondly, a learning-based parking-slot detection approach, namely PSDL, is proposed. Using PSDL, given a surround-view image, the marking-points are detected first and then the valid parking-slots can be inferred. The efficacy and efficiency of PSDL have been corroborated on our database. It is expected that PSDL can serve as a baseline when other researchers develop more sophisticated methods.

ACS Style

Lin Zhang; Xiyuan Li; Junhao Huang; Ying Shen; Dongqing Wang. Vision-Based Parking-Slot Detection: A Benchmark and A Learning-Based Approach. Symmetry 2018, 10, 64.

AMA Style

Lin Zhang, Xiyuan Li, Junhao Huang, Ying Shen, Dongqing Wang. Vision-Based Parking-Slot Detection: A Benchmark and A Learning-Based Approach. Symmetry. 2018;10(3):64.

Chicago/Turabian Style

Lin Zhang; Xiyuan Li; Junhao Huang; Ying Shen; Dongqing Wang. 2018. "Vision-Based Parking-Slot Detection: A Benchmark and A Learning-Based Approach." Symmetry 10, no. 3: 64.

Conference paper
Published: 28 October 2017 in Computational and Corpus-Based Phraseology

Many institutions, such as banks, usually require their customers to provide face images under proper illumination conditions. For some remote systems, a method that can automatically and objectively evaluate the illumination quality of a face image in a human-like manner is highly desired. However, few studies have been conducted in this area. To fill this research gap to some extent, we make two contributions in this paper. Firstly, in order to facilitate the study of illumination quality prediction for face images, a large-scale database, namely, Face Image Illumination Quality Database (FIIQD), is established. FIIQD contains 224,733 face images with various illumination patterns and for each image there is an associated illumination quality score. Secondly, based on deep convolutional neural networks (DCNN), a novel highly accurate model for predicting the illumination quality of face images is proposed. To make our results reproducible, the database and the source codes have been made publicly available at https://github.com/zhanglijun95/FIIQA.

ACS Style

Lijun Zhang; Lida Li. Illumination Quality Assessment for Face Images: A Benchmark and a Convolutional Neural Networks Based Model. Computational and Corpus-Based Phraseology 2017, 583-593.

AMA Style

Lijun Zhang, Lida Li. Illumination Quality Assessment for Face Images: A Benchmark and a Convolutional Neural Networks Based Model. Computational and Corpus-Based Phraseology. 2017:583-593.

Chicago/Turabian Style

Lijun Zhang; Lida Li. 2017. "Illumination Quality Assessment for Face Images: A Benchmark and a Convolutional Neural Networks Based Model." Computational and Corpus-Based Phraseology: 583-593.

Journal article
Published: 01 September 2017 in Pattern Recognition

Highlights: a novel device is designed and developed for capturing contactless palmprint images; a large-scale contactless palmprint image dataset is established; the quality of the collected images is analyzed using modern image quality assessment metrics; and for contactless palmprint identification, a highly effective and efficient CR-based approach is proposed. Biometric authentication has been found to be an effective method for recognizing a person’s identity with high confidence. In this field, the use of the palmprint represents a recent trend. To make palmprint-based recognition systems more user-friendly and sanitary, researchers have been investigating how to design such systems in a contactless manner. Though substantial effort has been devoted to this area, the discriminant power of the contactless palmprint is still not quite clear, mainly owing to the lack of a public, large-scale, high-quality benchmark dataset collected using a well-designed device. As an attempt to fill this gap, we first developed a highly user-friendly device for capturing high-quality contactless palmprint images. Then, with the developed device, a large-scale palmprint image dataset was established, comprising 12,000 images collected from 600 different palms in two separate sessions. To the best of our knowledge, it is the largest contactless palmprint image benchmark dataset ever collected. Besides, for the first time, the quality of the collected images is analyzed using modern image quality assessment metrics. Furthermore, for contactless palmprint identification, we have proposed a novel approach, namely CR_CompCode, which can achieve high recognition accuracy with an extremely low computational complexity. To make the results fully reproducible, the collected dataset and the related source code are publicly available at http://sse.tongji.edu.cn/linzhang/contactlesspalm/index.htm.

ACS Style

Lin Zhang; Lida Li; Anqi Yang; Ying Shen; Meng Yang. Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach. Pattern Recognition 2017, 69, 199-212.

AMA Style

Lin Zhang, Lida Li, Anqi Yang, Ying Shen, Meng Yang. Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach. Pattern Recognition. 2017; 69:199-212.

Chicago/Turabian Style

Lin Zhang; Lida Li; Anqi Yang; Ying Shen; Meng Yang. 2017. "Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach." Pattern Recognition 69: 199-212.

Conference paper
Published: 20 July 2017 in Computer Vision

RNA structural motifs are recurrent structural elements occurring in RNA molecules. They play essential roles in consolidating RNA tertiary structures and in binding proteins. Recently, we discovered a new type of RNA structural motif, namely the hasp motif, from 27 RNA molecules. The hasp motif comprises three nucleotides which form a structure similar to a hasp. Two consecutive nucleotides in the motif come from a double helix and the third one comes from a remote strand. The hasp motif makes two helices approach each other, which promotes RNA structure folding. All the identified hasp motifs reveal a consensus structural pattern although their sequences are not conserved. Hasp motifs are observed to reside both inside and on the surface of RNA molecules. Those inside RNA molecules help consolidate RNA tertiary structures, while those located on the surface have been shown to interact with proteins. The wide existence of hasp motifs indicates that they are essential both in maintaining the stability of RNA structures and in helping RNAs perform their functions in biological processes.

ACS Style

Ying Shen; Lin Zhang. The Hasp Motif: A New Type of RNA Tertiary Interactions. Computer Vision 2017, 441-453.

AMA Style

Ying Shen, Lin Zhang. The Hasp Motif: A New Type of RNA Tertiary Interactions. Computer Vision. 2017:441-453.

Chicago/Turabian Style

Ying Shen; Lin Zhang. 2017. "The Hasp Motif: A New Type of RNA Tertiary Interactions." Computer Vision: 441-453.

Conference paper
Published: 01 July 2017 in 2017 IEEE International Conference on Multimedia and Expo (ICME)

Recent years have witnessed a growing interest in developing automatic parking systems in the field of intelligent vehicles. However, how to locate parking-slots effectively and efficiently using a vision-based system is still an unresolved issue. In this paper, we attempt to fill this research gap to some extent, and our contributions are twofold. Firstly, to facilitate the study of vision-based parking-slot detection, a large-scale parking-slot image database is established. For each image in this database, the marking-points and parking-slots are carefully labelled. Such a database can serve as a benchmark to design and validate parking-slot detection algorithms. Secondly, a learning-based parking-slot detection approach is proposed. With this approach, given a test image, the marking-points are detected first and then the valid parking-slots can be inferred. Its efficacy and efficiency have been corroborated on our database. The labeled database and the source codes are publicly available at http://sse.tongji.edu.cn/linzhang/ps/index.htm.
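The two-step structure described above (detect marking-points, then infer valid parking-slots from them) can be illustrated with a toy version of the second step: pair up detected marking-points whose separation matches a plausible slot entrance width. This is a hypothetical sketch, not the paper's method; `min_width`, `max_width`, and the purely distance-based validity test are illustrative assumptions (a real system would also check marking-line orientation and slot geometry).

```python
import math
from itertools import combinations

def infer_slots(points, min_width=2.0, max_width=3.5):
    """Toy slot inference: points is a list of (x, y) marking-point
    coordinates in metres. Returns point pairs whose distance falls
    in a plausible entrance-width range."""
    slots = []
    for p, q in combinations(points, 2):
        width = math.dist(p, q)  # Euclidean distance between the pair
        if min_width <= width <= max_width:
            slots.append((p, q))
    return slots
```

Separating detection from inference lets the learned detector focus on a simple, local pattern (the marking-point), while the geometric inference stage encodes prior knowledge about slot layouts.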

ACS Style

Linshen Li; Lin Zhang; Xiyuan Li; Xiao Liu; Ying Shen; Lu Xiong. Vision-based parking-slot detection: A benchmark and a learning-based approach. 2017 IEEE International Conference on Multimedia and Expo (ICME) 2017, 649-654.

AMA Style

Linshen Li, Lin Zhang, Xiyuan Li, Xiao Liu, Ying Shen, Lu Xiong. Vision-based parking-slot detection: A benchmark and a learning-based approach. 2017 IEEE International Conference on Multimedia and Expo (ICME). 2017:649-654.

Chicago/Turabian Style

Linshen Li; Lin Zhang; Xiyuan Li; Xiao Liu; Ying Shen; Lu Xiong. 2017. "Vision-based parking-slot detection: A benchmark and a learning-based approach." 2017 IEEE International Conference on Multimedia and Expo (ICME): 649-654.

Journal article
Published: 01 March 2017 in Neurocomputing
ACS Style

Lin Zhang; Qingjun Liang; Ying Shen; Meng Yang; Feng Liu. Image set classification based on synthetic examples and reverse training. Neurocomputing 2017, 228, 3-10.

AMA Style

Lin Zhang, Qingjun Liang, Ying Shen, Meng Yang, Feng Liu. Image set classification based on synthetic examples and reverse training. Neurocomputing. 2017; 228:3-10.

Chicago/Turabian Style

Lin Zhang; Qingjun Liang; Ying Shen; Meng Yang; Feng Liu. 2017. "Image set classification based on synthetic examples and reverse training." Neurocomputing 228: 3-10.

Journal article
Published: 20 February 2017 in Journal of the Optical Society of America A

In this paper, we propose a salient object detection algorithm that considers both background and foreground cues. It integrates coarse salient region extraction and a top-down background weight map, computed via boundary label propagation, into a unified optimization framework to acquire a refined saliency map. The coarse saliency map is additionally fused with three prior components: a local contrast map better aligned with physiological findings, a global focus prior map, and a global color prior map. To form the background weight map, we first construct an affinity matrix and select nodes on the image border as background labels. We then propagate these labels over the graph to generate the regional background weight map. Our proposed model was verified on four benchmark datasets, and the experimental results demonstrate that our method performs excellently.

ACS Style

Qiangqiang Zhou; Lin Zhang; Weidong Zhao; Xianhui Liu; Yufei Chen; Zhicheng Wang. Salient object detection using coarse-to-fine processing. Journal of the Optical Society of America A 2017, 34, 370-383.

AMA Style

Qiangqiang Zhou, Lin Zhang, Weidong Zhao, Xianhui Liu, Yufei Chen, Zhicheng Wang. Salient object detection using coarse-to-fine processing. Journal of the Optical Society of America A. 2017; 34 (3):370-383.

Chicago/Turabian Style

Qiangqiang Zhou; Lin Zhang; Weidong Zhao; Xianhui Liu; Yufei Chen; Zhicheng Wang. 2017. "Salient object detection using coarse-to-fine processing." Journal of the Optical Society of America A 34, no. 3: 370-383.