Chunsheng Liu
School of Control Science and Engineering, Shandong University, Jinan, China


Feed

Journal article
Published: 28 June 2021 in IEEE Access

Although person re-identification has made great progress, unsupervised cross-domain adaptive person re-identification remains a challenging problem: with no labeled data in the target domain, performance can drop significantly. In this paper, we propose an unsupervised cross-domain adaptive person re-identification framework based on horizontal pyramid similarity learning (UHPS). Firstly, horizontal pyramid features are extracted by dividing the deep feature maps into different numbers of partial feature bins. These feature bins of diverse scales incorporate not only global information but also local information at different spatial scales, making the framework more robust in complex environments. Then, horizontal pyramid similarity learning is proposed, which fuses the internal similarity of the target domain with the similarity between the source and target domains. Finally, the unsupervised clustering algorithm DBSCAN, embedded with the horizontal pyramid similarity, is employed to select training data in the target domain and estimate pseudo labels in each training iteration, so that the framework adapts to the target domain. The results on Market1501 and DukeMTMC-reID confirm that the proposed framework adapts to the target domain effectively and outperforms state-of-the-art unsupervised cross-domain person re-identification approaches.
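The horizontal partitioning idea behind the pyramid features can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' code: the pooling operator (global average) and the pyramid levels (1, 2 and 4 strips) are assumptions for illustration.

```python
import numpy as np

def horizontal_pyramid_features(feature_map, levels=(1, 2, 4)):
    """Split a CxHxW feature map into horizontal strips at several
    pyramid levels and average-pool each strip into one C-dim bin."""
    bins = []
    for n in levels:
        # split the height axis into n roughly equal strips
        for strip in np.array_split(feature_map, n, axis=1):
            bins.append(strip.mean(axis=(1, 2)))  # average pool per strip
    return np.stack(bins)  # (num_bins, C)

fmap = np.random.rand(256, 24, 8).astype(np.float32)
feats = horizontal_pyramid_features(fmap)
print(feats.shape)  # (7, 256): 1 + 2 + 4 bins
```

The level-1 bin carries the global information while the finer strips capture local cues, which is what lets the similarity measure compare persons at several spatial scales.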

ACS Style

Wenhui Dong; Peishu Qu; Chunsheng Liu; Yanke Tang; Ning Gai. Unsupervised Horizontal Pyramid Similarity Learning for Cross-domain Adaptive Person Re-identification. IEEE Access 2021, 9, 1-1.

AMA Style

Wenhui Dong, Peishu Qu, Chunsheng Liu, Yanke Tang, Ning Gai. Unsupervised Horizontal Pyramid Similarity Learning for Cross-domain Adaptive Person Re-identification. IEEE Access. 2021; 9:1-1.

Chicago/Turabian Style

Wenhui Dong; Peishu Qu; Chunsheng Liu; Yanke Tang; Ning Gai. 2021. "Unsupervised Horizontal Pyramid Similarity Learning for Cross-domain Adaptive Person Re-identification." IEEE Access 9: 1-1.

Journal article
Published: 06 April 2021 in Neurocomputing

Compared with the classic object detection problem, detecting objects in aerial images poses some special challenges, including huge orientation variations, complicated and large backgrounds, and wide multi-scale distributions. Considering these three challenges together, we propose a novel arbitrary-oriented object detection framework consisting of three main parts. Firstly, the Cascading Attention Network (CA-Net), composed of a patching self-attention module and a supervised spatial attention module, is proposed to enhance the feature representations of objects of interest and suppress background noise in the Feature Pyramid Network (FPN) from coarse to fine. Then, the Adaptive Feature Concatenate Network (AFC-Net) is proposed to adaptively stack the feature maps pooled from all FPN levels together with the global semantic features, to deal with the multi-scale variation of objects. Lastly, the OBB Multi-Definition and Selection Strategy (OBB-MDS-Strategy) is proposed to regress rotated bounding boxes more smoothly and detect oriented objects more accurately during training. Our experiments are conducted on two common and challenging aerial datasets, i.e., DOTA and HRSC2016. Experimental results show that the proposed method achieves superior performance in multi-oriented object detection compared with representative methods.
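To make the adaptive-stacking idea concrete, the sketch below fuses ROI features pooled from several FPN levels with adaptive weights. AFC-Net's exact fusion rule is not reproduced here; softmax-normalized weights derived from each level's mean response are an assumption chosen only to illustrate level-adaptive fusion.

```python
import numpy as np

def adaptive_concat(level_feats):
    """Fuse ROI features pooled from several FPN levels with adaptive
    (softmax-normalized) weights derived from each level's response."""
    stacked = np.stack(level_feats)            # (L, C)
    scores = stacked.mean(axis=1)              # one scalar response per level
    w = np.exp(scores - scores.max())
    w /= w.sum()                               # softmax over levels
    return (w[:, None] * stacked).sum(axis=0)  # weighted fusion -> (C,)

feats = [np.random.rand(256) for _ in range(4)]
fused = adaptive_concat(feats)
print(fused.shape)  # (256,)
```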

ACS Style

Luchang Chen; Chunsheng Liu; Faliang Chang; Shuang Li; Zhaoying Nie. Adaptive multi-level feature fusion and attention-based network for arbitrary-oriented object detection in remote sensing imagery. Neurocomputing 2021, 451, 67-80.

AMA Style

Luchang Chen, Chunsheng Liu, Faliang Chang, Shuang Li, Zhaoying Nie. Adaptive multi-level feature fusion and attention-based network for arbitrary-oriented object detection in remote sensing imagery. Neurocomputing. 2021; 451:67-80.

Chicago/Turabian Style

Luchang Chen; Chunsheng Liu; Faliang Chang; Shuang Li; Zhaoying Nie. 2021. "Adaptive multi-level feature fusion and attention-based network for arbitrary-oriented object detection in remote sensing imagery." Neurocomputing 451: 67-80.

Research article
Published: 01 December 2020 in IET Intelligent Transport Systems

Vision-based traffic flow parameter estimation is a challenging problem, especially for dense traffic scenes, owing to occlusion, small vehicle sizes and high traffic density. Previous methods mainly use detection and tracking to count vehicles in non-dense traffic scenes, and few of them further estimate traffic flow parameters in dense scenes. A framework is proposed to count vehicles and estimate traffic flow parameters in dense traffic scenes. First, a pyramid-YOLO network is proposed for detecting vehicles in dense scenes, which can effectively detect small and occluded vehicles. Second, the authors design a line-of-interest counting method based on restricted multi-tracking, which counts vehicles crossing a counting line during a given time interval. The proposed tracking method tracks short-term vehicle trajectories near the counting line and analyses them, thus improving tracking and counting accuracy. Third, based on the detection and counting results, an estimation model is proposed to estimate the traffic flow parameters of volume, speed and density. Evaluation experiments on databases with dense traffic scenes show that the proposed framework efficiently counts vehicles, estimates traffic flow parameters with high accuracy and outperforms the representative estimation methods in comparison.
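Once the counting line yields a vehicle count and speeds are available, the remaining parameters follow from the standard fundamental relation of traffic flow, density = volume / speed. The paper's own estimation model is not given here; this is a minimal sketch of that textbook relation.

```python
def flow_parameters(count, interval_s, mean_speed_kmh):
    """Estimate traffic flow parameters from a line-of-interest count and
    mean speed, using the fundamental relation density = volume / speed."""
    volume_vph = count * 3600.0 / interval_s    # vehicles per hour
    density_vpkm = volume_vph / mean_speed_kmh  # vehicles per km (q = k * v)
    return volume_vph, density_vpkm

q, k = flow_parameters(count=30, interval_s=60, mean_speed_kmh=40.0)
print(q, k)  # 1800.0 vehicles/h, 45.0 vehicles/km
```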

ACS Style

Shuang Li; Faliang Chang; Chunsheng Liu; Nanjun Li. Vehicle counting and traffic flow parameter estimation for dense traffic scenes. IET Intelligent Transport Systems 2020, 14, 1517-1523.

AMA Style

Shuang Li, Faliang Chang, Chunsheng Liu, Nanjun Li. Vehicle counting and traffic flow parameter estimation for dense traffic scenes. IET Intelligent Transport Systems. 2020; 14(12):1517-1523.

Chicago/Turabian Style

Shuang Li; Faliang Chang; Chunsheng Liu; Nanjun Li. 2020. "Vehicle counting and traffic flow parameter estimation for dense traffic scenes." IET Intelligent Transport Systems 14, no. 12: 1517-1523.

Journal article
Published: 23 June 2020 in IEEE Transactions on Intelligent Transportation Systems

Machine-vision-based vehicle counting and traffic flow estimation are challenging problems, especially for dense traffic scenarios. Previous line-of-interest (LOI) counting methods rarely focus on dense scenarios, and their performance relies largely on the accuracy of tracking. Avoiding the use of complex tracking methods, an LOI counting framework is proposed to address the bi-directional LOI counting problem in dense scenarios. There are three main contributions. Firstly, instead of treating LOI vehicle counting as a combination of detecting and tracking individual vehicles, the bi-directional traffic flow is taken as a whole, and a novel spatio-temporal counting feature (STCF) is proposed for extracting bi-directional traffic flow features in dense traffic scenarios. Secondly, without relying on a multi-target tracking process to track and count each vehicle, a counting network called the counting Long Short-Term Memory (cLSTM) network is proposed to analyze the bi-directional STCF features and count vehicles across successive video frames. Lastly, an estimation model is designed to estimate traffic flow parameters including speed, volume and density. Experiments performed on the UA-DETRAC dataset and on captured videos show that the proposed vehicle counting method outperforms the tested representative LOI counting methods in both accuracy and speed, and that the proposed framework can efficiently estimate these traffic flow parameters in real time.

ACS Style

Shuang Li; Faliang Chang; Chunsheng Liu. Bi-Directional Dense Traffic Counting Based on Spatio-Temporal Counting Feature and Counting-LSTM Network. IEEE Transactions on Intelligent Transportation Systems 2020, 1-13.

AMA Style

Shuang Li, Faliang Chang, Chunsheng Liu. Bi-Directional Dense Traffic Counting Based on Spatio-Temporal Counting Feature and Counting-LSTM Network. IEEE Transactions on Intelligent Transportation Systems. 2020; (99):1-13.

Chicago/Turabian Style

Shuang Li; Faliang Chang; Chunsheng Liu. 2020. "Bi-Directional Dense Traffic Counting Based on Spatio-Temporal Counting Feature and Counting-LSTM Network." IEEE Transactions on Intelligent Transportation Systems, no. 99: 1-13.

Journal article
Published: 29 May 2020 in Remote Sensing

With the advantage of high maneuverability, Unmanned Aerial Vehicles (UAVs) have been widely deployed in vehicle monitoring and control. However, processing the images captured by UAVs to extract vehicle information is hindered by several challenges, including arbitrary orientations, huge scale variations and partial occlusion. In seeking to address these challenges, we propose a novel Multi-Scale and Occlusion Aware Network (MSOA-Net) for UAV-based vehicle segmentation, which consists of two parts: a Multi-Scale Feature Adaptive Fusion Network (MSFAF-Net) and a Regional Attention based Triple Head Network (RATH-Net). In MSFAF-Net, a self-adaptive feature fusion module is proposed that adaptively aggregates hierarchical feature maps from multiple levels to help the Feature Pyramid Network (FPN) deal with the scale change of vehicles. The RATH-Net with a self-attention mechanism is proposed to guide the location-sensitive sub-networks to enhance vehicles of interest and suppress background noise caused by occlusions. In this study, we release a large comprehensive UAV-based vehicle segmentation dataset (UVSD), the first public dataset for UAV-based vehicle detection and segmentation. Experiments are conducted on the challenging UVSD dataset. Experimental results show that the proposed method is efficient in detecting and segmenting vehicles, and outperforms the compared state-of-the-art works.

ACS Style

Wang Zhang; Chunsheng Liu; Faliang Chang; Ye Song. Multi-Scale and Occlusion Aware Network for Vehicle Detection and Segmentation on UAV Aerial Images. Remote Sensing 2020, 12, 1760.

AMA Style

Wang Zhang, Chunsheng Liu, Faliang Chang, Ye Song. Multi-Scale and Occlusion Aware Network for Vehicle Detection and Segmentation on UAV Aerial Images. Remote Sensing. 2020; 12(11):1760.

Chicago/Turabian Style

Wang Zhang; Chunsheng Liu; Faliang Chang; Ye Song. 2020. "Multi-Scale and Occlusion Aware Network for Vehicle Detection and Segmentation on UAV Aerial Images." Remote Sensing 12, no. 11: 1760.

Journal article
Published: 02 April 2020 in IEEE Transactions on Multimedia

Time-efficient anomaly detection and localization in video surveillance remains challenging due to the complexity of "anomaly". In this paper, we propose a cuboid-patch-based method characterized by a cascade of classifiers, called a spatial-temporal cascade autoencoder (ST-CaAE), which makes full use of both spatial and temporal cues from video data. The ST-CaAE has two main stages, defined by two proposed neural networks: a spatial-temporal adversarial autoencoder (ST-AAE) and a spatial-temporal convolutional autoencoder (ST-CAE). First, the ST-AAE is used to preliminarily identify anomalous video cuboids and exclude normal cuboids. The key idea underlying the ST-AAE is to fit a Gaussian model to the distribution of the regular data. Then, in the second stage, the ST-CAE classifies the specific abnormal patches in each anomalous cuboid with a reconstruction-error-based strategy that takes advantage of the CAE and skip connections. A two-stream framework is utilized to fuse the appearance and motion cues for more complete detection results, taking gradient and optical-flow cuboids as inputs for each stream. The proposed ST-CaAE is evaluated on three public datasets. The experimental results verify that our framework outperforms other state-of-the-art works.
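The reconstruction-error-based strategy rests on a simple principle: an autoencoder trained only on normal data reconstructs normal patches well and abnormal patches poorly. A minimal sketch, with a toy stand-in for the trained autoencoder (the real ST-CAE is a deep network; the mean-reconstructor below is purely illustrative):

```python
import numpy as np

def anomaly_scores(patches, reconstruct):
    """Score patches by autoencoder reconstruction error: patches the
    model reconstructs poorly are flagged as anomalous."""
    recon = reconstruct(patches)
    return ((patches - recon) ** 2).mean(axis=tuple(range(1, patches.ndim)))

# toy 'autoencoder' that can only reproduce the mean of the normal data
reconstruct = lambda x: np.full_like(x, 0.5)

normal = np.full((4, 8, 8), 0.5)
abnormal = np.ones((1, 8, 8))
scores = anomaly_scores(np.concatenate([normal, abnormal]), reconstruct)
print(scores.round(2))  # [0.   0.   0.   0.   0.25]
```

Thresholding these scores localizes the abnormal patches within a cuboid.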

ACS Style

Nanjun Li; Faliang Chang; Chunsheng Liu. Spatial-Temporal Cascade Autoencoder for Video Anomaly Detection in Crowded Scenes. IEEE Transactions on Multimedia 2020, 23, 203-215.

AMA Style

Nanjun Li, Faliang Chang, Chunsheng Liu. Spatial-Temporal Cascade Autoencoder for Video Anomaly Detection in Crowded Scenes. IEEE Transactions on Multimedia. 2020; 23(99):203-215.

Chicago/Turabian Style

Nanjun Li; Faliang Chang; Chunsheng Liu. 2020. "Spatial-Temporal Cascade Autoencoder for Video Anomaly Detection in Crowded Scenes." IEEE Transactions on Multimedia 23, no. 99: 203-215.

Journal article
Published: 26 June 2019 in IEEE Access

Traffic sign recognition (TSR) is an important component of some Advanced Driver Assistance Systems (ADAS) and Auto Driving Systems (ADS). As the first key step of TSR, traffic sign detection (TSD) is a challenging problem because of different sign types, small sizes, complex driving scenes, occlusions, etc. In recent years, a large number of TSD algorithms based on machine vision and pattern recognition have been proposed. In this paper, a comprehensive review of the literature on TSD is presented. We divide the reviewed detection methods into five main categories: color-based methods, shape-based methods, color-and-shape-based methods, machine-learning-based methods, and LIDAR-based methods. The methods in each category are further classified into subcategories to aid understanding and to summarize the mechanisms of the different methods. For some reviewed methods that lack comparisons on public datasets, we reimplemented a subset of them for comparison. Experimental comparisons and analyses are presented for both the reported performance and the performance of our reimplementations. Furthermore, future directions and recommendations for TSD research are given to promote its development.

ACS Style

Chunsheng Liu; Shuang Li; Faliang Chang; Yinhai Wang. Machine Vision Based Traffic Sign Detection Methods: Review, Analyses and Perspectives. IEEE Access 2019, 7, 86578-86596.

AMA Style

Chunsheng Liu, Shuang Li, Faliang Chang, Yinhai Wang. Machine Vision Based Traffic Sign Detection Methods: Review, Analyses and Perspectives. IEEE Access. 2019; 7(99):86578-86596.

Chicago/Turabian Style

Chunsheng Liu; Shuang Li; Faliang Chang; Yinhai Wang. 2019. "Machine Vision Based Traffic Sign Detection Methods: Review, Analyses and Perspectives." IEEE Access 7, no. 99: 86578-86596.

Journal article
Published: 13 June 2019 in Sensors

The You Only Look Once (YOLO) deep network can detect objects quickly with high precision and has been successfully applied to many detection problems. Its main shortcoming is that it usually cannot achieve high precision when dealing with small-size object detection in high-resolution images. To overcome this problem, we propose an effective region proposal extraction method for the YOLO network, forming a detection structure named ACF-PR-YOLO, and take the cyclist detection problem to demonstrate our method. Instead of directly using the generated region proposals for classification or regression, as most region proposal methods do, we generate large potential regions containing objects for the subsequent deep network. The proposed ACF-PR-YOLO structure includes three main parts. Firstly, a region proposal extraction method based on the aggregated channel feature (ACF) is proposed, called the ACF-based region proposal (ACF-PR) method. In ACF-PR, the ACF is first utilized to rapidly extract candidates, and then a bounding box merging and extending method is designed to merge the bounding boxes into proper region proposals for the following YOLO net. Secondly, we design a suitable YOLO net for fine detection within the region proposals generated by ACF-PR. Lastly, we design a post-processing step in which the results of the YOLO net are mapped back into the original image, outputting the detection and localization results. Experiments performed on the Tsinghua-Daimler Cyclist Benchmark, with high-resolution images and complex scenes, show that the proposed method outperforms the other tested representative detection methods in average precision, exceeding YOLOv3 by 13.69% and SSD by 25.27% in average precision.
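The bounding-box merging step can be sketched as a greedy union of candidate boxes whose extents, expanded by a tolerance gap, overlap. The paper's exact merging and extending rules are not reproduced here; the single-pass greedy policy and the `gap` parameter below are illustrative assumptions.

```python
def merge_boxes(boxes, gap=10):
    """Merge nearby candidate boxes (x1, y1, x2, y2) into larger region
    proposals by unioning boxes whose gap-expanded extents overlap."""
    merged = []
    for box in sorted(boxes):
        for i, m in enumerate(merged):
            # overlap test with a tolerance gap on every side
            if (box[0] <= m[2] + gap and m[0] <= box[2] + gap and
                    box[1] <= m[3] + gap and m[1] <= box[3] + gap):
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(box)
    return merged

boxes = [(0, 0, 20, 20), (25, 5, 40, 25), (200, 200, 220, 220)]
print(merge_boxes(boxes))  # [(0, 0, 40, 25), (200, 200, 220, 220)]
```

Each merged region, rather than each raw ACF candidate, is then cropped and fed to the YOLO net for fine detection.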

ACS Style

Chunsheng Liu; Yu Guo; Shuang Li; Faliang Chang. ACF Based Region Proposal Extraction for YOLOv3 Network Towards High-Performance Cyclist Detection in High Resolution Images. Sensors 2019, 19, 2671.

AMA Style

Chunsheng Liu, Yu Guo, Shuang Li, Faliang Chang. ACF Based Region Proposal Extraction for YOLOv3 Network Towards High-Performance Cyclist Detection in High Resolution Images. Sensors. 2019; 19(12):2671.

Chicago/Turabian Style

Chunsheng Liu; Yu Guo; Shuang Li; Faliang Chang. 2019. "ACF Based Region Proposal Extraction for YOLOv3 Network Towards High-Performance Cyclist Detection in High Resolution Images." Sensors 19, no. 12: 2671.

Journal article
Published: 27 August 2018 in IEEE Transactions on Intelligent Transportation Systems

Though license plate detection has been successfully applied in some commercial products, detecting small and vague license plates in real applications is still an open problem. In this paper, we propose a novel hybrid cascade structure for rapidly detecting small and vague license plates in large and complex visual surveillance scenes. For rapid license plate candidate extraction, we propose two cascade detectors, the Cascaded Color Space Transformation of Pixel detector and the Cascaded Contrast-Color Haar-like detector; these two detectors perform coarse-to-fine detection at the front and in the middle of the hybrid cascade. At the end of the hybrid cascade, we propose a cascaded convolutional network structure (Cascaded ConvNet), including two detection-ConvNets and a calibration-ConvNet, designed for fine detection. Through experiments on different evaluation data sets with many small and vague plates, we show that the proposed framework rapidly detects license plates of different resolutions and sizes in large and complex visual surveillance scenes.

ACS Style

Chunsheng Liu; Faliang Chang. Hybrid Cascade Structure for License Plate Detection in Large Visual Surveillance Scenes. IEEE Transactions on Intelligent Transportation Systems 2018, 20, 2122-2135.

AMA Style

Chunsheng Liu, Faliang Chang. Hybrid Cascade Structure for License Plate Detection in Large Visual Surveillance Scenes. IEEE Transactions on Intelligent Transportation Systems. 2018; 20(6):2122-2135.

Chicago/Turabian Style

Chunsheng Liu; Faliang Chang. 2018. "Hybrid Cascade Structure for License Plate Detection in Large Visual Surveillance Scenes." IEEE Transactions on Intelligent Transportation Systems 20, no. 6: 2122-2135.

Journal article
Published: 22 July 2018 in Sensors

With rapid calculation speed and relatively high accuracy, the AdaBoost-based detection framework has been successfully applied in some real applications of machine-vision-based intelligent systems. Its main shortcoming is that the off-line trained detector cannot be retrained by transfer learning to adapt to unknown application scenes. In this paper, a new transfer learning structure based on two novel methods, supplemental boosting and a cascaded ConvNet, is proposed to address this shortcoming. The supplemental boosting method supplementally retrains an AdaBoost-based detector so as to transfer the detector to unknown application scenes. The cascaded ConvNet is designed and attached to the end of the AdaBoost-based detector to improve the detection rate and collect supplemental training samples. With the supplemental training samples provided by the cascaded ConvNet, the AdaBoost-based detector can be retrained with the supplemental boosting method. The combination of the retrained boosted detector and the cascaded ConvNet detector achieves high accuracy and a short detection time. As a representative object detection problem in intelligent transportation systems, the traffic sign detection problem is chosen to demonstrate our method. Through experiments with public datasets from different countries, we show that the proposed framework can quickly detect objects in unknown application scenes.

ACS Style

Chunsheng Liu; Shuang Li; Faliang Chang; Wenhui Dong. Supplemental Boosting and Cascaded ConvNet Based Transfer Learning Structure for Fast Traffic Sign Detection in Unknown Application Scenes. Sensors 2018, 18, 2386.

AMA Style

Chunsheng Liu, Shuang Li, Faliang Chang, Wenhui Dong. Supplemental Boosting and Cascaded ConvNet Based Transfer Learning Structure for Fast Traffic Sign Detection in Unknown Application Scenes. Sensors. 2018; 18(7):2386.

Chicago/Turabian Style

Chunsheng Liu; Shuang Li; Faliang Chang; Wenhui Dong. 2018. "Supplemental Boosting and Cascaded ConvNet Based Transfer Learning Structure for Fast Traffic Sign Detection in Unknown Application Scenes." Sensors 18, no. 7: 2386.

Proceedings article
Published: 01 October 2017 in 2017 Chinese Automation Congress (CAC)

Different types of traffic signs have different colors and shapes and are located in uncontrolled traffic environments, which makes detecting them a difficult problem in pattern recognition and computer vision. In our study, a region of interest (ROI) extraction method is proposed that extracts ROIs using color contrast in local regions. We utilize the high contrast in local regions to extract ROIs for Chinese circular prohibition signs and triangular danger signs; because the most common Chinese signs are red limit signs and yellow prohibitive signs, the method is designed to extract traffic signs with these two colors. Previous color extraction methods largely rely on thresholds in certain color channels, which are not robust to color changes; by relying on local contrast instead, the proposed method is robust to such changes. The experiments demonstrate that the ROI extraction is robust to color variability and can save detection time.

ACS Style

Chunsheng Liu; Faliang Chang. Fast and robust region of interest extraction for Chinese road signs. 2017 Chinese Automation Congress (CAC) 2017, 2877-2881.

AMA Style

Chunsheng Liu, Faliang Chang. Fast and robust region of interest extraction for Chinese road signs. 2017 Chinese Automation Congress (CAC). 2017:2877-2881.

Chicago/Turabian Style

Chunsheng Liu; Faliang Chang. 2017. "Fast and robust region of interest extraction for Chinese road signs." 2017 Chinese Automation Congress (CAC): 2877-2881.

Research article
Published: 01 June 2016 in IET Intelligent Transport Systems

The high variability of sign appearance under partial occlusion in uncontrolled environments has made the detection of traffic signs a challenging problem in computer vision. In this study, an occlusion-robust traffic sign detection framework is proposed. To achieve occlusion-robust detection, a colour cubic feature called the colour cubic local binary pattern (CC-LBP) is proposed to construct a coarse-to-fine cascaded detector. The CC-LBP utilises colour information and a self-adaptive threshold to express multiclass traffic signs, which can effectively remove non-object subwindows in cascade-based detection. The verification experiments show that the proposed CC-LBP feature outperforms previous rectangular features in representing multiclass traffic signs, and that the proposed occlusion-robust detection method can detect partially occluded multiclass traffic signs with high accuracy in real time.
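The CC-LBP builds on the classic local binary pattern, which it extends with colour channels and a self-adaptive threshold. A minimal sketch of the plain LBP primitive only (the colour-cubic extension is not shown):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours against
    the centre pixel and pack the bits into one byte."""
    center = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= center)

patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]])
print(lbp_code(patch))  # 7: only the three top neighbours pass the centre
```

CC-LBP replaces the fixed centre-pixel threshold with a self-adaptive one and compares across colour channels, but the bit-packing idea is the same.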

ACS Style

Chunsheng Liu; Faliang Chang; Chenyun Liu. Occlusion‐robust traffic sign detection via cascaded colour cubic feature. IET Intelligent Transport Systems 2016, 10, 354-360.

AMA Style

Chunsheng Liu, Faliang Chang, Chenyun Liu. Occlusion‐robust traffic sign detection via cascaded colour cubic feature. IET Intelligent Transport Systems. 2016; 10(5):354-360.

Chicago/Turabian Style

Chunsheng Liu; Faliang Chang; Chenyun Liu. 2016. "Occlusion‐robust traffic sign detection via cascaded colour cubic feature." IET Intelligent Transport Systems 10, no. 5: 354-360.

Image and vision processing and display technology
Published: 01 December 2015 in Electronics Letters

Rectangular features have been widely used in cascade-based object detection. Previous rectangular features encode no relationship between different colour channels, which makes them weak at expressing objects with colour contrast. To overcome this shortcoming, a series of rectangular features called split-level colour Haar-like (SC-Haar-like) features is proposed. The SC-Haar-like features span different colour channels, and these channels are closely related to each other through the split-level structure. Experiments on face detection show that the SC-Haar-like features outperform previous rectangular features in terms of detection rate and false alarm rate.
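Like all Haar-like features, the SC-Haar-like features reduce to differences of rectangle sums, which are computed in constant time from an integral image. The sketch below shows only that shared primitive, not the split-level colour structure itself:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum becomes four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x1, y1, x2, y2):
    """Sum of img[y1:y2, x1:x2] from the integral image (exclusive ends)."""
    total = ii[y2 - 1, x2 - 1]
    if x1 > 0:
        total -= ii[y2 - 1, x1 - 1]
    if y1 > 0:
        total -= ii[y1 - 1, x2 - 1]
    if x1 > 0 and y1 > 0:
        total += ii[y1 - 1, x1 - 1]
    return total

img = np.ones((4, 4))
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # 4.0: a 2x2 block of ones
```

A colour-aware Haar-like response is then a difference of such rectangle sums taken over different colour channels.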

ACS Style

Chunsheng Liu; Faliang Chang; Chengyun Liu. Cascaded split‐level colour Haar‐like features for object detection. Electronics Letters 2015, 51, 2106-2107.

AMA Style

Chunsheng Liu, Faliang Chang, Chengyun Liu. Cascaded split‐level colour Haar‐like features for object detection. Electronics Letters. 2015; 51(25):2106-2107.

Chicago/Turabian Style

Chunsheng Liu; Faliang Chang; Chengyun Liu. 2015. "Cascaded split‐level colour Haar‐like features for object detection." Electronics Letters 51, no. 25: 2106-2107.

Journal article
Published: 03 August 2015 in IEEE Transactions on Intelligent Transportation Systems

In this paper, we propose a high-performance traffic sign recognition (TSR) framework to rapidly detect and recognize multiclass traffic signs in high-resolution images. This framework includes three parts: a novel region-of-interest (ROI) extraction method called high-contrast region extraction (HCRE), the split-flow cascade tree detector (SFC-tree detector), and a rapid occlusion-robust traffic sign classification method based on extended sparse representation classification (ESRC). Unlike the color-thresholding or extreme-region extraction used by previous approaches, the HCRE is designed to extract ROIs with high local contrast, which keeps a good balance between the detection rate and the extraction rate. The SFC-tree detector can quickly detect a large number of different types of traffic signs in high-resolution images. The ESRC-based classification method is designed to classify traffic signs with partial occlusion. Instead of solving the sparse representation problem with an overcomplete dictionary, it utilizes a content dictionary and an occlusion dictionary to sparsely represent traffic signs, which largely reduces the dictionary size of the occlusion-robust dictionaries while achieving high accuracy. The experiments demonstrate the advantage of the proposed approach: our TSR framework can rapidly detect and recognize multiclass traffic signs with high accuracy.
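Sparse-representation classification assigns a test sample to the class whose dictionary reconstructs it with the smallest residual. A minimal sketch of that decision rule, substituting least squares for the l1-regularized sparse coding and omitting ESRC's separate occlusion dictionary:

```python
import numpy as np

def src_classify(dicts, y):
    """Classify y by which class dictionary reconstructs it with the
    smallest residual (least-squares stand-in for sparse coding)."""
    residuals = []
    for d in dicts:  # d: (dim, n_atoms) dictionary for one class
        coef, *_ = np.linalg.lstsq(d, y, rcond=None)
        residuals.append(np.linalg.norm(y - d @ coef))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
d0 = rng.normal(size=(8, 3))
d1 = rng.normal(size=(8, 3))
y = d0 @ np.array([1.0, -0.5, 2.0])  # lies in class 0's span
print(src_classify([d0, d1], y))  # 0
```

ESRC augments each class dictionary with a shared occlusion dictionary, so occluded pixels are absorbed by occlusion atoms rather than corrupting the class residuals.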

ACS Style

Chunsheng Liu; Faliang Chang; Zhenxue Chen; Dongmei Liu. Fast Traffic Sign Recognition via High-Contrast Region Extraction and Extended Sparse Representation. IEEE Transactions on Intelligent Transportation Systems 2015, 17, 79-92.

AMA Style

Chunsheng Liu, Faliang Chang, Zhenxue Chen, Dongmei Liu. Fast Traffic Sign Recognition via High-Contrast Region Extraction and Extended Sparse Representation. IEEE Transactions on Intelligent Transportation Systems. 2015; 17(1):79-92.

Chicago/Turabian Style

Chunsheng Liu; Faliang Chang; Zhenxue Chen; Dongmei Liu. 2015. "Fast Traffic Sign Recognition via High-Contrast Region Extraction and Extended Sparse Representation." IEEE Transactions on Intelligent Transportation Systems 17, no. 1: 79-92.

Conference paper
Published: 01 August 2014 in 2014 10th International Conference on Natural Computation (ICNC)

The high variability of sign appearance in uncontrolled environments has made the detection and classification of road signs a challenging problem in computer vision. In this paper, an occlusion-robust traffic sign recognition method is proposed. To achieve occlusion-robust detection, we design a cascaded tree detector based on MN-LBP features. For occlusion-robust traffic sign classification, occlusion-robust dictionaries for the sparse representation of multiclass traffic signs are designed, and the resulting sparse representations are classified with an SVM. The SVM classification results are more robust than those of sparse representation classification (SRC), which makes the decision directly from the representation. Experiments on the test set show that the proposed method detects signs with partial occlusion more robustly and accurately than the SVM- or SRC-based methods.

ACS Style

Chunsheng Liu; Faliang Chang; Zhenxue Chen. High performance traffic sign recognition based on sparse representation and SVM classification. 2014 10th International Conference on Natural Computation (ICNC) 2014, 108-112.

AMA Style

Chunsheng Liu, Faliang Chang, Zhenxue Chen. High performance traffic sign recognition based on sparse representation and SVM classification. 2014 10th International Conference on Natural Computation (ICNC). 2014:108-112.

Chicago/Turabian Style

Chunsheng Liu; Faliang Chang; Zhenxue Chen. 2014. "High performance traffic sign recognition based on sparse representation and SVM classification." 2014 10th International Conference on Natural Computation (ICNC): 108-112.

Journal article
Published: 08 May 2014 in IEEE Transactions on Intelligent Transportation Systems

This paper describes a traffic sign detection (TSD) framework that is capable of rapidly detecting multiclass traffic signs in high-resolution images while achieving a high detection rate. There are three key contributions. The first is the introduction of two features called multiblock normalization local binary pattern (MN-LBP) and tilted MN-LBP (TMN-LBP), which are able to express multiclass traffic signs effectively. The second is a tree structure called split-flow cascade, which utilizes common features of multiclass traffic signs to construct a coarse-to-fine TSD detector. The third contribution is the Common-Finder AdaBoost (CF.AdaBoost) algorithm, which is designed to find common features of different training sets to develop an efficient Split-Flow Cascade tree (SFC-tree) for multiclass TSD. Through experiments with an evaluation data set of high-resolution images, we show that the proposed framework is able to detect multiclass traffic signs with high detection accuracy in real time and that it outperforms the state-of-the-art approaches at detecting a large number of different types of traffic signs rapidly without using any color information.

ACS Style

Chunsheng Liu; Faliang Chang; Zhenxue Chen. Rapid Multiclass Traffic Sign Detection in High-Resolution Images. IEEE Transactions on Intelligent Transportation Systems 2014, 15, 2394-2403.

AMA Style

Chunsheng Liu, Faliang Chang, Zhenxue Chen. Rapid Multiclass Traffic Sign Detection in High-Resolution Images. IEEE Transactions on Intelligent Transportation Systems. 2014; 15(6):2394-2403.

Chicago/Turabian Style

Chunsheng Liu; Faliang Chang; Zhenxue Chen. 2014. "Rapid Multiclass Traffic Sign Detection in High-Resolution Images." IEEE Transactions on Intelligent Transportation Systems 15, no. 6: 2394-2403.