
Dr. Zhenyu Li
Tongji University


Research Keywords & Expertise

Deep Learning
Robot
SLAM
Sensor Applications
Machine and Deep Learning




Feed

Regular paper
Published: 03 June 2021 in Journal of Intelligent & Robotic Systems

Visual scene recognition is an indispensable part of automatic localization and navigation. Within the same scene, appearance and viewpoint may change greatly, which is the greatest challenge for advanced unmanned systems (e.g. robots, vehicles, and UAVs) when identifying scenes they have already visited. Traditional methods have long been bound to hand-crafted, feature-based paradigms, relying mainly on the prior knowledge of the designer, and are not sufficiently robust to extreme scene changes. In this paper, we address scene recognition by automatically learning feature representations from large sets of image samples. Firstly, we propose a novel approach for scene recognition that trains a lightweight ("slight-weight") convolutional neural network (CNN) with an overall less complex, more efficient architecture that is trainable end-to-end. The proposed approach combines self-selected deep-learning features with a light CNN pipeline to achieve high-level semantic understanding of visual scenes. Secondly, we propose a salient region-based technique that extracts the local feature representation of a specific scene region directly from the convolutional layers via a self-selection mechanism, with each layer performing a linear operation in an end-to-end manner. Furthermore, we use probability statistics to compute the total similarity between several regions in one scene and regions in other scenes, and finally rank the similarity scores to select the correct scene. We have conducted extensive experiments comparing four methods (our proposed method and three well-known, state-of-the-art methods). Experimental results show that the proposed method is more robust and accurate than the other three methods in extremely harsh environments (e.g. weak light and strong blur).
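The region-based similarity ranking the abstract describes can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the descriptor extraction and the exact "probability statistics" aggregation are not specified here, so best-match cosine similarity averaged over query regions is assumed.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_scenes(query_regions, database):
    """Rank database scenes by aggregate region similarity to the query.

    query_regions: list of region descriptors (1-D arrays) from the query scene.
    database: dict mapping scene id -> list of region descriptors.
    Each query region is matched to its best database region, and the
    per-region maxima are averaged into a scene-level score.
    """
    scores = {}
    for scene_id, regions in database.items():
        per_region = [max(cosine_similarity(q, r) for r in regions)
                      for q in query_regions]
        scores[scene_id] = sum(per_region) / len(per_region)
    # Highest aggregate similarity first: the top-ranked scene is the match
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

The max-then-average aggregation tolerates partial occlusion: a query region missing from a scene lowers the score without vetoing the match outright.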

ACS Style

Zhenyu Li; Aiguo Zhou. Self-Selection Salient Region-Based Scene Recognition Using Slight-Weight Convolutional Neural Network. Journal of Intelligent & Robotic Systems 2021, 102, 1-16.

AMA Style

Zhenyu Li, Aiguo Zhou. Self-Selection Salient Region-Based Scene Recognition Using Slight-Weight Convolutional Neural Network. Journal of Intelligent & Robotic Systems. 2021;102(3):1-16.

Chicago/Turabian Style

Zhenyu Li; Aiguo Zhou. 2021. "Self-Selection Salient Region-Based Scene Recognition Using Slight-Weight Convolutional Neural Network." Journal of Intelligent & Robotic Systems 102, no. 3: 1-16.

Journal article
Published: 11 March 2020 in Sensors

Scene recognition is an essential part of the vision-based robot navigation domain. The successful application of deep learning has triggered extensive preliminary studies on scene recognition, all of which use features extracted from networks trained for recognition tasks. In this paper, we interpret scene recognition as a region-based image retrieval problem and present a novel approach built on an end-to-end trainable multi-column convolutional neural network (MCNN) architecture. The proposed MCNN uses filters with receptive fields of different sizes to achieve multi-level, multi-layer image perception, and consists of three components: a front-end, a middle-end, and a back-end. The first seven layers of VGG16 serve as the front-end for two-dimensional feature extraction, Inception-A serves as the middle-end for deeper feature representation learning, and the Large-Margin Softmax Loss (L-Softmax) serves as the back-end, enhancing intra-class compactness and inter-class separability. Extensive experiments compare our proposed network with existing state-of-the-art methods. Experimental results on three popular datasets demonstrate the robustness and accuracy of our approach. To the best of our knowledge, the presented approach has not previously been applied to scene recognition in the literature.

ACS Style

Zhenyu Li; Aiguo Zhou; Yong Shen. An End-to-End Trainable Multi-Column CNN for Scene Recognition in Extremely Changing Environment. Sensors 2020, 20, 1556.

AMA Style

Zhenyu Li, Aiguo Zhou, Yong Shen. An End-to-End Trainable Multi-Column CNN for Scene Recognition in Extremely Changing Environment. Sensors. 2020;20(6):1556.

Chicago/Turabian Style

Zhenyu Li; Aiguo Zhou; Yong Shen. 2020. "An End-to-End Trainable Multi-Column CNN for Scene Recognition in Extremely Changing Environment." Sensors 20, no. 6: 1556.

Journal article
Published: 04 August 2019 in Applied Sciences

The in-vehicle controller area network (CAN) bus is one of the essential components of autonomous vehicles, and its safety will be one of the greatest challenges in the field of intelligent vehicles in the future. In this paper, we propose a novel system that uses a deep neural network (DNN) to detect anomalous CAN bus messages. We treat anomaly detection as a cross-domain modelling problem, in which groups of three CAN bus data packets are fed directly into the DNN architecture for parallel training with shared weights. Each group is then represented as three independent feature vectors, corresponding to three types of data sequences: anchor, positive, and negative. The proposed DNN architecture is an embedded triplet-loss network that optimizes the distance between the anchor and positive examples so that it is smaller than the distance between the anchor and negative examples, realizing a similarity computation over samples, a technique originally used in face recognition. Compared to traditional anomaly detection methods, learning the parameters with shared weights improves both detection efficiency and detection accuracy. The whole detection system consists of a front-end and a back-end, corresponding to the deep network and the triplet-loss network respectively, and is trainable end-to-end. Experimental results demonstrate that the proposed system responds to CAN bus anomalies and attacks in real time and significantly improves the detection ratio. To the best of our knowledge, the proposed method is the first applied to anomaly detection on the in-vehicle CAN bus.
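The triplet objective described above reduces to a short formula. The sketch below assumes squared-Euclidean distances and a fixed margin, as in the standard FaceNet-style formulation; the paper's embedding network and margin value are not reproduced here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss over one (anchor, positive, negative) group.

    Encourages d(anchor, positive) + margin <= d(anchor, negative),
    i.e. normal CAN traffic (positive) embeds near the anchor while
    anomalous traffic (negative) is pushed at least `margin` away.
    Distances are squared Euclidean.
    """
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # Hinge: zero loss once the margin constraint is already satisfied
    return float(max(0.0, d_pos - d_neg + margin))
```

At detection time, a new message whose embedding distance to a known-normal anchor exceeds the learned threshold can be flagged as anomalous.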

ACS Style

Aiguo Zhou; Zhenyu Li; Yong Shen. Anomaly Detection of CAN Bus Messages Using A Deep Neural Network for Autonomous Vehicles. Applied Sciences 2019, 9, 3174.

AMA Style

Aiguo Zhou, Zhenyu Li, Yong Shen. Anomaly Detection of CAN Bus Messages Using A Deep Neural Network for Autonomous Vehicles. Applied Sciences. 2019;9(15):3174.

Chicago/Turabian Style

Aiguo Zhou; Zhenyu Li; Yong Shen. 2019. "Anomaly Detection of CAN Bus Messages Using A Deep Neural Network for Autonomous Vehicles." Applied Sciences 9, no. 15: 3174.