Haitao Yang


Feed

Journal article
Published: 05 June 2019 in Symmetry

In this paper, our goal is to improve the recognition accuracy of battlefield target aggregation behavior while maintaining the low computational cost of spatio-temporal deep neural networks. To this end, we propose a novel 3D-CNN (3D convolutional neural network) model, which extends the idea of multi-scale feature fusion to the spatio-temporal domain and enhances the feature-extraction ability of the network by combining feature maps from different convolutional layers. To reduce the computational complexity of the network, we further improve the multi-fiber network and establish a two-stream 3D-convolution architecture based on multi-scale feature fusion. Extensive experimental results on simulation data show that our network significantly boosts the efficiency of existing convolutional neural networks in aggregation behavior recognition, achieving state-of-the-art performance on the dataset constructed in this paper.
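The abstract's core idea, fusing feature maps from different convolutional layers after bringing them onto a common spatio-temporal grid, can be sketched as follows. This is an illustrative NumPy sketch only: the tensor shapes and the nearest-neighbour upsampling are assumptions for the example, not the paper's actual layer configuration.

```python
import numpy as np

def fuse_multiscale(shallow, deep):
    """Fuse feature maps from two convolutional stages.

    shallow: (C1, T, H, W) feature map from an early layer.
    deep:    (C2, T//2, H//2, W//2) feature map from a later layer.

    The deeper map is upsampled 2x in time, height, and width by
    nearest-neighbour repetition so both maps share one grid, then
    the two are concatenated along the channel axis.
    """
    up = deep.repeat(2, axis=1).repeat(2, axis=2).repeat(2, axis=3)
    return np.concatenate([shallow, up], axis=0)

shallow = np.random.rand(16, 8, 32, 32)  # early-layer features
deep = np.random.rand(32, 4, 16, 16)     # later-layer features
fused = fuse_multiscale(shallow, deep)
print(fused.shape)  # -> (48, 8, 32, 32)
```

In a real network the concatenated map would feed a further 3D convolution; here the sketch only shows the alignment-then-concatenation step that multi-scale fusion relies on.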

ACS Style

Haiyang Jiang; Yaozong Pan; Jian Zhang; Haitao Yang. Battlefield Target Aggregation Behavior Recognition Model Based on Multi-Scale Feature Fusion. Symmetry 2019, 11, 761.

AMA Style

Haiyang Jiang, Yaozong Pan, Jian Zhang, Haitao Yang. Battlefield Target Aggregation Behavior Recognition Model Based on Multi-Scale Feature Fusion. Symmetry. 2019; 11(6):761.

Chicago/Turabian Style

Haiyang Jiang; Yaozong Pan; Jian Zhang; Haitao Yang. 2019. "Battlefield Target Aggregation Behavior Recognition Model Based on Multi-Scale Feature Fusion." Symmetry 11, no. 6: 761.

Journal article
Published: 24 April 2019 in Symmetry

Using expert samples to improve the performance of reinforcement learning (RL) algorithms has become a major research focus. However, in different application scenarios it is hard to guarantee both the quantity and the quality of expert samples, which limits the practical applicability and performance of such algorithms. In this paper, a novel RL decision-optimization method is proposed. The proposed method reduces the dependence on expert samples by incorporating a decision-making evaluation mechanism. By introducing supervised learning (SL), our method optimizes the decision making of the RL algorithm using demonstrations or expert samples. Experiments are conducted in the Pendulum and Puckworld scenarios to test the proposed method, with representative algorithms such as deep Q-network (DQN) and double DQN (DDQN) as benchmarks. The results demonstrate that the proposed method can effectively improve the decision-making performance of agents even when expert samples are not available.
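The combination the abstract describes, a value-based RL update augmented with a supervised term derived from expert demonstrations, can be sketched in a toy tabular setting. Everything below is an illustrative assumption: the chain environment, the large-margin rule, and all hyperparameters are chosen for the example and are not the paper's formulation.

```python
import numpy as np

# Tabular Q-learning on a toy 4-state chain, augmented with a
# supervised large-margin term that pushes the expert's action
# above all other actions in demonstrated states.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, margin, sl_weight = 0.1, 0.9, 1.0, 0.5

def td_update(s, a, r, s_next):
    # Standard one-step Q-learning (TD) update.
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

def supervised_update(s, expert_a):
    # Supervised term: the expert action should beat every other
    # action by at least `margin`; nudge Q toward that constraint.
    for a in range(n_actions):
        if a != expert_a and Q[s, a] + margin > Q[s, expert_a]:
            Q[s, expert_a] += alpha * sl_weight * margin
            Q[s, a] -= alpha * sl_weight * margin

# Expert demonstrations: always take action 1 along the chain.
for episode in range(200):
    for s in range(n_states - 1):
        td_update(s, 1, 1.0, s + 1)
        supervised_update(s, 1)

print(np.argmax(Q, axis=1))  # expert action preferred in visited states
```

The interplay is the point of the sketch: the TD update learns values from reward, while the supervised term biases the greedy policy toward the demonstrated actions, which is useful when reward alone is slow to shape behavior.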

ACS Style

Yaozong Pan; Jian Zhang; Chunhui Yuan; Haitao Yang. Supervised Reinforcement Learning via Value Function. Symmetry 2019, 11, 590.

AMA Style

Yaozong Pan, Jian Zhang, Chunhui Yuan, Haitao Yang. Supervised Reinforcement Learning via Value Function. Symmetry. 2019; 11(4):590.

Chicago/Turabian Style

Yaozong Pan; Jian Zhang; Chunhui Yuan; Haitao Yang. 2019. "Supervised Reinforcement Learning via Value Function." Symmetry 11, no. 4: 590.