Ke Sun
Shenzhen Key Laboratory of Spatial Smart Sensing and Services & Key Laboratory for Geo-Environmental Monitoring of Coastal Zone of the National Administration of Surveying, Mapping and GeoInformation, Shenzhen University, Shenzhen 518060, China



Feed

Journal article
Published: 03 January 2019 in Remote Sensing

This paper presents a novel indoor topological localization method based on mobile phone videos. Conventional methods suffer from indoor dynamic environmental changes and scene ambiguity. The proposed Visual Landmark Sequence-based Indoor Localization (VLSIL) method addresses these problems by taking steady indoor objects as landmarks. Unlike many feature- or appearance-matching-based localization methods, our method utilizes highly abstracted landmark semantic information to represent locations and is thus invariant to illumination changes, temporal variations, and occlusions. We match consistently detected landmarks against the topological map based on their occurrence order in the videos. The proposed approach contains two components: a convolutional neural network (CNN)-based landmark detector and a topological matching algorithm. The detector reliably and accurately detects landmarks. The matching algorithm is built on a second-order hidden Markov model and successfully handles environmental ambiguity by fusing semantic and connectivity information of landmarks. To evaluate the method, we conduct extensive experiments on a real-world dataset collected in two indoor environments. The results show that our deep neural network-based indoor landmark detector accurately detects all landmarks, that it is expected to be usable in similar environments without retraining, and that VLSIL can effectively localize indoor landmarks.
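To make the matching component concrete, here is a minimal sketch of second-order Viterbi decoding over a toy topological map. The map, landmark names, and probability values are all invented for illustration; they are not from the paper, which fuses semantic and connectivity information in the same spirit.

```python
from itertools import product

# Toy topological map (hypothetical): nodes are landmark locations,
# edges encode which landmark can be reached next while walking.
ADJACENCY = {
    "door_A": {"extinguisher", "door_B"},
    "extinguisher": {"door_A", "stairs"},
    "door_B": {"door_A", "stairs"},
    "stairs": {"extinguisher", "door_B"},
}

def emission(node, label):
    # Semantic term: high probability when the detected class matches
    # the node's landmark class, small otherwise (tolerates misdetections).
    return 0.9 if node.split("_")[0] == label else 0.05

def transition(prev2, prev1, cur):
    # Connectivity term: consecutive landmarks must be map neighbours;
    # the second-order dependency penalises immediate backtracking.
    if cur not in ADJACENCY[prev1]:
        return 0.0
    return 0.2 if cur == prev2 else 1.0

def localize(observations):
    """Most likely node sequence for a detected landmark class sequence."""
    nodes = list(ADJACENCY)
    # Viterbi state is an ordered pair of connected nodes (second order).
    best = {
        (a, b): (emission(a, observations[0]) * emission(b, observations[1]), [a, b])
        for a, b in product(nodes, nodes) if b in ADJACENCY[a]
    }
    for obs in observations[2:]:
        nxt = {}
        for (a, b), (p, path) in best.items():
            for c in nodes:
                q = p * transition(a, b, c) * emission(c, obs)
                if q > nxt.get((b, c), (0.0, []))[0]:
                    nxt[(b, c)] = (q, path + [c])
        best = nxt
    return max(best.values())[1]
```

Because each Viterbi state is a pair of nodes rather than a single node, the decoder can distinguish walking onward from turning back, which is one way a second-order model resolves scene ambiguity that a first-order model cannot.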

ACS Style

Jiasong Zhu; Qing Li; Rui Cao; Ke Sun; Tao Liu; Jonathan M. Garibaldi; Qingquan Li; Bozhi Liu; Guoping Qiu. Indoor Topological Localization Using a Visual Landmark Sequence. Remote Sensing 2019, 11, 73.

AMA Style

Jiasong Zhu, Qing Li, Rui Cao, Ke Sun, Tao Liu, Jonathan M. Garibaldi, Qingquan Li, Bozhi Liu, Guoping Qiu. Indoor Topological Localization Using a Visual Landmark Sequence. Remote Sensing. 2019;11(1):73.

Chicago/Turabian Style

Jiasong Zhu; Qing Li; Rui Cao; Ke Sun; Tao Liu; Jonathan M. Garibaldi; Qingquan Li; Bozhi Liu; Guoping Qiu. 2019. "Indoor Topological Localization Using a Visual Landmark Sequence." Remote Sensing 11, no. 1: 73.

Journal article
Published: 15 November 2018 in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

This paper presents an advanced urban traffic density estimation solution that uses the latest deep learning techniques to intelligently process ultrahigh-resolution traffic videos taken from an unmanned aerial vehicle (UAV). We first capture nearly an hour of ultrahigh-resolution traffic video at five busy road intersections of a modern megacity by flying a UAV during the rush hours. We then randomly sample over 17K 512 × 512 pixel image patches from the video frames and manually annotate over 64K vehicles to form a dataset for this paper, which will also be made available to the research community for research purposes. Our innovative urban traffic analysis solution consists of advanced deep neural network (DNN)-based vehicle detection and localization, type (car, bus, and truck) recognition, tracking, and vehicle counting over time. We present extensive experimental results to demonstrate the effectiveness of our solution. We show that our enhanced single shot multibox detector (Enhanced-SSD) outperforms other DNN-based techniques and that deep learning techniques are more effective than traditional computer vision techniques in traffic video analysis. We also show that ultrahigh-resolution video provides more information, enabling more accurate vehicle detection and recognition than lower-resolution content. This paper not only demonstrates the advantages of using the latest technological advancements (ultrahigh-resolution video and UAVs), but also provides an advanced DNN-based solution for exploiting these advancements for urban traffic density estimation.
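As an illustration of the tracking-and-counting step, the sketch below associates per-frame detector boxes across frames by greedy IoU matching and counts the distinct tracks. It is a simplified stand-in for the paper's pipeline: the box values, the IoU threshold, and the single-frame track expiry are all assumptions made for the example.

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection over union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_vehicles(frames, iou_thresh=0.3):
    """frames: list of per-frame detection box lists.
    Returns the number of distinct tracks (vehicles) seen over time."""
    next_id = 0
    tracks = {}  # track id -> last matched box
    for boxes in frames:
        assigned, used = {}, set()
        for box in boxes:
            # Match each detection to the best unmatched existing track.
            best_id, best_iou = None, iou_thresh
            for tid, last in tracks.items():
                if tid in used:
                    continue
                v = iou(box, last)
                if v > best_iou:
                    best_id, best_iou = tid, v
            if best_id is None:
                best_id = next_id  # no overlap above threshold: new vehicle
                next_id += 1
            used.add(best_id)
            assigned[best_id] = box
        tracks = assigned  # tracks unmatched in this frame expire
    return next_id
```

A real tracker would keep unmatched tracks alive for a few frames and add motion prediction, but the association-then-count structure is the same.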

ACS Style

Jiasong Zhu; Ke Sun; Sen Jia; Qingquan Li; Xianxu Hou; Weidong Lin; Bozhi Liu; Guoping Qiu. Urban Traffic Density Estimation Based on Ultrahigh-Resolution UAV Video and Deep Neural Network. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2018, 11, 4968-4981.

AMA Style

Jiasong Zhu, Ke Sun, Sen Jia, Qingquan Li, Xianxu Hou, Weidong Lin, Bozhi Liu, Guoping Qiu. Urban Traffic Density Estimation Based on Ultrahigh-Resolution UAV Video and Deep Neural Network. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2018;11(12):4968-4981.

Chicago/Turabian Style

Jiasong Zhu; Ke Sun; Sen Jia; Qingquan Li; Xianxu Hou; Weidong Lin; Bozhi Liu; Guoping Qiu. 2018. "Urban Traffic Density Estimation Based on Ultrahigh-Resolution UAV Video and Deep Neural Network." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11, no. 12: 4968-4981.

Journal article
Published: 06 June 2018 in Remote Sensing

Vehicle behavior recognition is an attractive research field which is useful for many computer vision and intelligent traffic analysis tasks. This paper presents an all-in-one behavior recognition framework for moving vehicles based on the latest deep learning techniques. Unlike traditional traffic analysis methods, which rely on low-resolution videos captured by road cameras, we capture 4K (3840×2178) traffic videos at a busy road intersection of a modern megacity by flying an unmanned aerial vehicle (UAV) during the rush hours. We then manually annotate the locations and types of road vehicles. The proposed method consists of the following three steps: (1) vehicle detection and type recognition based on deep neural networks; (2) vehicle tracking by data association and vehicle trajectory modeling; and (3) vehicle behavior recognition by nearest neighbor search and by a bidirectional long short-term memory network, respectively. This paper also presents experimental results of the proposed framework in comparison with state-of-the-art approaches on the 4K test traffic video, which demonstrate the effectiveness and superiority of the proposed method.
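As a toy illustration of the nearest-neighbor branch of step (3), the sketch below resamples vehicle trajectories to a fixed length and labels a query trajectory by its nearest labeled prototype. The prototype trajectories and behavior labels are invented for the example, and the paper's BiLSTM branch is not shown.

```python
import math

def resample(traj, n=8):
    # Linearly resample a trajectory of (x, y) points to n points so
    # trajectories of different lengths become directly comparable.
    out = []
    for i in range(n):
        t = i * (len(traj) - 1) / (n - 1)
        j = min(int(t), len(traj) - 2)
        f = t - j
        x = traj[j][0] * (1 - f) + traj[j + 1][0] * f
        y = traj[j][1] * (1 - f) + traj[j + 1][1] * f
        out.append((x, y))
    return out

def distance(a, b):
    # Sum of pointwise Euclidean distances between two resampled tracks.
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b))

def classify(traj, prototypes):
    """prototypes: {behavior_label: prototype trajectory}.
    Returns the label of the nearest prototype."""
    sample = resample(traj)
    return min(prototypes, key=lambda k: distance(sample, resample(prototypes[k])))
```

With prototypes such as a straight pass-through and a left turn, a detected vehicle track is labeled by whichever motion pattern it most closely follows; the BiLSTM in the paper learns this mapping instead of relying on hand-picked prototypes.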

ACS Style

Jiasong Zhu; Ke Sun; Sen Jia; Weidong Lin; Xianxu Hou; Bozhi Liu; Guoping Qiu. Bidirectional Long Short-Term Memory Network for Vehicle Behavior Recognition. Remote Sensing 2018, 10, 887.

AMA Style

Jiasong Zhu, Ke Sun, Sen Jia, Weidong Lin, Xianxu Hou, Bozhi Liu, Guoping Qiu. Bidirectional Long Short-Term Memory Network for Vehicle Behavior Recognition. Remote Sensing. 2018;10(6):887.

Chicago/Turabian Style

Jiasong Zhu; Ke Sun; Sen Jia; Weidong Lin; Xianxu Hou; Bozhi Liu; Guoping Qiu. 2018. "Bidirectional Long Short-Term Memory Network for Vehicle Behavior Recognition." Remote Sensing 10, no. 6: 887.