Precise and drift-free motion estimation is an essential technology for autonomous driving. Single-sensor methods, whether laser-based or vision-based, have proven inadequate. To address this problem, we propose an optimization-based fusion approach that incorporates information from complementary sensors to achieve high accuracy and globally drift-free estimation. The core idea is to construct a globally unified pose graph through a dual-layer optimization strategy. The local estimation layer obtains relative poses through LiDAR odometry and visual-inertial odometry. Subsequently, by introducing the absolute geographic position information from GPS, the accumulated drift is corrected in the global optimization layer. The performance of our approach has been evaluated both in real-world environments and on public datasets. The results demonstrate that our approach outperforms other state-of-the-art algorithms, with an average translation error of 0.8045% and an average rotation error of 0.0043 deg/m.
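The dual-layer idea above can be illustrated with a deliberately simplified sketch (not the authors' implementation): local odometry supplies relative motions that accumulate drift, while sparse absolute GPS fixes anchor the global least-squares solution. A minimal 1D pose-graph version, assuming scalar positions and hand-picked weights:

```python
import numpy as np

def fuse(odom, gps, w_odom=1.0, w_gps=10.0):
    """Weighted least-squares pose-graph fusion on a 1D trajectory.

    odom : list of relative displacements between consecutive poses
    gps  : dict {pose_index: absolute position} of sparse GPS fixes
    """
    n = len(odom) + 1                    # number of poses
    rows, rhs = [], []
    for i, d in enumerate(odom):         # odometry edges: x[i+1] - x[i] = d
        r = np.zeros(n); r[i + 1] = w_odom; r[i] = -w_odom
        rows.append(r); rhs.append(w_odom * d)
    for j, z in gps.items():             # GPS priors: x[j] = z
        r = np.zeros(n); r[j] = w_gps
        rows.append(r); rhs.append(w_gps * z)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x

# Odometry with a constant bias drifts: ten steps of 1.1 instead of 1.0
# would place the final pose at 11.0. GPS fixes at both ends pull the
# optimized trajectory back toward the true endpoint of 10.0.
odom = [1.1] * 10
gps = {0: 0.0, 10: 10.0}
traj = fuse(odom, gps)
```

A full system solves the same kind of problem over 6-DoF poses with nonlinear edge terms, but the structure, relative constraints from odometry plus absolute priors from GPS, is the same.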
Ke Wang; Chuan Cao; Sai Ma; Fan Ren. An Optimization-based Multi-Sensor Fusion Approach Towards Global Drift-Free Motion Estimation. IEEE Sensors Journal 2021, PP, 1–1.
Visual odometry (VO) is a prevalent way to deal with the relative localization problem; it has become increasingly mature and accurate, but tends to be fragile in challenging environments. Compared with classical geometry-based methods, deep learning-based methods can automatically learn effective and robust representations, such as depth, optical flow, features, and ego-motion, from data without explicit computation. Nevertheless, a thorough review of recent advances in deep learning-based VO (Deep VO) is still lacking. This paper therefore aims to gain deep insight into how deep learning can benefit and optimize VO systems. We first identify a set of criteria, including accuracy, efficiency, scalability, dynamicity, practicability, and extensibility. Then, using these criteria as uniform measurements, we evaluate and discuss in detail how deep learning improves the performance of VO in terms of depth estimation, feature extraction and matching, and pose estimation. We also summarize the complicated and emerging application areas of Deep VO, such as mobile robots, medical robots, and augmented and virtual reality. Through literature decomposition, analysis, and comparison, we finally put forward a number of open issues and suggest future research directions in this field.
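To make the pose-estimation aspect concrete, the following toy sketch (illustrative only, not taken from the survey) shows the classical geometric core that learned features feed into: recovering a 2D rigid motion (rotation plus translation) from matched keypoints with a closed-form Procrustes/Kabsch solution. In a learning-based pipeline, the matches themselves would come from learned feature extraction and matching rather than hand-crafted descriptors.

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Best-fit rotation R and translation t such that dst ≈ src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic noiseless "matches": rotate random keypoints by 30 degrees
# and translate them, then recover that motion from the correspondences.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 2))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
matched = pts @ R_true.T + np.array([2.0, -1.0])
R_est, t_est = estimate_rigid_2d(pts, matched)
```

Real VO estimates a 6-DoF motion from 2D image correspondences with outliers, so robust estimators (e.g., RANSAC) wrap this kind of closed-form solver; the sketch only isolates the alignment step.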
Ke Wang; Sai Ma; Junlan Chen; Fan Ren; Jianbo Lu. Approaches, Challenges, and Applications for Deep Visual Odometry: Toward Complicated and Emerging Areas. IEEE Transactions on Cognitive and Developmental Systems 2020, PP, 1–1.