For a rotating 2D lidar, inaccurate matching between the 2D lidar and the motor is an important error source for the 3D point cloud, manifesting as errors in both shape and attitude. Existing methods must measure the angular position of the motor shaft in real time to synchronize the 2D lidar data with the motor shaft angle. However, the sensor used for this measurement is usually expensive, which increases the cost. Therefore, we propose a low-cost method to calibrate the matching error between the 2D lidar and the motor without using an angular sensor. First, the sequence matching between the motor and the 2D lidar is optimized to eliminate the shape error of the 3D point cloud. Next, we eliminate the uncertain attitude error of the 3D point cloud by installing a triangular plate on the prototype. Finally, the Levenberg–Marquardt method is used to calibrate the installation error of the triangular plate. Experiments verified that the accuracy of our method meets the requirements of 3D mapping for indoor autonomous mobile robots. Although the 2D lidar used in our prototype, a Hokuyo UST-10LX, has an accuracy of ±40 mm, we can limit the mapping error to within ±50 mm at distances up to 2.2996 m for a 1 s scan (mode 1), and to within ±50 mm at a measuring range of 10 m for a 16 s scan (mode 7). Our method reduces cost while ensuring accuracy, which makes a rotating 2D lidar cheaper.
Chang Yuan; Shusheng Bi; Jun Cheng; Dongsheng Yang; Wei Wang. Low-Cost Calibration of Matching Error between Lidar and Motor for a Rotating 2D Lidar. Applied Sciences 2021, 11, 913.
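The Levenberg–Marquardt refinement in the paper above can be sketched as a small nonlinear least-squares problem. This is a hypothetical illustration, not the authors' implementation: the plate geometry, the parameters (a yaw offset `theta` and a translation `tx, ty`), and the synthetic data are all invented, and SciPy's `least_squares` with `method='lm'` stands in for their solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical setup: recover an angular installation offset (theta) and a
# translation (tx, ty) that align ideal plate points with measured points.
rng = np.random.default_rng(0)
true_theta, true_t = 0.05, np.array([0.02, -0.01])

model_pts = rng.uniform(-1, 1, size=(30, 2))           # ideal plate points
R = np.array([[np.cos(true_theta), -np.sin(true_theta)],
              [np.sin(true_theta),  np.cos(true_theta)]])
measured = model_pts @ R.T + true_t                    # noiseless "measurements"

def residuals(p):
    # Residual: transformed model points minus the measured points.
    th, tx, ty = p
    Rp = np.array([[np.cos(th), -np.sin(th)],
                   [np.sin(th),  np.cos(th)]])
    return (model_pts @ Rp.T + [tx, ty] - measured).ravel()

# Levenberg-Marquardt refinement from a zero initial guess.
sol = least_squares(residuals, x0=[0.0, 0.0, 0.0], method='lm')
```

With noiseless synthetic data the solver recovers the planted offset essentially exactly; on real scans the residual would be built from the plate's measured point cloud instead.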
Occupancy grid maps are sufficient for mobile robots to complete metric navigation tasks in domestic environments. However, they lack the semantic information needed to endow robots with socially aware goal selection and human-friendly operation modes. In this paper, we propose an object semantic grid mapping system with 2D Light Detection and Ranging (LiDAR) and RGB-D sensors to solve this problem. First, we use laser-based Simultaneous Localization and Mapping (SLAM) to generate an occupancy grid map and obtain the robot trajectory. Then, we employ object detection to extract object semantics from color images and use joint interpolation to refine the camera poses. Based on the object detections, depth images, and interpolated poses, we build a point cloud with object instances. To generate object-oriented minimum bounding rectangles, we propose a method for extracting the dominant directions of the room. Furthermore, we build object goal spaces to help robots select navigation goals conveniently and socially. We have used the [email protected] dataset to verify the system; the verification results show that our system is effective.
Xianyu Qi; Wei Wang; Ziwei Liao; Xiaoyu Zhang; Dongsheng Yang; Ran Wei. Object Semantic Grid Mapping with 2D LiDAR and RGB-D Camera for Domestic Robot Navigation. Applied Sciences 2020, 10, 5782.
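The object-oriented minimum bounding rectangle step above can be illustrated in a few lines: once a dominant room direction is known, rotate the object's points into that frame, take an axis-aligned box, and rotate the corners back. This is a minimal sketch that assumes the dominant angle is already given; the paper's own dominant-direction extraction is not reproduced here.

```python
import numpy as np

def oriented_bbox(points, dominant_angle):
    """Minimum bounding rectangle aligned with the room's dominant direction.

    points: (N, 2) array of 2D object points in the map frame.
    dominant_angle: assumed known room direction, in radians.
    Returns the four rectangle corners in the map frame.
    """
    # Rotate points into the dominant-direction frame.
    c, s = np.cos(-dominant_angle), np.sin(-dominant_angle)
    R = np.array([[c, -s], [s, c]])
    local = points @ R.T

    # Axis-aligned bounds in that frame.
    lo, hi = local.min(axis=0), local.max(axis=0)
    corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                        [hi[0], hi[1]], [lo[0], hi[1]]])

    # Rotate the corners back to the map frame.
    return corners @ np.linalg.inv(R).T
```

For an axis-aligned room (`dominant_angle = 0`) this reduces to the ordinary axis-aligned bounding box of the object's points.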
State-of-the-art visual simultaneous localization and mapping (V-SLAM) systems have highly accurate localization capabilities and impressive mapping effects. However, most of these systems assume that the operating environment is static, thereby limiting their application in the real dynamic world. In this paper, by fusing the information of an RGB-D camera and two encoders mounted on a differential-drive robot, we aim to estimate the motion of the robot and construct a static background OctoMap in both dynamic and static environments. A tightly coupled feature-based method is proposed to fuse the two types of information based on optimization. Dynamic pixels occupied by dynamic objects are detected and culled to cope with dynamic environments. The ability to identify dynamic pixels on both predefined and undefined dynamic objects is achieved by combining a CPU-based object detection method with a multiview constraint-based approach. We first construct local sub-OctoMaps using the keyframes and then fuse the sub-OctoMaps into a full OctoMap. This submap-based approach gives the OctoMap the ability to deform and significantly reduces the map updating time and memory costs. We evaluated the proposed system in various dynamic and static scenes. The results show that our system possesses competitive pose accuracy and high robustness, as well as the ability to construct a clean static OctoMap in dynamic scenes.
Dongsheng Yang; Shusheng Bi; Wei Wang; Chang Yuan; Xianyu Qi; Yueri Cai. DRE-SLAM: Dynamic RGB-D Encoder SLAM for a Differential-Drive Robot. Remote Sensing 2019, 11, 380.
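The multiview constraint mentioned above can be approximated by a depth-consistency test: a pixel whose observed depth disagrees strongly with the depth predicted by warping an earlier keyframe is likely to lie on a moving object. This is a deliberately simplified sketch, not the paper's exact criterion; the warping step is assumed done and the threshold is an invented value.

```python
import numpy as np

def is_dynamic(depth_obs, depth_pred, thresh=0.10):
    """Flag pixels as dynamic by depth disagreement (simplified sketch).

    depth_obs:  observed depth for each pixel, in meters (0 = invalid).
    depth_pred: depth predicted from a previous keyframe, in meters.
    Returns a boolean mask of likely-dynamic pixels.
    """
    valid = (depth_obs > 0) & (depth_pred > 0)         # ignore missing depth
    return valid & (np.abs(depth_obs - depth_pred) > thresh)
```

Pixels flagged this way would then be culled before features on them enter the map, which is how undefined (not-in-the-detector) dynamic objects can still be handled.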
This paper presents an automatic method that simultaneously calibrates the odometry parameters and the relative pose between a monocular camera and a robot. Most camera pose estimation methods use natural features or artificial landmark tools. However, natural features suffer from mismatches and scale ambiguity, and a large-scale, high-precision landmark tool is challenging to manufacture. To solve these problems, we propose an automatic process that combines multiple composite targets, selects keyframes, and estimates keyframe poses. Each composite target consists of an ArUco marker and a checkerboard pattern. First, an analytical method is applied to obtain initial values of all calibration parameters; prior knowledge of the calibration parameters is not required. Then, two optimization steps are used to refine the calibration parameters. Planar motion constraints of the camera are introduced in these optimizations. The proposed solution is automatic; manual selection of keyframes, manual initial values, and driving the robot along a specific trajectory are not required. The competitive accuracy and stability of the proposed method under different target placements and robot paths are tested experimentally. Positive effects on calibration accuracy and stability are obtained when (1) composite targets are adopted; (2) two optimization steps are used; (3) planar motion constraints are introduced; and (4) the number of targets is increased.
Shusheng Bi; Dongsheng Yang; Yueri Cai. Automatic Calibration of Odometry and Robot Extrinsic Parameters Using Multi-Composite-Targets for a Differential-Drive Robot with a Camera. Sensors 2018, 18, 3097.
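The planar motion constraint used in the optimizations above can be illustrated by projecting a general 6-DoF pose onto the planar-motion manifold: only yaw and the x-y translation survive, with the height pinned to a fixed value. A minimal sketch under the assumption that the motion plane is z = const; this is an illustration of the constraint, not the paper's parameterization.

```python
import numpy as np

def project_to_planar(T, z_fixed=0.0):
    """Project a 4x4 homogeneous pose onto planar motion (sketch).

    Keeps only the yaw component of the rotation and the x-y translation;
    roll, pitch, and the z translation are discarded.
    """
    yaw = np.arctan2(T[1, 0], T[0, 0])       # yaw from the rotation block
    c, s = np.cos(yaw), np.sin(yaw)
    Tp = np.eye(4)
    Tp[:2, :2] = [[c, -s], [s, c]]           # pure-yaw rotation
    Tp[0, 3], Tp[1, 3], Tp[2, 3] = T[0, 3], T[1, 3], z_fixed
    return Tp
```

In an optimization, the same idea is usually applied in reverse: the camera pose is parameterized directly by (x, y, yaw) so the constraint holds by construction rather than by projection.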
We present an approach to efficiently recognize human posture with a multi-class support vector machine (SVM). To obtain the features that are input to the SVM, the approach uses skeleton information obtained from a two-dimensional (2D) image and then maps it into three-dimensional (3D) space using depth information. A body coordinate system is established to ensure that the same postures have similar features. To deal with the problem of occlusion, we generate interpolating points using an interpolation algorithm. The features contain both the 3D information of the interpolating points and angle information related to the joints and interpolating points. A dataset of five postures is built to verify the effectiveness of the approach. The experimental results show that the approach reaches a recognition accuracy of 97.9%. Furthermore, the average time cost of feature extraction and SVM posture recognition after obtaining skeleton information is only 0.483 ms, which meets real-time application requirements.
Bo Cao; Shusheng Bi; Jingxiang Zheng; Dongsheng Yang. Human Posture Recognition Using Skeleton and Depth Information. 2018 WRC Symposium on Advanced Robotics and Automation (WRC SARA) 2018, 275-280.
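A multi-class SVM posture classifier of the kind described above can be sketched with scikit-learn. The features here are synthetic stand-ins for the paper's skeleton-derived 3D-point and joint-angle features, and the five cluster centers are an invented toy analogue of the five postures; only the classifier structure mirrors the abstract.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in data: five "postures", each a cluster in feature space.
rng = np.random.default_rng(1)
centers = rng.uniform(-5, 5, size=(5, 8))                  # 5 postures, 8-D features
X = np.vstack([c + 0.1 * rng.standard_normal((40, 8)) for c in centers])
y = np.repeat(np.arange(5), 40)                            # posture labels 0..4

# Multi-class SVM: one-vs-rest decision over the five posture classes.
clf = SVC(kernel='rbf', decision_function_shape='ovr')
clf.fit(X, y)
```

On these well-separated synthetic clusters the classifier is near-perfect; the 97.9% figure in the abstract refers to the authors' real skeleton-feature dataset, not to this toy setup.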
The aim of this paper is to estimate the ego-motion of an RGB-D camera in dynamic environments. A semi-direct motion estimation pipeline is adapted for the RGB-D camera. To avoid the impact of dynamic objects, a new mapping method based on a scoring mechanism is proposed, which effectively removes feature points on dynamic objects and yields a map containing only static points. The method is evaluated not only on the TUM RGB-D benchmark but also with an Asus Xtion Pro Live camera in a dynamic office environment. The experimental results show that our method has higher accuracy in dynamic environments and comparable accuracy in static environments. In some highly dynamic scenes, the accuracy of our method is more than 7 times higher than that of other RGB-D visual odometry algorithms.
Dongsheng Yang; Shusheng Bi; Yueri Cai; Jingxiang Zheng; Chang Yuan. Dynamic RGB-D visual odometry. 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO) 2017, 1.
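The scoring mechanism above can be sketched as a reward/penalty update over map points: points consistent with the current frame gain score, inconsistent ones lose score, and points whose score falls too low are culled. The constants and the dictionary bookkeeping here are illustrative assumptions, not the paper's values.

```python
def update_scores(scores, consistent, reward=1, penalty=2, drop_below=-3):
    """One scoring update over map points (toy sketch).

    scores:     {point_id: current score}
    consistent: {point_id: True} for points observed consistently this frame
    Returns the surviving points with updated scores; points that sink to
    drop_below or lower are removed as likely dynamic.
    """
    kept = {}
    for pid, s in scores.items():
        s = s + reward if consistent.get(pid, False) else s - penalty
        if s > drop_below:
            kept[pid] = s          # point survives into the static map
    return kept
```

Run over many frames, static points keep accumulating reward while points on moving objects are penalized repeatedly and eventually dropped, leaving a map of only static points.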