Topographic products are important for mission operations and scientific research in lunar exploration. In a lunar rover mission, high-resolution digital elevation models are typically generated at waypoints by photogrammetric methods from stereo images acquired by the rover's stereo cameras. When stereo images are not available, the stereo-photogrammetric method is not applicable. Alternatively, the photometric stereo method can recover topographic information with pixel-level resolution from three or more images acquired by one camera under the same viewing geometry but different illumination conditions. In this research, we extend the concept of photometric stereo to photogrammetric-photometric stereo by incorporating collinearity equations into the image irradiance model. The proposed photogrammetric-photometric stereo algorithm for surface construction involves three steps. First, the terrain normal vector in object space is derived from the collinearity equations, and the image irradiance equation for close-range topographic mapping is determined. Second, based on the image irradiance equations of multiple images, the height gradients in image space are solved. Finally, the height map is reconstructed through global least-squares surface reconstruction with spectral regularization. Experiments were carried out using simulated lunar rover images and actual images acquired by the Yutu-2 rover of the Chang’e-4 mission. The results indicate that the proposed method achieves high-resolution, high-precision surface reconstruction and outperforms traditional photometric stereo methods. The proposed method is valuable for ground-based lunar surface reconstruction and is also applicable to surface reconstruction of the Earth and other planets.
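For contrast with the traditional approach that the paper extends, the classic photometric stereo solve can be sketched as a per-pixel least-squares problem: with three or more images under known unit light directions, the intensities satisfy I = L g, where g is the albedo-scaled surface normal. This is a minimal illustration under a Lambertian assumption, not the paper's photogrammetric-photometric formulation; the function name is illustrative.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover albedo-scaled surface normals from >= 3 images taken under
    the same viewing geometry but different illumination, assuming a
    Lambertian surface and known unit light directions."""
    h, w = images[0].shape
    I = np.stack([im.ravel() for im in images])   # (k, h*w) intensities
    L = np.asarray(light_dirs, dtype=float)       # (k, 3) light directions
    # Per-pixel least squares: L @ g = I, with g = albedo * normal
    g, *_ = np.linalg.lstsq(L, I, rcond=None)     # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The recovered normals yield height gradients p = -n_x/n_z and q = -n_y/n_z, which a global least-squares integration step (as in the paper's third stage) turns into a height map.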
Man Peng; Kaichang Di; Yexin Wang; Wenhui Wan; Zhaoqin Liu; Jia Wang; Lichun Li. A Photogrammetric-Photometric Stereo Method for High-Resolution Lunar Topographic Mapping Using Yutu-2 Rover Images. Remote Sensing 2021, 13(15), 2975.
Chang’e-5, China’s first unmanned lunar sample-return mission, successfully landed in northern Oceanus Procellarum on 1 December 2020. Determining the lander location precisely and promptly is critical for both engineering operations and subsequent scientific research. Localization of the lander was performed using radio-tracking and image-based methods, both of which determined the lander location to be (51.92°W, 43.06°N). Other localization results were compared for cross-validation. The localization results greatly contributed to planning the liftoff of the ascender from the lander and subsequent maneuvers, and they will contribute to the scientific analysis of the returned samples and the in situ acquired data.
Jia Wang; Yu Zhang; Kaichang Di; Ming Chen; Jianfeng Duan; Jing Kong; Jianfeng Xie; Zhaoqin Liu; Wenhui Wan; Zhifei Rong; Bin Liu; Man Peng; Yexin Wang. Localization of the Chang’e-5 Lander Using Radio-Tracking and Image-Based Methods. Remote Sensing 2021, 13(4), 590.
The visible and near-infrared imaging spectrometer (VNIS) carried by the Yutu-2 rover is mainly used for mineral composition studies of the lunar surface. It is installed at the front of the rover with a fixed pitch angle and a small field of view (FOV). Therefore, during target detection, the rover must be precisely controlled so that it reaches the designated position and points to the target. Hence, this report proposes a vision-guided control method for in situ detection using the VNIS. During the first 17 lunar days after landing, Yutu-2 conducted five in situ scientific explorations, demonstrating the effectiveness and feasibility of the method. In addition, a theoretical error analysis was performed on the FOV prediction method. On-board calibration was completed based on the analysis results of multi-waypoint VNIS images, further refining the relevant parameters of the VNIS and effectively improving the implementation efficiency of in situ detection.
Jia Wang; Tianyi Yu; Kaichang Di; Sheng Gou; Man Peng; Wenhui Wan; Zhaoqin Liu; Lichun Li; Yexin Wang; Zhifei Rong; Ximing He; Yi You; Fan Wu; Qiaofang Zou; Xiaohui Liu. Control and on-Board Calibration Method for in-Situ Detection Using the Visible and Near-Infrared Imaging Spectrometer on the Yutu-2 Rover. Communications in Computer and Information Science 2020, 267-281.
In planetary rover missions, path planning is critical to ensure the safety and efficiency of the rover traverse and in situ explorations. In the Chang’e-4 (CE-4) mission, we proposed and developed vision-based decision support methods comprising obstacle map generation and path evaluation for the rover. At each waypoint along the rover traverse, a digital elevation model (DEM) is automatically generated and then used for obstacle map generation and path searching. For path evaluation, the searched path and the predicted wheel trajectories are projected onto the original images captured by different cameras to coincide with the real observation scenario. The proposed methods have been applied in the CE-4 mission to support teleoperation of the rover, and examples of multiple applications used in the mission are presented in this paper. With the support of the proposed techniques, by the end of the 14th lunar day (3 February 2020), the rover had travelled 367.25 m on the far side of the Moon.
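A DEM-based obstacle map of the kind described above can be sketched by thresholding the local slope angle derived from the height gradients. The 20-degree limit and the slope-only criterion are assumptions for illustration, not the mission's actual traversability rules.

```python
import numpy as np

def obstacle_map(dem, cell_size, slope_limit_deg=20.0):
    """Flag non-traversable DEM cells by local slope angle.
    dem: 2D height array (m); cell_size: grid spacing (m)."""
    gy, gx = np.gradient(dem, cell_size)              # height gradients (m/m)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))   # local slope in degrees
    return slope > slope_limit_deg                    # True = obstacle cell
```

A path-searching step (e.g. A*) would then operate on the complement of this boolean mask.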
Yexin Wang; Wenhui Wan; Sheng Gou; Man Peng; Zhaoqin Liu; Kaichang Di; Lichun Li; Tianyi Yu; Jia Wang; Xiao Cheng. Vision-Based Decision Support for Rover Path Planning in the Chang’e-4 Mission. Remote Sensing 2020, 12(4), 624.
On 8 December 2018, China launched the Chang’E-4 lunar probe, which achieved the first soft landing and rover exploration on the lunar far side. Fast, high-precision positioning of the landing point is a critical step of the mission and an important prerequisite for surface operations of the lander and rover. Based on high-precision image matching and geometric transformation methods, and considering the engineering requirements, the landing point location of the Chang’E-4 lander was initially determined using the near-real-time, high-compression-ratio descent sequence images. The position was then refined using the replayed low-compression-ratio descent images, and the landing point was calculated to be (177.5884°E, 45.4565°S). This landing point localization method and result were successfully applied to actual engineering tasks for the first time in China, providing important support to the topographic analysis of the landing area and the mission planning of follow-up teleoperations.
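The image matching underlying this kind of descent-image localization can be illustrated with brute-force normalized cross-correlation of a patch against a base image. The function name and exhaustive search are illustrative simplifications; an operational pipeline would add image pyramids and geometric transformation estimation.

```python
import numpy as np

def ncc_match(template, search_img):
    """Locate a template patch in a larger image by brute-force normalized
    cross-correlation; returns the best (row, col) and its score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_pos = -2.0, (0, 0)
    rows, cols = search_img.shape
    for r in range(rows - th + 1):
        for c in range(cols - tw + 1):
            win = search_img[r:r + th, c:c + tw]
            win = win - win.mean()
            denom = np.linalg.norm(win) * t_norm
            score = float((t * win).sum() / denom) if denom > 0 else -1.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```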
Jia Wang; Weiren Wu; Jian Li; Kaichang Di; Wenhui Wan; Jianfeng Xie; Man Peng; Baofeng Wang; Bin Liu; Mengna Jia; Luhua Xi; Rui Zhao. Vision based Chang’E-4 landing point localization. SCIENTIA SINICA Technologica 2020, 50(1), 41-53.
High-accuracy indoor positioning is a prerequisite to satisfying the increasing demand for position-based services in complex indoor scenes. Current indoor visual-positioning methods mainly include image-retrieval-based methods, visual-landmark-based methods, and learning-based methods. To overcome the limitations of traditional methods, which tend to be labor-intensive, time-consuming, and of limited accuracy, this paper proposes a novel indoor-positioning method with automated red, green, blue and depth (RGB-D) image database construction. First, strategies for automated database construction are developed to reduce the workload of manually selecting database images while meeting the requirements of high-accuracy indoor positioning. The database is constructed automatically according to these rules, which is more objective and improves the efficiency of the image-retrieval process. Second, by combining the automated database construction module, a convolutional neural network (CNN)-based image-retrieval module, and a pose estimation module based on strict geometric relations, we obtain a high-accuracy indoor-positioning system. To verify the proposed method, we conducted extensive experiments on a public indoor environment dataset. The detailed experimental results demonstrate the effectiveness and efficiency of our indoor-positioning method.
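The retrieval step can be sketched as ranking database images by the cosine similarity of their global CNN descriptors; the descriptor extraction itself (the CNN) is assumed done elsewhere, and the function name is illustrative.

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=5):
    """Rank database images by cosine similarity to the query descriptor.
    query_desc: (d,) vector; db_descs: (n, d) matrix of database descriptors."""
    q = query_desc / np.linalg.norm(query_desc)
    D = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = D @ q                       # cosine similarities, shape (n,)
    order = np.argsort(-sims)[:k]      # indices of the k most similar images
    return order, sims[order]
```

The top-ranked database images (with known poses) would then feed the geometric pose estimation module.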
Runzhi Wang; Wenhui Wan; Kaichang Di; Ruilin Chen; Xiaoxue Feng. A High-Accuracy Indoor-Positioning Method with Automated RGB-D Image Database Construction. Remote Sensing 2019, 11(21), 2572.
In recent years, low-cost and lightweight RGB and depth (RGB-D) sensors, such as the Microsoft Kinect, have made rich image and depth data readily available, making them very popular in the field of simultaneous localization and mapping (SLAM), which is increasingly used in robotics, self-driving vehicles, and augmented reality. RGB-D SLAM constructs 3D environmental models of natural landscapes while simultaneously estimating camera poses. However, under highly variable illumination and motion blur, long-distance tracking can result in large cumulative errors and scale drift. To address this problem in practical applications, we propose a novel multithreaded RGB-D SLAM framework that incorporates a highly accurate prior terrestrial Light Detection and Ranging (LiDAR) point cloud, which mitigates cumulative errors and improves the system’s robustness in large-scale and challenging scenarios. First, we employed deep learning to achieve automatic system initialization and motion recovery when tracking is lost. Next, we used the terrestrial LiDAR point cloud as prior data of the landscape and applied a point-to-surface iterative closest point (ICP) algorithm to accurately anchor camera poses to the LiDAR point cloud, and then expanded its control range during local map construction. Furthermore, an innovative double-window, segment-based map optimization method is proposed to ensure consistency, better real-time performance, and high accuracy of map construction. The proposed method was tested for long-distance tracking and closed loops in two different large indoor scenes. The experimental results indicated that the standard deviation of the 3D map construction is 10 cm over a mapping distance of 100 m, compared with the LiDAR ground truth. Further, the relative cumulative error of the camera in the closed-loop experiments is 0.09%, far lower than that of a typical SLAM algorithm (3.4%). The proposed method was therefore demonstrated to be more robust than the ORB-SLAM2 algorithm in complex indoor environments.
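The pose-anchoring step can be illustrated with one linearized Gauss-Newton iteration of point-to-plane ICP. Correspondences are assumed given (a real ICP loop re-finds them each iteration), and the small-angle rotation parameterization is a standard simplification rather than the paper's exact formulation.

```python
import numpy as np

def point_to_plane_icp_step(src, dst, dst_normals):
    """One linearized step of point-to-plane ICP registration.
    src, dst: matched (N, 3) point arrays; dst_normals: (N, 3) unit normals
    at the destination (e.g. LiDAR) points. Returns a linearized rotation
    matrix and translation that move src toward dst."""
    # Jacobian rows [p x n, n] for the pose update [rx, ry, rz, tx, ty, tz]
    A = np.hstack([np.cross(src, dst_normals), dst_normals])   # (N, 6)
    b = -np.einsum('ij,ij->i', src - dst, dst_normals)         # point-to-plane residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    rx, ry, rz, tx, ty, tz = x
    R = np.array([[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1.0]]) # small-angle rotation
    return R, np.array([tx, ty, tz])
```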
Xujie Kang; Jing Li; Xiangtao Fan; Wenhui Wan. Real-Time RGB-D Simultaneous Localization and Mapping Guided by Terrestrial LiDAR Point Cloud for Indoor 3-D Reconstruction and Camera Pose Estimation. Applied Sciences 2019, 9(16), 3264.
Simultaneous localization and mapping (SLAM) methods based on an RGB-D camera have been studied and used in robot navigation and perception. So far, most such SLAM methods have been applied in static environments. However, these methods cannot avoid the drift errors caused by moving objects such as pedestrians, which limits their practical performance in real-world applications. In this paper, a new RGB-D SLAM method with moving object detection for dynamic indoor scenes is proposed. The proposed moving object detection method is based on mathematical models and geometric constraints, and it can be incorporated into the SLAM process as a data-filtering step. To verify the proposed method, we conducted extensive experiments on the public TUM RGB-D dataset and on a sequence image dataset from our Kinect V1 camera, both acquired in common dynamic indoor scenes. The detailed experimental results of our improved RGB-D SLAM are summarized and demonstrate its effectiveness in dynamic indoor scenes.
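The geometric-constraint filtering can be sketched as follows: a matched 3D feature on a static object must move consistently with the estimated camera motion, so points with a large motion residual are flagged as dynamic and excluded. The threshold value and function name are assumptions for illustration.

```python
import numpy as np

def flag_moving_points(pts_prev, pts_curr, R, t, thresh=0.05):
    """Flag matched 3D feature points whose motion is inconsistent with
    the estimated camera motion (R, t): static points should satisfy
    p_curr ~= R @ p_prev + t. pts_prev, pts_curr: (N, 3) arrays."""
    predicted = pts_prev @ R.T + t                     # where static points should land
    residual = np.linalg.norm(pts_curr - predicted, axis=1)
    return residual > thresh                           # True = likely on a moving object
```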
Runzhi Wang; Wenhui Wan; Yongkang Wang; Kaichang Di. A New RGB-D SLAM Method with Moving Object Detection for Dynamic Indoor Scenes. Remote Sensing 2019, 11(10), 1143.
In the study of indoor simultaneous localization and mapping (SLAM) problems using a stereo camera, two primary feature types, points and line segments, have been widely used to calculate the camera pose. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns quickly. This paper proposes an improved indoor visual SLAM method that better exploits the advantages of point and line-segment features to achieve robust results in difficult environments. First, point and line-segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization of line-segment features, we minimize an angle observation in addition to the traditional re-projection error of the endpoints. Finally, our motion estimation model, which adapts to the motion state of the camera, is applied to build a new combined Hessian matrix and gradient vector for iterated pose estimation. The method was tested on the EuRoC MAV datasets and on sequence images captured with our stereo camera. The experimental results demonstrate the effectiveness of our improved point-line-feature-based visual SLAM method in improving localization accuracy when the camera moves with rapid rotation or violent fluctuation.
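The combined line-segment error term can be sketched as the endpoint re-projection error plus the angle between the projected and observed segment directions. The equal weighting and the function signature are illustrative assumptions, not the paper's exact cost function.

```python
import numpy as np

def line_residual(proj_a, proj_b, obs_a, obs_b, w_angle=1.0):
    """Residual for one line-segment feature: endpoint re-projection error
    plus the angle between projected and observed segment directions.
    All inputs are 2D image points (projected vs. observed endpoints)."""
    endpoint_err = np.linalg.norm(proj_a - obs_a) + np.linalg.norm(proj_b - obs_b)
    d1 = proj_b - proj_a; d1 = d1 / np.linalg.norm(d1)
    d2 = obs_b - obs_a;   d2 = d2 / np.linalg.norm(d2)
    # abs() makes the angle term invariant to the segment's endpoint order
    angle = np.arccos(np.clip(abs(d1 @ d2), 0.0, 1.0))
    return endpoint_err + w_angle * angle
```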
Runzhi Wang; Kaichang Di; Wenhui Wan; Yongkang Wang. Improved Point-Line Feature Based Visual SLAM Method for Indoor Scenes. Sensors 2018, 18(10), 3559.
In studies of the SLAM problem using an RGB-D camera, depth information and visual information, the two primary types of measurement data, are rarely tightly coupled during refinement of the camera pose estimate. In this paper, a new RGB-D camera SLAM method is proposed based on extended bundle adjustment with integrated 2D and 3D information, built on a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is established through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method performs notably better than the traditional method, and the experimental results demonstrate its effectiveness in improving localization accuracy.
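The tightly coupled residual for one observation can be sketched as the stacked 2D re-projection error and depth error, so that both measurement types enter the adjustment together. The depth weight is an assumption, and the paper's actual projection model may differ.

```python
import numpy as np

def rgbd_residual(point_3d, obs_uv, obs_depth, K, R, t, w_depth=1.0):
    """Combined residual for one observation in an extended bundle
    adjustment: 2D pinhole re-projection error plus a depth error term.
    K: 3x3 intrinsics; (R, t): world-to-camera pose."""
    p_cam = R @ point_3d + t                  # point in the camera frame
    uv = (K @ p_cam)[:2] / p_cam[2]           # pinhole projection to pixels
    r_img = uv - obs_uv                       # 2D re-projection error (pixels)
    r_depth = p_cam[2] - obs_depth            # depth measurement error (m)
    return np.concatenate([r_img, [w_depth * r_depth]])
```

A solver would stack these 3-vectors over all observations and minimize the total squared norm over poses and points.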
Kaichang Di; Qiang Zhao; Wenhui Wan; Yexin Wang; Yunjun Gao. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information. Sensors 2016, 16(8), 1285.
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact; therefore, monocular vision methods are preferred over the stereo vision systems commonly used on mobile robots. However, the projective nature of monocular visual odometry causes a scale-ambiguity problem. In this paper, we focus on integrating a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory from monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between the camera and the laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. The proposed integration architecture was examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
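The core of the scale correction can be illustrated in a few lines: monocular VO recovers the trajectory only up to an unknown scale, so the ratio of a metrically measured laser distance to the same distance expressed in VO units fixes that scale. This is a minimal sketch; the paper's scheme fuses many such measurements to correct scale drift continuously.

```python
import numpy as np

def rescale_trajectory(positions, vo_dist, laser_dist):
    """Fix the global scale of a monocular VO trajectory using one laser
    measurement: vo_dist is a distance in VO units, laser_dist the same
    distance in meters. positions: list of 3D camera positions (VO units)."""
    s = laser_dist / vo_dist              # metric scale factor
    return [s * p for p in positions]     # trajectory in meters
```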
Kai Wu; Kaichang Di; Xun Sun; Wenhui Wan; Zhaoqin Liu. Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation. Sensors 2014, 14(3), 4981-5003.