The jumping–gliding robot is a locomotion platform capable of both jumping on the ground and gliding through the air. Its jumping must reconcile two sets of requirements: the initial velocity and posture needed to enter a glide, and those needed for progression on the ground. Inspired by flying squirrels, we propose the concept of a flexible wing–limb blending platform and design a robot with two jumping modes. The robot can take off with different speeds and stances and adjust its aerial posture by swinging its forelimbs. To the best of our knowledge, this is the first miniature bio-inspired jumping robot that can autonomously change its speed and stance at takeoff. Experimental results show that, in the jumping-for-gliding mode, the robot takes off at about 3 m/s with a pitch angle of 0° and adjusts the pitch angle at the apex to 0°–10° by actuating the forelimb swing according to the requirements of gliding. In the jumping-for-progression mode, the robot takes off at about 2 m/s with a pitch angle of 20° and then jumps intermittently, covering 0.37 m per jump at an average progression speed of 0.2 m/s. The robot presented in this paper lays the foundation for the development of a flexible wing–limb blending platform capable of both jumping and gliding.
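The reported takeoff numbers can be sanity-checked with elementary projectile kinematics. The sketch below (our illustration, not from the paper; it treats the robot as a point mass on flat ground and ignores aerodynamics and body rotation) computes the ideal ballistic range for the progression-mode takeoff and the jump-cycle time implied by the reported per-jump distance and average speed.

```python
import math

def ballistic_range(v0, pitch_deg, g=9.81):
    """Ideal flat-ground range of a point-mass projectile."""
    theta = math.radians(pitch_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# Progression-mode takeoff reported in the abstract: ~2 m/s at 20 deg.
# A point mass on flat ground would cover roughly 0.26 m; the measured
# 0.37 m per jump also includes effects this idealisation leaves out.
ideal = ballistic_range(2.0, 20.0)

# 0.37 m per jump at an average progression speed of 0.2 m/s implies a
# full jump cycle (flight plus recovery) of roughly 1.85 s.
cycle_time = 0.37 / 0.2
```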
Fei Zhao; Wei Wang; Justyna Wyrwa; Jingtao Zhang; Wenxin Du; Pengyu Zhong. Design and Demonstration of a Flying-Squirrel-Inspired Jumping Robot with Two Modes. Applied Sciences 2021, 11(8), 3362.
For indoor navigation of service robots, human–robot interaction and adaptation to the environment still need to be strengthened, including determining the navigation goal socially, improving the success rate of passing through doors, and optimizing path-planning efficiency. This paper proposes an indoor navigation system based on an object semantic grid and a topological map to address these problems. First, natural language is used as the form of human–robot interaction, from which the target room, object, and spatial relationship are extracted using speech recognition and word segmentation. Then, the robot selects the goal point from the target space using object affordance theory. To improve navigation success rate and safety, we generate auxiliary navigation points on both sides of each door to correct the robot's trajectory. Furthermore, based on the topological map and the auxiliary navigation points, the global path is segmented by topological area, and path planning is carried out separately in each room, which significantly improves navigation efficiency. The system has been demonstrated to support autonomous navigation based on language interaction and to significantly improve the safety, efficiency, and robustness of indoor robot navigation. It has been successfully tested in real domestic environments.
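The auxiliary-navigation-point idea can be sketched with simple geometry: place one waypoint on each side of the doorway along its passage direction, so the robot crosses head-on. This is our minimal illustration of the concept, not the paper's implementation; the function name and the 0.6 m clearance are assumptions.

```python
import math

def door_waypoints(door_x, door_y, door_yaw, offset=0.6):
    """Place one auxiliary navigation point on each side of a door.

    door_yaw is the passage direction normal to the door plane;
    offset is the clearance in metres (illustrative value).
    """
    dx = math.cos(door_yaw) * offset
    dy = math.sin(door_yaw) * offset
    return (door_x - dx, door_y - dy), (door_x + dx, door_y + dy)

# A door at (2, 0) whose passage direction is +x yields waypoints at
# roughly (1.4, 0) and (2.6, 0); routing the global path through both
# makes the robot approach and exit the doorway perpendicular to it.
before, after = door_waypoints(2.0, 0.0, 0.0)
```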
Jiadong Zhang; Wei Wang; Xianyu Qi; Ziwei Liao. Social and Robust Navigation for Indoor Robots Based on Object Semantic Grid and Topological Map. Applied Sciences 2020, 10(24), 8991.
Indoor service robots need to build an object-centric semantic map to understand and execute human instructions. Conventional visual simultaneous localization and mapping (SLAM) systems build a map using geometric features such as points, lines, and planes as landmarks, but they lack a semantic understanding of the environment. This paper proposes an object-level semantic SLAM algorithm based on RGB-D data, which uses a quadric surface as an object model to compactly represent the object’s position, orientation, and shape. We propose and derive two types of RGB-D camera–quadric observation models: a complete model and a partial model. The complete model combines object detection and point cloud data to estimate a complete ellipsoid in a single RGB-D frame. The partial model is activated when depth data are severely missing because of illumination or occlusion; it uses bounding boxes from object detection to constrain objects. Compared with state-of-the-art quadric SLAM algorithms that use a monocular observation model, the RGB-D observation model relaxes the requirements on the number of observations and on viewing-angle changes, which improves accuracy and robustness. We also introduce a nonparametric pose graph to solve data association in the back end and innovatively apply it to the quadric surface model. We thoroughly evaluated the algorithm on two public datasets and an author-collected mobile robot dataset in a home-like environment, obtaining clear improvements in localization accuracy and mapping quality over two state-of-the-art object SLAM algorithms.
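An ellipsoid object model is often encoded as a 4×4 dual quadric built from the object's pose and semi-axis lengths. The sketch below shows one common construction used in quadric SLAM work generally; the exact conventions of the paper's models may differ, and the function names here are our own.

```python
import numpy as np

def dual_quadric(center, axes, R=np.eye(3)):
    """Build the 4x4 dual quadric Q* of an ellipsoid from its centre,
    semi-axis lengths, and rotation: Q* = Z diag(a^2, b^2, c^2, -1) Z^T,
    where Z is the 4x4 homogeneous pose of the object.
    """
    Z = np.eye(4)
    Z[:3, :3] = R
    Z[:3, 3] = center
    shape = np.diag([axes[0] ** 2, axes[1] ** 2, axes[2] ** 2, -1.0])
    return Z @ shape @ Z.T

# A homogeneous point X on the ellipsoid surface satisfies X^T Q X = 0
# for the primal quadric Q = inv(Q*):
Q_star = dual_quadric(np.array([1.0, 0.0, 0.0]), np.array([0.5, 0.3, 0.2]))
Q = np.linalg.inv(Q_star)
X = np.array([1.5, 0.0, 0.0, 1.0])  # centre + one semi-axis along x
residual = X @ Q @ X                 # ~0 for surface points
```

The compactness is the point: nine numbers (centre, axes, rotation) summarise position, orientation, and shape, which is what makes quadrics attractive as SLAM landmarks.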
Ziwei Liao; Wei Wang; Xianyu Qi; Xiaoyu Zhang. RGB-D Object SLAM Using Quadrics for Indoor Environments. Sensors 2020, 20(18), 5150.
Occupancy grid maps are sufficient for mobile robots to complete metric navigation tasks in domestic environments. However, they lack the semantic information needed to endow robots with social goal selection and human-friendly operation modes. In this paper, we propose an object semantic grid mapping system using a 2D Light Detection and Ranging (LiDAR) sensor and an RGB-D sensor to solve this problem. First, we use laser-based Simultaneous Localization and Mapping (SLAM) to generate an occupancy grid map and obtain the robot trajectory. Then, we employ object detection to obtain object semantics from color images and use joint interpolation to refine the camera poses. Based on the object detections, depth images, and interpolated poses, we build a point cloud with object instances. To generate object-oriented minimum bounding rectangles, we propose a method for extracting the dominant directions of the room. Furthermore, we build object goal spaces to help the robots select navigation goals conveniently and socially. We verified the system on the Robot@Home dataset; the results show that our system is effective.
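Once the dominant directions of the room are known, an object-oriented bounding rectangle can be obtained by rotating the object's points into the dominant frame and taking an axis-aligned box there. The sketch below is our illustration of that idea, not the paper's code.

```python
import math

def oriented_bbox(points, theta):
    """Rotate 2D points by -theta (aligning them with a dominant room
    direction given in radians), then return the axis-aligned box
    (min_x, min_y, max_x, max_y) in the rotated frame.
    """
    c, s = math.cos(-theta), math.sin(-theta)
    xs = [c * x - s * y for x, y in points]
    ys = [s * x + c * y for x, y in points]
    return min(xs), min(ys), max(xs), max(ys)

# A unit square rotated 45 degrees about the origin: aligning it with
# the dominant direction recovers the true 1 x 1 extents, whereas an
# axis-aligned box in the world frame would be sqrt(2) x sqrt(2).
r = math.sqrt(0.5)
square = [(0.0, 0.0), (r, r), (0.0, 2 * r), (-r, r)]
x0, y0, x1, y1 = oriented_bbox(square, math.pi / 4)
```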
Xianyu Qi; Wei Wang; Ziwei Liao; Xiaoyu Zhang; Dongsheng Yang; Ran Wei. Object Semantic Grid Mapping with 2D LiDAR and RGB-D Camera for Domestic Robot Navigation. Applied Sciences 2020, 10(17), 5782.
Simultaneous localization and mapping (SLAM) is a fundamental problem for various applications. In indoor environments, planes are predominant features that are less affected by measurement noise. In this paper, we propose a novel point-plane SLAM system using RGB-D cameras. First, we extract feature points from RGB images and planes from depth images; plane correspondences in the global map are then found using the planes' contours. Considering the limited size of real planes, we exploit constraints on plane edges. In general, a plane edge is the intersection line of two perpendicular planes. Therefore, instead of line-based constraints, we calculate and generate supposed perpendicular planes from edge lines, yielding more plane observations and constraints to reduce estimation errors. To exploit the orthogonal structure of indoor environments, we also add structural (parallel or perpendicular) constraints between planes. Finally, we construct a factor graph from all of these features and minimize its cost functions to estimate the camera poses and the global map. We test the proposed system on public RGB-D benchmarks, demonstrating robust and accurate pose estimation compared with other state-of-the-art SLAM systems.
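The "supposed plane" construction can be sketched directly: a plane that contains an edge line and is perpendicular to the observed plane has a normal given by the cross product of the observed plane's normal and the line direction. The snippet below is a minimal illustration under that reading; the (normal, d) parameterisation and function name are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def supposed_plane(n, line_point, line_dir):
    """Given a plane with unit normal n and one of its edge lines
    (a point on the line and the line direction), construct the
    supposed plane m . x + d = 0 that contains the edge line and is
    perpendicular to the observed plane.
    """
    m = np.cross(n, line_dir)
    m = m / np.linalg.norm(m)
    d = -float(np.dot(m, line_point))
    return m, d

# A floor plane (normal +z) with an edge line along the x-axis yields
# a supposed vertical wall plane through that line (normal along y):
m, d = supposed_plane(np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, 0.0]),
                      np.array([1.0, 0.0, 0.0]))
```

Each edge line thus contributes an extra planar observation, which is how the system gains constraints beyond the physically observed planes.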
Xiaoyu Zhang; Wei Wang; Xianyu Qi; Ziwei Liao; Ran Wei. Point-Plane SLAM Using Supposed Planes for Indoor Environments. Sensors 2019, 19(17), 3795.
As more and more social robots are deployed in human-populated environments, they need an affective model to communicate with human beings naturally and believably. The model should also be flexible enough to be applied in different areas, such as entertainment and education, and easy for robot designers to understand and operate. To meet these requirements, we propose an affective model comprising emotions, moods, and personality traits that lets social robots mimic human affect changes. Inspired by Plutchik’s Wheel of Emotions, we first construct an affective space that can represent all of these affective concepts simultaneously; based on this space, the model can be visualized vividly and understood easily. We then describe the interactions among these concepts, which change the robot's states so that it interacts with human beings naturally and believably. By tuning its parameters, the model can be flexibly applied in different areas. We evaluate the proposed model in simulation and in human–robot interaction experiments, and the results show that the model is effective.
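The interaction between emotions and moods in models of this family is often a relaxation dynamic: stimuli push the emotion intensity, which then decays back toward the mood baseline. The sketch below is a generic illustration of that pattern, not the paper's equations; the decay and gain constants are arbitrary.

```python
def update_emotion(emotion, mood, stimulus, decay=0.9, gain=0.5):
    """One step of a generic affect update: the emotion is pushed by
    an external stimulus and otherwise decays toward the mood baseline.
    All constants are illustrative.
    """
    return mood + decay * (emotion - mood) + gain * stimulus

# With no stimulus, the emotion relaxes toward the mood baseline:
e = 1.0
for _ in range(50):
    e = update_emotion(e, mood=0.2, stimulus=0.0)
# e is now within ~0.01 of the 0.2 baseline
```

Tuning the decay and gain is the kind of parameter adjustment that lets such a model behave differently in, say, entertainment versus education settings.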
Xianyu Qi; Wei Wang; Lei Guo; Mingbo Li; Xiaoyu Zhang; Ran Wei. Building a Plutchik’s Wheel Inspired Affective Model for Social Robots. Journal of Bionic Engineering 2019, 16(2), 209–221.
The concept of a modular climbing caterpillar robot is inspired by the kinematics of real caterpillars. Two typical kinematic models and gaits are investigated, based on the crawling motions of the inchworm and the tobacco hornworm. Because of the fixed constraints between the suckers and the wall, the gait of a caterpillar robot involves a changing kinematic chain, passing from an open chain to a closed chain and back to an open chain. During the open-chain periods, an unsymmetrical phase method (UPM) is used to ensure reliable attachment of the passive suckers to the wall. In the closed-chain state, a four-link kinematic model is adopted to satisfy the fixed constraints. By combining the two methods, complete joint control trajectories are obtained for a modular caterpillar robot with seven joints. Finally, on-site tests confirm the proposed principles and the validity of the climbing gait.
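A common starting point for such modular gaits is a travelling-wave trajectory: every joint follows the same sinusoid with a fixed phase lag, so a body wave propagates along the robot. The sketch below shows that baseline only; it is our illustration, and the amplitude, period, and phase-lag values are arbitrary. The paper's actual trajectories are derived from the UPM in the open-chain phases and the four-link model in the closed-chain phase.

```python
import math

def joint_angle(t, joint_index, amplitude=30.0, period=4.0, phase_lag=60.0):
    """Travelling-wave joint trajectory in degrees: each joint follows
    the same sine wave, delayed by phase_lag degrees per joint so a
    body wave propagates along the module chain.
    """
    phase = 2 * math.pi * t / period - math.radians(phase_lag) * joint_index
    return amplitude * math.sin(phase)

# The seven joint commands of the modular robot sampled at t = 1 s:
angles = [joint_angle(1.0, i) for i in range(7)]
```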
Wei Wang; Kun Wang; Houxiang Zhang. Crawling gait realization of the mini-modular climbing caterpillar robot. Progress in Natural Science 2009, 19(12), 1821–1829.