
Prof. Dr. Pascual Campoy
Universidad Politécnica de Madrid - Escuela Técnica Superior de Ingenieros Industriales

Research Keywords & Expertise

UAV
control
machine learning
Vision for Robotics
Aerial Robotics

Fingerprints

control
UAV
Aerial Robotics
machine learning


Feed

Technical note
Published: 05 May 2021 in Remote Sensing

Collision avoidance is a crucial research topic in robotics. Designing a collision-avoidance algorithm remains a challenging and open task because of the requirements of navigating unstructured and dynamic environments with the limited payload and computing resources available on board micro aerial vehicles. This article presents a novel depth-based collision-avoidance method for aerial robots, enabling high-speed flights in dynamic environments. First, a depth-based Euclidean distance field mapping algorithm is presented. The proposed Euclidean distance field mapping strategy is then integrated with a rapidly-exploring random tree to construct a collision-avoidance system. The experimental results show that the proposed collision-avoidance algorithm performs robustly at high flight speeds in challenging dynamic environments and executes collision-avoidance maneuvers faster than state-of-the-art algorithms (the average computing time of a collision-avoidance maneuver is 25.4 ms, while the minimum is 10.4 ms); the average computing time is six times lower than that of one baseline algorithm. Additionally, fully autonomous flight experiments were conducted to validate the presented collision-avoidance approach.
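The two-stage idea in this abstract can be sketched in a few lines: build a Euclidean distance field from a depth-derived occupancy grid, then use it to collision-check rapidly-exploring random tree edges. The grid, resolution, and clearance radius below are illustrative values, not the paper's, and real systems would maintain the distance field incrementally rather than recomputing it brute-force.

```python
import numpy as np

def euclidean_distance_field(occupancy, resolution):
    # brute-force metric distance from every cell to the nearest occupied
    # cell (an onboard system would maintain this incrementally from depth)
    obs = np.argwhere(occupancy == 1)
    ii, jj = np.meshgrid(*(np.arange(n) for n in occupancy.shape), indexing="ij")
    cells = np.stack([ii, jj], axis=-1).reshape(-1, 1, 2)
    d = np.sqrt(((cells - obs) ** 2).sum(-1)).min(-1)
    return d.reshape(occupancy.shape) * resolution

def edge_is_safe(edf, p, q, radius, n_samples=25):
    # sample along an RRT edge p -> q, requiring clearance above the robot radius
    for t in np.linspace(0.0, 1.0, n_samples):
        cell = tuple(np.round(p + t * (q - p)).astype(int))
        if edf[cell] <= radius:
            return False
    return True

grid = np.zeros((40, 40), dtype=np.uint8)
grid[18:22, 18:22] = 1                      # one obstacle block
edf = euclidean_distance_field(grid, resolution=0.1)
safe = edge_is_safe(edf, np.array([2.0, 2.0]), np.array([2.0, 37.0]), radius=0.3)
blocked = edge_is_safe(edf, np.array([2.0, 2.0]), np.array([37.0, 37.0]), radius=0.3)
print(safe, blocked)
```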

ACS Style

Liang Lu; Adrian Carrio; Carlos Sampedro; Pascual Campoy. A Robust and Fast Collision-Avoidance Approach for Micro Aerial Vehicles Using a Depth Sensor. Remote Sensing 2021, 13, 1796.

AMA Style

Liang Lu, Adrian Carrio, Carlos Sampedro, Pascual Campoy. A Robust and Fast Collision-Avoidance Approach for Micro Aerial Vehicles Using a Depth Sensor. Remote Sensing. 2021; 13 (9):1796.

Chicago/Turabian Style

Liang Lu; Adrian Carrio; Carlos Sampedro; Pascual Campoy. 2021. "A Robust and Fast Collision-Avoidance Approach for Micro Aerial Vehicles Using a Depth Sensor." Remote Sensing 13, no. 9: 1796.

Journal article
Published: 06 February 2021 in Sensors

The paper addresses the loop-shaping problem in the altitude control of an unmanned aerial vehicle landing under a specific landing scenario. The proposed solution is optimal in the sense of the selected performance indices, namely minimum-time, minimum-energy, and velocity-penalized functions, which achieve their minimal values; numerous experiments were conducted throughout the development and preparation for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC 2020). A novel approach to generating a reference altitude trajectory is presented, which is then tracked in a standard, though optimized, control loop. Three landing scenarios are considered, namely minimum-time, minimum-energy, and velocity-penalized landing. The experimental results, obtained with the Simulink Support Package for Parrot Minidrones and the OptiTrack motion capture system, proved the effectiveness of the proposed approach.
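As a rough illustration of how the landing scenarios trade speed against touchdown softness, the sketch below generates reference altitude trajectories in which the descent velocity is saturated at a maximum and optionally penalized near the ground; the gains, time step, and altitudes are invented for the example and are not the values used in the paper.

```python
import numpy as np

def reference_altitude(h0, v_max, k, dt=0.02, h_stop=0.01):
    # descend at v = min(v_max, k * h): full speed high up, then an
    # exponential flare near the ground; a very large k approximates
    # the minimum-time (constant-velocity) profile
    t, h, traj = 0.0, h0, [(0.0, h0)]
    while h > h_stop:
        h -= min(v_max, k * h) * dt
        t += dt
        traj.append((t, h))
    return np.array(traj)

fast = reference_altitude(h0=2.0, v_max=1.0, k=1e6)  # ~ minimum-time descent
soft = reference_altitude(h0=2.0, v_max=1.0, k=2.0)  # velocity-penalized flare
print(f"touchdown: fast {fast[-1, 0]:.2f} s, soft {soft[-1, 0]:.2f} s")
```

The velocity-penalized profile takes longer but reaches the ground with a much lower sink rate, which is the point of penalizing velocity in the cost.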

ACS Style

Dariusz Horla; Wojciech Giernacki; Jacek Cieślak; Pascual Campoy. Altitude Measurement-Based Optimization of the Landing Process of UAVs. Sensors 2021, 21, 1151.

AMA Style

Dariusz Horla, Wojciech Giernacki, Jacek Cieślak, Pascual Campoy. Altitude Measurement-Based Optimization of the Landing Process of UAVs. Sensors. 2021; 21 (4):1151.

Chicago/Turabian Style

Dariusz Horla; Wojciech Giernacki; Jacek Cieślak; Pascual Campoy. 2021. "Altitude Measurement-Based Optimization of the Landing Process of UAVs." Sensors 21, no. 4: 1151.

Journal article
Published: 14 November 2020 in Sensors

Aerial robots are widely used in search and rescue applications because of their small size and high maneuverability. However, designing an autonomous exploration algorithm is still a challenging and open task because of the limited payload and computing resources on board UAVs. This paper presents an autonomous exploration algorithm for aerial robots with several improvements for use in search and rescue tasks. First, an RGB-D sensor is used to receive information from the environment, and an OctoMap divides the environment into obstacle, free, and unknown space. Then, a clustering algorithm filters the frontiers extracted from the OctoMap, and an information-gain-based cost function is applied to choose the optimal frontier. Finally, a feasible path is given by an A* path planner and a safe-corridor generation algorithm. The proposed algorithm has been tested and compared with baseline algorithms in three different environments with map resolutions of 0.2 m and 0.3 m. The experimental results show that the proposed algorithm yields a shorter exploration path and saves exploration time when compared with the state of the art. The algorithm has also been validated in real flight experiments.
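The frontier-selection step described above (information gain traded against travel cost) can be sketched as follows; the utility function and its parameters are a common formulation assumed for illustration, not necessarily the paper's exact cost.

```python
import numpy as np

def choose_frontier(frontiers, robot_pos, lam=0.2):
    # score each frontier cluster by its information gain (unknown volume
    # behind it) discounted by the distance the robot must travel
    best, best_score = None, -np.inf
    for centroid, gain in frontiers:
        dist = np.linalg.norm(np.asarray(centroid) - robot_pos)
        score = gain * np.exp(-lam * dist)
        if score > best_score:
            best, best_score = centroid, score
    return best

frontiers = [((5.0, 0.0), 20.0),   # close frontier, modest unknown volume
             ((9.0, 9.0), 60.0)]   # distant frontier, large unknown volume
goal = choose_frontier(frontiers, np.array([0.0, 0.0]))
print(goal)
```

With this discount rate the nearby frontier wins even though the distant one would reveal more unknown space; tuning `lam` shifts the balance.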

ACS Style

Liang Lu; Carlos Redondo; Pascual Campoy. Optimal Frontier-Based Autonomous Exploration in Unconstructed Environment Using RGB-D Sensor. Sensors 2020, 20, 6507.

AMA Style

Liang Lu, Carlos Redondo, Pascual Campoy. Optimal Frontier-Based Autonomous Exploration in Unconstructed Environment Using RGB-D Sensor. Sensors. 2020; 20 (22):6507.

Chicago/Turabian Style

Liang Lu; Carlos Redondo; Pascual Campoy. 2020. "Optimal Frontier-Based Autonomous Exploration in Unconstructed Environment Using RGB-D Sensor." Sensors 20, no. 22: 6507.

Journal article
Published: 01 July 2020 in IEEE Access

Recent object detection studies have been focused on video sequences, mostly due to the increasing demand of industrial applications. Although single-image architectures achieve remarkable results in terms of accuracy, they do not take advantage of particular properties of the video sequences and usually require high parallel computational resources, such as desktop GPUs. In this work, an inattentional framework is proposed, where the object context in video frames is dynamically reused in order to reduce the computation overhead. The context features corresponding to keyframes are fused into a synthetic feature map, which is further refined using temporal aggregation with ConvLSTMs. Furthermore, an inattentional policy has been learned to adaptively balance the accuracy and the amount of context reused. The inattentional policy has been learned under the reinforcement learning paradigm, and using our novel reward-conditional training scheme, which allows for policy training over a whole distribution of reward functions and enables the selection of a unique reward function at inference time. Our framework shows outstanding results on platforms with reduced parallelization capabilities, such as CPUs, achieving an average latency reduction up to 2.09x, and obtaining FPS rates similar to their equivalent GPU platform, at the cost of a 1.11x mAP reduction.

ACS Style

Alejandro Rodriguez-Ramos; Javier Rodriguez-Vazquez; Carlos Sampedro; Pascual Campoy. Adaptive Inattentional Framework for Video Object Detection With Reward-Conditional Training. IEEE Access 2020, 8, 124451-124466.

AMA Style

Alejandro Rodriguez-Ramos, Javier Rodriguez-Vazquez, Carlos Sampedro, Pascual Campoy. Adaptive Inattentional Framework for Video Object Detection With Reward-Conditional Training. IEEE Access. 2020; 8 (99):124451-124466.

Chicago/Turabian Style

Alejandro Rodriguez-Ramos; Javier Rodriguez-Vazquez; Carlos Sampedro; Pascual Campoy. 2020. "Adaptive Inattentional Framework for Video Object Detection With Reward-Conditional Training." IEEE Access 8, no. 99: 124451-124466.

Journal article
Published: 24 March 2020 in IEEE Access

Indoor environments contain an abundance of high-level semantic information which can give robots a better understanding of the environment and reduce the uncertainty in their pose estimates. Although semantic information has proved to be useful, there are several challenges faced by the research community in accurately perceiving, extracting and utilizing such semantic information from the environment. In order to address these challenges, in this paper we present a lightweight and real-time visual semantic SLAM framework running on board aerial robotic platforms. This novel method combines low-level visual/visual-inertial odometry (VO/VIO) with geometrical information corresponding to planar surfaces extracted from detected semantic objects. Extracting the planar surfaces from selected semantic objects provides enhanced robustness and makes it possible to precisely improve the metric estimates rapidly, simultaneously generalizing to several object instances irrespective of their shape and size. Our graph-based approach can integrate several state-of-the-art VO/VIO algorithms along with state-of-the-art object detectors in order to estimate the complete 6DoF pose of the robot while simultaneously creating a sparse semantic map of the environment. No prior knowledge of the objects is required, which is a significant advantage over other works. We test our approach on a standard RGB-D dataset, comparing its performance with state-of-the-art SLAM algorithms. We also perform several challenging indoor experiments validating our approach under distinct environmental conditions, and furthermore test it on board an aerial robot.

ACS Style

Hriday Bavle; Paloma De La Puente; Jonathan P. How; Pascual Campoy. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems. IEEE Access 2020, 8, 60704-60718.

AMA Style

Hriday Bavle, Paloma De La Puente, Jonathan P. How, Pascual Campoy. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems. IEEE Access. 2020; 8 (99):60704-60718.

Chicago/Turabian Style

Hriday Bavle; Paloma De La Puente; Jonathan P. How; Pascual Campoy. 2020. "VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems." IEEE Access 8, no. 99: 60704-60718.

Journal article
Published: 05 February 2020 in IEEE Access

Obstacle avoidance is a key feature for safe drone navigation. While solutions are already commercially available for static obstacle avoidance, systems enabling avoidance of dynamic objects, such as drones, are much harder to develop due to the efficient perception, planning and control capabilities required, particularly in small drones with constrained takeoff weights. For reasonable performance, obstacle detection systems should be capable of running in real-time, with sufficient field-of-view (FOV) and detection range, and ideally providing relative position estimates of potential obstacles. In this work, we achieve all of these requirements by proposing a novel strategy to perform onboard drone detection and localization using depth maps. We integrate it on a small quadrotor, thoroughly evaluate its performance through several flight experiments, and demonstrate its capability to simultaneously detect and localize drones of different sizes and shapes. In particular, our stereo-based approach runs onboard a small drone at 16 Hz, detecting drones at a maximum distance of 8 meters, with a maximum error of 10% of the distance and at relative speeds up to 2.3 m/s. The approach is directly applicable to other 3D sensing technologies with higher range and accuracy, such as 3D LIDAR.

ACS Style

Adrian Carrio; Jesus Tordesillas; Sai Vemprala; Srikanth Saripalli; Pascual Campoy; Jonathan P. How. Onboard Detection and Localization of Drones Using Depth Maps. IEEE Access 2020, 8, 30480-30490.

AMA Style

Adrian Carrio, Jesus Tordesillas, Sai Vemprala, Srikanth Saripalli, Pascual Campoy, Jonathan P. How. Onboard Detection and Localization of Drones Using Depth Maps. IEEE Access. 2020; 8 (99):30480-30490.

Chicago/Turabian Style

Adrian Carrio; Jesus Tordesillas; Sai Vemprala; Srikanth Saripalli; Pascual Campoy; Jonathan P. How. 2020. "Onboard Detection and Localization of Drones Using Depth Maps." IEEE Access 8, no. 99: 30480-30490.

Research article
Published: 01 January 2020 in International Journal of Micro Air Vehicles

This paper presents a novel collision-free navigation system for unmanned aerial vehicles, based on point clouds, that outperforms baseline methods and enables high-speed flights in cluttered environments such as forests or many indoor industrial plants. The algorithm takes point cloud information from physical sensors (e.g. lidar, depth camera) and converts it into an occupancy map using Voxblox, which is then used by a rapidly-exploring random tree to generate finite path candidates. A modified Covariant Hamiltonian Optimization for Motion Planning (CHOMP) objective function is used to select and update the best candidate. Finally, the best candidate trajectory is generated and sent to a Model Predictive Control (MPC) controller. The proposed navigation strategy is evaluated in four different simulation environments; the results show that the proposed method has a higher success rate and a shorter goal-reaching distance than the baseline method.
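The candidate-selection step (a CHOMP-style objective scoring rapidly-exploring random tree candidates) can be sketched with a toy cost of the same shape: an obstacle term driven by a distance field plus a smoothness term. The weights, clearance threshold, and single-point "distance field" are invented for the example.

```python
import numpy as np

def trajectory_cost(path, edf_fn, w_obs=1.0, w_smooth=0.1):
    # obstacle term: penalise waypoints with clearance below 0.5 m;
    # smoothness term: penalise bending between consecutive segments
    obs_cost = sum(max(0.0, 0.5 - edf_fn(p)) for p in path)
    seg = np.diff(path, axis=0)
    smooth_cost = np.sum(np.linalg.norm(np.diff(seg, axis=0), axis=1))
    return w_obs * obs_cost + w_smooth * smooth_cost

edf = lambda p: np.linalg.norm(p - np.array([1.0, 1.0]))  # single obstacle at (1, 1)
straight = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0], [1.5, 1.5], [2.0, 2.0]])
detour = np.array([[0.0, 0.0], [0.8, 0.0], [1.6, 0.4], [2.0, 1.2], [2.0, 2.0]])
print(trajectory_cost(straight, edf), trajectory_cost(detour, edf))
```

The straight candidate runs through the obstacle and pays a large clearance penalty, so the slightly less smooth detour is selected.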

ACS Style

Liang Lu; Alexander Yunda; Adrian Carrio; Pascual Campoy. Robust autonomous flight in cluttered environment using a depth sensor. International Journal of Micro Air Vehicles 2020, 12, 1.

AMA Style

Liang Lu, Alexander Yunda, Adrian Carrio, Pascual Campoy. Robust autonomous flight in cluttered environment using a depth sensor. International Journal of Micro Air Vehicles. 2020;12:1.

Chicago/Turabian Style

Liang Lu; Alexander Yunda; Adrian Carrio; Pascual Campoy. 2020. "Robust autonomous flight in cluttered environment using a depth sensor." International Journal of Micro Air Vehicles 12: 1.

Journal article
Published: 24 December 2019 in IEEE/ASME Transactions on Mechatronics

In this paper, we propose two event-based model predictive control (MPC) schemes with adaptive prediction horizon for tracking control of unicycle robots with additive disturbances. The schemes are able to reduce the computational burden from two aspects: reducing the frequency of solving the optimization control problem (OCP) in MPC to relieve the computational load and decreasing the prediction horizon to decline the computational complexity. Event-triggered and self-triggered mechanisms are developed to activate the OCP solver aperiodically, and a prediction horizon update strategy is presented to decrease the dimension of the OCP in each step. The proposed schemes are tested on a networked platform to show their efficiency.
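A minimal sketch of the two computation-saving mechanisms the abstract names, aperiodic triggering of the OCP solver and a shrinking prediction horizon, under assumed thresholds and horizon bounds:

```python
import numpy as np

def event_triggered_step(x, x_pred, horizon, eps=0.05, n_max=10, n_min=3):
    # trigger the OCP solver only when the measured state drifts from the
    # last predicted state by more than eps; otherwise reuse the stored
    # input sequence and shrink the prediction horizon by one step
    if np.linalg.norm(x - x_pred) > eps:
        return True, n_max                  # solve the OCP with the full horizon
    return False, max(n_min, horizon - 1)   # skip the solve, shorter horizon

quiet = event_triggered_step(np.array([0.00, 0.01]), np.zeros(2), horizon=8)
drift = event_triggered_step(np.array([0.30, 0.00]), np.zeros(2), horizon=8)
print(quiet, drift)
```

Small deviations let the controller coast on the previous plan with an ever-cheaper OCP, while a disturbance immediately restores the full horizon.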

ACS Style

Zhongqi Sun; Yuanqing Xia; Li Dai; Pascual Campoy. Tracking of Unicycle Robots Using Event-Based MPC With Adaptive Prediction Horizon. IEEE/ASME Transactions on Mechatronics 2019, 25, 739-749.

AMA Style

Zhongqi Sun, Yuanqing Xia, Li Dai, Pascual Campoy. Tracking of Unicycle Robots Using Event-Based MPC With Adaptive Prediction Horizon. IEEE/ASME Transactions on Mechatronics. 2019; 25 (2):739-749.

Chicago/Turabian Style

Zhongqi Sun; Yuanqing Xia; Li Dai; Pascual Campoy. 2019. "Tracking of Unicycle Robots Using Event-Based MPC With Adaptive Prediction Horizon." IEEE/ASME Transactions on Mechatronics 25, no. 2: 739-749.

Journal article
Published: 04 November 2019 in Sensors

Deep- and reinforcement-learning techniques have increasingly required large sets of real data to achieve stable convergence and generalization in the context of image-recognition, object-detection, or motion-control strategies. On this subject, the research community lacks robust approaches for overcoming the unavailability of extensive real-world data by means of realistic synthetic-information and domain-adaptation techniques. In this work, synthetic-learning strategies have been used for the vision-based autonomous following of a noncooperative multirotor. The complete maneuver was learned with synthetic images and high-dimensional, low-level continuous robot states, with deep- and reinforcement-learning techniques for object detection and motion control, respectively. A novel motion-control strategy for object following is introduced in which the camera gimbal movement is coupled with the multirotor motion during the following maneuver. The results confirm that the presented framework can be used to deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, providing proper results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights).

ACS Style

Alejandro Rodriguez-Ramos; Adrian Alvarez-Fernandez; Hriday Bavle; Pascual Campoy; Jonathan P. How. Vision-Based Multirotor Following Using Synthetic Learning Techniques. Sensors 2019, 19, 4794.

AMA Style

Alejandro Rodriguez-Ramos, Adrian Alvarez-Fernandez, Hriday Bavle, Pascual Campoy, Jonathan P. How. Vision-Based Multirotor Following Using Synthetic Learning Techniques. Sensors. 2019; 19 (21):4794.

Chicago/Turabian Style

Alejandro Rodriguez-Ramos; Adrian Alvarez-Fernandez; Hriday Bavle; Pascual Campoy; Jonathan P. How. 2019. "Vision-Based Multirotor Following Using Synthetic Learning Techniques." Sensors 19, no. 21: 4794.

Research article
Published: 17 October 2018 in International Journal of Micro Air Vehicles

The lack of redundant attitude sensors represents a considerable yet common vulnerability in many low-cost unmanned aerial vehicles. In addition to the use of attitude sensors, exploiting the horizon as a visual reference for attitude control is part of human pilots’ training. For this reason, and given the desirable properties of image sensors, quite a lot of research has been conducted proposing the use of vision sensors for horizon detection in order to obtain redundant attitude estimation onboard unmanned aerial vehicles. However, atmospheric and illumination conditions may hinder the operability of visible light image sensors, or even make their use impractical, such as during the night. Thermal infrared image sensors have a much wider range of operation conditions and their price has greatly decreased during the last years, becoming an alternative to visible spectrum sensors in certain operation scenarios. In this paper, two attitude estimation methods are proposed. The first method consists of a novel approach to estimate the line that best fits the horizon in a thermal image. The resulting line is then used to estimate the pitch and roll angles using an infinite horizon line model. The second method uses deep learning to predict attitude angles using raw pixel intensities from a thermal image. For this, a novel Convolutional Neural Network architecture has been trained using measurements from an inertial navigation system. Both methods presented are proven to be valid for redundant attitude estimation, providing RMS errors below 1.7° and running at up to 48 Hz, depending on the chosen method, the input image resolution and the available computational capabilities.
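The first method's geometry can be sketched as follows: fit a line to horizon pixels, read roll from its slope and pitch from its vertical offset at the optical axis under the infinite horizon line model. The intrinsics, sign conventions, and synthetic horizon are assumptions for the example, not the paper's setup.

```python
import numpy as np

def attitude_from_horizon(pts, fx, cx, cy):
    # least-squares fit v = a*u + b to the horizon pixels (u, v)
    u, v = pts[:, 0], pts[:, 1]
    a, b = np.polyfit(u, v, 1)
    roll = -np.arctan(a)                   # image slope -> roll (sign is a convention)
    v_centre = a * cx + b                  # horizon height at the optical axis
    pitch = np.arctan2(v_centre - cy, fx)  # vertical offset -> pitch (assumed sign)
    return np.degrees(roll), np.degrees(pitch)

# synthetic horizon: 5 deg of roll, 30 px below the image centre
u = np.linspace(0.0, 640.0, 50)
v = np.tan(np.radians(-5.0)) * (u - 320.0) + 240.0 + 30.0
r, p = attitude_from_horizon(np.column_stack([u, v]), fx=500.0, cx=320.0, cy=240.0)
print(f"roll {r:.1f} deg, pitch {p:.1f} deg")
```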

ACS Style

Adrian Carrio; Hriday Bavle; Pascual Campoy. Attitude estimation using horizon detection in thermal images. International Journal of Micro Air Vehicles 2018, 10, 352-361.

AMA Style

Adrian Carrio, Hriday Bavle, Pascual Campoy. Attitude estimation using horizon detection in thermal images. International Journal of Micro Air Vehicles. 2018; 10 (4):352-361.

Chicago/Turabian Style

Adrian Carrio; Hriday Bavle; Pascual Campoy. 2018. "Attitude estimation using horizon detection in thermal images." International Journal of Micro Air Vehicles 10, no. 4: 352-361.

Conference paper
Published: 01 October 2018 in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

In this paper we propose a particle filter localization approach, based on stereo visual odometry (VO) and semantic information from indoor environments, for mini-aerial robots. The prediction stage of the particle filter is performed using the 3D pose of the aerial robot estimated by the stereo VO algorithm. This predicted 3D pose is updated using inertial as well as semantic measurements. The algorithm processes semantic measurements in two phases; firstly, a pre-trained deep learning (DL) based object detector is used for real time object detections in the RGB spectrum. Secondly, from the corresponding 3D point clouds of the detected objects, we segment their dominant horizontal plane and estimate their relative position, also augmenting a prior map with new detections. The augmented map is then used in order to obtain a drift free pose estimate of the aerial robot. We validate our approach in several real flight experiments where we compare it against ground truth and a state of the art visual SLAM approach.
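The predict/update cycle described here can be sketched for a single semantic landmark: particles are propagated with the VO pose increment and reweighted by the likelihood of the measured range to a mapped object. The 2D state, noise levels, and range-only measurement model are simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, delta, sigma):
    # propagate every particle with the stereo-VO pose increment plus noise
    return particles + delta + rng.normal(0.0, sigma, particles.shape)

def update(particles, weights, landmark, z, sigma_z):
    # reweight by the Gaussian likelihood of the measured range to a mapped object
    expected = np.linalg.norm(particles - landmark, axis=1)
    w = weights * np.exp(-0.5 * ((expected - z) / sigma_z) ** 2)
    return w / w.sum()

landmark = np.array([2.0, 0.0])             # mapped semantic object (2D for brevity)
particles = rng.uniform(-1.0, 1.0, (500, 2))
weights = np.full(500, 1.0 / 500)
particles = predict(particles, delta=np.array([0.5, 0.0]), sigma=0.05)
weights = update(particles, weights, landmark, z=1.5, sigma_z=0.1)
estimate = np.average(particles, axis=0, weights=weights)
print(estimate.round(2))
```

After one update the weighted estimate collapses onto the ring of poses consistent with the measured 1.5 m range to the landmark.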

ACS Style

Hriday Bavle; Stephan Manthe; Paloma de la Puente; Alejandro Rodriguez-Ramos; Carlos Sampedro; Pascual Campoy. Stereo Visual Odometry and Semantics based Localization of Aerial Robots in Indoor Environments. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018, 1018-1023.

AMA Style

Hriday Bavle, Stephan Manthe, Paloma de la Puente, Alejandro Rodriguez-Ramos, Carlos Sampedro, Pascual Campoy. Stereo Visual Odometry and Semantics based Localization of Aerial Robots in Indoor Environments. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018:1018-1023.

Chicago/Turabian Style

Hriday Bavle; Stephan Manthe; Paloma de la Puente; Alejandro Rodriguez-Ramos; Carlos Sampedro; Pascual Campoy. 2018. "Stereo Visual Odometry and Semantics based Localization of Aerial Robots in Indoor Environments." 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): 1018-1023.

Conference paper
Published: 01 October 2018 in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Deep learning techniques for motion control have recently improved qualitatively, since the successful application of Deep Q-Learning to the continuous action domain in Atari-like games. Based on these ideas, the Deep Deterministic Policy Gradients (DDPG) algorithm was able to provide impressive results in continuous state and action domains, which are closely linked to most robotics-related tasks. In this paper, a vision-based autonomous multirotor landing maneuver on top of a moving platform is presented. The behaviour has been completely learned in simulation, without prior human knowledge, by means of deep reinforcement learning techniques. Since the multirotor is controlled in attitude, no high-level state estimation is required. The complete behaviour has been trained with continuous action and state spaces and has provided proper results (landing at a maximum velocity of 2 m/s). Furthermore, it has been validated in a wide variety of conditions, for both simulated and real-flight scenarios, using a low-cost, lightweight, out-of-the-box consumer multirotor.

ACS Style

Alejandro Rodriguez-Ramos; Carlos Sampedro; Hriday Bavle; Ignacio Gil Moreno; Pascual Campoy. A Deep Reinforcement Learning Technique for Vision-Based Autonomous Multirotor Landing on a Moving Platform. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018, 1010-1017.

AMA Style

Alejandro Rodriguez-Ramos, Carlos Sampedro, Hriday Bavle, Ignacio Gil Moreno, Pascual Campoy. A Deep Reinforcement Learning Technique for Vision-Based Autonomous Multirotor Landing on a Moving Platform. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018:1010-1017.

Chicago/Turabian Style

Alejandro Rodriguez-Ramos; Carlos Sampedro; Hriday Bavle; Ignacio Gil Moreno; Pascual Campoy. 2018. "A Deep Reinforcement Learning Technique for Vision-Based Autonomous Multirotor Landing on a Moving Platform." 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): 1010-1017.

Conference paper
Published: 01 October 2018 in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Obstacle avoidance is a key feature for safe Unmanned Aerial Vehicle (UAV) navigation. While solutions have been proposed for static obstacle avoidance, systems enabling avoidance of dynamic objects, such as drones, are hard to implement due to the detection range and field-of-view (FOV) requirements, as well as the constraints for integrating such systems on-board small UAVs. In this work, a dataset of 6k synthetic depth maps of drones has been generated and used to train a state-of-the-art deep learning-based drone detection model. While many sensing technologies can only provide relative altitude and azimuth of an obstacle, our depth map-based approach enables full 3D localization of the obstacle. This is extremely useful for collision avoidance, as 3D localization of detected drones is key to perform efficient collision-free path planning. The proposed detection technique has been validated in several real depth map sequences, with multiple types of drones flying at up to 2 m/s, achieving an average precision of 98.7 %, an average recall of 74.7 % and a record detection range of 9.5 meters.
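The full 3D localization the abstract highlights follows from back-projecting a detection bounding box with the pinhole model and a robust depth inside the box; the intrinsics and the synthetic depth map below are made-up values, not the paper's sensor.

```python
import numpy as np

def localize_detection(depth, box, fx, fy, cx, cy):
    # back-project the detection box: the median depth inside the box gives
    # a robust range, the box centre gives the bearing (pinhole model)
    u0, v0, u1, v1 = box
    patch = depth[v0:v1, u0:u1]
    z = np.median(patch[patch > 0])
    u_c, v_c = (u0 + u1) / 2.0, (v0 + v1) / 2.0
    return np.array([(u_c - cx) * z / fx, (v_c - cy) * z / fy, z])

depth = np.zeros((480, 640))
depth[200:240, 300:360] = 4.0               # synthetic drone 4 m away
p = localize_detection(depth, (300, 200, 360, 240),
                       fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(p.round(2))
```

The median makes the range estimate insensitive to background pixels that leak into the box, which matters for the downstream collision-free planning.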

ACS Style

Adrian Carrio; Sai Vemprala; Andres Ripoll; Srikanth Saripalli; Pascual Campoy. Drone Detection Using Depth Maps. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018, 1034-1037.

AMA Style

Adrian Carrio, Sai Vemprala, Andres Ripoll, Srikanth Saripalli, Pascual Campoy. Drone Detection Using Depth Maps. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018:1034-1037.

Chicago/Turabian Style

Adrian Carrio; Sai Vemprala; Andres Ripoll; Srikanth Saripalli; Pascual Campoy. 2018. "Drone Detection Using Depth Maps." 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): 1034-1037.

Conference paper
Published: 01 October 2018 in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Navigation in unknown indoor environments with fast collision-avoidance capabilities is an ongoing research topic. Traditional motion planning algorithms rely on precise maps of the environment, where re-adapting a generated path can be highly demanding in terms of computational cost. In this paper, we present a fast reactive navigation algorithm using Deep Reinforcement Learning applied to multirotor aerial robots. Taking as input the 2D laser range measurements and the relative position of the aerial robot with respect to the desired goal, the proposed algorithm is successfully trained in a Gazebo-based simulation scenario by adopting an artificial potential field formulation. A thorough evaluation of the trained agent has been carried out in both simulated and real indoor scenarios, showing the appropriate reactive navigation behavior of the agent in the presence of static and dynamic obstacles.
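The artificial potential field formulation used to shape the training signal can be sketched as a reward equal to the negative potential: quadratic attraction towards the goal plus repulsion inside an influence radius around each obstacle. The gains and radius here are illustrative; the paper's exact formulation may differ.

```python
import numpy as np

def apf_reward(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    # negative artificial potential: quadratic attraction to the goal,
    # repulsion only inside the influence radius d0 of each obstacle
    u = 0.5 * k_att * np.linalg.norm(goal - pos) ** 2
    for obs_pos in obstacles:
        d = np.linalg.norm(obs_pos - pos)
        if d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return -u

goal = np.array([5.0, 0.0])
obs = [np.array([2.0, 0.2])]
far = apf_reward(np.array([0.0, 0.0]), goal, obs)
near = apf_reward(np.array([4.0, 1.0]), goal, obs)
print(far < near)  # closer to the goal -> higher (less negative) reward
```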

ACS Style

Carlos Sampedro; Hriday Bavle; Alejandro Rodriguez-Ramos; Paloma de la Puente; Pascual Campoy. Laser-Based Reactive Navigation for Multirotor Aerial Robots using Deep Reinforcement Learning. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018, 1024-1031.

AMA Style

Carlos Sampedro, Hriday Bavle, Alejandro Rodriguez-Ramos, Paloma de la Puente, Pascual Campoy. Laser-Based Reactive Navigation for Multirotor Aerial Robots using Deep Reinforcement Learning. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018:1024-1031.

Chicago/Turabian Style

Carlos Sampedro; Hriday Bavle; Alejandro Rodriguez-Ramos; Paloma de la Puente; Pascual Campoy. 2018. "Laser-Based Reactive Navigation for Multirotor Aerial Robots using Deep Reinforcement Learning." 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): 1024-1031.

Journal article
Published: 06 September 2018 in Aerospace

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on the vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimation even in the presence of several static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera using the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop using the flight altitude estimated by our proposed method, in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics called Aerostack.
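The two stages can be caricatured in a few lines: bin the points' vertical distances into candidate horizontal planes, then take the farthest well-supported plane as the ground so objects standing on it do not bias the altitude. The bin size, support threshold, and synthetic cloud are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def altitude_from_cloud(points, bin_size=0.05, min_support=50):
    # histogram the vertical distances below the sensor and keep bins with
    # enough points (candidate horizontal planes); the farthest supported
    # plane is taken as the ground, so a box top does not bias the altitude
    bins = np.round(points[:, 2] / bin_size).astype(int)
    ids, counts = np.unique(bins, return_counts=True)
    return ids[counts >= min_support].max() * bin_size

rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-2.0, 2.0, (800, 2)), np.full(800, 1.5)])
box_top = np.column_stack([rng.uniform(0.0, 0.5, (120, 2)), np.full(120, 1.2)])
alt = altitude_from_cloud(np.vstack([floor, box_top]))
print(round(alt, 2))
```

Even though 120 points belong to a box 0.3 m above the floor, the estimate stays on the ground plane at 1.5 m.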

ACS Style

Hriday Bavle; Jose Luis Sanchez-Lopez; Paloma De La Puente; Alejandro Rodriguez-Ramos; Carlos Sampedro; Pascual Campoy. Fast and Robust Flight Altitude Estimation of Multirotor UAVs in Dynamic Unstructured Environments Using 3D Point Cloud Sensors. Aerospace 2018, 5, 94.

AMA Style

Hriday Bavle, Jose Luis Sanchez-Lopez, Paloma De La Puente, Alejandro Rodriguez-Ramos, Carlos Sampedro, Pascual Campoy. Fast and Robust Flight Altitude Estimation of Multirotor UAVs in Dynamic Unstructured Environments Using 3D Point Cloud Sensors. Aerospace. 2018; 5 (3):94.

Chicago/Turabian Style

Hriday Bavle; Jose Luis Sanchez-Lopez; Paloma De La Puente; Alejandro Rodriguez-Ramos; Carlos Sampedro; Pascual Campoy. 2018. "Fast and Robust Flight Altitude Estimation of Multirotor UAVs in Dynamic Unstructured Environments Using 3D Point Cloud Sensors." Aerospace 5, no. 3: 94.

Preprint
Published: 01 August 2018
ACS Style

Adrian Carrio; Sai Vemprala; Andres Ripoll; Srikanth Saripalli; Pascual Campoy. Drone Detection Using Depth Maps. 2018, 1.

AMA Style

Adrian Carrio, Sai Vemprala, Andres Ripoll, Srikanth Saripalli, Pascual Campoy. Drone Detection Using Depth Maps. 2018;1.

Chicago/Turabian Style

Adrian Carrio; Sai Vemprala; Andres Ripoll; Srikanth Saripalli; Pascual Campoy. 2018. "Drone Detection Using Depth Maps." 1.

Article
Published: 03 July 2018 in Journal of Intelligent & Robotic Systems

Search and Rescue (SAR) missions represent an important challenge in the robotics research field, as they usually involve highly variable scenarios which require a high level of autonomy and versatile decision-making capabilities. This challenge becomes even more relevant in the case of aerial robotic platforms owing to their limited payload and computational capabilities. In this paper, we present a fully-autonomous aerial robotic solution for executing complex SAR missions in unstructured indoor environments. The proposed system is based on the combination of a complete hardware configuration and a flexible system architecture which allows the execution of high-level missions in a fully unsupervised manner (i.e. without human intervention). In order to obtain flexible and versatile behaviors from the proposed aerial robot, several learning-based capabilities have been integrated for target recognition and interaction. The target recognition capability includes a supervised learning classifier based on a computationally-efficient Convolutional Neural Network (CNN) model trained for target/background classification, while the capability to interact with the target for rescue operations introduces a novel Image-Based Visual Servoing (IBVS) algorithm which integrates a recent deep reinforcement learning method named Deep Deterministic Policy Gradients (DDPG). In order to train the aerial robot for performing IBVS tasks, a reinforcement learning framework has been developed, which integrates a deep reinforcement learning agent (e.g. DDPG) with a Gazebo-based simulator for aerial robotics. The proposed system has been validated in a wide range of simulation flights, using Gazebo and PX4 Software-In-The-Loop, and in real flights in cluttered indoor environments, demonstrating the versatility of the proposed system in complex SAR missions.

ACS Style

Carlos Sampedro; Alejandro Rodriguez-Ramos; Hriday Bavle; Adrian Carrio; Paloma De La Puente; Pascual Campoy. A Fully-Autonomous Aerial Robot for Search and Rescue Applications in Indoor Environments using Learning-Based Techniques. Journal of Intelligent & Robotic Systems 2018, 95, 601-627.

AMA Style

Carlos Sampedro, Alejandro Rodriguez-Ramos, Hriday Bavle, Adrian Carrio, Paloma De La Puente, Pascual Campoy. A Fully-Autonomous Aerial Robot for Search and Rescue Applications in Indoor Environments using Learning-Based Techniques. Journal of Intelligent & Robotic Systems. 2018; 95 (2):601-627.

Chicago/Turabian Style

Carlos Sampedro; Alejandro Rodriguez-Ramos; Hriday Bavle; Adrian Carrio; Paloma De La Puente; Pascual Campoy. 2018. "A Fully-Autonomous Aerial Robot for Search and Rescue Applications in Indoor Environments using Learning-Based Techniques." Journal of Intelligent & Robotic Systems 95, no. 2: 601-627.

Article
Published: 03 July 2018 in Journal of Intelligent & Robotic Systems

The use of multi-rotor UAVs in industrial and civil applications has been extensively encouraged by the rapid innovation in all the technologies involved. In particular, deep learning techniques for motion control have recently taken a major qualitative step since the successful application of Deep Q-Learning to Atari-like games. Building on these ideas, the Deep Deterministic Policy Gradients (DDPG) algorithm was able to provide outstanding results with continuous state and action domains, which are a requirement in most robotics-related tasks. In this context, the research community still lacks an integration of realistic simulation systems with the reinforcement learning paradigm that would enable the application of deep reinforcement learning algorithms to the robotics field. In this paper, a versatile Gazebo-based reinforcement learning framework has been designed and validated with a continuous UAV landing task. The UAV landing maneuver on a moving platform has been solved by means of the novel DDPG algorithm, which has been integrated into our reinforcement learning framework. Several experiments have been performed in a wide variety of conditions for both simulated and real flights, demonstrating the generality of the approach. As an indirect result, a powerful workflow for robotics has been validated, where robots can learn in simulation and perform properly in real operation environments. To the best of the authors' knowledge, this is the first work that addresses the continuous UAV landing maneuver on a moving platform by means of a state-of-the-art deep reinforcement learning algorithm, trained in simulation and tested in real flights.
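One characteristic ingredient of the DDPG algorithm named above is that it keeps slowly-moving target copies of the actor and critic, updated by Polyak averaging. A minimal sketch of that soft-update step follows; the parameter shapes and the value of tau are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def soft_update(target_params, source_params, tau=0.005):
    """DDPG target-network update (Polyak averaging):
    theta_target <- tau * theta + (1 - tau) * theta_target.
    Both arguments are lists of per-layer parameter arrays."""
    return [(1.0 - tau) * t + tau * s
            for t, s in zip(target_params, source_params)]

# Toy "networks": a single parameter vector per layer.
target = [np.zeros(3)]
source = [np.ones(3)]
target = soft_update(target, source, tau=0.1)
```

Keeping tau small makes the target networks change slowly, which stabilizes the critic's bootstrapped targets during training.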

ACS Style

Alejandro Rodriguez-Ramos; Carlos Sampedro; Hriday Bavle; Paloma de la Puente; Pascual Campoy. A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a Moving Platform. Journal of Intelligent & Robotic Systems 2018, 93, 351-366.

AMA Style

Alejandro Rodriguez-Ramos, Carlos Sampedro, Hriday Bavle, Paloma de la Puente, Pascual Campoy. A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a Moving Platform. Journal of Intelligent & Robotic Systems. 2018; 93 (1-2):351-366.

Chicago/Turabian Style

Alejandro Rodriguez-Ramos; Carlos Sampedro; Hriday Bavle; Paloma de la Puente; Pascual Campoy. 2018. "A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a Moving Platform." Journal of Intelligent & Robotic Systems 93, no. 1-2: 351-366.

Journal article
Published: 01 May 2018 in Engineering Applications of Artificial Intelligence

A large amount of video data is acquired during manned inspection flights of electric power lines. This data is analyzed by expert human inspectors to detect faults in the power line infrastructure and prepare the inspection reports. This process is extremely time-consuming, very expensive, and prone to human error. In this paper, we present PoLIS: the Power Line Inspection Software, which has been developed with the objective of assisting the analysis of the data acquired during inspection flights. PoLIS is based on the cooperation between computer vision and machine learning techniques to automatically process video sequences acquired during inspection flights, resulting in a set of representative images per electric tower which we call Key Frames. These representative images can then be used for inspection purposes, leading to a drastic reduction of the human operators' workload. At the core of the strategy lies an electric tower detector, which is in charge of estimating the location of the towers within the images based on the combination of a sliding window search technique and a supervised classifier. The location of the tower is then tracked using a tracking-by-registration algorithm based on direct methods, estimating the position of the tower in different images. Finally, different criteria are applied for deciding whether or not an image corresponds to a Key Frame. Extensive evaluation of the proposed strategy is conducted using videos acquired during manned helicopter inspections. The videos constituting this database contain several thousand frames representing both medium and high voltage power transmission lines in the infrared (IR) and visible spectra. The obtained results show that the proposed strategy can reduce the large amount of data present in the inspection videos to a few Key Frames for each tower. It is also demonstrated that the learning-based approach proposed in PoLIS is appropriate for detecting electric towers, a process which is made faster and more robust by coupling it with a tower tracking algorithm. A Graphical User Interface allowing the application of PoLIS to user-provided videos is also presented in this paper, illustrating the whole process and the automated generation of an inspection report.
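The sliding-window search described in the abstract can be sketched as follows. The stand-in score function (mean intensity) and the threshold are illustrative assumptions standing in for PoLIS's actual supervised classifier.

```python
import numpy as np

def sliding_window_detect(image, win, stride, score_fn, thresh=0.5):
    """Scan the image with a fixed-size square window; keep the
    top-left (row, col) corner of every window whose classifier
    score exceeds the threshold."""
    h, w = image.shape
    hits = []
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            patch = image[r:r + win, c:c + win]
            if score_fn(patch) > thresh:
                hits.append((r, c))
    return hits

# Toy image with a bright region standing in for a tower; a real
# system would score patches with a trained supervised classifier.
img = np.zeros((8, 8))
img[4:8, 4:8] = 1.0
hits = sliding_window_detect(img, win=4, stride=4, score_fn=np.mean)
```

Since this exhaustive scan is expensive, coupling the detector with a tracker (as the paper does) lets the search run only intermittently while the tracker follows the tower between detections.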

ACS Style

Carol Martinez; Carlos Sampedro; Aneesh Chauhan; Jean François Collumeau; Pascual Campoy. The Power Line Inspection Software (PoLIS): A versatile system for automating power line inspection. Engineering Applications of Artificial Intelligence 2018, 71, 293-314.

AMA Style

Carol Martinez, Carlos Sampedro, Aneesh Chauhan, Jean François Collumeau, Pascual Campoy. The Power Line Inspection Software (PoLIS): A versatile system for automating power line inspection. Engineering Applications of Artificial Intelligence. 2018; 71:293-314.

Chicago/Turabian Style

Carol Martinez; Carlos Sampedro; Aneesh Chauhan; Jean François Collumeau; Pascual Campoy. 2018. "The Power Line Inspection Software (PoLIS): A versatile system for automating power line inspection." Engineering Applications of Artificial Intelligence 71: 293-314.

Conference paper
Published: 01 January 2018 in Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
ACS Style

Stephan Manthe; Adrian Carrio; Frank Neuhaus; Pascual Campoy; Dietrich Paulus. Combining 2D to 2D and 3D to 2D Point Correspondences for Stereo Visual Odometry. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications 2018, 455-463.

AMA Style

Stephan Manthe, Adrian Carrio, Frank Neuhaus, Pascual Campoy, Dietrich Paulus. Combining 2D to 2D and 3D to 2D Point Correspondences for Stereo Visual Odometry. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. 2018: 455-463.

Chicago/Turabian Style

Stephan Manthe; Adrian Carrio; Frank Neuhaus; Pascual Campoy; Dietrich Paulus. 2018. "Combining 2D to 2D and 3D to 2D Point Correspondences for Stereo Visual Odometry." Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications: 455-463.