Author

A. Cesetti

Bio: A. Cesetti is an academic researcher from Marche Polytechnic University. The author has contributed to research on topics including mobile robots and machine vision, has an h-index of 7, and has co-authored 12 publications receiving 346 citations.

Papers
Journal Article
TL;DR: Proposes a vision-based approach for UAV guidance and safe landing, robust to occlusions and light variations, together with two algorithms for safe landing area detection based on a feature optical flow analysis.
Abstract: In this paper a vision-based approach for guidance and safe landing of an Unmanned Aerial Vehicle (UAV) is proposed. The UAV is required to navigate from an initial to a final position in a partially known environment. The guidance system allows a remote user to define target areas from a high-resolution aerial or satellite image to determine either the waypoints of the navigation trajectory or the landing area. A feature-based image-matching algorithm finds the natural landmarks and gives feedback to an onboard, hierarchical, behaviour-based control system for autonomous navigation and landing. Two algorithms for safe landing area detection are also proposed, based on a feature optical flow analysis. The main novelty is in the vision-based architecture, extensively tested on a helicopter, which in particular does not require any artificial landmark (e.g., a helipad). Results show the appropriateness of the vision-based approach, which is robust to occlusions and light variations.
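The two safe-landing-area algorithms are not detailed in this listing; as a rough illustration of the underlying idea only, the following Python/OpenCV sketch scores a candidate area by the variance of Lucas-Kanade feature-flow magnitudes, assuming that a flat, obstacle-free surface yields a smooth flow field. The function name and parameter values are hypothetical, not the paper's.

```python
# Hedged sketch: safe-landing-area scoring from feature optical flow.
# Not the paper's two algorithms; the variance-based roughness measure
# and all thresholds are illustrative assumptions.
import cv2
import numpy as np

def landing_area_score(prev_gray, curr_gray, max_corners=400):
    """Track corners between two frames and return the variance of the
    flow magnitudes; a low value suggests a flat, landable surface."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=7)
    if corners is None:
        return None
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   corners, None)
    good_old = corners[status.flatten() == 1]
    good_new = next_pts[status.flatten() == 1]
    if len(good_new) < 10:          # too few tracked features to judge
        return None
    flow = (good_new - good_old).reshape(-1, 2)
    magnitudes = np.linalg.norm(flow, axis=1)
    return float(np.var(magnitudes))  # lower = smoother flow field
```

A low score would mark the area as a landing candidate; the paper itself should be consulted for the actual criteria.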

215 citations

Journal Article
TL;DR: This paper proposes vision-based techniques for localizing a UAV based only on visual matching between the current view and available georeferenced satellite images; results show performance sufficient compared with common GPS systems, with good performance in altitude estimation as well.
Abstract: The combination of photogrammetric aerial and terrestrial recording methods can provide new opportunities for photogrammetric applications. A UAV (Unmanned Aerial Vehicle), in our case a helicopter system, can cover both the aerial and quasi-terrestrial image acquisition methods. A UAV can be equipped with an on-board high-resolution camera and a priori knowledge of the operating area in which to perform photogrammetric tasks. In this general scenario our paper proposes vision-based techniques for localizing a UAV. Only natural landmarks provided by a feature tracking algorithm are considered, without the help of visual beacons or landmarks with known positions. The novel idea is to perform global localization, position tracking and localization failure recovery (kidnapping) based only on visual matching between the current view and available georeferenced satellite images. The matching is based on SIFT features, and the system estimates the position of the UAV and its altitude on the basis of the reference image. The vision system replaces the GPS signal by combining position information from visual odometry and georeferenced imagery. Georeferenced satellite or aerial images must be available on board beforehand or downloaded during the flight. The growing availability of high-resolution satellite images (e.g., provided by Google Earth or other local information sources) makes this topic very interesting and timely. Experiments with both synthetic (i.e., taken from satellites or datasets and pre-processed) and real-world images have been performed to test the accuracy and the robustness of our method. Results show sufficient performance compared with common GPS systems, and also good performance in altitude estimation, although in this last case the results are only preliminary.
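As a minimal sketch of the matching step described above (using OpenCV's stock SIFT implementation, not the authors' pipeline), the current onboard view can be located inside a georeferenced satellite image via SIFT correspondences and a RANSAC homography. The ratio-test threshold and function name are illustrative assumptions.

```python
# Hedged sketch: localize the current view inside a georeferenced
# satellite image via SIFT matching + RANSAC homography.
import cv2
import numpy as np

def locate_view(view_gray, satellite_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp_v, des_v = sift.detectAndCompute(view_gray, None)
    kp_s, des_s = sift.detectAndCompute(satellite_gray, None)
    if des_v is None or des_s is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe ratio test to keep only distinctive correspondences
    matches = [p[0] for p in matcher.knnMatch(des_v, des_s, k=2)
               if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(matches) < 4:
        return None
    src = np.float32([kp_v[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = view_gray.shape
    center = np.float32([[[w / 2, h / 2]]])
    return cv2.perspectiveTransform(center, H)[0, 0]  # pixel in satellite map
```

The returned pixel coordinate can then be converted to latitude/longitude through the satellite image's georeference metadata.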

54 citations

Journal Article
TL;DR: A framework for simulation and testing of UAVs in cooperative scenarios is presented, based on modularity and stratification in different specialized layers, which allows easy switching from simulated to real environments, thus reducing testing and debugging times, especially in a training context.
Abstract: Today, Unmanned Aerial Vehicles (UAVs) have deeply modified the concepts of surveillance, search and rescue, and other air operations; missions are in most cases performed by a fleet of cooperating autonomous and heterogeneous vehicles. These systems are highly complex, and it becomes fundamental to simulate every mission stage in order to exploit the benefits of simulation, such as repeatability, modularity and low cost. In this paper a framework for simulation and testing of UAVs in cooperative scenarios is presented. The framework, based on modularity and stratification in different specialized layers, allows easy switching from simulated to real environments, thus reducing testing and debugging times, especially in a training context. Results obtained using the proposed framework on some test cases are also reported.
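The abstract does not spell out the layering, but the switching idea can be illustrated with a minimal sketch: higher mission layers talk to an abstract vehicle interface, so a simulated backend and the real one are interchangeable. All class names below are hypothetical.

```python
# Hedged sketch of the layered sim-to-real switch; not the paper's framework.
from abc import ABC, abstractmethod

class VehicleLink(ABC):
    """Boundary between mission logic and a (simulated or real) vehicle."""
    @abstractmethod
    def send_setpoint(self, x: float, y: float, z: float) -> None: ...
    @abstractmethod
    def read_state(self) -> dict: ...

class SimulatedLink(VehicleLink):
    def __init__(self):
        self.state = {"x": 0.0, "y": 0.0, "z": 0.0}
    def send_setpoint(self, x, y, z):
        self.state = {"x": x, "y": y, "z": z}  # ideal, instantaneous model
    def read_state(self):
        return dict(self.state)

class MissionLayer:
    """Identical mission code runs against SimulatedLink or a real link."""
    def __init__(self, link: VehicleLink):
        self.link = link
    def fly_waypoints(self, waypoints):
        for x, y, z in waypoints:
            self.link.send_setpoint(x, y, z)

mission = MissionLayer(SimulatedLink())
mission.fly_waypoints([(0.0, 0.0, 10.0), (50.0, 20.0, 10.0)])
print(mission.link.read_state())
```

Swapping `SimulatedLink` for a real telemetry link leaves `MissionLayer` untouched, which is the debugging and training benefit the abstract describes.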

24 citations

Proceedings Article
24 Jun 2009
TL;DR: Results show the appropriateness of the vision-based approach that does not require any artificial landmark and is quite robust to occlusions, light variations and seasonal changes (e.g., brown or green leaves).
Abstract: This paper presents the design and implementation of a vision-based navigation and landing algorithm for an autonomous helicopter. The vision system allows target areas to be defined from a high-resolution aerial or satellite image, to determine the waypoints of the navigation trajectory or the landing area. The helicopter is required to navigate from an initial position to a final position in a partially known environment using GPS and vision, to locate a landing target (a helipad of a known shape or a natural landmark) and to land on it. The vision system, using a feature-based image-matching algorithm, finds the area and gives feedback to the control system for autonomous landing. Vision is used for accurate target detection, recognition and tracking. The helicopter updates its landing target parameters by means of vision and uses an onboard behavior-based controller to follow a path to the landing site. Results show the appropriateness of the vision-based approach, which does not require any artificial landmark (e.g., a helipad) and is quite robust to occlusions, light variations and seasonal changes (e.g., brown or green leaves).

22 citations

Proceedings Article
23 Jun 2010
TL;DR: The design and implementation of a control system for autonomous navigation based on a Service Oriented Architecture (SOA), supporting a mobile robot suitable for industrial applications; testing shows good real-time performance.
Abstract: This paper presents the design and implementation of a control system for autonomous navigation, based on a Service Oriented Architecture (SOA) and supporting a mobile robot suitable for industrial applications. The robot is required to perform generic high-level tasks in indoor structured environments. The control architecture, developed within the Microsoft Robotics Developer Studio (MRDS), makes it possible to encapsulate the controller functionalities as a set of services that interact and exchange data with one another. This approach guarantees flexibility, scalability and reliability. The accurate testing, carried out in simulated and real environments, shows good real-time performance.
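MRDS services are written in C#, so the following is only a language-neutral sketch (in Python) of the encapsulation idea: each controller functionality runs as an independent service exchanging messages over queues. The service name and message contents are invented for illustration and are not the paper's.

```python
# Hedged sketch of controller functionalities as message-passing services;
# not MRDS code. Names and message formats are hypothetical.
import queue
import threading

class Service(threading.Thread):
    """A controller functionality with an input and an output channel."""
    def __init__(self, inbox: queue.Queue, outbox: queue.Queue):
        super().__init__(daemon=True)
        self.inbox, self.outbox = inbox, outbox

class Localization(Service):
    def run(self):
        while True:
            scan = self.inbox.get()              # raw sensor message
            pose = {"x": sum(scan) / len(scan)}  # placeholder estimate
            self.outbox.put(pose)                # publish to navigation

sensor_to_loc, loc_to_nav = queue.Queue(), queue.Queue()
Localization(sensor_to_loc, loc_to_nav).start()
sensor_to_loc.put([1.0, 2.0, 3.0])
print(loc_to_nav.get())                          # {'x': 2.0}
```

Decoupling services through channels like this is what buys the flexibility and scalability the abstract claims: services can be added, replaced, or moved between machines without changing their peers.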

21 citations


Cited by
Journal Article
TL;DR: The main objective of this paper is to present a comprehensive survey of RUAS research that captures all seminal works and milestones in each GNC area, with a particular focus on practical methods and technologies that have been demonstrated in flight tests.
Abstract: Recently, there has been growing interest in developing unmanned aircraft systems (UAS) with advanced onboard autonomous capabilities. This paper describes the current state of the art in autonomous rotorcraft UAS (RUAS) and provides a detailed literature review of the last two decades of active research on RUAS. Three functional technology areas are identified as the core components of an autonomous RUAS. Guidance, navigation, and control (GNC) have received much attention from the research community, and have dominated the UAS literature from the nineties until now. This paper first presents the main research groups involved in the development of GNC systems for RUAS. Then it describes the development of a framework that provides standard definitions and metrics characterizing and measuring the autonomy level of a RUAS using GNC aspects. This framework is intended to facilitate the understanding and the organization of this survey paper, but it can also serve as a common reference for the UAS community. The main objective of this paper is to present a comprehensive survey of RUAS research that captures all seminal works and milestones in each GNC area, with a particular focus on practical methods and technologies that have been demonstrated in flight tests. These algorithms and systems have been classified into different categories and classes based on the autonomy level they provide and the algorithmic approach used. Finally, the paper discusses the RUAS literature in general and highlights challenges that need to be addressed in developing autonomous systems for unmanned rotorcraft. © 2012 Wiley Periodicals, Inc.

605 citations

Journal Article
TL;DR: The SIFT descriptor has been proven to be very useful in practice for robust image matching and object recognition under real-world conditions and has also been extended from grey-level to colour images and from 2-D spatial images to 2+1-D spatio-temporal video.
Abstract: Scale Invariant Feature Transform (SIFT) is an image descriptor for image-based matching developed by David Lowe (1999, 2004). This descriptor, as well as related image descriptors, is used for a large number of purposes in computer vision related to point matching between different views of a 3-D scene and view-based object recognition. The SIFT descriptor is invariant to translations, rotations and scaling transformations in the image domain and robust to moderate perspective transformations and illumination variations. Experimentally, the SIFT descriptor has been proven to be very useful in practice for robust image matching and object recognition under real-world conditions. In its original formulation, the SIFT descriptor comprised a method for detecting interest points from a grey-level image, at which statistics of local gradient directions of image intensities were accumulated to give a summarizing description of the local image structures in a local neighbourhood around each interest point, with the intention that this descriptor should be used for matching corresponding interest points between different images. Later, the SIFT descriptor has also been applied on dense grids (dense SIFT), which has been shown to lead to better performance for tasks such as object categorization and texture classification. The SIFT descriptor has also been extended from grey-level to colour images and from 2-D spatial images to 2+1-D spatio-temporal video.
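A minimal sketch of the dense-SIFT variant mentioned above, using OpenCV: descriptors are computed on a regular grid of keypoints instead of at detected interest points. The grid step and keypoint size below are illustrative choices, not prescribed values.

```python
# Hedged sketch of dense SIFT: describe a regular grid rather than
# detected interest points. Step and size are illustrative assumptions.
import cv2

def dense_sift(gray, step=8, size=8.0):
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), size)
                 for y in range(step // 2, gray.shape[0], step)
                 for x in range(step // 2, gray.shape[1], step)]
    keypoints, descriptors = sift.compute(gray, keypoints)
    return keypoints, descriptors  # one 128-D descriptor per grid point
```

Because every image yields descriptors at the same grid locations, the output is convenient for the categorization and texture-classification tasks the abstract mentions.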

356 citations

Journal Article
04 Jan 2019
TL;DR: This paper aims to explore and analyze the existing studies in the literature related to the different approaches employed in coverage path planning problems, especially those using UAVs, and addresses simple geometric flight patterns and more complex grid-based solutions considering full and partial information about the area of interest.
Abstract: Coverage path planning consists of finding the route which covers every point of a certain area of interest. In recent times, Unmanned Aerial Vehicles (UAVs) have been employed in several application domains involving terrain coverage, such as surveillance, smart farming, photogrammetry, disaster management, civil security, and wildfire tracking, among others. This paper aims to explore and analyze the existing studies in the literature related to the different approaches employed in coverage path planning problems, especially those using UAVs. We address simple geometric flight patterns and more complex grid-based solutions considering full and partial information about the area of interest. The surveyed coverage approaches are classified according to a classical taxonomy, such as no decomposition, exact cellular decomposition, and approximate cellular decomposition. This review also contemplates different shapes of the area of interest, such as rectangular, concave and convex polygons. The performance metrics usually applied to evaluate the success of the coverage missions are also presented.
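The simplest of the geometric flight patterns surveyed, the back-and-forth (boustrophedon) sweep over a rectangular area, can be sketched in a few lines; the footprint parameter stands in for the sensor swath width and is an assumption of this sketch.

```python
# Hedged sketch of a boustrophedon coverage pattern over a rectangle.
def boustrophedon(width: float, height: float, footprint: float):
    """Return waypoints covering a width x height rectangle with passes
    spaced one sensor footprint apart, alternating sweep direction."""
    waypoints, y, left_to_right = [], footprint / 2, True
    while y < height:
        xs = (0.0, width) if left_to_right else (width, 0.0)
        waypoints += [(xs[0], y), (xs[1], y)]
        y += footprint
        left_to_right = not left_to_right
    return waypoints

print(boustrophedon(100.0, 30.0, 10.0))
# [(0.0, 5.0), (100.0, 5.0), (100.0, 15.0), (0.0, 15.0), (0.0, 25.0), (100.0, 25.0)]
```

The grid-based and cellular-decomposition methods in the survey generalize this idea to concave polygons and partial prior information.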

317 citations

Journal Article
TL;DR: The efficiency of the presented monocular vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.
Abstract: In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad, which consists of the letter "H" surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. The 5-DOF pose is then estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom, the yaw angle of the MAV, is estimated from the ellipse fitted to the letter "H". The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.
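A heavily simplified stand-in for the range part of this pose estimation (not the paper's projective-geometry recovery of the full 5-DOF pose): fit an ellipse to the segmented circular pad marking and estimate distance under a pinhole model, assuming a near fronto-parallel view. Function name and inputs are hypothetical.

```python
# Hedged, heavily simplified sketch: range to a circular landing pad from
# its fitted ellipse under a pinhole model. Assumes a near fronto-parallel
# view; the paper's full elliptic-projection pose recovery is not shown.
import cv2

def range_from_pad(mask, focal_px, pad_radius_m):
    """mask: binary image of the segmented circular pad marking."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    if len(contour) < 5:                   # fitEllipse needs >= 5 points
        return None
    (cx, cy), (major, minor), angle = cv2.fitEllipse(contour)
    radius_px = (major + minor) / 4.0      # mean semi-axis in pixels
    range_m = focal_px * pad_radius_m / radius_px
    return range_m, (cx, cy)               # distance and image-plane center
```

In the paper, the ellipse's full shape (not just its mean radius) is exploited, and the IMU gravity vector plus the "H" orientation complete the 6-DOF estimate.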

163 citations