Author

Giuseppe Loianno

Bio: Giuseppe Loianno is an academic researcher from New York University. The author has contributed to research in topics including computer science and inertial measurement units. The author has an h-index of 28 and has co-authored 94 publications receiving 2,243 citations. Previous affiliations of Giuseppe Loianno include the University of Pennsylvania and Information Technology University.

Papers published on a yearly basis

Papers
Journal ArticleDOI
01 Apr 2017
TL;DR: The use of smartphone-grade hardware and the small scale provides an inexpensive and practical solution for autonomous flight in indoor environments; extensive experimental results show aggressive flights through and around obstacles with large angular excursions and accelerations.
Abstract: We address the state estimation, control, and planning for aggressive flight with a 150 mm diameter, 250 g quadrotor equipped only with a single camera and an inertial measurement unit (IMU). The use of smartphone-grade hardware and the small scale provides an inexpensive and practical solution for autonomous flight in indoor environments. The key contributions of this paper are: 1) robust state estimation and control using only a monocular camera and an IMU at speeds of 4.5 m/s, accelerations of over 1.5 g, roll and pitch angles of up to 90°, and angular rates of up to 800°/s, without requiring any structure in the environment; 2) planning of dynamically feasible three-dimensional trajectories for slalom paths and flights through narrow windows; and 3) extensive experimental results showing aggressive flights through and around obstacles with large angular excursions and accelerations.
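The abstract quotes concrete speed and acceleration envelopes for the dynamically feasible trajectories. As a rough illustration of what such a feasibility check involves, here is a minimal Python sketch (not the authors' planner) that samples a candidate polynomial trajectory segment and tests it against limits taken from those figures; the coefficients and function names are purely illustrative.

```python
# Minimal sketch (not the authors' code): check a candidate polynomial
# trajectory segment against the speed/acceleration envelope quoted in the
# abstract (~4.5 m/s, ~1.5 g). Coefficients below are hypothetical.
import numpy as np

G = 9.81  # m/s^2

def feasible(coeffs_xyz, duration, v_max=4.5, a_max=1.5 * G, samples=200):
    """coeffs_xyz: three 1-D coefficient arrays (highest order first,
    numpy.polyval convention), one per axis."""
    t = np.linspace(0.0, duration, samples)
    vel = np.stack([np.polyval(np.polyder(c, 1), t) for c in coeffs_xyz])
    acc = np.stack([np.polyval(np.polyder(c, 2), t) for c in coeffs_xyz])
    speed = np.linalg.norm(vel, axis=0)
    accel = np.linalg.norm(acc, axis=0)
    return speed.max() <= v_max and accel.max() <= a_max

# Example: a gentle quadratic motion along x only (hypothetical segment).
cx = np.array([0.0, 0.0, 0.0, 0.5, 0.0, 0.0])
cy = np.zeros(6)
cz = np.zeros(6)
print(feasible([cx, cy, cz], duration=2.0))  # True: well inside the envelope
```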

275 citations

Journal ArticleDOI
TL;DR: This work verifies the possibility of self-stabilization of multi-MAV groups without an external global positioning system; deployment of the system in real-world scenarios realistically tests its operational constraints.
Abstract: This paper describes a complex system for the control of swarms of micro aerial vehicles (MAVs), in the literature also referred to as unmanned aerial vehicles (UAVs) or unmanned aerial systems (UASs), stabilized via onboard visual relative localization. The main purpose of this work is to verify the possibility of self-stabilization of multi-MAV groups without an external global positioning system. This approach enables the deployment of MAV swarms outside laboratory conditions, and it may be considered an enabling technique for utilizing fleets of MAVs in real-world scenarios. The proposed vision-based stabilization approach has been designed for numerous multi-UAV robotic applications (leader-follower UAV formation stabilization, UAV swarm stabilization and deployment in surveillance scenarios, and cooperative UAV sensory measurement). Deployment of the system in real-world scenarios realistically tests its operational constraints, which are set by the limited onboard sensing suites and processing capabilities. The performance of the presented approach (MAV control, motion planning, MAV stabilization, and trajectory planning) in multi-MAV applications has been validated by experimental results in indoor as well as in challenging outdoor environments (e.g., in windy conditions and in a former pit mine).
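The abstract mentions leader-follower formation stabilization driven by onboard relative localization. As a toy illustration only, and not the stabilization scheme from the paper, the sketch below shows a saturated proportional velocity command that keeps a follower at a desired offset from a leader observed in the follower's own frame; all names, gains, and limits are hypothetical.

```python
# Minimal sketch (assumption, not the paper's controller): a follower MAV
# holds a desired offset from the leader using only an onboard relative
# position measurement, i.e. no external global positioning.
import numpy as np

def follower_velocity_cmd(rel_leader_pos, desired_offset, k_p=0.8, v_max=1.0):
    """rel_leader_pos: leader position in the follower's frame, shape (3,).
    desired_offset: where the leader should appear relative to the follower."""
    error = rel_leader_pos - desired_offset      # relative-position error
    cmd = k_p * error                            # proportional velocity command
    speed = np.linalg.norm(cmd)
    if speed > v_max:                            # saturate for safety
        cmd *= v_max / speed
    return cmd

# Example: leader seen 2 m ahead, desired 1.5 m ahead -> move forward slowly.
print(follower_velocity_cmd(np.array([2.0, 0.0, 0.0]),
                            np.array([1.5, 0.0, 0.0])))
```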

130 citations

Journal ArticleDOI
01 Apr 2018
TL;DR: This letter addresses the state estimation, control, and trajectory planning for cooperative transportation of structures that are either too heavy or too big to be carried by small micro aerial vehicles.
Abstract: Micro aerial vehicles have the potential to assist humans in tasks such as manipulation and transportation for construction and humanitarian missions, beyond simply acquiring data and building maps. In this letter, we address the state estimation, control, and trajectory planning in cooperative transportation of structures, which are either too heavy or too big to be carried by small micro aerial vehicles. Specifically, we consider small quadrotors, each equipped only with a single camera and an inertial measurement unit as sensors. The key contributions are: 1) a new approach to coordinated control, which allows independent control of each vehicle while guaranteeing the system's stability; and 2) a new cooperative localization scheme that allows each vehicle to benefit from measurements acquired by the other vehicles. The latter relies on the vehicles exploiting the inherent rigid-structure information to infer additional constraints between the vehicles' poses, allowing us to formulate the pose estimation problem as an optimization problem on the Lie group SE(3). The proposed approach is validated through experimental results with multiple quadrotors grasping and transporting a rigid structure.
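The cooperative localization scheme exploits the rigid structure to relate the vehicles' poses. A minimal sketch of that idea, under the assumption (not stated in the abstract) that the fixed attachment poses of the vehicles on the structure are known: vehicle i's world pose combined with the attachment transforms predicts vehicle j's world pose, which could serve as an additional constraint in the SE(3) optimization. Function and variable names are illustrative.

```python
# Minimal sketch (assumption, not the paper's estimator): both quadrotors are
# rigidly attached to the same structure, so vehicle i's pose plus the known
# attachment transforms predicts vehicle j's pose.
import numpy as np

def se3(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def predict_pose_j(T_world_i, T_struct_i, T_struct_j):
    """T_world_i: pose of vehicle i in the world; T_struct_i / T_struct_j:
    fixed attachment poses of vehicles i and j in the structure frame."""
    T_world_struct = T_world_i @ np.linalg.inv(T_struct_i)
    return T_world_struct @ T_struct_j

# Example: vehicles mounted 1 m apart along the structure's x-axis.
I = np.eye(3)
T_i = se3(I, np.array([0.0, 0.0, 1.0]))        # vehicle i is 1 m above origin
T_si = se3(I, np.array([0.0, 0.0, 0.0]))       # i sits at the structure origin
T_sj = se3(I, np.array([1.0, 0.0, 0.0]))       # j sits 1 m along x
print(predict_pose_j(T_i, T_si, T_sj)[:3, 3])  # -> [1. 0. 1.]
```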

129 citations

Journal ArticleDOI
TL;DR: The system design and software architecture of the proposed solution are described, and it is showcased how all the distinct components can be integrated to enable smooth robot operation.
Authors: Mohta, K.; Mulgaonkar, Y.; Watterson, M.; Liu, S.; Qu, C.; Makineni, A.; Saulnier, K.; Sun, K.; Zhu, A.; Delmerico, J.; Karydis, K.; Atanasov, N.; Loianno, G.; Scaramuzza, D.; Daniilidis, K.; Taylor, C. J.; Kumar, V.

126 citations

Journal ArticleDOI
TL;DR: Multiagent systems have been a major area of research for the last 15 years, motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise.
Abstract: Multiagent systems have been a major area of research for the last 15 years. This interest has been motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise. To be effective, the agents need to have the notion of a common goal shared by the entire network (for instance, a desired formation) and individual control laws to realize the goal. The common goal is typically centralized, in the sense that it involves the state of all the agents at the same time. On the other hand, it is often desirable to have individual control laws that are distributed, in the sense that the desired action of an agent depends only on the measurements and states available at the node and at a small number of neighbors. This is an attractive quality because it implies an overall system that is modular and intrinsically more robust to communication delays and node failures.
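As a concrete instance of the distributed control laws described above, where each agent acts only on its own state and its neighbors' states, the sketch below runs the standard linear consensus protocol on a small ring graph. This is a textbook illustration, not an algorithm taken from the article.

```python
# Textbook consensus sketch: each agent nudges its state toward its neighbors,
# and the whole network converges to the average without any central node.
import numpy as np

def consensus_step(x, neighbors, gain=0.1):
    """x: (n,) agent states; neighbors: dict agent -> list of neighbor ids."""
    x_next = x.copy()
    for i, nbrs in neighbors.items():
        x_next[i] += gain * sum(x[j] - x[i] for j in nbrs)
    return x_next

# Example: 4 agents on a ring graph converge toward the average of their states.
x = np.array([0.0, 1.0, 4.0, 9.0])
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(200):
    x = consensus_step(x, ring)
print(np.round(x, 3))   # all entries close to 3.5
```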

122 citations


Cited by
Posted Content
TL;DR: This paper proposes gradient descent algorithms for a class of utility functions that encode optimal coverage and sensing policies; the resulting closed-loop behavior is adaptive, distributed, asynchronous, and verifiably correct.
Abstract: This paper presents control and coordination algorithms for groups of vehicles. The focus is on autonomous vehicle networks performing distributed sensing tasks where each vehicle plays the role of a mobile tunable sensor. The paper proposes gradient descent algorithms for a class of utility functions which encode optimal coverage and sensing policies. The resulting closed-loop behavior is adaptive, distributed, asynchronous, and verifiably correct.
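A standard member of this family of gradient-descent coverage laws is the Lloyd-style update in which each vehicle moves toward the centroid of the portion of the workspace it is closest to. The sketch below is a discretized illustration of that idea under stated assumptions, not necessarily the paper's exact algorithm; all names are illustrative.

```python
# Minimal sketch: a discretized Lloyd-style coverage step. Each vehicle moves
# toward the centroid of the workspace samples it currently "owns" (is nearest
# to), a standard gradient-descent move for coverage utility functions.
import numpy as np

def coverage_step(agents, workspace_pts, gain=0.5):
    """agents: (n, 2) positions; workspace_pts: (m, 2) dense area samples."""
    d = np.linalg.norm(workspace_pts[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)                     # nearest agent per sample
    new_agents = agents.copy()
    for i in range(len(agents)):
        cell = workspace_pts[owner == i]
        if len(cell):                            # move toward cell centroid
            new_agents[i] += gain * (cell.mean(axis=0) - agents[i])
    return new_agents

# Example: 3 vehicles starting in a corner spread out over the unit square.
rng = np.random.default_rng(0)
pts = rng.random((2000, 2))
agents = np.array([[0.10, 0.10], [0.12, 0.10], [0.11, 0.12]])
for _ in range(50):
    agents = coverage_step(agents, pts)
print(np.round(agents, 2))
```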

2,198 citations

01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis; papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain a lot of training data while many classes contain only a small amount. How to use frequent classes to help learn rare classes, for which training data is harder to collect, is therefore an open question. Learning with shared information is an emerging topic in machine learning, computer vision, and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters, and training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering the recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, attributes, transformations, regularization parameters, and training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, and semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers on learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal ArticleDOI
16 Jan 2017
TL;DR: In this paper, a novel tightly coupled visual-inertial simultaneous localization and mapping system is presented, which is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas.
Abstract: In recent years there have been excellent results in visual-inertial odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However, these approaches lack the capability to close loops, and trajectory estimation accumulates drift even if the sensor continually revisits the same place. In this letter, we present a novel tightly coupled visual-inertial simultaneous localization and mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas. While our approach can be applied to any camera configuration, we address here the most general problem of a monocular camera, with its well-known scale ambiguity. We also propose a novel IMU initialization method, which computes the scale, the gravity direction, the velocity, and the gyroscope and accelerometer biases in a few seconds with high accuracy. We test our system on the 11 sequences of a recent micro aerial vehicle public dataset, achieving a typical scale factor error of 1% and centimeter precision. We compare to the state of the art in visual-inertial odometry on sequences with revisiting, demonstrating the better accuracy of our method due to map reuse and the absence of drift accumulation.
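One piece of the IMU initialization described above is recovering the metric scale of the monocular trajectory. The sketch below shows, under strong simplifying assumptions (gravity and biases already handled, short time windows), a closed-form least-squares scale estimate from matched visual and IMU-derived displacements; it only illustrates the scale-recovery idea and is not the paper's method.

```python
# Minimal sketch (assumption, much simpler than the paper's initialization):
# recover the unknown monocular scale by least-squares alignment of
# up-to-scale visual displacements with metric displacements from the IMU.
import numpy as np

def estimate_scale(visual_disp, imu_disp):
    """visual_disp, imu_disp: (k, 3) matched displacement vectors.
    Solves min_s || s * visual_disp - imu_disp ||^2 in closed form."""
    v = visual_disp.ravel()
    m = imu_disp.ravel()
    return float(v @ m) / float(v @ v)

# Example: visual odometry under-estimates distances by a factor of 2.5.
true_disp = np.array([[1.0, 0.0, 0.2], [0.8, 0.3, 0.0], [0.0, 1.1, 0.1]])
vis_disp = true_disp / 2.5
print(estimate_scale(vis_disp, true_disp))   # ~2.5
```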

646 citations

Journal ArticleDOI
TL;DR: The most relevant works from Civil Engineering, Computer Vision, and Robotics communities are presented and compared in terms of their potential to lead to automatic construction monitoring and civil infrastructure condition assessment.
Abstract: Over the past few years, the application of camera-equipped Unmanned Aerial Vehicles (UAVs) for visually monitoring the construction and operation of buildings, bridges, and other types of civil infrastructure systems has grown exponentially. These platforms can frequently survey construction sites, monitor work in progress, create documents for safety, and inspect existing structures, particularly in hard-to-reach areas. The purpose of this paper is to provide a concise review of the most recent methods that streamline the collection, analysis, visualization, and communication of the visual data captured from these platforms, with and without using Building Information Models (BIM) as a priori information. Specifically, the most relevant works from the Civil Engineering, Computer Vision, and Robotics communities are presented and compared in terms of their potential to lead to automatic construction monitoring and civil infrastructure condition assessment.

378 citations