
Showing papers on "Monocular vision published in 2013"


Journal ArticleDOI
TL;DR: This article provides a concise summary of the work on achieving the first onboard vision-based power-on-and-go system for autonomous MAV flights, and discusses the insights on the lessons learned throughout the different stages of this research.
Abstract: The recent technological advances in Micro Aerial Vehicles (MAVs) have triggered great interest in the robotics community, as their deployability in missions of surveillance and reconnaissance has now become a realistic prospect. The state of the art, however, still lacks solutions that can work for a long duration in large, unknown, and GPS-denied environments. Here, we present our visual pipeline and MAV state-estimation framework, which uses feeds from a monocular camera and an Inertial Measurement Unit (IMU) to achieve real-time and onboard autonomous flight in general and realistic scenarios. The challenge lies in dealing with the power and weight restrictions onboard a MAV while providing the robustness necessary in real and long-term missions. This article provides a concise summary of our work on achieving the first onboard vision-based power-on-and-go system for autonomous MAV flights. We discuss our insights on the lessons learned throughout the different stages of this research, from the conception of the idea to the thorough theoretical analysis of the proposed framework and, finally, the real-world implementation and deployment. Looking into the onboard estimation of monocular visual odometry, the sensor fusion strategy, the state estimation and self-calibration of the system, and finally some implementation issues, the reader is guided through the different modules comprising our framework. The validity and power of this framework are illustrated via a comprehensive set of experiments in a large outdoor mission, demonstrating successful operation over flights of more than 360 m trajectory and 70 m altitude change.
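The sensor-fusion idea behind this pipeline — dead-reckoning high-rate IMU data and correcting it with low-rate visual-odometry estimates — can be caricatured in one dimension. This is a toy sketch, not the authors' EKF framework; the function name, the correction gain, and the data layout are all hypothetical.

```python
def fuse(accel_samples, dt, vision_fixes, gain=0.5):
    """Toy 1D loosely-coupled fusion: integrate IMU acceleration at
    every step, and nudge the position toward an intermittent
    visual-odometry fix whenever one is available."""
    x, v = 0.0, 0.0
    for k, a in enumerate(accel_samples):
        v += a * dt              # accelerations -> velocity
        x += v * dt              # velocity -> position (drifts alone)
        fix = vision_fixes.get(k)
        if fix is not None:      # low-rate visual correction
            x += gain * (fix - x)
    return x
```

With zero acceleration and a single fix of 1.0 m at step 5, the estimate moves halfway toward the fix, illustrating how the visual channel bounds the inertial drift.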

247 citations


Journal ArticleDOI
TL;DR: The efficiency of the presented monocular vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.
Abstract: In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad which consists of the letter "H" surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. Then the 5 DOF pose is estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom, the yaw angle of the MAV, is estimated from the ellipse fitted to the letter "H". The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.
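The gravity-based disambiguation step can be sketched as follows: the elliptic projection of a circle admits two candidate plane normals, and since a landing pad lies on the ground its normal must point "up", i.e. against the IMU gravity vector. A minimal sketch with hypothetical names, not the paper's implementation:

```python
import math

def disambiguate_normal(n1, n2, gravity):
    """Pick the circle-normal candidate most anti-parallel to gravity.

    n1, n2  : the two candidate plane normals from the ellipse fit
    gravity : gravity direction measured by the IMU (points down)"""
    def normalize(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    up = normalize(tuple(-g for g in gravity))   # "up" opposes gravity
    def alignment(n):
        return sum(a * b for a, b in zip(normalize(n), up))
    return max((n1, n2), key=alignment)
```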

163 citations


Journal ArticleDOI
TL;DR: An affirmative answer to the question of whether V‐INSs can be used to sustain prolonged real‐world GPS‐denied flight is provided by presenting a V‐INS that is validated through autonomous flight‐tests over prolonged closed‐loop dynamic operation in both indoor and outdoor GPS‐denied environments with two rotorcraft unmanned aircraft systems (UASs).
Abstract: GPS-denied closed-loop autonomous control of unstable Unmanned Aerial Vehicles (UAVs) such as rotorcraft using information from a monocular camera has been an open problem. Most proposed Vision-aided Inertial Navigation Systems (V-INSs) have been too computationally intensive or do not have sufficient integrity for closed-loop flight. We provide an affirmative answer to the question of whether V-INSs can be used to sustain prolonged real-world GPS-denied flight by presenting a V-INS that is validated through autonomous flight-tests over prolonged closed-loop dynamic operation in both indoor and outdoor GPS-denied environments with two rotorcraft unmanned aircraft systems (UASs). The architecture efficiently combines visual feature information from a monocular camera with measurements from inertial sensors. Inertial measurements are used to predict frame-to-frame transition of online selected feature locations, and the difference between predicted and observed feature locations is used to bound in real time the inertial measurement unit drift, estimate its bias, and account for initial misalignment errors. A novel algorithm to manage a library of features online is presented that can add or remove features based on a measure of relative confidence in each feature location. The resulting V-INS is sufficiently efficient and reliable to enable real-time implementation on resource-constrained aerial vehicles. The presented algorithms are validated on multiple platforms in real-world conditions: through a 16-min flight test, including an autonomous landing, of a 66 kg rotorcraft UAV operating in an uncontrolled outdoor environment without using GPS and through a Micro-UAV operating in a cluttered, unmapped, and gusty indoor environment. © 2013 Wiley Periodicals, Inc.
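The online feature-library idea — adding and removing features based on a relative-confidence measure — might be caricatured as follows. This is a deliberately simple stand-in with invented reward/penalty constants, not the paper's algorithm.

```python
def update_library(library, matched_ids, capacity=4,
                   reward=0.2, penalty=0.3, floor=0.1):
    """Toy confidence-managed feature library (illustrative only).

    library     : dict feature_id -> confidence in [0, 1]
    matched_ids : features re-observed in the current frame
    Re-observed features gain confidence; unseen ones decay and are
    dropped below a floor, keeping the library bounded."""
    for fid in list(library):
        if fid in matched_ids:
            library[fid] = min(1.0, library[fid] + reward)
        else:
            library[fid] -= penalty
            if library[fid] < floor:
                del library[fid]          # low confidence: remove
    for fid in matched_ids:               # admit new features if room
        if fid not in library and len(library) < capacity:
            library[fid] = 0.5            # start at mid-confidence
    return library
```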

159 citations


Proceedings ArticleDOI
06 May 2013
TL;DR: This paper proposes a vision-based state estimation approach that does not drift when the vehicle remains stationary and shows indoor experimental results with performance benchmarking and illustrates the autonomous operation of the system in challenging indoor and outdoor environments.
Abstract: In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcraft frequently operate in hover or near-hover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with those of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments.
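The metric-scale initialization from the second camera can be illustrated with a robust ratio estimate: compare a few sparse stereo depths against the corresponding up-to-scale monocular depths and take the median of the ratios. A sketch under assumed inputs, not the authors' method:

```python
def metric_scale(mono_depths, stereo_depths):
    """Estimate the metric scale of an up-to-scale monocular map from
    sparse metric stereo depths of the same points.

    Uses the median of the per-point ratios, which is robust to a few
    bad stereo matches."""
    ratios = sorted(s / m for m, s in zip(mono_depths, stereo_depths))
    mid = len(ratios) // 2
    if len(ratios) % 2:
        return ratios[mid]
    return 0.5 * (ratios[mid - 1] + ratios[mid])
```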

118 citations



Journal ArticleDOI
TL;DR: 2 new approaches aimed at recovering visual function in adults with amblyopia are discussed, one of which is a binocular approach and the other involves the use of well-established noninvasive brain stimulation techniques to temporarily alter the balance of excitation and inhibition in the visual cortex.
Abstract: The current approach to the treatment of amblyopia is problematic for a number of reasons. First, it promotes recovery of monocular vision but because it is not designed to promote binocularity, its binocular outcomes often are disappointing. Second, compliance is poor and variable. Third, the effectiveness of the treatment is thought to decrease with increasing age. We discuss 2 new approaches aimed at recovering visual function in adults with amblyopia. The first is a binocular approach to amblyopia treatment that is showing promise in initial clinical studies. The second is still in development and involves the use of well-established noninvasive brain stimulation techniques to temporarily alter the balance of excitation and inhibition in the visual cortex.

58 citations


Journal ArticleDOI
TL;DR: In this article, the effects of the wearer's pupil size and spherical aberration on visual performance with center-near, aspheric multifocal contact lenses (MFCLs) were evaluated.

54 citations


01 Jan 2013
TL;DR: In this paper, a spaceborne monocular vision-based navigation system for on-orbit-servicing and formation-flying applications is proposed, which estimates the pose of a passive space resident object using its known three-dimensional model and single low-resolution two-dimensional images collected on-board the active spacecraft.
Abstract: This paper addresses the preliminary design of a spaceborne monocular vision-based navigation system for on-orbit-servicing and formation-flying applications. The aim is to estimate the pose of a passive space resident object using its known three-dimensional model and single low-resolution two-dimensional images collected on-board the active spacecraft. In contrast to previous work, no supportive means are available on the target satellite (e.g., light emitting diodes) and no a-priori knowledge of the relative position and attitude is available (i.e., lost-in-space scenario). Three fundamental mechanisms – perceptual organisation, true perspective projection, and random sample consensus – are exploited to overcome the limitations of monocular passive optical navigation in space. The preliminary design is conducted and validated making use of actual images collected in the frame of the PRISMA mission at about 700 km altitude and 10 m inter-spacecraft separation.
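The random-sample-consensus mechanism the paper leans on is the standard RANSAC loop: repeatedly fit a model to a minimal random sample and keep the model with the largest consensus set. A generic, self-contained sketch (the `fit`/`err` callbacks and the slope-fitting usage below are illustrative, not the paper's pose solver):

```python
import random

def ransac(data, fit, err, n_sample, threshold, iters=100, seed=0):
    """Generic RANSAC: fit a model to random minimal samples and keep
    the one explaining the most inliers.

    fit  : minimal sample -> model
    err  : (model, datum) -> residual"""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit(rng.sample(data, n_sample))
        inliers = [d for d in data if err(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

For example, fitting a slope through the origin to points contaminated by one outlier recovers the dominant model and its three inliers.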

51 citations


Patent
11 Sep 2013
TL;DR: In this article, a monocular natural-vision-landmark-assisted mobile robot positioning method is proposed, which establishes an online rapid image-matching framework based on the combination of GIST global features and SURF local features, corrects the vehicle course with a monocular-vision-based motion estimation algorithm, and finally fuses the positioning information acquired through vision landmark matching with that acquired through the inertial navigation system by means of Kalman filtering.
Abstract: The invention discloses a monocular natural vision landmark assisted mobile robot positioning method. The method comprises the following steps: establishing a natural vision landmark feature library at multiple positions in the navigation environment in advance; during positioning, matching the acquired monocular image against the vision landmarks in the library while the robot runs an inertial navigation system; establishing an online rapid image-matching framework based on the combination of GIST global features and SURF local features, while correcting the vehicle course with a monocular-vision-based motion estimation algorithm; and finally, effectively fusing the positioning information acquired through vision landmark matching with that acquired through the inertial navigation system by means of Kalman filtering. With this method, positioning precision and robustness remain high where the Global Positioning System (GPS) is limited, the inertial navigation error caused by noise can be effectively corrected, and the computational load is greatly reduced by employing monocular vision.
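The Kalman-filter fusion of INS dead reckoning with landmark-matching fixes reduces, in the scalar case, to a standard predict/update cycle. A 1D sketch (hypothetical function, not the patent's filter):

```python
def kalman_fuse(x, P, u, Q, z=None, R=None):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : state estimate and its variance
    u, Q : INS displacement and its process noise (predict step)
    z, R : optional landmark position fix and its noise (update step)"""
    x, P = x + u, P + Q          # predict with INS odometry
    if z is not None:
        K = P / (P + R)          # Kalman gain
        x = x + K * (z - x)      # correct toward the landmark fix
        P = (1 - K) * P
    return x, P
```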

40 citations


Journal ArticleDOI
01 Jun 2013-Optik
TL;DR: A novel monocular-vision method for absolute localization estimation is presented, which uses the mapping relationship between projection points and their corresponding points in the image, together with an error-correction algorithm.

29 citations


Proceedings ArticleDOI
09 Jul 2013
TL;DR: This paper presents a monocular vision based autonomous navigation system for Micro Aerial Vehicles (MAVs) in GPS-denied environments and evaluates the scale estimator, state estimator and controller performance by comparing with ground truth data acquired using a motion capture system.
Abstract: In this paper, we present a monocular vision based autonomous navigation system for Micro Aerial Vehicles (MAVs) in GPS-denied environments. The major drawback of monocular systems is that the depth scale of the scene can not be determined without prior knowledge or other sensors. To address this problem, we minimize a cost function consisting of a drift-free altitude measurement and up-to-scale position estimate obtained using the visual sensor. We evaluate the scale estimator, state estimator and controller performance by comparing with ground truth data acquired using a motion capture system. All resources including source code, tutorial documentation and system models are available online.
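The scale estimation from a drift-free altitude measurement and an up-to-scale visual height admits a closed-form least-squares solution: the scale s minimising the sum of squared residuals (h_i - s*z_i) is the ratio of cross- and auto-correlations. A sketch of that cost-function idea (hypothetical function name and data):

```python
def estimate_scale(altitudes, visual_z):
    """Least-squares metric scale s minimising sum_i (h_i - s * z_i)^2
    between drift-free altimeter readings h_i and up-to-scale visual
    heights z_i: s = sum(h_i * z_i) / sum(z_i^2)."""
    num = sum(h * z for h, z in zip(altitudes, visual_z))
    den = sum(z * z for z in visual_z)
    return num / den
```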

Journal ArticleDOI
TL;DR: The technique described in this paper enables a walking robot with a monocular vision system (single camera) to obtain precise estimates of its pose with regard to the six degrees of freedom, essential in search and rescue missions in collapsed buildings, polluted industrial plants, etc.
Abstract: Purpose – The purpose of this paper is to describe a novel application of the recently introduced concept from computer vision to self‐localization of a walking robot in unstructured environments. The technique described in this paper enables a walking robot with a monocular vision system (single camera) to obtain precise estimates of its pose with regard to the six degrees of freedom. This capability is essential in search and rescue missions in collapsed buildings, polluted industrial plants, etc. Design/methodology/approach – The Parallel Tracking and Mapping (PTAM) algorithm and the Inertial Measurement Unit (IMU) are used to determine the 6‐d.o.f. pose of a walking robot. Bundle‐adjustment‐based tracking and structure reconstruction are applied to obtain precise camera poses from the monocular vision data. The inclination of the robot's platform is determined by using the IMU. The self‐localization system is used together with the RRT‐based motion planner, which allows the robot to walk autonomously on rough, previ...


Journal ArticleDOI
TL;DR: The reduced vision-specific HRQOL of monocular patients on the National Eye Institute Visual Function Questionnaire indicates that there are substantial residual visual deficits even after prolonged monocular status.
Abstract: On many tasks, binocular performance is better than monocular performance for binocular individuals [1]. Although the monocular performance of patients who have lost vision in an eye is often equal to or better than the monocular vision of binocular individuals, it is seldom equal to the binocular performance of individuals who have two eyes with normal vision [2]. Despite the visual deficits and the psychological consequences, monocular vision loss is seldom viewed as a disability requiring special rehabilitation [3-5]. The study of monocular individuals has been of scientific interest for two reasons. First, by comparing the performance of monocular individuals to those with normal and abnormal binocular vision, one can gain an understanding of the role of binocular vision in normal vision, e.g., [2], (Schwartz, et al. Invest Ophthalmol Vis Sci 29 (suppl.):434, 1988). Second, comparisons of persons who lost sight in one eye early in life versus later in life shed light on the role of neural reorganization in the shaping of visual performance, e.g., [2], (Schwartz, et al. Invest Ophthalmol Vis Sci 28 (suppl.):304, 1987). The surgical removal of an eye can result in depression, difficulties in driving, perceived problems with physical appearance and coping difficulties [6-8]. Some patients experience hallucinations associated with loss of vision, and for a small portion of patients, their hallucinations can be debilitating [9-11]. In some patients the missing eye also gives rise to phantom pain and headaches [12, 13]. Personal accounts of the difficulties of losing an eye and associated coping strategies are useful in understanding these phenomena [4, 15-16]. In the absence of formal treatment strategies, the psychological and sensory issues involved require sensitivity on the part of the professionals involved in the treatment of the loss of an eye and the subsequent rehabilitation [17, 18].
Several studies have evaluated patients undergoing enucleation for ocular melanoma [6-8, 19-21], but these studies compared the patients undergoing enucleation to patients undergoing other therapies for ocular melanoma rather than to binocular control groups. Two studies which evaluated the functional impact and recovery from acquired monocular vision and sampled a more diverse population of monocular patients [22, 23] employed custom rather than validated questionnaires to evaluate the function and recovery of their patients. These studies also did not explicitly compare their monocular patients to binocular normals. To the best of our knowledge, only two recent studies have compared the quality of life of binocular normals and patients who have had surgery to remove an eye [24, 25]. Both found statistically significant differences between normals and those with monocular vision following surgery. This study was conducted to quantify long-term differences in the health-related quality of life of patients after surgical removal of an eye as compared to normal patients. We used an established quantitative general health questionnaire, the Medical Outcomes Study Short Form 12 (SF-12) [26, 27], and a vision-specific instrument, the National Eye Institute Visual Function Questionnaire (NEI VFQ) [28, 29], to compare the quality of life of patients having acquired monocular vision and surgical removal of an eye with that of a similarly aged control group with binocular vision.

Journal ArticleDOI
TL;DR: The results show that precise autonomous navigation of unmanned aircraft is achieved by the vision-based TRN algorithm, in which the inertial navigation system is replaced with a monocular vision system.
Abstract: A vision-based terrain referenced navigation (TRN) system is addressed for autonomous navigation of unmanned aerial vehicles (UAVs). A typical TRN algorithm blends inertial navigation data with measured terrain information to estimate the vehicle's position. In this paper, however, we replace the low-cost inertial navigation system (INS) with a monocular vision system. The homography decomposition algorithm is utilized to estimate the relative translational motion using features on the ground with simple assumptions. A numerical integration point-mass filter based on Bayesian estimation is employed to combine the translation information obtained from the vision system with the measured terrain height. Numerical simulations are constructed to evaluate the performance of the proposed method. The results show that precise autonomous navigation of unmanned aircraft is achieved by the vision-based TRN algorithm.
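The measurement step of a numerical-integration point-mass filter is simply a discretised Bayes update: multiply the gridded prior by the terrain-height measurement likelihood and renormalise. A 1D sketch (illustrative only, not the paper's implementation):

```python
def point_mass_update(prior, likelihood):
    """One Bayesian measurement update on a position grid.

    prior      : probability mass per grid cell (sums to 1)
    likelihood : terrain-height measurement likelihood per cell"""
    post = [p * l for p, l in zip(prior, likelihood)]
    total = sum(post)
    return [p / total for p in post]   # renormalise to a distribution
```

Cells whose stored terrain height disagrees with the measured height get near-zero likelihood, so the mass concentrates on consistent positions.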

Proceedings ArticleDOI
23 Jun 2013
TL;DR: A vision-based system for vehicle localization and tracking for detecting partially visible vehicles is presented, which demonstrates the effectiveness of the proposed system on a multilane highway dataset containing instances of vehicles with relative motion to the ego-vehicle.
Abstract: Vehicle detection is a key problem in computer vision, with applications in driver assistance and active safety. A challenging aspect of the problem is the common occlusion of vehicles in the scene. In this paper, we present a vision-based system for vehicle localization and tracking for detecting partially visible vehicles. Consequently, vehicles are localized more reliably and tracked for longer periods of time. The proposed system detects vehicles using an active-learning based monocular vision approach and motion (optical flow) cues. A calibrated stereo rig is utilized to acquire a depth map, and consequently the real-world coordinates of each detected vehicle. Tracking is performed using a Kalman filter. The tracking is formulated to integrate stereo-monocular information. We demonstrate the effectiveness of the proposed system on a multilane highway dataset containing instances of vehicles with relative motion to the ego-vehicle.

Journal ArticleDOI
TL;DR: A real-time vision-based localization approach for humanoid robots using a single camera as the only sensor using stereo visual SLAM techniques based on non-linear least squares optimization methods, which can solve very efficiently the Perspective-n-Point (PnP) problem.
Abstract: In this paper, we propose a real-time vision-based localization approach for humanoid robots using a single camera as the only sensor. In order to obtain an accurate localization of the robot, we first build an accurate 3D map of the environment. In the map computation process, we use stereo visual SLAM techniques based on non-linear least squares optimization methods (bundle adjustment). Once we have computed a 3D reconstruction of the environment, which comprises a set of camera poses (keyframes) and a list of 3D points, we learn the visibility of the 3D points by exploiting all the geometric relationships between the camera poses and 3D map points involved in the reconstruction. Finally, we use the prior 3D map and the learned visibility prediction for monocular vision-based localization. Our algorithm is very efficient, easy to implement and more robust and accurate than existing approaches. By means of visibility prediction we predict for a query pose only the highly visible 3D points, thus tremendously speeding up the data association between 3D map points and perceived 2D features in the image. In this way, we can solve very efficiently the Perspective-n-Point (PnP) problem providing robust and fast vision-based localization. We demonstrate the robustness and accuracy of our approach by showing several vision-based localization experiments with the HRP-2 humanoid robot.
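The visibility-prediction speedup can be mimicked crudely by pooling the points observed from the keyframes nearest a query pose; the paper learns this relationship from the reconstruction geometry, whereas the sketch below just uses pose distance (all names and the data layout are hypothetical):

```python
def predict_visible(query_pose, keyframes, k=2):
    """Predict likely-visible 3D map points for a query pose by taking
    the union of points seen from the k nearest keyframes.

    keyframes: list of {"pose": tuple, "points": set of point ids}"""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(keyframes,
                     key=lambda kf: sq_dist(kf["pose"], query_pose))[:k]
    visible = set()
    for kf in nearest:
        visible |= kf["points"]   # only these enter data association
    return visible
```

Restricting PnP data association to this small candidate set is what makes the matching fast.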

Patent
19 Jun 2013
TL;DR: In this article, a portable ball target is used by the monocular vision system and a measuring method is provided for obtaining depth information under a monocular measuring system, where the camera can be fast calibrated according to a single image of each ball body part of the target.
Abstract: The invention provides a monocular vision system, a portable ball target used by the monocular vision system, and a measuring method for the system. The ball target is composed of ball bodies, a connecting block, and a series of probes with different shapes and lengths. Each probe is composed of an extension bar and a measuring head. The connecting block supports each ball body and connects with the probes. 14 evenly spaced threaded holes are drilled in the surface of the ball-shaped connecting block, which is connected with the extension bars and balls in a threaded manner. Different connecting bars are designed to suit different measuring occasions. Each ball body part is composed of more than three non-coplanar balls, so that from a single picture taken by a single camera the intrinsic parameters of the camera can be quickly calibrated and the ball-centre positions determined, yielding the three-dimensional coordinates of a point to be measured; this provides a feasible approach for obtaining depth information in a monocular measuring system. After the camera moves, its external parameters can be quickly calibrated from a single image of each ball body part of the target, achieving spatial data splicing, which effectively enlarges the measuring range and accelerates the measuring speed.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work proposes a drivable region detection algorithm that generates the region of interest from a dynamic threshold search method and from a drag process, making the concept unique.
Abstract: Camera-based estimation of drivable image areas is still evolving. These systems have been developed for improved safety and convenience, without the need for the vehicle to adapt itself to the environment. Machine vision is an important tool to identify the region of an image that contains the road. Road detection is the major task of autonomous vehicle guidance. Accordingly, this work proposes a drivable-region detection algorithm that generates the region of interest from a dynamic threshold search method and from a drag process (DP). Applying the DP to the estimation of drivable image areas has not been done before, making the concept unique. Our system has been evaluated on real data obtained by intelligent platforms and tested on different types of image texture, including occlusion cases, obstacle detection and reactive navigation.

Journal ArticleDOI
02 Apr 2013-PLOS ONE
TL;DR: Vision with the two eyes improves postural control for both viewing distances and for both types of strabismus, due to complementary mechanisms: larger visual field, better quality of fixation and vergence angle due to the use of visual inputs from both eyes.
Abstract: Vision is important for postural control as is shown by the Romberg quotient (RQ): with eyes closed, postural instability increases relative to eyes open (RQ = 2). Yet while fixating at far distance, postural stability is similar with eyes open and eyes closed (RQ = 1). Postural stability can be better with both eyes viewing than one eye, but such effect is not consistent among healthy subjects. The first goal of the study is to test the RQ as a function of distance for children with convergent versus divergent strabismus. The second goal is to test whether vision from two eyes relative to vision from one eye provides better postural stability. Thirteen children with divergent strabismus and eleven with convergent strabismus participated in this study. Posturography was done with the Techno concept device. Experiment 1, four conditions: fixation at 40 cm and at 200 cm both with eyes open and eyes covered (evaluation of RQ). Experiment 2, six conditions: fixation at 40 cm and at 200 cm, with both eyes viewing or under monocular vision (dominant and non-dominant eye). For convergent strabismus, the group's mean value of RQ was 1.3 at near and 0.94 at far distance; for divergent, it was 1.06 at near and 1.68 at far. For all children, the surface of body sway was significantly smaller under both eyes viewing than monocular viewing (either eye). Increased RQ value at near for convergent and at far for divergent strabismus is attributed to the influence of the default strabismus angle and to better use of ocular motor signals. Vision with the two eyes improves postural control for both viewing distances and for both types of strabismus. Such benefit can be due to complementary mechanisms: larger visual field, better quality of fixation and vergence angle due to the use of visual inputs from both eyes.
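The Romberg quotient used throughout is simply the ratio of the body-sway measure with eyes closed to that with eyes open; for concreteness (hypothetical helper, units cancel):

```python
def romberg_quotient(sway_eyes_closed, sway_eyes_open):
    """Romberg quotient: sway surface eyes-closed / eyes-open.
    RQ near 2 means vision strongly stabilises posture; RQ near 1
    means little visual contribution at that fixation distance."""
    return sway_eyes_closed / sway_eyes_open
```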

Patent
25 Sep 2013
TL;DR: In this paper, a method for designing a monocular vision odometer with a light stream method and a feature point matching method integrated is presented, which can provide accurate real-time positioning output and has robustness under the condition that illumination variations and road surface textures are few.
Abstract: The invention discloses a method for designing a monocular vision odometer that integrates an optical flow method and a feature point matching method. Accurate real-time positioning is of great significance to an autonomous navigation system. Positioning based on SURF feature point matching is robust to illumination variations and highly accurate, but its processing speed is low and real-time positioning cannot be achieved. The optical flow tracking method has good real-time performance, but its positioning accuracy is poor. The proposed method combines the advantages of the two, yielding a monocular vision odometer that integrates optical flow and feature point matching. Experimental results show that the integrated algorithm provides accurate real-time positioning output and remains robust under illumination variations and sparse road-surface texture.
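The integration strategy — fast but drifting optical-flow updates corrected by slow, accurate SURF-matching fixes — can be caricatured in one dimension. A toy sketch, not the patent's design; the per-frame drift and fix schedule are invented:

```python
def fused_odometry(flow_steps, surf_fixes):
    """Toy 1D odometer: integrate a per-frame optical-flow displacement
    every frame (fast, drifts), and snap to an accurate SURF-matching
    position estimate whenever one arrives (slow, no drift).

    flow_steps : per-frame displacements from optical flow
    surf_fixes : dict frame index -> absolute SURF position fix"""
    x, track = 0.0, []
    for k, step in enumerate(flow_steps):
        x += step                  # fast per-frame flow integration
        if k in surf_fixes:        # intermittent accurate correction
            x = surf_fixes[k]
        track.append(x)
    return track
```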




Patent
10 Apr 2013
TL;DR: In this paper, a plane measuring method based on the monocular vision was proposed, which includes placing a plane target on a measured plane, selecting a fixed point on the plane target for camera calibration, and obtaining necessary parameters for conversion from a world coordinate system to image coordinate system.
Abstract: The invention provides a plane measuring method and a plane measuring device based on monocular vision. The plane measuring method includes placing a plane target on the measured plane, selecting fixed points on the plane target for camera calibration, and obtaining the parameters necessary for conversion from a world coordinate system to an image coordinate system. By using the conversion relationship between the world coordinate system and the image coordinate system, the position coordinates of an object's feature points in the world coordinate system are obtained, and from these the length of the object in the world coordinate system.
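The world-plane measurement reduces to applying the calibrated image-to-plane homography to pixel coordinates (a standard projective relation for points on a plane). A minimal sketch assuming a known 3x3 homography `H` mapping image pixels to plane coordinates:

```python
def plane_point(H, u, v):
    """Map an image pixel (u, v) to world-plane coordinates via a 3x3
    homography H (image -> plane), with the projective divide by w."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w
```

Object length then follows as the Euclidean distance between two mapped feature points on the plane.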

Patent
04 Sep 2013
TL;DR: In this paper, a variable-focus monocular and binocular vision sensing device comprising a mirror image type optical system, a high-precision bearing holder, an image acquisition system, and a digital information processing device and other parts is presented.
Abstract: The invention provides a variable-focus monocular and binocular vision sensing device comprising a mirror image type optical system, a high-precision bearing holder, an image acquisition system, a digital information processing device and other parts. The device can guarantee that a variable-focus image sensor (a CCD (Charge Coupled Device) or an analog video camera) mounted on the high-precision bearing holder is accurately located at a required monocular vision preset station, and can automatically regulate the focus to the required value so as to perform monocular vision measurement or guidance work. The device can also accurately locate the image sensor at the uniquely determined binocular vision preset station and automatically regulate the focus to the required value, so that the image sensor and the mirror image type optical system jointly form a binocular vision sensing system and the function of binocular vision guidance or measurement is realized. Moreover, the device can switch automatically between the monocular vision station and the binocular vision station through programming of the digital information processing device according to task requirements, so as to realize functions such as monocular and binocular vision measurement, guidance or obstacle avoidance.

Proceedings ArticleDOI
06 May 2013
TL;DR: This paper addresses platooning navigation as part of new transportation services emerging nowadays in urban areas, using a visual SLAM algorithm that relies on monocular vision, runs on the lead vehicle, and is coupled with a trajectory creation procedure.
Abstract: This paper addresses platooning navigation as part of new transportation services emerging nowadays in urban areas. Platooning formation is ensured using a global decentralized control strategy supported by inter-vehicle communications. A large motion flexibility is achieved according to a manual guidance mode, i.e. the path to follow is inferred online from the motion of the manually driven first vehicle. For this purpose, a visual SLAM algorithm that relies on monocular vision is run on the lead vehicle and coupled with a trajectory creation procedure. Both the map and trajectory updates are shared online with the following vehicles and permit them to derive their absolute location with respect to a common reference trajectory from their current camera image. Full-scale experiments with two urban vehicles demonstrate the performance of the proposed approach.

Proceedings ArticleDOI
28 May 2013
TL;DR: A real-time monocular vision solution that lets a MAV autonomously search for and land on an arbitrary landing site, extending a well-known visual SLAM algorithm that enables autonomous navigation in unknown environments.
Abstract: This paper presents a real-time monocular vision solution for MAVs to autonomously search for and land on an arbitrary landing site. The autonomous MAV is provided with only a single reference image of the landing site, of unknown size, before initiating this task. To search for such landing sites, we extend a well-known visual SLAM algorithm that enables autonomous navigation of the MAV in unknown environments. A multi-scale ORB-feature-based method is implemented and integrated into the SLAM framework for landing site detection. We use a RANSAC-based method to locate the landing site within the map of the SLAM system, taking advantage of the map points associated with the detected landing site. We demonstrate the efficiency of the presented vision system in autonomous flight and compare its accuracy with ground-truth data provided by an external tracking system.

Proceedings ArticleDOI
06 May 2013
TL;DR: A novel vision-based method for reliable human detection from vehicles operating in industrial environments in the vicinity of workers, exploiting the fact that reflective vests are standard safety equipment on most industrial worksites.
Abstract: We report on a novel vision-based method for reliable human detection from vehicles operating in industrial environments in the vicinity of workers. Exploiting the fact that reflective vests are standard safety equipment on most industrial worksites, we use a single-camera system with active IR illumination to detect humans by identifying the reflective vest markers. Adopting a sparse-feature-based approach, we classify vest markers against other reflective material and perform supervised learning of object distance based on local image descriptors. The integration of the resulting per-feature 3D position estimates in a particle filter finally allows human tracking in conditions ranging from broad daylight to complete darkness.
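As an illustration of the final integration step, here is a minimal bootstrap particle filter that fuses noisy per-feature 3D position estimates into a single tracked position. The random-walk motion model, noise levels, and class name are assumptions made for this sketch, not the paper's actual filter design:

```python
import numpy as np

class ParticleFilter3D:
    """Bootstrap particle filter fusing per-feature 3D position
    measurements into one tracked position estimate."""

    def __init__(self, init_pos, n=500, motion_std=0.2, meas_std=0.5, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.particles = init_pos + self.rng.normal(0, meas_std, (n, 3))
        self.weights = np.full(n, 1.0 / n)
        self.motion_std = motion_std
        self.meas_std = meas_std

    def predict(self):
        # Random-walk motion model: diffuse particles between frames
        self.particles += self.rng.normal(0, self.motion_std, self.particles.shape)

    def update(self, measurements):
        # Reweight particles by the Gaussian likelihood of each feature measurement
        for z in measurements:
            d2 = np.sum((self.particles - z) ** 2, axis=1)
            self.weights *= np.exp(-0.5 * d2 / self.meas_std ** 2)
        self.weights += 1e-300          # guard against all-zero weights
        self.weights /= self.weights.sum()
        # Resample when the effective sample size collapses
        if 1.0 / np.sum(self.weights ** 2) < len(self.weights) / 2:
            idx = self.rng.choice(len(self.weights), len(self.weights), p=self.weights)
            self.particles = self.particles[idx]
            self.weights.fill(1.0 / len(self.weights))

    def estimate(self):
        return np.average(self.particles, weights=self.weights, axis=0)
```

The point of the filter in this setting is robustness: individual vest-marker distance estimates are noisy and intermittent, but the weighted particle cloud averages them into a stable track.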

Patent
20 Mar 2013
TL;DR: In this paper, a calibration method is presented for the parameters of a line-laser monocular-vision three-dimensional measurement sensor that imposes no constraints on target pose; it is based on the invariance, under any shooting pose, of the set of cross-product directions formed by the imaging points of the target feature points.
Abstract: The invention provides a calibration method for the parameters of a line-laser monocular-vision three-dimensional measurement sensor that imposes no constraints on target pose; it belongs to the technical fields of optical measurement and mechanical engineering. After the basic data are extracted, the imaging points are ordered using the invariance, under any shooting pose, of the set of cross-product directions formed by the imaging points of the target feature points, establishing the correspondence between target feature points and imaging points. Light-plane feature points are extracted as the quadrilateral intersections of straight lines fitted to the light-knife center points with the target feature points, and their three-dimensional coordinates are computed from a cross-ratio invariance principle. The intrinsic and extrinsic parameters of the monocular camera and the light-plane parameters of the line-laser projector are then jointly fitted by optimization, achieving accurate calibration of the sensor parameters. During calibration the target may translate and rotate freely, with no constraint on its pose, and both parameter groups share the same set of calibration images, which preserves the calibration accuracy of the three-dimensional measurement sensor while simplifying the calibration procedure.
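One core step of any such line-laser sensor calibration, fitting the laser light plane to reconstructed 3D feature points, can be sketched as a total-least-squares plane fit. This is a generic illustration of light-plane fitting, not the patent's specific joint optimization:

```python
import numpy as np

def fit_light_plane(points):
    """Fit the laser light plane n.x + d = 0 (with unit normal n) to 3D
    feature points by total least squares: the normal is the smallest
    right singular vector of the centered point matrix."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]              # direction of least variance = plane normal
    d = -n @ centroid       # plane offset so the centroid lies on the plane
    return n, d
```

Total least squares (rather than ordinary regression of z on x, y) is the right choice here because measurement noise affects all three coordinates of the reconstructed light-plane points, not just one axis.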