scispace - formally typeset
Author

Jonghyuk Kim

Bio: Jonghyuk Kim is an academic researcher from University of Technology, Sydney. The author has contributed to research in topics: Inertial navigation system & Simultaneous localization and mapping. The author has an h-index of 20, co-authored 85 publications receiving 2058 citations. Previous affiliations of Jonghyuk Kim include University of Sydney & Beihang University.


Papers
Proceedings ArticleDOI
14 Oct 2008
TL;DR: A nonlinear complementary filter is proposed that combines accelerometer output for low-frequency attitude estimation with integrated gyrometer output for high-frequency estimation; its performance is evaluated against the output of a full GPS/INS solution available for the data set.
Abstract: This paper considers the question of using a nonlinear complementary filter for attitude estimation of fixed-wing unmanned aerial vehicle (UAV) given only measurements from a low-cost inertial measurement unit. A nonlinear complementary filter is proposed that combines accelerometer output for low frequency attitude estimation with integrated gyrometer output for high frequency estimation. The raw accelerometer output includes a component corresponding to airframe acceleration, occurring primarily when the aircraft turns, as well as the gravitational acceleration that is required for the filter. The airframe acceleration is estimated using a simple centripetal force model (based on additional airspeed measurements), augmented by a first order dynamic model for angle-of-attack, and used to obtain estimates of the gravitational direction independent of the airplane manoeuvres. Experimental results are provided on a real-world data set and the performance of the filter is evaluated against the output from a full GPS/INS that was available for the data set.
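As a rough illustration of the filter structure described above (not the authors' exact formulation), the sketch below blends integrated gyro rates with a gravity direction recovered from the accelerometer after subtracting a coordinated-turn estimate of the centripetal acceleration; the function name, the gain k and the simple airspeed-times-rate correction are assumptions made for the example.

import numpy as np

def complementary_attitude_step(roll, pitch, gyro, accel, airspeed, dt, k=0.5):
    """One update of a complementary attitude filter (illustrative sketch only).

    gyro     : (p, q, r) body rates [rad/s]
    accel    : (ax, ay, az) specific force from the accelerometer [m/s^2]
    airspeed : measured airspeed [m/s], used to estimate centripetal acceleration
    k        : blending gain between accelerometer (low freq.) and gyro (high freq.)
    """
    p, q, r = gyro
    ax, ay, az = accel

    # High-frequency path: propagate attitude with the integrated gyro rates
    # (Euler-angle kinematics).
    roll_gyro = roll + dt * (p + np.tan(pitch) * (q * np.sin(roll) + r * np.cos(roll)))
    pitch_gyro = pitch + dt * (q * np.cos(roll) - r * np.sin(roll))

    # Remove a coordinated-turn estimate of the airframe (centripetal) acceleration
    # so the remaining accelerometer signal approximates the gravity direction.
    ay_grav = ay - airspeed * r
    az_grav = az + airspeed * q

    # Low-frequency path: attitude implied by the recovered gravity direction.
    roll_acc = np.arctan2(-ay_grav, -az_grav)
    pitch_acc = np.arctan2(ax, np.hypot(ay_grav, az_grav))

    # Complementary blend: gyro dominates at high frequency, accelerometer at low.
    roll_new = (1.0 - k * dt) * roll_gyro + (k * dt) * roll_acc
    pitch_new = (1.0 - k * dt) * pitch_gyro + (k * dt) * pitch_acc
    return roll_new, pitch_new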

488 citations

Proceedings ArticleDOI
10 Nov 2003
TL;DR: Results show that both the map and the vehicle uncertainty are corrected even though the system and observation models are highly non-linear, but they also indicate that observability, and the relationship between vehicle model drift and the number and location of landmarks, require further analysis.
Abstract: This paper presents results of applying simultaneous localisation and map building (SLAM) to an uninhabited aerial vehicle (UAV). A single vision camera and an inertial measurement unit (IMU) are installed on a UAV platform, and data taken from a flight test are used to run the SLAM algorithm. Results show that both the map and the vehicle uncertainty are corrected even though the system and observation models are highly non-linear. The results, however, also indicate that observability, and the relationship between vehicle model drift and the number and location of landmarks, need further analysis given the highly dynamic nature of the system.

196 citations

Journal ArticleDOI
TL;DR: The algorithm, known as simultaneous localisation and mapping (SLAM), is a terrain-aided navigation system (TANS) capable of building a map online while simultaneously using the generated map to bound the errors in the navigation solution.
Abstract: We address the issue of autonomous navigation, that is, the ability of a navigation system to provide information about the states of a vehicle without the need for a priori infrastructure such as GPS, beacons, or a map. The algorithm, known as simultaneous localisation and mapping (SLAM), is a terrain-aided navigation system (TANS) capable of building a map online while simultaneously using the generated map to bound the errors in the navigation solution. Since the algorithm does not require any a priori terrain information or initial knowledge of the vehicle location, it offers a powerful navigation augmentation system or, more importantly, can be implemented as an independent navigation system. Results are first provided using computer simulation, which analyses the effect of the spatial density of landmarks as well as the quality of the observation and inertial navigation data, followed by a real-time implementation of the algorithm on an unmanned aerial vehicle (UAV).
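For readers unfamiliar with the mechanism, the skeleton below shows how an EKF-based inertial SLAM loop of this kind is typically organised: inertial prediction grows the uncertainty, re-observing mapped landmarks bounds it, and new landmarks augment the state. The class and function names are hypothetical and this is a structural sketch only, not the implementation reported in the paper.

import numpy as np

class InertialEkfSlam:
    """Structural sketch: predict with the IMU, correct with landmark observations
    so that map and vehicle errors stay bounded (illustrative only)."""

    def __init__(self, x_vehicle, P_vehicle):
        self.x = np.array(x_vehicle, dtype=float)  # vehicle state, later augmented with landmarks
        self.P = np.array(P_vehicle, dtype=float)  # joint vehicle/map covariance

    def predict(self, imu, dt, f, F_jac, Q):
        # Dead-reckon with the supplied inertial model f(x, imu, dt);
        # uncertainty grows as IMU errors are integrated.
        self.x = f(self.x, imu, dt)
        F = F_jac(self.x, imu, dt)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H_jac, R):
        # Re-observing a mapped landmark correlates vehicle and map errors and
        # pulls the navigation solution back toward the map, bounding drift.
        H = H_jac(self.x)
        y = z - h(self.x)                        # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

    def add_landmark(self, m0, P_m0):
        # Augment the state with a newly initialised landmark (cross-covariance
        # terms omitted here for brevity; a full implementation carries them).
        self.x = np.concatenate([self.x, m0])
        n = self.P.shape[0]
        P_new = np.zeros((n + len(m0), n + len(m0)))
        P_new[:n, :n] = self.P
        P_new[n:, n:] = P_m0
        self.P = P_new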

192 citations

Journal ArticleDOI
TL;DR: This paper addresses some challenges to the real-time implementation of Simultaneous Localisation and Mapping (SLAM) on a UAV platform using an Extended Kalman Filter (EKF), which fuses data from an Inertial Measurement Unit (IMU) with data from a passive vision system.

182 citations

Proceedings Article
01 Jan 2003
TL;DR: Real-time flight test results show that the vehicle can perform autonomous flight reliably even in highly dynamic maneuvering scenarios.
Abstract: Applying low-cost sensors to the Guidance, Navigation and Control (GNC) of an autonomous Uninhabited Aerial Vehicle (UAV) is an extremely challenging area. This paper presents real-time results of applying a low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) receiver to the GNC task. The INS/GPS navigation loop provides continuous and reliable navigation solutions to the guidance and flight control loops for autonomous flight. With additional air data and engine thrust data, the guidance loop computes the guidance demands needed to follow way-point scenarios. The flight control loop generates actuator signals for the control surfaces and thrust vector. The whole GNC algorithm was implemented on an embedded flight control computer. Real-time flight test results show that the vehicle can perform autonomous flight reliably even in highly dynamic maneuvering scenarios.
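As a toy illustration of the guidance stage described above, the snippet below turns the navigation solution and the active way-point into a bank-angle demand using a simple proportional law; the function name, gain and saturation limit are assumptions made for the example and are not taken from the paper.

import math

def waypoint_guidance(north, east, heading, waypoint, k_heading=0.8,
                      roll_limit=math.radians(30)):
    """Compute a bank-angle demand to steer toward a way-point (illustrative sketch).

    north, east : current position from the INS/GPS navigation loop [m]
    heading     : current heading [rad]
    waypoint    : (north, east) of the active way-point [m]
    Returns a roll-angle demand for the flight control loop [rad].
    """
    # Bearing to the way-point and the heading error needed to fly it.
    bearing = math.atan2(waypoint[1] - east, waypoint[0] - north)
    error = (bearing - heading + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi]

    # Proportional law: bank toward the way-point, saturated at roll_limit.
    return max(-roll_limit, min(roll_limit, k_heading * error))

# Example: aircraft at the origin heading north, way-point to the north-east.
print(waypoint_guidance(0.0, 0.0, 0.0, (1000.0, 1000.0)))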

130 citations


Cited by
Journal ArticleDOI
01 Apr 1988 - Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood. Subsequently, very little is known especially in mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) Calciturbidites, comprising mostly of high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones which are characterised by planar laminated and unlaminated mud-dominated facies; and 3) Calcidebrites which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
TL;DR: The first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera is presented, achieving real time but drift-free performance inaccessible to structure from motion approaches.
Abstract: We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to structure from motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
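A minimal sketch of the kind of smooth, constant-velocity motion model referred to in the abstract is given below, assuming a quaternion attitude parameterisation and first-order integration; the actual MonoSLAM model additionally injects impulse process noise on the linear and angular velocities, which is omitted here, and the function names are placeholders.

import numpy as np

def quat_mult(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def constant_velocity_predict(r, v, q, w, dt):
    """Predict the camera state under a constant-velocity motion model
    (illustrative sketch; process noise handling omitted).

    r : position (3,), v : linear velocity (3,)
    q : orientation quaternion (w, x, y, z), w : angular velocity (3,)
    """
    r_new = r + v * dt                          # position integrates velocity
    omega = np.array([0.0, *w])                 # angular velocity as a pure quaternion
    q_new = q + 0.5 * dt * quat_mult(q, omega)  # first-order quaternion propagation
    q_new /= np.linalg.norm(q_new)              # renormalise
    # Linear and angular velocities are modelled as constant between frames.
    return r_new, v, q_new, w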

3,772 citations

Journal ArticleDOI
TL;DR: This paper describes the simultaneous localization and mapping (SLAM) problem, presents the essential methods for solving it, and summarizes key implementations and demonstrations of the method.
Abstract: This paper describes the simultaneous localization and mapping (SLAM) problem, presents the essential methods for solving it, and summarizes key implementations and demonstrations of the method. While there are still many practical issues to overcome, especially in more complex outdoor environments, the general SLAM method is now a well-understood and established part of robotics. A companion part of the tutorial summarizes more recent work addressing some of the remaining issues in SLAM, including computation, feature representation, and data association.

3,760 citations

Journal ArticleDOI
TL;DR: This paper discusses the recursive Bayesian formulation of the simultaneous localization and mapping (SLAM) problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained.
Abstract: This paper discusses the recursive Bayesian formulation of the simultaneous localization and mapping (SLAM) problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. The paper focuses on three key areas: computational complexity; data association; and environment representation.
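In the standard notation used in the SLAM literature (vehicle state x_k, map m, observation history Z_{0:k}, control history U_{0:k}), the recursive Bayesian formulation referred to above can be written as a prediction/update pair; the symbols below follow common tutorial usage rather than this paper's exact notation.

% Joint posterior over vehicle pose and map, maintained recursively:
P(x_k, m \mid Z_{0:k}, U_{0:k}, x_0)

% Time update (prediction), using the vehicle motion model P(x_k \mid x_{k-1}, u_k):
P(x_k, m \mid Z_{0:k-1}, U_{0:k}, x_0)
  = \int P(x_k \mid x_{k-1}, u_k)\, P(x_{k-1}, m \mid Z_{0:k-1}, U_{0:k-1}, x_0)\, \mathrm{d}x_{k-1}

% Measurement update, using the observation model P(z_k \mid x_k, m):
P(x_k, m \mid Z_{0:k}, U_{0:k}, x_0)
  = \frac{P(z_k \mid x_k, m)\, P(x_k, m \mid Z_{0:k-1}, U_{0:k}, x_0)}{P(z_k \mid Z_{0:k-1}, U_{0:k})}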

2,429 citations

01 Jan 1979
TL;DR: This special issue aims to gather recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis, with an emphasis on interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that we have some classes containing lots of training data and many classes containing only a small amount of training data. How to use the frequent classes to help learn rare classes, for which it is harder to collect training data, is therefore an open question. Learning with Shared Information is an emerging topic in machine learning, computer vision and multimedia analysis. There are different levels of components that can be shared during the concept modeling and machine learning stages, such as sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters and sharing training examples. Regarding specific methods, multi-task learning, transfer learning and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering the recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art work and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations