Author

Kiyoshi Irie

Other affiliations: Tokyo Institute of Technology
Bio: Kiyoshi Irie is an academic researcher from Chiba Institute of Technology. The author has contributed to research in the topics of Mobile robot and Robot. The author has an h-index of 8 and has co-authored 36 publications receiving 192 citations. Previous affiliations of Kiyoshi Irie include Tokyo Institute of Technology.

Papers
Proceedings ArticleDOI
03 Dec 2010
TL;DR: A light-weight sensor platform consisting of gyro-assisted odometry and a 3D laser scanner is proposed for localization of human-scale robots; the robot equipped with it successfully navigated the assigned 1-km course fully autonomously multiple times.
Abstract: This paper proposes a light-weight sensor platform that consists of gyro-assisted odometry and a 3D laser scanner for localization of human-scale robots. The gyro-assisted odometry provides highly accurate positioning only by dead-reckoning. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. Robust and computationally inexpensive localization is implemented on the sensor platform using a particle filter on a 2D grid map generated by projecting 3D points on to the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were performed at the Tsukuba Challenge 2009, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km course in a fully autonomous mode multiple times.
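As a rough illustration of the localization pipeline described in the abstract, the sketch below projects 3D laser points onto a 2D grid and scores particles by map matching. The grid resolution, noise levels, and hit-counting sensor model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: project 3D points to a 2D grid, then run one particle-filter step
# (predict with odometry, weight by map matching, resample). Assumed parameters only.
import numpy as np

def project_to_grid(points_3d, resolution=0.1):
    """Project 3D laser points onto the ground plane and bin them into 2D grid cells."""
    cells = np.floor(points_3d[:, :2] / resolution).astype(int)
    return np.unique(cells, axis=0)

def likelihood(particle_pose, scan_cells, occupancy, resolution=0.1):
    """Score a particle by counting scan cells that land on occupied map cells."""
    x, y, theta = particle_pose
    c, s = np.cos(theta), np.sin(theta)
    pts = scan_cells * resolution                    # back to metric coordinates
    world = pts @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
    idx = np.floor(world / resolution).astype(int)
    h, w = occupancy.shape
    valid = (idx[:, 0] >= 0) & (idx[:, 0] < w) & (idx[:, 1] >= 0) & (idx[:, 1] < h)
    return occupancy[idx[valid, 1], idx[valid, 0]].sum() + 1e-6   # avoid zero weights

def particle_filter_step(particles, odom_delta, scan_cells, occupancy):
    """Predict with (gyro-assisted) odometry, weight by map matching, then resample."""
    noise = np.random.normal(scale=[0.05, 0.05, 0.01], size=particles.shape)
    particles = particles + odom_delta + noise
    weights = np.array([likelihood(p, scan_cells, occupancy) for p in particles])
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```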

50 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: BLSMI combines a kernel-based dependence estimator with noise reduction by bootstrap aggregating; it can handle richer features, estimates dependence robustly, and performed best in terms of calibration accuracy.
Abstract: The goal of this study is to achieve automatic extrinsic calibration of a camera-LiDAR system that does not require calibration targets. Calibration through maximization of statistical dependence using mutual information (MI) is a promising approach. However, we observed that existing methods perform poorly on outdoor data sets. Because of their susceptibility to noise, objective functions of previous methods tend to be non-smooth, and gradient-based searches fail in local optima. To overcome these issues, we introduce a novel dependence estimator called bagged least-squares mutual information (BLSMI). BLSMI is a combination of methods composed of a kernel-based dependence estimator and noise reduction by bootstrap aggregating (bagging), which can handle richer features and robustly estimate dependence. We compared ours with previous methods using indoor and outdoor data sets, and observed that our method performed best in terms of calibration accuracy. While previous methods showed degraded performance on outdoor data sets because of the local optima problem, our method exhibited high calibration accuracy both on indoor and outdoor data sets.
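The bagging idea can be sketched as below: bootstrap-aggregate a dependence estimate between LiDAR reflectivity and image intensity sampled at the projected points. The histogram-based mutual information here is a simple stand-in for the paper's least-squares mutual information estimator, and all constants are assumptions.

```python
# Hedged sketch of bagged dependence estimation (stand-in for BLSMI).
import numpy as np

def histogram_mi(x, y, bins=32):
    """Plain histogram mutual information between two 1-D samples (stand-in for LSMI)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def bagged_dependence(x, y, n_bags=20, seed=0):
    """Average the dependence estimate over bootstrap resamples to smooth out noise."""
    rng = np.random.default_rng(seed)
    n = len(x)
    scores = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)          # bootstrap sample with replacement
        scores.append(histogram_mi(x[idx], y[idx]))
    return float(np.mean(scores))
```

In a calibration loop, the extrinsic parameters would then be chosen to maximize this bagged score over candidate rotations and translations.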

24 citations

Journal ArticleDOI
TL;DR: This work uses edge-point-based stereo simultaneous localization and mapping to obtain occupancy information and robot ego-motion estimates simultaneously, and localizes on two-dimensional occupancy grid maps generated from three-dimensional point clouds obtained by a stereo camera.
Abstract: We present a mobile robot localization method using only a stereo camera. Vision-based localization in outdoor environments is a challenging issue because of extreme changes in illumination. To cope with varying illumination conditions, we use two-dimensional occupancy grid maps generated from three-dimensional point clouds obtained by a stereo camera. Furthermore, we incorporate salient line segments extracted from the ground into the grid maps. The grid maps are not significantly affected by illumination conditions because occupancy information and salient line segments can be robustly obtained. On the grid maps, a robot's poses are estimated using a particle filter that combines visual odometry and map matching. We use edge-point-based stereo simultaneous localization and mapping to obtain simultaneously occupancy information and robot ego-motion estimation. We tested our method under various illumination and weather conditions, including sunny and rainy days. The experimental results showed the effectiveness of the proposed method.
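A minimal sketch of the two-channel map matching suggested by the abstract follows: one grid channel holds occupancy from the stereo point clouds, the other holds salient ground line segments. The relative weighting of the channels is an assumption, not the authors' value.

```python
# Hedged sketch: score a candidate pose against a two-channel grid map
# (occupancy + ground line segments). Channel weight is an assumed constant.
import numpy as np

def map_matching_score(occ_map, line_map, occ_cells, line_cells, line_weight=2.0):
    """Sum matches in both map channels for one candidate pose."""
    def hits(grid, cells):
        h, w = grid.shape
        ok = (cells[:, 0] >= 0) & (cells[:, 0] < w) & (cells[:, 1] >= 0) & (cells[:, 1] < h)
        return grid[cells[ok, 1], cells[ok, 0]].sum()
    return hits(occ_map, occ_cells) + line_weight * hits(line_map, line_cells)
```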

23 citations

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A novel road recognition method for mobile robot navigation is presented; it uses a single image together with digital street maps, the robot position, and prior knowledge of the environment, and incorporates localization to correct errors in the robot position.
Abstract: In this study, we present a novel road recognition method using a single image for mobile robot navigation. Vision-based road recognition in outdoor environments remains a significant challenge. Our approach exploits digital street maps, the robot position, and prior knowledge of the environment. We segment an input image into superpixels, which are grouped into various object classes such as roadway, sidewalk, curb, and wall. We formulate the classification problem as an energy minimization problem and employ graph cuts to estimate the optimal object classes in the image. Although prior information assists recognition, erroneous information can lead to false recognition. Therefore, we incorporate localization into our recognition method to correct errors in robot position. The effectiveness of our method was verified through experiments using real-world urban datasets.
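The energy being minimized can be sketched as follows: a unary cost per superpixel (appearance plus the map/position prior) and a Potts smoothness term between adjacent superpixels. The actual minimization with graph cuts (e.g., alpha-expansion) would be handled by a dedicated library and is not shown; the function below only evaluates a labeling.

```python
# Hedged sketch of the labeling energy: E(L) = sum_i U_i(L_i) + w * sum_{(i,j)} [L_i != L_j]
import numpy as np

def labeling_energy(labels, unary, edges, pairwise_weight=1.0):
    """labels: (n,) class per superpixel; unary: (n, n_classes) costs; edges: adjacency pairs."""
    data_term = unary[np.arange(len(labels)), labels].sum()
    smooth_term = sum(labels[i] != labels[j] for i, j in edges)   # Potts penalty
    return data_term + pairwise_weight * smooth_term
```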

15 citations

Proceedings ArticleDOI
09 May 2011
TL;DR: A novel localization method for outdoor mobile robots using High Dynamic Range (HDR) vision is proposed; instead of fusing exposures into one image, it generates a keypoint set that incorporates keypoints detected in each image and matches them against a map.
Abstract: We propose a novel localization method for outdoor mobile robots using High Dynamic Range (HDR) vision technology. To obtain an HDR image, multiple images at different exposures are typically captured and combined. However, since mobile robots can be moving during a capture sequence, the images cannot be fused easily. Instead, we generate a set of keypoints that incorporates those detected in each image. The position of the robot is estimated using the keypoint sets to match measured positions with a map. We conducted experimental comparisons of HDR and auto-exposure images, and our HDR method showed higher robustness and localization accuracy.
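A minimal sketch of merging keypoints across a multi-exposure burst, as described above; ORB is used here only as an illustrative stand-in for whatever detector the authors actually used.

```python
# Hedged sketch: detect keypoints in every exposure and pool them into one set,
# rather than fusing the images into a single HDR frame first.
import cv2
import numpy as np

def keypoints_over_exposures(images):
    """images: list of grayscale frames taken at different exposure settings."""
    orb = cv2.ORB_create()
    all_kp, all_desc = [], []
    for img in images:
        kp, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            all_kp.extend(kp)
            all_desc.append(desc)
    descriptors = np.vstack(all_desc) if all_desc else np.empty((0, 32), np.uint8)
    return all_kp, descriptors
```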

14 citations


Cited by
Proceedings ArticleDOI
14 May 2012
TL;DR: SeqSLAM calculates the best candidate matching location within every local navigation sequence; localization is then achieved by recognizing coherent sequences of these "local best matches", removing the need for global matching performance from the vision front-end.
Abstract: Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
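A hedged sketch of the sequence-matching step: rather than trusting the single best frame match, score short constant-velocity trajectories through a matrix of image difference scores and keep the most coherent sequence. The sequence length and velocity range below are illustrative, not the paper's parameters.

```python
# Hedged sketch of sequence matching over an image difference matrix.
import numpy as np

def best_sequence_match(diff_matrix, ds=10, velocities=(0.8, 1.0, 1.25)):
    """diff_matrix[i, j]: dissimilarity of query frame i vs. database frame j."""
    n_query, n_db = diff_matrix.shape
    q0 = n_query - ds                          # start of the most recent query sequence
    best_score, best_db = np.inf, -1
    for j0 in range(n_db - ds):
        for v in velocities:                   # try a few assumed relative speeds
            qs = np.arange(ds)
            dbs = np.clip(j0 + np.round(v * qs).astype(int), 0, n_db - 1)
            score = diff_matrix[q0 + qs, dbs].sum()
            if score < best_score:
                best_score, best_db = score, j0
    return best_db, best_score
```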

756 citations

Journal Article
TL;DR: SeqSLAM is a new approach to visual navigation under changing conditions that removes the need for global matching performance from the vision front-end; instead, it must only pick the best match within any short sequence of images.
Abstract: Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.

686 citations

Journal ArticleDOI
TL;DR: The requirements for the exploration mission in the Fukushima Daiichi Nuclear Power Plants are presented, the implementation is discussed, and the results of the mission are reported.
Abstract: On March 11, 2011, a massive earthquake (magnitude 9.0) and accompanying tsunami hit the Tohoku region of eastern Japan. Since then, the Fukushima Daiichi Nuclear Power Plants have been facing a crisis due to the loss of all power that resulted from the meltdown accidents. Three buildings housing nuclear reactors were seriously damaged by hydrogen explosions, and, in one building, the nuclear reactions went out of control. It was too dangerous for humans to enter the buildings to inspect the damage because radioactive materials were also being released. In response to this crisis, it was decided that mobile rescue robots would be used to carry out surveillance missions. The mobile rescue robots needed could not be delivered to the Tokyo Electric Power Company (TEPCO) until various technical issues were resolved. Those issues involved hardware reliability, communication functions, and the ability of the robots' electronic components to withstand radiation. Additional sensors and functionality that would enable the robots to respond effectively to the crisis were also needed. Available robots were therefore retrofitted for the disaster response missions. First, the radiation tolerance of the electronic components was checked by means of gamma ray irradiation tests, which were conducted using the facilities of the Japan Atomic Energy Agency (JAEA). The commercial electronic devices used in the original robot systems operated long enough (more than 100 h at a 10% safety margin) in the assumed environment (100 mGy/h). Next, the usability of wireless communication in the target environment was assessed. Such tests were not possible in the target environment itself, so they were performed at the Hamaoka Daiichi Nuclear Power Plants, which are similar to the target environment. As previously predicted, the test results indicated that robust wireless communication would not be possible in the reactor buildings. It was therefore determined that a wired communication device would need to be installed. After TEPCO's official urgent mission proposal was received, the team mounted additional devices to facilitate the installation of a water gauge in the basement of the reactor buildings to determine flooding levels. While these preparations were taking place, prospective robot operators from TEPCO trained in a laboratory environment. Finally, one of the robots was delivered to the Fukushima Daiichi Nuclear Power Plants on June 20, 2011, where it performed a number of important missions inside the buildings. In this paper, the requirements for the exploration mission in the Fukushima Daiichi Nuclear Power Plants are presented, the implementation is discussed, and the results of the mission are reported. © 2013 Wiley Periodicals, Inc. (Webpage: http://www.astro.mech.tohoku.ac.jp/)

513 citations

Journal ArticleDOI
TL;DR: The results demonstrate that the six-degree-of-freedom trajectory of a passive spring-mounted range sensor can be accurately estimated from laser range data and industrial-grade inertial measurements in real time and that a quality 3-D point cloud map can be generated concurrently using the same data.
Abstract: Three-dimensional perception is a key technology for many robotics applications, including obstacle detection, mapping, and localization. There exist a number of sensors and techniques for acquiring 3-D data, many of which have particular utility for various robotic tasks. We introduce a new design for a 3-D sensor system, constructed from a 2-D range scanner coupled with a passive linkage mechanism, such as a spring. By mounting the other end of the passive linkage mechanism to a moving body, disturbances resulting from accelerations and vibrations of the body propel the 2-D scanner in an irregular fashion, thereby extending the device's field of view outside of its standard scanning plane. The proposed 3-D sensor system is advantageous due to its mechanical simplicity, mobility, low weight, and relatively low cost. We analyze a particular implementation of the proposed device, which we call Zebedee, consisting of a 2-D time-of-flight laser range scanner rigidly coupled to an inertial measurement unit and mounted on a spring. The unique configuration of the sensor system motivates unconventional and specialized algorithms to be developed for data processing. As an example application, we describe a novel 3-D simultaneous localization and mapping solution in which Zebedee is mounted on a moving platform. Using a motion capture system, we have verified the positional accuracy of the sensor trajectory. The results demonstrate that the six-degree-of-freedom trajectory of a passive spring-mounted range sensor can be accurately estimated from laser range data and industrial-grade inertial measurements in real time and that a quality 3-D point cloud map can be generated concurrently using the same data.
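As a rough illustration of how an irregularly moving 2-D scanner yields a 3-D cloud, the sketch below places each planar scan in the world using an estimated 6-DOF sensor pose at that instant; estimating those poses from laser range data and inertial measurements is the paper's actual contribution and is not shown here.

```python
# Hedged sketch: assemble a 3-D point cloud from 2-D scans given per-scan poses.
import numpy as np

def assemble_cloud(scans, poses):
    """scans: list of (N_i, 2) planar points; poses: list of (R 3x3, t 3-vector) per scan."""
    cloud = []
    for pts_2d, (R, t) in zip(scans, poses):
        pts_3d = np.column_stack([pts_2d, np.zeros(len(pts_2d))])  # scan plane at z = 0
        cloud.append(pts_3d @ R.T + t)                             # transform into world frame
    return np.vstack(cloud)
```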

402 citations

Proceedings ArticleDOI
19 Dec 2011
TL;DR: To accomplish the two missions, the mobile robot Quince was redesigned and improved through repeated operational tests, and one of the robots was delivered to the Fukushima Daiichi Nuclear Power Station on June 20, 2011.
Abstract: On March 11, 2011, a massive earthquake and tsunami hit eastern Japan, particularly affecting the Tohoku area. Since then, the Fukushima Daiichi Nuclear Power Station has been facing a crisis. To respond to this crisis, we considered using our rescue robots for surveillance missions. Before delivering a robot to TEPCO (Tokyo Electric Power Company), we needed to solve some technical issues and add some functions to respond to this crisis. Therefore, we began a redesign project to equip the robot for disaster response missions. TEPCO gave us two specific missions. One was to explore the inside and outside of the reactor buildings to perform dose measurements. The other was to sample contaminated water and install a water gauge in the basement of the reactor buildings. To succeed in the above two missions, we redesigned our mobile robot, Quince, and performed repeated operational tests to improve it. Finally, one of the robots was delivered to the Fukushima Daiichi Nuclear Power Station on June 20, 2011. In this paper, we introduce the requirements for the above two missions and report how we fulfilled them.

197 citations