Author

Masahiro Tomono

Bio: Masahiro Tomono is an academic researcher from Chiba Institute of Technology. The author has contributed to research in topics: Mobile robot & Robot. The author has an h-index of 14, co-authored 63 publications receiving 692 citations. Previous affiliations of Masahiro Tomono include University of Tokyo & Toyo University.


Papers
Proceedings ArticleDOI
12 May 2009
TL;DR: The proposed method estimates camera poses and builds detailed 3D maps robustly by aligning edge points between frames using the ICP algorithm; in indoor experiments, it successfully built detailed 3D maps even under noisy conditions.
Abstract: Most vision-based SLAM systems utilize corner-like features, and may be unstable in non-textured environments where only a few corner features can be extracted. To cope with this problem, we employ edge points to perform SLAM with a stereo camera. The edge-point based SLAM is applicable to non-textured environments since plenty of edge points can be obtained even from a small number of lines. The proposed method estimates camera poses and builds detailed 3D maps robustly by aligning edge points between frames using the ICP algorithm. In indoor experiments, the method successfully built detailed 3D maps even under noisy conditions.
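
The frame-to-frame alignment step can be illustrated with a generic point-to-point ICP loop over edge points, as in the sketch below (nearest-neighbour matching plus a closed-form Kabsch pose fit). This is a minimal numpy/scipy sketch of the general technique, not the authors' implementation; all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_align(src, dst, iters=20):
    """Rigidly align edge points src (N x 3) to dst (M x 3).

    Each iteration matches every transformed source point to its nearest
    map point, then solves the best rigid motion in closed form (Kabsch).
    """
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)                      # fast nearest-neighbour lookups
    for _ in range(iters):
        cur = src @ R.T + t                  # apply current pose estimate
        _, idx = tree.query(cur)             # match to closest edge points
        matched = dst[idx]
        mu_c, mu_m = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ S @ U.T                  # incremental rotation
        dt = mu_m - dR @ mu_c                # incremental translation
        R, t = dR @ R, dR @ t + dt           # compose with running estimate
    return R, t                              # camera motion between frames
```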

89 citations

Proceedings ArticleDOI
Masahiro Tomono
01 Oct 2006
TL;DR: This paper proposes a framework to integrate dense shape and recognition features into an object model, and shows that an object map of a room was built successfully using the proposed object models.
Abstract: This paper presents a method of object map building using object models created from image sequences captured by a single camera. An object map is a highly structured map, built by placing 3-D object models on the floor plane according to object recognition results. To increase the efficiency of object map building, we propose a framework to integrate dense shape and recognition features into an object model. Experimental results show that an object map of a room was built successfully using the proposed object models.
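
For intuition, a map entry in this style could be as small as a posed reference to a learned object model, as in the sketch below. The class names, fields, and confidence threshold are illustrative assumptions, not the paper's actual data structure.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectInstance:
    model_id: str      # which learned 3-D object model was recognised
    x: float           # position on the floor plane (metres)
    y: float
    yaw: float         # orientation about the vertical axis (radians)
    score: float       # recognition confidence

@dataclass
class ObjectMap:
    objects: list = field(default_factory=list)

    def place(self, inst, min_score=0.5):
        """Add a recognised object to the map; low-confidence detections
        are ignored (the 0.5 threshold is an illustrative assumption)."""
        if inst.score >= min_score:
            self.objects.append(inst)
```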

54 citations

Proceedings ArticleDOI
03 Dec 2010
TL;DR: A light-weight sensor platform consisting of gyro-assisted odometry and a 3D laser scanner for localization of human-scale robots is proposed; the robot successfully navigated the assigned 1-km course in a fully autonomous mode multiple times.
Abstract: This paper proposes a light-weight sensor platform that consists of gyro-assisted odometry and a 3D laser scanner for localization of human-scale robots. The gyro-assisted odometry provides highly accurate positioning only by dead-reckoning. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. Robust and computationally inexpensive localization is implemented on the sensor platform using a particle filter on a 2D grid map generated by projecting 3D points onto the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were performed at the Tsukuba Challenge 2009, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km course in a fully autonomous mode multiple times.
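
A single predict/update/resample cycle of such a grid-map particle filter might look as follows; the noise magnitudes, map conventions, and scoring rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def localize_step(particles, weights, odom, scan_xy, grid, res, origin):
    """One cycle of a grid-map particle filter.

    particles: (N, 3) array of [x, y, theta] pose hypotheses
    odom:      (dx, dy, dtheta) motion in the robot frame
    scan_xy:   (M, 2) laser points already projected onto the ground plane
    grid:      2-D occupancy array (1 = occupied); res, origin define the map
    """
    n = len(particles)
    dx, dy, dth = odom
    # predict: apply odometry with assumed Gaussian noise
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * dx - s * dy + np.random.normal(0, 0.02, n)
    particles[:, 1] += s * dx + c * dy + np.random.normal(0, 0.02, n)
    particles[:, 2] += dth + np.random.normal(0, 0.01, n)
    # update: weight = how many projected scan points land on occupied cells
    for i, (x, y, th) in enumerate(particles):
        c, s = np.cos(th), np.sin(th)
        pts = scan_xy @ np.array([[c, -s], [s, c]]).T + (x, y)
        ij = ((pts - origin) / res).astype(int)
        ok = ((ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[1])
              & (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[0]))
        weights[i] = grid[ij[ok, 1], ij[ok, 0]].sum() + 1e-9
    weights = weights / weights.sum()
    # systematic resampling
    u = (np.arange(n) + np.random.rand()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```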

50 citations

Proceedings ArticleDOI
15 May 2006
TL;DR: The grasp planning method proposed in this paper can find a stable grasp pose from an automatically generated model that contains redundant data and shape errors.
Abstract: This paper describes grasp planning for a mobile manipulator that works in real environments. Mobile robot studies to date that manipulate objects in the real world typically rely on an ID tag attached to the object or on an object model given to the robot in advance. The authors instead aim to develop a mobile manipulator that can acquire an object model from video images and manipulate the object, so that the robot can manipulate an unknown object autonomously. The grasp planning method proposed in this paper can find a stable grasp pose from the automatically generated model, which contains redundant data and shape errors. Experiments show the effectiveness of the proposed method.
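
As a rough illustration of evaluating grasp stability on a noisy point-cloud model, the sketch below applies a standard antipodal/friction-cone test to a candidate two-finger grasp. This is a textbook-style check, not the paper's planner; the friction coefficient and the scoring rule are illustrative.

```python
import numpy as np

def antipodal_score(points, normals, i, j, mu=0.4):
    """Score a parallel-gripper grasp contacting points i and j.

    points:  (N, 3) point-cloud model (possibly noisy and redundant)
    normals: (N, 3) outward unit surface normals
    mu:      assumed friction coefficient (illustrative value)
    Returns 0 if the grasp is not antipodal, else a score in (0, 1].
    """
    axis = points[j] - points[i]
    axis = axis / (np.linalg.norm(axis) + 1e-12)   # gripper closing line
    half_cone = np.arctan(mu)                      # friction cone half-angle
    # the closing direction at each contact must lie inside the friction
    # cone around the inward surface normal (-n)
    a1 = np.arccos(np.clip(np.dot(-normals[i], axis), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-normals[j], -axis), -1.0, 1.0))
    if max(a1, a2) > half_cone:
        return 0.0                                 # slips: reject this pair
    return 1.0 - max(a1, a2) / half_cone           # more antipodal = better
```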

43 citations

Proceedings ArticleDOI
24 Apr 2000
TL;DR: A robot system for navigation in unknown environments in which the robot navigates itself to the room designated by its room number, using an environment model for efficient recognition of the objects and estimation of their positions.
Abstract: Navigation in unknown environments requires the robot to determine the destination positions without a map. Model-based object recognition offers a solution: the robot can estimate the destination positions from geometric relationships between the recognized objects and itself. This paper presents a robot system for this kind of navigation, in which the robot navigates itself to the room designated by its room number. The robot has an environment model including a corridor and a door with a room number plate, and uses the model for efficient recognition of the objects and estimation of their positions.

34 citations


Cited by
Proceedings ArticleDOI
01 May 2017
TL;DR: This article proposes an actor-critic model whose policy is a function of the goal as well as the current state, allowing it to generalize across new targets and scenes.
Abstract: Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment.
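
The central architectural idea, a policy that takes both the current observation and the goal as input, can be sketched as follows in PyTorch. Feature dimensions, layer sizes, and the concatenation-based fusion are illustrative assumptions rather than the authors' exact AI2-THOR model.

```python
import torch
import torch.nn as nn

class GoalConditionedActorCritic(nn.Module):
    """Minimal sketch of a goal-conditioned actor-critic: the policy and
    value are functions of a joint embedding of state and goal features."""
    def __init__(self, feat_dim=2048, hidden=512, n_actions=4):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),  # joint embedding
            nn.Linear(hidden, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)       # actor head
        self.value = nn.Linear(hidden, 1)                # critic head

    def forward(self, state_feat, goal_feat):
        h = self.fuse(torch.cat([state_feat, goal_feat], dim=-1))
        return self.policy(h), self.value(h)
```

Because the goal enters as an input rather than being baked into the weights, the same network can in principle be queried with a new target embedding without retraining, which is the generalization property the abstract emphasizes.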

1,394 citations

Proceedings ArticleDOI
14 May 2012
TL;DR: SeqSLAM calculates the best candidate matching location within every local navigation sequence; localization is then achieved by recognizing coherent sequences of these "local best matches", removing the need for global matching performance from the vision front-end.
Abstract: Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
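
The sequence-matching stage can be sketched as a search over constant-velocity paths through a precomputed image-difference matrix, as below. This minimal sketch omits SeqSLAM's local contrast enhancement; parameter names and values are illustrative.

```python
import numpy as np

def seq_match(D, ds=10, v_range=(0.8, 1.2), n_v=5):
    """Pick the database location best matching the last ds query frames.

    D: difference matrix, D[q, d] = dissimilarity of query frame q and
       database frame d (assumes D.shape[0] >= ds).
    Each candidate location is scored by summing D along constant-velocity
    trajectories through the most recent ds query frames; a low total means
    a coherent sequence of local best matches.
    """
    nq, nd = D.shape
    qs = np.arange(ds)                        # offsets of recent query frames
    best = np.full(nd, np.inf)
    for v in np.linspace(v_range[0], v_range[1], n_v):  # trajectory slopes
        max_d0 = nd - int(np.ceil(v * ds)) - 1
        for d0 in range(max(max_d0, 0)):
            cols = (d0 + v * qs).astype(int)  # database frames on the line
            score = D[nq - ds + qs, cols].sum()
            if score < best[d0]:
                best[d0] = score
    return int(np.argmin(best)), best         # best location and all scores
```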

756 citations

Journal Article
TL;DR: A new approach to visual navigation under changing conditions dubbed SeqSLAM, which removes the need for global matching performance from the vision front-end: it must only pick the best match within any short sequence of images.
Abstract: Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.

686 citations

Journal ArticleDOI
TL;DR: The outline of mapless navigation includes reactive techniques based on qualitative characteristics extraction, appearance-based localization, optical flow, feature tracking, plane ground detection/tracking, etc.; the recent concept of visual sonar is also reviewed.
Abstract: Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring or risky situation detection. This survey presents the work, from the nineties to the present, that constitutes major progress in visual navigation techniques for land, aerial and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. Map-based navigation is in turn subdivided into metric map-based navigation and topological map-based navigation. Our outline of mapless navigation includes reactive techniques based on qualitative characteristics extraction, appearance-based localization, optical flow, feature tracking, plane ground detection/tracking, etc. The recent concept of visual sonar is also reviewed.

649 citations

Journal ArticleDOI
TL;DR: The requirements for the exploration mission in the Fukushima Daiichi Nuclear Power Plants are presented, the implementation is discussed, and the results of the mission are reported.
Abstract: On March 11, 2011, a massive earthquake (magnitude 9.0) and accompanying tsunami hit the Tohoku region of eastern Japan. Since then, the Fukushima Daiichi Nuclear Power Plants have been facing a crisis due to the loss of all power that resulted from the meltdown accidents. Three buildings housing nuclear reactors were seriously damaged by hydrogen explosions, and, in one building, the nuclear reactions went out of control. It was too dangerous for humans to enter the buildings to inspect the damage because radioactive materials were also being released. In response to this crisis, it was decided that mobile rescue robots would be used to carry out surveillance missions. The mobile rescue robots that were needed could not be delivered to the Tokyo Electric Power Company (TEPCO) until various technical issues were resolved. Those issues involved hardware reliability, communication functions, and the ability of the robots' electronic components to withstand radiation. Additional sensors and functionality that would enable the robots to respond effectively to the crisis were also needed. Available robots were therefore retrofitted for the disaster response missions. First, the radiation tolerance of the electronic components was checked by means of gamma ray irradiation tests, which were conducted using the facilities of the Japan Atomic Energy Agency (JAEA). The commercial electronic devices used in the original robot systems operated long enough (more than 100 h at a 10% safety margin) in the assumed environment (100 mGy/h). Next, the usability of wireless communication in the target environment was assessed. Such tests were not possible in the target environment itself, so they were performed at the Hamaoka Daiichi Nuclear Power Plants, which are similar to the target environment. As previously predicted, the test results indicated that robust wireless communication would not be possible in the reactor buildings. It was therefore determined that a wired communication device would need to be installed. After TEPCO's official urgent mission proposal was received, the team mounted additional devices to facilitate the installation of a water gauge in the basement of the reactor buildings to determine flooding levels. While these preparations were taking place, prospective robot operators from TEPCO trained in a laboratory environment. Finally, one of the robots was delivered to the Fukushima Daiichi Nuclear Power Plants on June 20, 2011, where it performed a number of important missions inside the buildings. In this paper, the requirements for the exploration mission in the Fukushima Daiichi Nuclear Power Plants are presented, the implementation is discussed, and the results of the mission are reported. © 2013 Wiley Periodicals, Inc. (Webpage: http://www.astro.mech.tohoku.ac.jp/)

513 citations