
Shotaro Kojima

Researcher at Tohoku University

Publications: 34
Citations: 127

Shotaro Kojima is an academic researcher from Tohoku University. The author has contributed to research in topics including Computer science and Motion control. The author has an h-index of 5 and has co-authored 24 publications receiving 68 citations. Previous affiliations of Shotaro Kojima include the Japan Society for the Promotion of Science.

Papers
Journal Article

Consistent map building in petrochemical complexes for firefighter robots using SLAM based on GPS and LIDAR

TL;DR: This paper describes two Rao-Blackwellized particle filters based on GPS and light detection and ranging (LIDAR) as SLAM solutions for firefighter robots in petrochemical complexes, and proposes the use of FastSLAM to combine GPS and LIDAR.
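
As a rough illustration of the Rao-Blackwellized particle-filter idea behind FastSLAM-style GPS/LIDAR fusion, the sketch below runs one predict-weight-resample cycle over pose particles. The motion-noise values, the Gaussian GPS likelihood, and the lidar_likelihood scan-matching score are illustrative assumptions, not the paper's implementation.

# Minimal sketch of a Rao-Blackwellized particle-filter update that fuses GPS and
# LIDAR likelihoods, in the spirit of FastSLAM. Models and parameters here are
# assumptions for illustration only.
import numpy as np

def fastslam_step(particles, weights, odom_delta, gps_xy, gps_sigma, lidar_likelihood):
    """One predict-weight-resample cycle over pose particles (x, y, yaw)."""
    n = len(particles)
    # Predict: propagate each particle through the motion model with noise.
    noise = np.random.normal(scale=[0.05, 0.05, 0.01], size=(n, 3))
    particles = particles + odom_delta + noise

    # Weight: combine a Gaussian GPS likelihood with a scan-to-map LIDAR score.
    d2 = np.sum((particles[:, :2] - gps_xy) ** 2, axis=1)
    w_gps = np.exp(-0.5 * d2 / gps_sigma ** 2)
    w_lidar = np.array([lidar_likelihood(p) for p in particles])
    weights = weights * w_gps * w_lidar
    weights /= np.sum(weights)

    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights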
Proceedings Article

3D graph based stairway detection and localization for mobile robots

TL;DR: This paper develops a graph-based stairway detection method for point cloud data that can detect a large variety of stairways; its accuracy is higher than that of most state-of-the-art stairway detection methods, even for sparse point cloud data.
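
A minimal sketch of the graph-based idea, under assumed parameters: treat planar patches as graph nodes, connect pairs whose vertical and horizontal offsets match plausible step dimensions, and accept a sufficiently long connected chain as a stairway. The thresholds and helper names below are illustrative, not the paper's values.

# Minimal sketch of graph-based stairway detection on a point cloud.
import numpy as np
from itertools import combinations

def detect_stairway(tread_centers, rise=(0.12, 0.22), run=(0.22, 0.35), min_steps=3):
    """tread_centers: (n, 3) array of centroids of planar patches (candidate treads)."""
    n = len(tread_centers)
    adj = [[] for _ in range(n)]
    # Build the graph: connect patches whose geometric relation looks like one step.
    for i, j in combinations(range(n), 2):
        dz = abs(tread_centers[i][2] - tread_centers[j][2])
        dxy = np.linalg.norm(tread_centers[i][:2] - tread_centers[j][:2])
        if rise[0] <= dz <= rise[1] and run[0] <= dxy <= run[1]:
            adj[i].append(j)
            adj[j].append(i)

    # A stairway is a connected chain containing at least min_steps treads.
    def component_size(start):
        seen, stack = {start}, [start]
        while stack:
            for nxt in adj[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return len(seen)

    return any(component_size(i) >= min_steps for i in range(n))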
Journal Article

Robust stairway-detection and localization method for mobile robots using a graph-based model and competing initializations

TL;DR: A three-dimensional graph-based stairway-detection method combined with competing initializations that accurately detects and estimates the stairway parameters with an average error of only 2 .
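
A minimal sketch of the competing-initializations idea, under assumed interfaces: fit the same parametric stairway model from several initial guesses and keep the best-scoring result. The residual function, parameterization, and optimizer choice are assumptions for illustration.

# Minimal sketch of competing initializations for a parametric stairway fit.
from scipy.optimize import minimize

def fit_with_competing_inits(residual_fn, initial_guesses):
    """residual_fn maps a parameter vector (e.g. rise, run, origin, heading)
    to a scalar fitting error against the observed point cloud."""
    best = None
    for x0 in initial_guesses:
        result = minimize(residual_fn, x0, method="Nelder-Mead")
        if best is None or result.fun < best.fun:
            best = result
    return best.x, best.fun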
Proceedings Article

Use of active scope camera in the Kumamoto Earthquake to investigate collapsed houses

TL;DR: An investigation that used the active scope camera to examine the interiors of collapsed houses and determined the constraints to be considered for robot operation in disaster areas.
Proceedings Article

Fusion of Camera and Lidar Data for Large Scale Semantic Mapping

TL;DR: This work presents a strategy that utilizes recent advancements in semantic segmentation of images, fusing the information extracted from the camera stream with accurate depth measurements from a Lidar sensor to create large-scale, semantically labeled point clouds of the environment.
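
A minimal sketch of one common way to realize this kind of camera-Lidar label fusion, under assumed calibration and variable names: project Lidar points into the image using the extrinsics and intrinsics, then copy the per-pixel semantic label onto each visible point. Occlusion handling and calibration details are omitted.

# Minimal sketch of camera-Lidar label fusion via pinhole projection.
import numpy as np

def label_points(points_xyz, T_cam_from_lidar, K, semantic_image):
    """points_xyz: (n, 3) Lidar points; T_cam_from_lidar: 4x4 extrinsics;
    K: 3x3 intrinsics; semantic_image: (H, W) array of class ids."""
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    # Keep points in front of the camera and project with the pinhole model.
    in_front = pts_cam[:, 2] > 0.1
    uvw = (K @ pts_cam[in_front].T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    # Copy labels for projections that land inside the image bounds.
    h, w = semantic_image.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(len(points_xyz), -1, dtype=int)  # -1 = unlabeled / outside view
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = semantic_image[v[valid], u[valid]]
    return labels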