Author

Naokazu Yokoya

Bio: Naokazu Yokoya is an academic researcher from the Nara Institute of Science and Technology. The author has contributed to research on topics including augmented reality and wearable computers. The author has an h-index of 36 and has co-authored 280 publications receiving 5,261 citations. Previous affiliations of Naokazu Yokoya include National Archives and Records Administration & McGill University.


Papers
Journal ArticleDOI
TL;DR: This work proposes a new algorithm for range data registration and segmentation that is robust to outlying points (outliers) caused by noise and occlusion, and integrates the inliers obtained from multiple range images to construct a data set representing an entire object.

304 citations
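
A core step in the approach summarized above is deciding which measured points agree with the current model. Below is a minimal sketch of an LMedS-style inlier test; the 1.4826 scale factor and the 2.5-sigma cutoff are common robust-statistics conventions, not parameters taken from the paper.

```python
import numpy as np

def classify_inliers(residuals, cutoff=2.5):
    """Split registration residuals into inliers and outliers using an
    LMedS-style robust scale estimate (illustrative convention, not the
    paper's exact parameters)."""
    r = np.abs(np.asarray(residuals, dtype=float))
    # Robust scale from the median of squared residuals (standard LMedS form).
    sigma = 1.4826 * np.sqrt(np.median(r ** 2))
    return r <= cutoff * sigma
```

In a registration loop, `residuals` would be the point-to-model distances after applying the current motion estimate; points flagged False would be treated as noise, occlusion, or newly observed surface.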

Journal ArticleDOI
TL;DR: The essential problems in image database (IDB) design are pointed out, rather than classifying existing or proposed systems within an as-yet unestablished framework.

270 citations

Journal ArticleDOI
TL;DR: The authors describe a hybrid approach to the problem of image segmentation in range data analysis, where hybrid refers to a combination of both region- and edge-based considerations.
Abstract: The authors describe a hybrid approach to the problem of image segmentation in range data analysis, where hybrid refers to a combination of both region- and edge-based considerations. The range image of 3-D objects is divided into surface primitives which are homogeneous in their intrinsic differential geometric properties and do not contain discontinuities in either depth or surface orientation. The method is based on the computation of partial derivatives, obtained by a selective local biquadratic surface fit. Then, by computing the Gaussian and mean curvatures, an initial region-based segmentation is obtained in the form of a curvature sign map. Two additional initial edge-based segmentations are also computed from the partial derivatives and depth values, namely, jump and roof-edge maps. The three image maps are then combined to produce the final segmentation. Experimental results obtained for both synthetic and real range data of polyhedral and curved objects are given.

257 citations
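
The segmentation above hinges on estimating surface derivatives from a local fit and labeling pixels by curvature sign. The sketch below fits a plain least-squares six-term quadratic patch in a fixed window and builds a curvature-sign map; the paper's selective biquadratic fit and its jump/roof edge maps are not reproduced, so treat this purely as an illustration of the curvature-sign idea.

```python
import numpy as np

def curvature_sign_map(depth, win=5):
    """Curvature-sign labels from a least-squares quadratic patch fit in a
    sliding window. Sketch only; the paper's selective fit and edge maps
    are omitted."""
    h, w = depth.shape
    r = win // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    # Design matrix for z ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    A = np.stack([np.ones_like(xs), xs, ys, xs**2, xs*ys, ys**2],
                 axis=-1).reshape(-1, 6).astype(float)
    pinv = np.linalg.pinv(A)
    labels = np.zeros((h, w), dtype=int)
    for i in range(r, h - r):
        for j in range(r, w - r):
            z = depth[i - r:i + r + 1, j - r:j + r + 1].reshape(-1)
            a, b, c, d, e, f = pinv @ z
            fx, fy, fxx, fxy, fyy = b, c, 2 * d, e, 2 * f
            g = 1.0 + fx**2 + fy**2
            K = (fxx * fyy - fxy**2) / g**2                 # Gaussian curvature
            H = ((1 + fy**2) * fxx - 2 * fx * fy * fxy
                 + (1 + fx**2) * fyy) / (2 * g**1.5)        # mean curvature
            # One of nine labels from the signs of (K, H); in practice a small
            # threshold around zero would be used before taking the sign.
            labels[i, j] = int(3 * (np.sign(K) + 1) + (np.sign(H) + 1))
    return labels
```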

Proceedings ArticleDOI
25 Aug 1996
TL;DR: A new algorithm for the registration and integration of multiple range images is proposed for producing a geometric object surface model; its registration step determines a set of rigid motion parameters that align a range image to a given mesh-based geometric model.
Abstract: Registration and integration of measured data of real objects are becoming important in 3D modeling for computer graphics and computer-aided design. We propose a new algorithm for the registration and integration of multiple range images to produce a geometric object surface model. The registration algorithm determines a set of rigid motion parameters that register a range image to a given mesh-based geometric model. The algorithm is an integration of the iterative closest point algorithm with the least median of squares estimator. After registration, points in the input range image are classified into inliers and outliers according to the registration errors between each data point and the model. The outliers are appended to the surface model to be used in the registration of subsequent range images. The parts classified as inliers by at least one registration are segmented out to be integrated. This process of registration and integration is iterated until all views are integrated. We have successfully experimented with the proposed method on real range image sequences taken by a rangefinder. The method does not require any preliminary processing.

216 citations
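
The registration step described above combines iterative closest point matching with a least-median-of-squares flavor of robustness. Below is a compact, self-contained sketch of that idea: each iteration keeps only the correspondences whose squared distance falls below the median before refitting the rigid motion. The brute-force nearest-neighbor search and the exact trimming rule are simplifications, not the paper's implementation.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def robust_icp(scan, model, iters=30):
    """ICP with a median-based (LMedS-style) inlier cut at each iteration.
    Illustrative only; the paper's estimator details may differ."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = scan @ R.T + t
        # Brute-force closest points in the model (fine for small clouds).
        d2 = ((moved[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(1)
        r2 = d2[np.arange(len(scan)), nn]
        keep = r2 <= np.median(r2)     # robust, median-based selection
        R, t = rigid_fit(scan[keep], model[nn[keep]])
    return R, t, scan @ R.T + t
```

After convergence, the per-point residuals of the aligned scan could be fed to an inlier test like the one sketched earlier, with outliers appended to the model for later views.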

Proceedings ArticleDOI
18 Mar 2000
TL;DR: A method for augmented reality with a stereo vision sensor and a video see-through head-mounted display (HMD) that can synchronize the display timing between the virtual and real worlds so that the alignment error is reduced.
Abstract: In an augmented reality system, it is required to obtain the position and orientation of the user's viewpoint in order to display the composed image while maintaining a correct registration between the real and virtual worlds. All the procedures must be done in real time. This paper proposes a method for augmented reality with a stereo vision sensor and a video see-through head-mounted display (HMD). It can synchronize the display timing between the virtual and real worlds so that the alignment error is reduced. The method calculates camera parameters from three markers in image sequences captured by a pair of stereo cameras mounted on the HMD. In addition, it estimates the real-world depth from a pair of stereo images in order to generate a composed image maintaining consistent occlusions between real and virtual objects. The depth estimation region is efficiently limited by calculating the position of the virtual object by using the camera parameters. Finally, we have developed a video see-through augmented reality system which mainly consists of a pair of stereo cameras mounted on the HMD and a standard graphics workstation. The feasibility of the system has been successfully demonstrated with experiments.

163 citations
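
Occlusion consistency in the system above comes down to a per-pixel comparison between the estimated real-world depth and the rendered virtual depth. A minimal sketch of that composition step is given below; the array names and hard depth test are illustrative assumptions, and the paper's restriction of depth estimation to the projected virtual-object region is only noted in a comment.

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth, virt_mask):
    """Per-pixel composition keeping real/virtual occlusions consistent:
    a virtual pixel is drawn only where the virtual surface is closer to the
    camera than the estimated real depth. In the system described above,
    stereo depth would only need to be estimated inside the region where the
    virtual object projects (virt_mask), not over the whole image."""
    draw_virtual = virt_mask & (virt_depth < real_depth)
    out = real_rgb.copy()
    out[draw_virtual] = virt_rgb[draw_virtual]
    return out
```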


Cited by
Proceedings ArticleDOI
01 May 2001
TL;DR: An implementation is demonstrated that is able to align two range images in a few tens of milliseconds, assuming a good initial guess, and has potential application to real-time 3D model acquisition and model-based tracking.
Abstract: The ICP (Iterative Closest Point) algorithm is widely used for geometric alignment of three-dimensional models when an initial estimate of the relative pose is known. Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy. We enumerate and classify many of these variants, and evaluate their effect on the speed with which the correct alignment is reached. In order to improve convergence for nearly-flat meshes with small features, such as inscribed surfaces, we introduce a new variant based on uniform sampling of the space of normals. We conclude by proposing a combination of ICP variants optimized for high speed. We demonstrate an implementation that is able to align two range images in a few tens of milliseconds, assuming a good initial guess. This capability has potential application to real-time 3D model acquisition and model-based tracking.

4,059 citations
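
The normal-space sampling variant introduced in the paper above can be approximated as follows: quantize point normals into angular buckets, then draw samples as evenly as possible across buckets so that small features with unusual normals still contribute correspondences. The bucket layout and sampling policy here are simplifications, not the authors' exact procedure.

```python
import numpy as np

def normal_space_sample(points, normals, n_samples, bins=8, seed=0):
    """Uniform sampling in the space of normals (sketch). Assumes unit-length
    normals; the total number of returned samples is only approximately
    n_samples."""
    rng = np.random.default_rng(seed)
    # Quantize normals by spherical angles into a coarse grid of buckets.
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))          # [0, pi]
    phi = np.arctan2(normals[:, 1], normals[:, 0]) + np.pi        # [0, 2*pi]
    t_bin = np.minimum((theta / np.pi * bins).astype(int), bins - 1)
    p_bin = np.minimum((phi / (2 * np.pi) * 2 * bins).astype(int), 2 * bins - 1)
    bucket = t_bin * 2 * bins + p_bin
    occupied = np.unique(bucket)
    per_bucket = max(1, n_samples // len(occupied))
    chosen = []
    for b in occupied:
        idx = np.flatnonzero(bucket == b)
        take = min(per_bucket, len(idx))
        chosen.extend(rng.choice(idx, size=take, replace=False))
    return points[np.array(chosen)]
```

Compared with random or uniform spatial sampling, this keeps correspondences on small-but-distinctive features (e.g., incised grooves) from being drowned out by large flat regions.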

Journal ArticleDOI
TL;DR: Attempts have been made to cover both fuzzy and non-fuzzy techniques, including color image segmentation and neural network based approaches, and the issue of quantitative evaluation of segmentation results is also addressed.

3,527 citations

Journal ArticleDOI
TL;DR: The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multidimensional indexing, and system design, three of the fundamental bases of content-based image retrieval.

2,197 citations

Journal ArticleDOI
TL;DR: There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images; as discussed by the authors, these can be classified as measurement space guided spatial clustering, single linkage region growing, hybrid linkage region growing, centroid linkage region growing, spatial clustering, and split-and-merge schemes.
Abstract: There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes. In this paper, we define each of the major classes of image segmentation techniques and describe several specific examples of each class of algorithm. We illustrate some of the techniques with examples of segmentations performed on real images.

2,009 citations
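
As one concrete instance of the region-growing classes listed above, here is a deliberately small single-linkage region grower: 4-connected pixels are merged whenever their gray levels differ by less than a threshold. The threshold value and connectivity are illustrative choices, not drawn from the survey.

```python
import numpy as np
from collections import deque

def single_linkage_segment(image, thresh=10):
    """Single linkage region growing on a grayscale image: flood-fill a new
    label whenever a pixel links to a 4-neighbor of similar gray value."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for si in range(h):
        for sj in range(w):
            if labels[si, sj]:
                continue
            current += 1
            labels[si, sj] = current
            queue = deque([(si, sj)])
            while queue:
                i, j = queue.popleft()
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if (0 <= ni < h and 0 <= nj < w and not labels[ni, nj]
                            and abs(int(image[ni, nj]) - int(image[i, j])) < thresh):
                        labels[ni, nj] = current
                        queue.append((ni, nj))
    return labels
```

Because linkage is to the neighboring pixel rather than to a region statistic, gradual shading can chain distinct surfaces together, which is exactly the weakness the hybrid and centroid linkage classes try to address.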

Proceedings Article
01 Jan 1989
TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot, and equations, theorems, concepts, and clues relating the objects, their positions, and their motion to their images on the focal plane are presented.
Abstract: A scheme is developed for classifying the types of motion perceived by a humanlike robot. It is assumed that the robot receives visual images of the scene using a perspective system model. Equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.

2,000 citations
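
The kind of relation the abstract refers to, between an object's camera-frame position and motion and its image on the focal plane, can be written under a standard pinhole model as follows (notation is assumed here, not necessarily the paper's):

```latex
% Pinhole (perspective) projection with focal length f and camera-frame
% coordinates (X, Y, Z), and the image motion obtained by differentiating it.
\[
  x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}
\]
\[
  \dot{x} = f\,\frac{\dot{X}Z - X\dot{Z}}{Z^{2}}, \qquad
  \dot{y} = f\,\frac{\dot{Y}Z - Y\dot{Z}}{Z^{2}}
\]
```

The second pair shows how observed image velocity mixes lateral motion with motion in depth, which is the sort of clue a motion-classification scheme of this kind can exploit.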