scispace - formally typeset
Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include National Institute of Advanced Industrial Science and Technology & Hitachi.


Papers
Proceedings ArticleDOI
09 Apr 1991
TL;DR: A very fast lightstripe rangefinder is presented, based on an IC array of photoreceptor and analog signal processor cells, which acquires 1000 frames of range image per second, two orders of magnitude faster than currently available rangefinding methods.
Abstract: The authors present a very fast lightstripe rangefinder based on an IC array of photoreceptor and analog signal processor cells which acquires 1000 frames of range image per second, two orders of magnitude faster than currently available rangefinding methods. Unlike a conventional lightstripe rangefinder, which obtains a frame of range image by the step-and-repeat process of projecting a stripe and grabbing and analyzing a camera image, the VLSI sensor array of this rangefinder gathers range data in parallel as a scene is swept continuously by a moving stripe. Each cell continuously monitors the output of its photoreceptor, and detects and remembers the time at which it observed the peak incident light intensity during the sweep of the stripe. Prototype rangefinding systems have been built using a 28×32 array of these sensing elements.

79 citations
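The per-cell peak-time detection that the sensor implements in analog hardware can be sketched as a small software simulation (array shapes and the helper name are illustrative):

```python
import numpy as np

def peak_times(intensity, timestamps):
    """For each cell, return the time at which its photoreceptor output
    peaked during the stripe sweep -- the quantity each analog cell
    detects and remembers in the actual sensor.

    intensity:  (T, H, W) array of per-sample photoreceptor readings
    timestamps: (T,) array of sample times
    """
    peak_idx = np.argmax(intensity, axis=0)   # brightest sample index per cell
    return timestamps[peak_idx]               # (H, W) map of peak times
```

The peak time identifies where the moving stripe was when it crossed each cell, from which depth follows by triangulation.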

Journal ArticleDOI
TL;DR: An adaptive control scheme in joint space is proposed, with a simulation study demonstrating its effectiveness and computational procedure; two potential problems with adaptive control in inertial space are identified: the joint trajectory is unavailable, because the mapping from the inertial-space trajectory depends on the dynamics and is subject to uncertainty, and the parameterization in inertial space is nonlinear.
Abstract: In space applications, robot systems are subject to unknown or unmodeled dynamics, for example, in the tasks of transporting an unknown payload or catching an unmodeled moving object. We discuss the parameterization problem in dynamic structure and adaptive control of a space robot system with an attitude-controlled base to which the robot is attached. We first derive the system kinematic and dynamic equations based on Lagrangian dynamics and the linear momentum conservation law. Based on the dynamic model developed, we discuss the problem of linear parameterization in terms of dynamic parameters, and find that in joint space the dynamics can be linearized by a set of combined dynamic parameters; however, in inertial space linear parameterization is impossible in general. We then propose an adaptive control scheme in joint space, and present a simulation study to demonstrate its effectiveness and computational procedure. Because most tasks are specified in inertial space rather than joint space, we discuss the issues associated with adaptive control in inertial space and identify two potential problems: the joint trajectory is unavailable, because the mapping from the inertial-space trajectory is dynamics-dependent and subject to uncertainty; and the parameterization in inertial space is nonlinear. We approach the problem by making use of the proposed joint-space adaptive controller and updating the joint trajectory using the estimated dynamic parameters and the given trajectory in inertial space.

78 citations
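The flavor of the joint-space adaptive scheme can be shown on a 1-DOF toy system, where a single unknown inertia plays the role of the combined dynamic parameters; the gains, trajectory, and update law below are a generic Slotine-Li-style sketch, not the paper's space-robot formulation:

```python
import math

def simulate(m_true=2.0, m_hat0=0.5, dt=1e-3, T=5.0):
    """Track q_des(t) = sin(t) with unknown inertia m_true.
    Returns the final tracking error |q(T) - q_des(T)|."""
    lam, kd, gamma = 5.0, 10.0, 20.0   # illustrative gains
    q, qd, m_hat, t = 0.0, 0.0, m_hat0, 0.0
    while t < T:
        q_des, qd_des, qdd_des = math.sin(t), math.cos(t), -math.sin(t)
        e, ed = q - q_des, qd - qd_des
        s = ed + lam * e               # composite tracking error
        qdd_r = qdd_des - lam * ed     # reference acceleration
        Y = qdd_r                      # regressor: dynamics linear in m
        tau = m_hat * Y - kd * s       # certainty-equivalence control
        m_hat -= gamma * Y * s * dt    # parameter adaptation law
        qdd = tau / m_true             # true (unknown-to-controller) plant
        qd += qdd * dt
        q += qd * dt
        t += dt
    return abs(q - math.sin(T))
```

The key property mirrored here is the paper's joint-space linearization: the torque is linear in the unknown parameter, so a simple gradient update on the parameter estimate drives the tracking error down.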

Journal ArticleDOI
TL;DR: The theory governing shadow-based surface-orientation constraints under orthography is described, the analysis is extended to shadows cast by polyhedra and curved surfaces, and methods are presented for combining shadow geometry with other gradient space techniques for 3D shape inference.
Abstract: Given a line drawing from an image with shadow regions identified, the shapes of the shadows can be used to generate constraints on the orientations of the surfaces involved. This paper describes the theory which governs those constraints under orthography. A “Basic Shadow Problem” is first posed, in which there is a single light source, and a single surface casts a shadow on another (background) surface. There are six parameters to determine: the orientation (two parameters) of each surface, and the direction of the vector (two parameters) pointing at the light source. If any three of these are given in advance, the remaining three can be determined geometrically. The solution method consists of identifying “illumination surfaces” consisting of illumination vectors, assigning Huffman-Clowes line labels to their edges, and applying the corresponding constraints in gradient space. The analysis is extended to shadows cast by polyhedra and curved surfaces; in both cases, the constraints provided by shadows can be analyzed in a manner analogous to the Basic Shadow Problem. When the shadow falls upon a polyhedron or curved surface, similar techniques apply. The consequences of varying the position and number of light sources are also discussed. Finally, some methods are presented for combining shadow geometry with other gradient space techniques for 3D shape inference.

78 citations
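The projection geometry behind the Basic Shadow Problem can be illustrated by casting a point along the illumination direction onto a background plane written in gradient-space form z = p·x + q·y + c (function and parameter names are hypothetical; the paper reasons with line labels and gradient-space constraints rather than explicit coordinates):

```python
def shadow_point(P, L, grad, c):
    """Project point P = (x, y, z) along illumination direction L onto the
    background plane z = p*x + q*y + c, where (p, q) is the plane's
    gradient-space orientation. Returns None if L is parallel to the plane."""
    p, q = grad
    px, py, pz = P
    lx, ly, lz = L
    denom = lz - p * lx - q * ly
    if abs(denom) < 1e-12:
        return None                      # no intersection: light grazes the plane
    t = (p * px + q * py + c - pz) / denom
    return (px + t * lx, py + t * ly, pz + t * lz)
```

Loosely, each observed shadow point couples the surface gradients (p, q) and the light direction in one equation, which is why fixing three of the six parameters determines the remaining three.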

Proceedings ArticleDOI
04 Jan 1998
TL;DR: This work uses large-to-small full-resolution regions without blurring images, and simultaneously optimizes the coarser and finer parts of the optical flow so that both large and small motions can be estimated correctly.
Abstract: A motion estimation algorithm using wavelet approximation as an optical flow model has been developed to estimate accurate dense optical flow from an image sequence. This wavelet motion model is particularly useful in estimating optical flow with large displacement. Traditional pyramid methods, which build a coarse-to-fine image pyramid by image blurring, often produce incorrect results when the coarse-level estimates contain large errors that cannot be corrected at the subsequent finer levels. This happens when regions of low texture become flat or certain patterns result in spatial aliasing due to image blurring. Our method, in contrast, uses large-to-small full-resolution regions without blurring images, and simultaneously optimizes the coarser and finer parts of optical flow so that both large and small motion can be estimated correctly. We compare results obtained by using our method with those obtained by using one of the leading optical flow methods, the Szeliski pyramid spline-based method. The experiments include cases of small displacement (less than 4 pixels under a 128×128 image size, or equivalent displacement under other image sizes) and of large displacement (10 pixels). While both methods produce comparable results when the displacements are small, our method outperforms the pyramid spline-based method when the displacements are large.

75 citations
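The large-to-small, coarse-then-fine idea can be illustrated with a toy two-pass search for a global integer displacement operating entirely on full-resolution images (a hypothetical stand-in, not the paper's wavelet-based flow model):

```python
import numpy as np

def estimate_shift(a, b, max_disp=10):
    """Estimate the integer displacement d with b[y, x] ~ a[y - dy, x - dx]:
    a coarse search on a widely spaced grid, then a fine +/-1 refinement,
    both on the full-resolution images (no blurring)."""
    def ssd(d):
        dy, dx = d
        h, w = a.shape
        yb, xb = max(0, dy), max(0, dx)
        ya, xa = max(0, -dy), max(0, -dx)
        hh, ww = h - abs(dy), w - abs(dx)
        diff = a[ya:ya+hh, xa:xa+ww] - b[yb:yb+hh, xb:xb+ww]
        return float(np.sum(diff * diff))
    coarse = min(((dy, dx) for dy in range(-max_disp, max_disp + 1, 2)
                           for dx in range(-max_disp, max_disp + 1, 2)), key=ssd)
    cy, cx = coarse
    return min(((dy, dx) for dy in range(cy - 1, cy + 2)
                         for dx in range(cx - 1, cx + 2)), key=ssd)
```

Because both passes see the unblurred images, the fine pass can correct the coarse estimate, unlike a blurred pyramid where aliased or flattened coarse levels commit the finer levels to a wrong answer.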

Patent
12 Feb 2002
TL;DR: A plurality of camera systems is positioned relative to a scene so that the camera systems define a gross trajectory; images from the camera systems are transformed to superimpose a secondary induced motion on the gross trajectory, and the transformed images are displayed in sequence corresponding to the positions of the corresponding camera systems along the trajectory.
Abstract: A method and a system for generating a video image sequence. According to one embodiment, the method includes positioning a plurality of camera systems relative to a scene such that the camera systems define a gross trajectory. The method further includes transforming images from the camera systems to superimpose a secondary induced motion on the gross trajectory. Finally, the method includes displaying the transformed images in sequence corresponding to the position of the corresponding camera systems along the gross trajectory.

75 citations
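A minimal caricature of the patent's pipeline: each camera's frame is cropped at an oscillating offset, so playing the crops in trajectory order superimposes a secondary induced motion on the gross trajectory (the transform and its parameters are purely illustrative, not the patented method):

```python
import math

def virtual_flythrough(frames, amplitude=2, period=8):
    """frames: images (lists of pixel rows) ordered along the gross camera
    trajectory. Each frame is cropped at a sinusoidally oscillating
    horizontal offset; the crop width stays constant so the result plays
    back as a steady sequence with the induced motion superimposed."""
    out = []
    for i, frame in enumerate(frames):
        dx = int(round(amplitude * math.sin(2 * math.pi * i / period)))
        w = len(frame[0])
        x0 = amplitude + dx                       # crop start follows the induced motion
        out.append([row[x0:w - amplitude + dx] for row in frame])
    return out
```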


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: Gradient-based learning with convolutional neural networks is reviewed for handwritten character recognition, and a graph transformer network (GTN) paradigm is proposed that allows multi-module document recognition systems to be trained globally using gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
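The two operations such convolutional networks repeat, local filtering and subsampling, can be sketched in a few lines of NumPy (educational stand-ins, not the paper's trained network):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D correlation, the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y+kh, x:x+kw] * kernel)
    return out

def max_pool(img, k=2):
    """Non-overlapping k x k max pooling (subsampling)."""
    h, w = img.shape
    return img[:h//k*k, :w//k*k].reshape(h//k, k, w//k, k).max(axis=(1, 3))
```

Stacking these layers gives the shift-tolerant local feature extraction that lets such networks cope with the variability of 2D shapes.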

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
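The defining structure of an Inception module, parallel multi-scale filters whose outputs are concatenated along the channel axis, can be sketched for a single-channel input; the fixed averaging kernels below are illustrative stand-ins for learned filters:

```python
import numpy as np

def conv_same(x, kernel):
    """Zero-padded 'same' convolution so every branch keeps the spatial size."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for y in range(x.shape[0]):
        for z in range(x.shape[1]):
            out[y, z] = np.sum(xp[y:y+kh, z:z+kw] * kernel)
    return out

def inception_block(x):
    """Run 1x1, 3x3, and 5x5 branches plus a pooling stand-in in parallel,
    then stack along a new channel axis -- output depth = number of branches."""
    branches = [
        conv_same(x, np.ones((1, 1))),
        conv_same(x, np.ones((3, 3)) / 9.0),
        conv_same(x, np.ones((5, 5)) / 25.0),
        conv_same(x, np.ones((3, 3)) / 9.0),   # stand-in for the pooling branch
    ]
    return np.stack(branches)
```

The multi-scale branches are what the abstract's "intuition of multi-scale processing" refers to; in GoogLeNet, 1x1 convolutions additionally compress channels so the budget stays constant.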

Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality, whose surreal nature only intensifies with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.

31,952 citations
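One stage of the computation, the per-cell histogram of gradient orientations with magnitude-weighted votes, can be sketched as follows (a minimal version without the orientation interpolation or block normalization the paper shows to be important):

```python
import numpy as np

def hog_cell(cell, n_bins=9):
    """Orientation histogram for one cell: gradients via finite differences,
    magnitude-weighted votes into unsigned-orientation bins."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())        # accumulate weighted votes
    return hist
```

A full descriptor concatenates such histograms over a grid of cells, normalized within overlapping blocks, which is the contrast normalization stage the abstract highlights.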

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations
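The pipeline structure described above (bottom-up proposals, one shared feature extractor, per-class scoring) can be sketched as a skeleton in which `propose`, `featurize`, and `classifiers` are caller-supplied stand-ins for selective search, the fine-tuned CNN, and the per-class SVMs:

```python
def rcnn_detect(image, propose, featurize, classifiers, threshold=0.5):
    """Score every proposed region with every per-class classifier and keep
    detections above threshold. (The warp-to-fixed-size step that precedes
    the CNN in R-CNN is omitted from this sketch.)"""
    detections = []
    for box in propose(image):
        x0, y0, x1, y1 = box
        feat = featurize(image[y0:y1, x0:x1])   # features for the cropped region
        for label, score_fn in classifiers.items():
            score = score_fn(feat)
            if score >= threshold:
                detections.append((label, box, score))
    return detections
```

The design point this structure reflects is that one expensive feature extractor is shared across all classes, while the cheap per-class scorers run on the extracted features.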