Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147, co-authored 799 publications receiving 103237 citations. Previous affiliations of Takeo Kanade include National Institute of Advanced Industrial Science and Technology & Hitachi.


Papers
Proceedings ArticleDOI
05 Dec 1988
TL;DR: This paper concentrates on sensor modeling and its relationship with strategy generation, because it is regarded as the bottleneck to automatic generation of object recognition programs.
Abstract: One of the most important and systematic methods to build model-based vision systems is to generate object recognition programs automatically from given geometric models. Automatic generation of object recognition programs requires several key components to be developed: object models to describe the geometric and photometric properties of an object to be recognized, sensor models to predict object appearances from the object model under a given sensor, strategy generation using the predicted appearances to produce a recognition strategy, and program generation converting the recognition strategy into executable code. This paper concentrates on sensor modeling and its relationship with strategy generation, because we regard it as the bottleneck to automatic generation of object recognition programs. We consider two aspects of sensor characteristics: sensor detectability and sensor reliability. Sensor detectability specifies what kinds of features can be detected and in what conditions the features are detected; sensor reliability is a confidence measure for the detected features. We define a configuration space to represent sensor characteristics and propose a representation method for sensor detectability and reliability in that space. Finally, we investigate how to use the proposed sensor model in the automatic generation of an object recognition program.
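The detectability/reliability distinction in the abstract can be sketched as two functions over a configuration space. A minimal sketch, assuming a 1-D configuration space (the viewing angle of a feature) with an invented detection limit and an invented cosine falloff model; none of these values come from the paper:

```python
import math

# Hypothetical sketch: sensor detectability and reliability as functions
# over a 1-D configuration space (angle between surface normal and view
# direction). The 60-degree limit and cosine falloff are illustrative
# assumptions, not values from the paper.

DETECT_LIMIT_DEG = 60.0  # assumed: beyond this angle the feature is undetectable

def detectable(view_angle_deg):
    """Sensor detectability: can the feature be detected in this configuration?"""
    return view_angle_deg < DETECT_LIMIT_DEG

def reliability(view_angle_deg):
    """Sensor reliability: confidence in the detected feature, assumed here
    to fall off with foreshortening (cosine model)."""
    if not detectable(view_angle_deg):
        return 0.0
    return math.cos(math.radians(view_angle_deg))

# A strategy generator would prefer features whose configurations are both
# detectable and highly reliable.
print(detectable(30.0), round(reliability(30.0), 3))   # True 0.866
print(detectable(75.0), reliability(75.0))             # False 0.0
```

A real configuration space would be higher-dimensional (full pose of object relative to sensor), but the strategy-generation query is the same: for a candidate feature, look up whether it is detectable and how much to trust it.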

34 citations

Proceedings ArticleDOI
10 Aug 2008
TL;DR: In this article, a projector-camera system is proposed to measure the shape of the naked foot while walking or running, and a characteristic pattern is set on the projector, so that correspondence between the projection pattern and the camera captured image can be solved easily.
Abstract: Recently, techniques for measuring and modeling the human body have been receiving attention, because human models are useful for ergonomic design in manufacturing. We aim to accurately measure the dynamic shape of the human foot in motion (i.e. walking or running). Such measurement is valuable for shoe design and sports analysis. In this paper, a projector-camera system is proposed to measure the shape of the naked foot while walking or running. A characteristic pattern is set on the projector, so that the correspondence between the projection pattern and the camera-captured image can be solved easily. Because pattern switching is not required, the system can measure foot shape even when the foot is in motion. The proposed method trades "density of measurement" for "stability of matching", but the reduced density is sufficient for our purpose.
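Once a projector-camera correspondence is established, each projected stripe defines a plane in space and each camera pixel a ray, so the 3-D surface point is a ray-plane intersection. A hedged sketch of that final triangulation step, with made-up calibration values (the paper's actual pattern design and calibration are not reproduced here):

```python
import numpy as np

# Sketch of structured-light triangulation: intersect the camera ray for a
# pixel with the light plane cast by the corresponding projector stripe.
# All geometry below is invented for illustration.

def triangulate(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the 3-D point where a camera ray meets a projector light plane."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    t = np.dot(plane_point - ray_origin, plane_normal) / np.dot(ray_dir, plane_normal)
    return ray_origin + t * ray_dir

cam = np.array([0.0, 0.0, 0.0])        # camera center (assumed at origin)
ray = np.array([0.0, 0.0, 1.0])        # pixel ray along +z
plane_pt = np.array([0.0, 0.0, 0.5])   # a point on the stripe's light plane
plane_n = np.array([1.0, 0.0, 1.0])    # plane normal for a tilted projector

point = triangulate(cam, ray, plane_pt, plane_n)
print(point)
```

The "characteristic pattern" in the paper exists precisely so this correspondence can be found from a single frame, which is what allows measurement while the foot moves.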

34 citations

Proceedings ArticleDOI
10 Apr 1989
TL;DR: In this article, a method for determining the shape of surfaces whose reflectance properties can vary from Lambertian to specular, without prior knowledge of the relative strengths of the Lambertian and specular components of reflection is presented.
Abstract: A method is presented for determining the shape of surfaces whose reflectance properties can vary from Lambertian to specular, without prior knowledge of the relative strengths of the Lambertian and specular components of reflection. The object surface is illuminated using extended light sources and is viewed from a single direction. Surface illumination using extended sources makes it possible to ensure the detection of both Lambertian and specular reflections. Multiple source directions are used to obtain an image sequence of the object. An extraction algorithm uses the set of image intensity values measured at each surface point to compute orientation as well as relative strengths of the Lambertian and specular reflection components. The method, photometric sampling, uses samples of a photometric function that relates image intensity to surface orientation, reflectance, and light source characteristics. Experiments conducted on Lambertian surfaces, specular surfaces, and hybrid surfaces show high accuracy in measured orientations and estimated reflectance parameters.
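The core idea of recovering orientation from several intensity samples at one pixel can be illustrated with the simpler, purely Lambertian case. This is a sketch only: the paper's photometric sampling also handles the specular component and uses extended sources, neither of which appears below. Under the Lambertian model I = rho * (L @ n), the scaled normal is a least-squares solution:

```python
import numpy as np

# Lambertian-only sketch of recovering orientation from per-pixel intensity
# samples under multiple light directions. The light setup and albedo are
# invented; the paper's full method (extended sources, specular component)
# is not reproduced here.

L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)  # unit light directions

true_n = np.array([0.0, 0.0, 1.0])  # ground-truth surface normal
rho = 0.8                           # assumed Lambertian albedo
I = rho * L @ true_n                # "measured" intensities at one pixel

g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = rho * n
albedo = np.linalg.norm(g)
normal = g / albedo
print(albedo, normal)
```

With three or more non-coplanar light directions the system is (over)determined, which is why an image sequence under multiple source directions suffices to recover both orientation and reflectance strength.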

34 citations

Proceedings Article
01 Jan 1983

34 citations

01 Jan 1988
TL;DR: One major emphasis of this paper is that sensors, as well as objects, must be explicitly modeled in order to achieve the goal of automatic generation of reliable and efficient recognition programs.
Abstract: This paper discusses issues and techniques to automatically compile object and sensor models into a visual recognition strategy for recognizing and locating an object in three-dimensional space from visual data. Historically, and even today, most successful model-based vision programs are handwritten; relevant knowledge of objects for recognition is extracted from examples of the object, tailored for the particular environment, and coded into the program by the implementors. If this is done properly, the resulting program is effective and efficient, but it requires long development time and many vision experts. Automatic generation of recognition programs by compilation attempts to automate this process. In particular, it extracts from the object and sensor models those features that are useful for recognition, and the control sequence which must be applied to deal with possible variations of the object appearances. The key components in automatic generation are: object modeling, sensor modeling, prediction of appearances, strategy generation, and program generation. An object model describes geometric and photometric properties of an object to be recognized. A sensor model specifies the sensor characteristics in predicting object appearances and variations of feature values. The appearances can be systematically grouped into aspects, where aspects are topological equivalence classes with respect to the object features "visible" to the sensor. Once aspects are obtained, a recognition strategy is generated in the form of an interpretation tree from the aspects and their predicted feature values. An interpretation tree consists of two parts: a part which classifies an unknown region into one of the aspects, and a part which determines its precise attitude (position and orientation) within the classified aspect. Finally, the strategy is converted into an executable program by using object-oriented programming.
One major emphasis of this paper is that sensors, as well as objects, must be explicitly modeled in order to achieve the goal of automatic generation of reliable and efficient recognition programs. Actual creation of interpretation trees for two toy objects and their execution for recognition from a bin of parts are demonstrated.
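The two-part interpretation tree described in the abstract (aspect classification, then within-aspect attitude determination) can be sketched as follows. Everything here is hypothetical: the aspect names, feature names, and thresholds are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch of an interpretation tree with the two stages the
# abstract describes: (1) classify an unknown region into an aspect using
# sensor-visible features, (2) estimate attitude within that aspect.
# Feature names and values are invented.

ASPECTS = {
    "top":  {"num_edges": 4, "estimate_attitude": lambda f: ("z-up", f["angle"])},
    "side": {"num_edges": 2, "estimate_attitude": lambda f: ("z-side", f["angle"])},
}

def interpret(features):
    # Stage 1: classify into an aspect by predicted feature values.
    for name, aspect in ASPECTS.items():
        if features["num_edges"] == aspect["num_edges"]:
            # Stage 2: refine attitude within the classified aspect.
            return name, aspect["estimate_attitude"](features)
    return None, None  # no aspect matched

print(interpret({"num_edges": 4, "angle": 30.0}))  # ('top', ('z-up', 30.0))
```

A compiled recognition program would generate a structure like `ASPECTS` automatically from the object and sensor models rather than writing it by hand, which is exactly the handwritten-versus-compiled contrast the paper draws.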

34 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
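The operation that makes convolutional networks suited to variable 2D shapes is sliding one small kernel over the whole input. A minimal sketch of that single operation, with no framework, no training, and no claim to match the paper's architecture:

```python
import numpy as np

# Minimal sketch of the core of a convolutional layer: the same small
# kernel is applied at every spatial position ("valid" cross-correlation),
# giving translation-tolerant feature detection. Illustration only.

def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weighted sum of the patch under the kernel at (i, j).
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.eye(4)               # a diagonal "stroke"
edge = np.array([[1.0, -1.0]])  # simple horizontal difference filter
out = conv2d_valid(image, edge)
print(out.shape)  # (4, 3)
```

Because the kernel weights are shared across positions, a stroke detected at one location is detected identically at any other, which is the translation tolerance the abstract credits to CNNs.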

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seems an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
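The first stages the abstract names (fine-scale gradients, then fine orientation binning within a cell) can be sketched directly. This is a simplified sketch: block normalization and the SVM are omitted, and the 9-bin unsigned-orientation choice is the common convention, assumed rather than quoted from the paper:

```python
import numpy as np

# Sketch of the gradient and orientation-binning stages of a HOG cell.
# Hard binning is used for brevity (real HOG interpolates between bins);
# block normalization and the SVM stage are omitted.

def cell_hog(patch, n_bins=9):
    gy, gx = np.gradient(patch.astype(float))     # fine-scale gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(n_bins)
    bin_width = 180.0 / n_bins
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // bin_width) % n_bins] += m   # magnitude-weighted vote
    return hist

patch = np.tile(np.arange(8.0), (8, 1))  # horizontal intensity ramp
h = cell_hog(patch)
print(np.argmax(h))  # 0: all gradient energy lands in the 0-degree bin
```

Concatenating such cell histograms over overlapping, contrast-normalized blocks yields the descriptor grid that the abstract reports outperforming earlier feature sets.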

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: RCNN as discussed by the authors combines CNNs with bottom-up region proposals to localize and segment objects, and when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
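Detection pipelines built on scored region proposals, like the one described above, conventionally end with greedy non-maximum suppression to remove duplicate detections. A sketch of that standard step; the 0.5 IoU threshold is a typical choice, not a value quoted from the paper:

```python
# Sketch of greedy non-maximum suppression over scored region proposals.
# Boxes are (x1, y1, x2, y2); threshold and boxes are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scored boxes, suppressing heavy overlaps with kept ones."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the overlapping lower-scored box is suppressed
```

In an R-CNN-style system this runs per class over the CNN-scored proposals, so each object contributes one final detection to the mAP computation.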

21,729 citations