scispace - formally typeset
Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology & Hitachi.


Papers
Patent
28 Apr 2020
TL;DR: In this paper, an image processing system detects changes in an object, such as damage to an automobile, by comparing a base object model, which depicts the object in an expected condition, to one or more target object images depicting the object in its changed condition.
Abstract: An image processing system detects changes in an object, such as damage to an automobile, by comparing a base object model, which depicts the object in an expected condition, to one or more target object images depicting the object in the changed condition. The image processing system aligns the object, as depicted in the base object model, with the object as depicted in a target object image. The image processing system then determines contours of the target object within the target object image by overlaying the aligned base object model with the target object image, and removes background pixels or other extraneous information based on this comparison. The image processing system may also determine various different components, such as body panels of an automobile, based on this overlay. The image processing system may then perform a statistical processing routine on the target object, or the components of the target object, as identified in the target object image, to detect changes, the likelihood of changes, and/or a quantification of an amount or type of change, to the target object as depicted in the target object image as compared to the base object model.
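The overlay-and-compare step the abstract describes can be illustrated in a few lines. The sketch below is not the patent's implementation; all names, the mask convention, and the threshold are illustrative. It removes background pixels with an object mask derived from the overlay, then thresholds a per-pixel difference to flag and quantify change:

```python
import numpy as np

def detect_changes(base, target, mask, threshold=30):
    """Toy overlay-and-compare: `base` is a rendering of the base object
    model, `target` the aligned target image, `mask` the object contour
    obtained from the overlay (1 = object, 0 = background)."""
    diff = np.abs(target.astype(np.int32) - base.astype(np.int32))
    diff[mask == 0] = 0                # discard background / extraneous pixels
    changed = diff > threshold         # per-pixel change decision
    likelihood = changed.mean()        # crude quantification of change extent
    return changed, likelihood

# Usage: a uniform gray "panel" with a bright 2x2 dent in the target image.
base = np.full((8, 8), 100, dtype=np.uint8)
target = base.copy(); target[2:4, 2:4] = 200
mask = np.ones((8, 8), dtype=np.uint8)
changed, score = detect_changes(base, target, mask)
print(int(changed.sum()))   # 4 changed pixels
```

A real system would replace the absolute difference with the statistical processing routine the patent describes, applied per component (e.g. per body panel).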

1 citation

Journal ArticleDOI
TL;DR: A statistical model is introduced for evaluating the impact of local intensity variation and scene disparity variation on the uncertainty of disparity estimation, and a method of selecting an appropriate window size to minimize that uncertainty is proposed.
Abstract: This paper describes a stereo matching algorithm capable of selecting an appropriate window size to achieve both objectives of precise localization and stable estimation in scene correspondence. Window size is an important parameter that depends on two local attributes: local intensity variation and scene disparity variation. A statistical model is introduced for evaluating the impact of these two types of variation on the uncertainty of disparity estimation, and a method of selecting an appropriate window size to minimize that uncertainty is proposed. Experiments have been conducted for various window sizes. The experimental results demonstrate the effectiveness of the proposed model and the matching algorithm with an adaptive window.
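The selection criterion can be sketched with a toy uncertainty model. This is not the paper's statistical derivation; the two terms below merely mimic its qualitative behavior (uncertainty grows when texture is weak and when disparity varies inside the window), and all names and test signals are illustrative:

```python
import numpy as np

def disparity_uncertainty(patch, disparities, noise_var=1.0):
    """Toy stand-in for the paper's model: weak intensity variation
    (little texture) and high disparity variation inside the window
    (a straddled depth discontinuity) both inflate uncertainty."""
    grad = np.diff(patch.astype(float))
    intensity_term = noise_var / (np.mean(grad ** 2) + 1e-6)
    disparity_term = np.var(disparities)
    return intensity_term + disparity_term

def select_window(intensity_row, disparity_row, center, half_sizes=(3, 5, 7, 9)):
    """Pick the window half-size minimizing the modeled uncertainty."""
    return min(
        half_sizes,
        key=lambda s: disparity_uncertainty(
            intensity_row[center - s:center + s + 1],
            disparity_row[center - s:center + s + 1],
        ),
    )

# Usage: near a disparity jump the small window wins; in a weakly
# textured region, a larger window that reaches some texture wins.
textured = np.tile([0.0, 100.0], 11)[:21]
disp_jump = np.zeros(21); disp_jump[14:] = 6.0
flat = np.zeros(21); flat[4] = 100.0; flat[16] = 100.0
print(select_window(textured, disp_jump, 10))   # small window: 3
print(select_window(flat, np.zeros(21), 10))    # larger window: 7
```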

1 citation

01 Jan 2009
TL;DR: A Bayesian inference algorithm is developed to generate a large number of shape hypotheses from randomly sampled partial shapes and the hypotheses are evaluated in the robust estimation framework to find the optimal one.
Abstract: The grand goal of computer vision is to provide a complete semantic interpretation of an input image by reasoning about the 3d scene that generated it. Object detection, recognition, and alignment are three fundamental vision tasks towards this goal. In this thesis, we develop a series of efficient algorithms to address these problems. The contributions are summarized as follows. (1) We present a two-step algorithm for specific object detection in cluttered background with a few example images and unknown camera poses. Instead of enforcing metric constraints on the local features, we utilize a set of ordering constraints which are powerful enough for the detection task. At the core of this algorithm is a qualitative feature matching scheme which includes an angular ordering constraint in local scale and a graph planarity constraint in global scale. (2) We present a part-based model for object categorization and part localization. The spatial interactions among parts are modeled by Factor Analysis, which can be learned from the data. Constrained by the shape prior, part localization proceeds in the image space by using a triangulated Markov random field (TMRF) model. We propose an iterative shape estimation and regularization approach for efficient computation. (3) We propose a boosting procedure for simultaneous multi-view car detection. By combining the multi-class LogitBoost and AdaBoost detectors, we decompose the original problem into view classification and view-specific detection, which can be solved independently. We study various feature representations and weak learners for the boosting algorithms. Extensive experiments demonstrate improved accuracy and detection rate over the traditional algorithms. (4) We propose a Bayesian framework for robust shape alignment. Prior models assume Gaussian observation noise and attempt to fit a regularized shape using all the observed data; such an assumption is vulnerable to grossly erroneous local features and occlusions. We address this problem by using a hypothesize-and-test approach. A Bayesian inference algorithm is developed to generate a large number of shape hypotheses from randomly sampled partial shapes. The hypotheses are then evaluated in the robust estimation framework to find the optimal one. Our model can effectively handle outliers and recover the underlying object shape. The proposed approach is evaluated on a very challenging dataset which spans a wide variety of car types, viewpoints, background scenes, and occlusion patterns.
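The hypothesize-and-test idea in contribution (4) follows the same logic as RANSAC-style robust fitting: fit many candidate models from small random subsets of the data, then keep the candidate most of the data agrees with. The sketch below uses a pure 2-D translation as the alignment model for brevity; the thesis's shape hypotheses, sampling scheme, and Bayesian scoring are richer, and all names and data here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_translation(src, dst):
    """Minimal alignment model: a 2-D translation from matched points
    (a toy stand-in for the thesis's shape hypotheses)."""
    return np.mean(dst - src, axis=0)

def robust_align(src, dst, n_hypotheses=200, inlier_tol=1.0):
    """Hypothesize-and-test: propose alignments from randomly sampled
    partial correspondences, keep the one with the most inliers."""
    best_t, best_inliers = None, -1
    for _ in range(n_hypotheses):
        idx = rng.choice(len(src), size=2, replace=False)
        t = fit_translation(src[idx], dst[idx])
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = int((residuals < inlier_tol).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Usage: 25 matched points translated by (3, -2); the first 5 matches are
# corrupted (occlusions / bad local features). A least-squares mean over
# all points would be dragged off; the robust estimate is not.
src = rng.uniform(0, 10, size=(25, 2))
dst = src + np.array([3.0, -2.0])
dst[:5] += rng.uniform(20, 30, size=(5, 2))
t, inliers = robust_align(src, dst)
print(inliers)   # the 20 clean correspondences
```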

1 citation

01 Jan 1987
TL;DR: This report describes the use of the Carnegie Mellon Warp machine for low-level vision and robot vehicle control algorithms, together with the Apply programming language and a library of low-level vision programs developed for it.
Abstract: The parallel vision algorithm design and implementation project was established to facilitate vision programming on parallel architectures, particularly low-level vision and robot vehicle control algorithms on the Carnegie Mellon Warp machine. To this end, we have (1) demonstrated the use of the Warp machine in several different algorithms; (2) developed a specialized programming language, called Apply, for low-level vision programming on parallel architectures in general, and Warp in particular; (3) used Warp as a research tool in vision, as opposed to using it only for research in parallel vision; (4) developed a significant library of low-level vision programs for use on Warp.
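The core idea behind an Apply-style language is that the programmer writes only the per-pixel computation and the framework supplies iteration, border handling, and parallel distribution across the machine. The sketch below mimics that division of labor in plain Python; the function names and edge-padding choice are illustrative, not Apply's actual syntax or semantics:

```python
import numpy as np

def apply_op(image, window, f):
    """Apply-style abstraction: `f` sees one pixel's neighborhood; the
    framework (the Warp array, in the original) handles the loop over
    pixels, border replication, and distribution across processors."""
    h, w = image.shape
    pad = window // 2
    xp = np.pad(image.astype(float), pad, mode="edge")  # replicate borders
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = f(xp[i:i + window, j:j + window])
    return out

# Usage: a 3x3 box blur expressed as a one-line per-pixel function.
img = np.zeros((4, 4)); img[1:3, 1:3] = 9.0
blurred = apply_op(img, 3, np.mean)
print(blurred[1, 1])   # 4.0: four 9s in the 3x3 neighborhood
```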

1 citation


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning with convolutional neural networks is shown to synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters, and a graph transformer network (GTN) paradigm is proposed for globally training multimodule recognition systems.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
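The building block that lets these networks cope with 2-D shape variability is the convolutional layer: the same small filter is slid over the whole image, so a feature is detected wherever it appears. A minimal single-filter sketch (this computes the cross-correlation variant that most deep learning frameworks call "convolution"; the data and kernel are illustrative):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """One 'valid' 2-D convolution (cross-correlation form): the kernel's
    weights are shared across all positions, the key to translation-
    tolerant feature detection in convolutional networks."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Usage: a vertical-edge kernel responds along the stroke of a "1"-like digit.
image = np.zeros((5, 5)); image[:, 2] = 1.0
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)   # Prewitt-style edge filter
print(conv2d_valid(image, kernel))          # strong +/- responses flank the stroke
```

A real network stacks many such filters, interleaved with subsampling and nonlinearities, and learns the kernel weights by back-propagation.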

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
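The module's structural trick is easy to show in isolation: process the same input with filters of several receptive-field sizes in parallel and concatenate the results along the channel axis. The sketch below substitutes average filters for the learned 1x1/3x3/5x5 convolutions (and omits the pooling branch and 1x1 dimension-reduction layers), so it shows only the multi-scale-concatenation skeleton; everything here is illustrative:

```python
import numpy as np

def branch(x, size):
    """Stand-in for one convolution branch: an average filter of the
    given receptive-field size with replicated-border 'same' padding."""
    pad = size // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + size, j:j + size].mean()
    return out

def inception_block(x, sizes=(1, 3, 5)):
    """Multi-scale processing in parallel, outputs concatenated along a
    channel axis: the structural idea behind the Inception module."""
    return np.stack([branch(x, s) for s in sizes], axis=0)

x = np.arange(16, dtype=float).reshape(4, 4)
y = inception_block(x)
print(y.shape)   # (3, 4, 4): one output channel per scale
```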

40,257 citations

Journal ArticleDOI
08 Dec 2001 - BMJ
TL;DR: The author reflects on i, the square root of minus one, which seemed an odd beast at school: an intruder hovering on the edge of reality whose surreal nature only intensified with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
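The descriptor's per-cell computation can be sketched compactly: bin unsigned gradient orientations (0 to 180 degrees), weighting each vote by gradient magnitude. The code below shows one cell only and omits the overlapping-block contrast normalization the paper finds important; bin count and data are illustrative:

```python
import numpy as np

def hog_cell(cell, n_bins=9):
    """Orientation histogram for one cell: unsigned gradient orientations
    binned and weighted by magnitude, the building block of the HOG
    descriptor grid (block normalization omitted for brevity)."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                                  # magnitude-weighted vote
    return hist

# Usage: a vertical edge puts all its energy in the 0-degree bin
# (horizontal gradient direction).
cell = np.zeros((8, 8)); cell[:, 4:] = 1.0
h = hog_cell(cell)
print(int(np.argmax(h)))
```

A full detector tiles such cells into a grid, normalizes overlapping blocks, and feeds the concatenated vector to a linear SVM, as in the paper.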

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
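The pipeline's shape is simple even though each stage is heavy in practice: crop each bottom-up proposal, extract a fixed-length feature vector, and score it with a per-class classifier. The sketch below replaces the CNN with a trivial two-number feature and the SVM with a dot product, so it shows only the data flow; all names, features, and data are illustrative:

```python
import numpy as np

def extract_features(crop):
    """Stand-in for the feature extractor (the real R-CNN warps each
    proposal to a fixed size and runs a pretrained, fine-tuned convnet)."""
    return np.array([crop.mean(), crop.std()])

def score_proposals(image, proposals, w, b):
    """R-CNN in miniature: crop each region proposal, extract features,
    score with a per-class linear classifier (an SVM in the paper)."""
    scores = []
    for (x0, y0, x1, y1) in proposals:
        f = extract_features(image[y0:y1, x0:x1])
        scores.append(float(f @ w + b))
    return scores

# Usage: the proposal that tightly covers the bright "object" scores highest.
image = np.zeros((10, 10)); image[2:6, 2:6] = 1.0
proposals = [(0, 0, 4, 4), (2, 2, 6, 6), (6, 6, 10, 10)]
w, b = np.array([1.0, 0.0]), 0.0
scores = score_proposals(image, proposals, w, b)
print(int(np.argmax(scores)))   # proposal 1 covers the bright patch exactly
```

The real system adds class-specific non-maximum suppression over the scored proposals and a bounding-box regression step.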

21,729 citations