Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics including Motion estimation and Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Journal ArticleDOI
TL;DR: This paper proposes a method for extracting license plate regions by using a neural network that is trained to output the license plate's center of gravity, with a focus on the relationships among the learning patterns used to train the network.
Abstract: This paper proposes a method for extracting license plate regions by using a neural network that is trained to output the license plate's center of gravity. The focus is on the relationships among the learning patterns used to train the network.

4 citations
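
As a rough illustration of the idea only, and not the paper's actual system, the sketch below trains a tiny convolutional network to regress a plate's center of gravity from an image; the architecture, the normalized-coordinate output, and the mean-squared-error loss are all assumptions.

```python
# Hypothetical sketch: a small CNN that regresses the (x, y) center of
# gravity of a license plate from a grayscale image, loosely following
# the idea in the abstract. Architecture and loss are assumptions.
import torch
import torch.nn as nn

class PlateCenterRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(32 * 8 * 8, 2)  # normalized (x, y) in [0, 1]

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

# One training step on a dummy batch of images and ground-truth centers.
model = PlateCenterRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(4, 1, 128, 256)   # dummy batch
centers = torch.rand(4, 2)            # dummy normalized centers
loss = nn.functional.mse_loss(model(images), centers)
loss.backward()
optimizer.step()
```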

Proceedings ArticleDOI
01 Jan 2004
TL;DR: This paper presents a technique, referred to as EigenFairing, for perfecting a 3D surface model by repositioning its vertices so that the model is coherent with a set of observed images of the object.
Abstract: A surface is often modeled as a triangulated mesh of 3D points and textures associated with faces of the mesh. The 3D points could be either sampled from range data or derived from a set of images using a stereo or Structure-from-Motion algorithm. When the points do not lie at critical points of maximum curvature or discontinuities of the real surface, faces of the mesh do not lie close to the modeled surface. This results in textural artifacts, and the model is not perfectly coherent with the set of actual images, the ones that are used to texture-map its mesh. This paper presents a technique for perfecting the 3D surface model by repositioning its vertices so that it is coherent with a set of observed images of the object. The textural artifacts and incoherence with images are due to the non-planarity of a surface patch being approximated by a planar face, as observed from multiple viewpoints. Image areas from the viewpoints are used to represent texture for the patch in eigenspace. The eigenspace representation captures variations of texture, which we seek to minimize. A coherence measure based on the difference between the face textures reconstructed from eigenspace and the actual images is used to reposition the vertices so that the model is improved or faired. We refer to this technique of model refinement as EigenFairing, by which the model is faired, both geometrically and texturally, to better approximate the real surface.

4 citations
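
A minimal sketch of the eigenspace coherence idea described above: the texture of one mesh face, sampled from several viewpoints, is projected onto a few principal components, and the reconstruction residual serves as the incoherence score that vertex repositioning would try to reduce. The patch sampling, the number of components, and the scoring function are assumptions, not the paper's implementation.

```python
# Toy coherence measure: project multi-view texture patches of one face
# onto a low-dimensional eigenspace and measure reconstruction error.
import numpy as np

def coherence_residual(patches, n_components=3):
    """patches: (n_views, n_pixels) texture samples of one mesh face."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Eigenspace of the observed textures via SVD (principal components).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                      # top eigen-textures
    reconstructed = centered @ basis.T @ basis + mean
    # Large residual -> the face texture varies strongly across views,
    # i.e. the planar face poorly approximates the real surface patch.
    return np.linalg.norm(patches - reconstructed) / patches.size

views = np.random.rand(6, 16 * 16)   # 6 viewpoints, one 16x16 patch each
print(coherence_residual(views))
```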

01 Jul 1987
TL;DR: Researchers have designed and are constructing a prototype Reconfigurable Modular Manipulator System (RMMS); its bus communication network supports identification of modules, sensing of joint states, and commands to the joint actuator.
Abstract: Using manipulators with a fixed configuration for specific tasks is appropriate when the task requirements are known beforehand. However, in less predictable situations, such as an outdoor construction site or aboard a space station, a manipulator system requires a wide range of capabilities, probably beyond the limitations of a single, fixed-configuration manipulator. To fulfill this need, researchers have been working on a Reconfigurable Modular Manipulator System (RMMS). Researchers have designed and are constructing a prototype RMMS. The prototype currently consists of two joint modules and four link modules. The joints utilize a conventional harmonic drive and torque motor actuator, with a small servo amplifier included in the assembly. A brushless resolver is used to sense the joint position and velocity. For coupling the modules together, a standard electrical connector and V-band clamps for mechanical connection are used, although more sophisticated designs are under way for future versions. The joint design yields an output torque of up to 50 ft-lbf at joint speeds up to 1 radian/second. The resolver and associated electronics have resolutions of 0.0001 radians and absolute accuracies of plus or minus 0.001 radians. Manipulators configured from these prototype modules will have maximum reaches in the 0.5 to 2 meter range. The real-time RMMS controller consists of a Motorola 68020 single-board computer which will perform real-time servo control and path planning of the manipulator. This single-board computer communicates via shared memory with a SUN3 workstation, which serves as a software development system and robot programming environment. Researchers have designed a bus communication network to provide multiplexed communication between the joint modules and the computer controller. The bus supports identification of modules, sensing of joint states, and commands to the joint actuator. This network has sufficient bandwidth to allow servo sampling rates in excess of 500 Hz.

4 citations
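
The abstract describes a bus that carries module identification, joint-state, and actuator-command traffic at servo rates above 500 Hz. The sketch below is purely hypothetical scaffolding in that spirit; the message fields, the proportional controller, and the way the numeric limits from the abstract are used are illustrative assumptions, not the actual RMMS protocol.

```python
# Hypothetical message types and servo step for an RMMS-like bus.
from dataclasses import dataclass

@dataclass
class ModuleId:
    address: int
    module_type: str        # e.g. "joint" or "link"

@dataclass
class JointState:
    address: int
    position_rad: float     # resolver resolution ~0.0001 rad per the abstract
    velocity_rad_s: float

@dataclass
class TorqueCommand:
    address: int
    torque_ft_lbf: float    # joint output limited to ~50 ft-lbf per the abstract

SERVO_PERIOD_S = 1.0 / 500.0   # bus bandwidth supports >500 Hz sampling

def servo_step(state: JointState, target_rad: float, kp: float = 40.0) -> TorqueCommand:
    """Toy proportional controller run once per servo period."""
    torque = max(-50.0, min(50.0, kp * (target_rad - state.position_rad)))
    return TorqueCommand(state.address, torque)

print(servo_step(JointState(address=1, position_rad=0.10, velocity_rad_s=0.0), 0.25))
```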

Book
01 Jan 2009
TL;DR: It is shown that process mining provides a wealth of opportunities for people doing research on Petri nets and related models of concurrency, and a range of process discovery and conformance checking techniques are described.
Abstract: Process mining seeks the confrontation between modeled behavior and observed behavior. In recent years, process mining techniques managed to bridge the gap between traditional model-based process analysis (e.g., simulation and other business process management techniques) and data-centric analysis techniques such as machine learning and data mining. Process mining is used by many data-driven organizations as a means to improve performance or to ensure compliance. Traditionally, the focus was on the discovery of process models from event logs describing real process executions. However, process mining is not limited to process discovery and also includes conformance checking. Process models (discovered or hand-made) may deviate from reality. Therefore, we need powerful means to analyze discrepancies between models and logs. These are provided by conformance checking techniques that first align modeled and observed behavior, and then compare both. The resulting alignments are also used to enrich process models with performance related information extracted from the event log. This tutorial paper focuses on the control-flow perspective and describes a range of process discovery and conformance checking techniques. The goal of the paper is to show the algorithmic challenges in process mining. We will show that process mining provides a wealth of opportunities for people doing research on Petri nets and related models of concurrency.

4 citations
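
A toy sketch of the alignment idea behind conformance checking described above: an observed trace is compared against the behaviors a model allows, and the cheapest edit cost measures the deviation. Representing the model as an explicit set of allowed traces and using plain edit distance are simplifying assumptions; real conformance checking aligns event logs against Petri nets or similar models with richer cost functions.

```python
# Toy alignment cost between an observed trace and a set of modeled traces.
def edit_distance(a, b):
    """Classic Levenshtein distance between two activity sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete x
                                     dp[j - 1] + 1,    # insert y
                                     prev + (x != y))  # match / substitute
    return dp[-1]

def alignment_cost(trace, model_traces):
    """Cost of aligning an observed trace to the closest modeled trace."""
    return min(edit_distance(trace, m) for m in model_traces)

model = [("register", "check", "decide", "pay"),
         ("register", "decide", "pay")]
observed = ("register", "check", "pay")        # "decide" was skipped
print(alignment_cost(observed, model))         # -> 1 deviation
```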

01 Aug 1988
TL;DR: In this paper, the authors describe progress in vision and navigation for outdoor mobile robots at the Carnegie Mellon Robotics Institute during 1987, focusing on guiding outdoor autonomous vehicles in very difficult scenes, without relying on strong a priori road color or shape models.
Abstract: This report describes progress in vision and navigation for outdoor mobile robots at the Carnegie Mellon Robotics Institute during 1987. This research centers on guiding outdoor autonomous vehicles. In 1987 we concentrated on five areas: 1) Road following. We expanded our road tracking system to better handle shadows and bright sunlight. 2) Range data interpretation. Our range interpretation work has expanded from processing a single frame, to combining several frames of data into a terrain map. 3) Expert systems for image interpretation. We explored finding roads in very difficult scenes, without relying on strong a priori road color or shape models. 4) Car recognition. We recognize cars in color images by hierarchically grouping image features and predicting where to look for other image features. 5) Geometric camera calibration. Our new method for calibration avoids complex non-linear optimizations found in other calibration schemes.

4 citations
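
The report mentions a camera calibration method that avoids non-linear optimization. As a generic illustration only, the sketch below uses the textbook Direct Linear Transform, which is not necessarily the report's scheme, to estimate a 3x4 projection matrix linearly from 3D-2D correspondences.

```python
# Linear (DLT) estimation of a camera projection matrix, no non-linear
# optimization involved. Shown only as an illustration of the idea.
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """points_3d: (n, 3), points_2d: (n, 2), with n >= 6 correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

# Synthetic check: project random points with a known camera, recover P.
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [2.0]])])
pts3 = np.random.rand(8, 3)
homog = np.hstack([pts3, np.ones((8, 1))]) @ P_true.T
pts2 = homog[:, :2] / homog[:, 2:3]
P_est = dlt_projection_matrix(pts3, pts2)
print(P_est / P_est[2, 3] * P_true[2, 3])   # matches P_true up to numerical error
```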


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning with convolutional neural networks is reviewed for handwritten character recognition, and a graph transformer network (GTN) approach is proposed to train multi-module recognition systems globally so as to minimize an overall performance measure.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
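
For concreteness, here is a minimal LeNet-style convolutional network and one gradient-based training step, written with PyTorch. The layer sizes follow the classic LeNet-5 shape, but the exact architecture, activation choices, and training setup of the paper are not reproduced here.

```python
# Small LeNet-style convolutional network for 10-class digit recognition.
import torch
import torch.nn as nn

lenet_like = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.Tanh(), nn.AvgPool2d(2),
    nn.Conv2d(6, 16, kernel_size=5),           nn.Tanh(), nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84),         nn.Tanh(),
    nn.Linear(84, 10),          # 10 digit classes
)

# One gradient-based training step on a dummy batch of 28x28 digit images.
images = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(lenet_like(images), labels)
loss.backward()
```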

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
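
A sketch of a single Inception-style module, the building block the abstract refers to: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, with 1x1 reductions to keep the computational budget bounded, concatenated along the channel dimension. The channel counts are illustrative rather than a faithful copy of GoogLeNet.

```python
# One Inception-style module: multi-scale branches concatenated on channels.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3_reduce, 1), nn.ReLU(),
                                nn.Conv2d(c3_reduce, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5_reduce, 1), nn.ReLU(),
                                nn.Conv2d(c5_reduce, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1))

    def forward(self, x):
        # 1x1 reductions keep the cost bounded while width and depth grow.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

x = torch.randn(1, 192, 28, 28)
out = InceptionModule(192, 64, 96, 128, 16, 32, 32)(x)
print(out.shape)   # torch.Size([1, 256, 28, 28])
```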

Journal ArticleDOI

08 Dec 2001 - BMJ
TL;DR: There is, the author suggests, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition, adopting linear SVM-based human detection as a test case. After reviewing existing edge- and gradient-based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1,800 annotated human images with a large range of pose variations and backgrounds.

31,952 citations
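
A short sketch of extracting a HOG descriptor with scikit-image, using parameter choices in the spirit of the abstract (fine orientation binning, relatively coarse cells, overlapping normalized blocks). The exact settings and the sliding-window detection pipeline of the paper are not reproduced; a linear SVM trained on such descriptors from person and non-person windows would complete the detector.

```python
# HOG feature extraction with scikit-image; parameter values are illustrative.
from skimage import data, color
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # any grayscale image works
features = hog(
    image,
    orientations=9,                # fine orientation binning
    pixels_per_cell=(8, 8),        # relatively coarse spatial cells
    cells_per_block=(2, 2),        # overlapping blocks for normalization
    block_norm="L2-Hys",           # strong local contrast normalization
)
print(features.shape)              # one long descriptor vector per window
```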

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations
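
An illustrative sketch of the pipeline the abstract outlines: crop each bottom-up region proposal, warp it to the CNN's fixed input size, and score it. Here, `propose_regions` is a hypothetical stand-in for a real proposal method such as selective search, and the backbone is an untrained torchvision AlexNet rather than the paper's fine-tuned network.

```python
# Toy R-CNN-style scoring: warp each proposal to a fixed size, run a CNN.
import torch
import torch.nn.functional as F
import torchvision

cnn = torchvision.models.alexnet(num_classes=21)   # 20 VOC classes + background

def propose_regions(image):
    """Hypothetical proposal generator; returns (x0, y0, x1, y1) boxes."""
    h, w = image.shape[1:]
    return [(0, 0, w // 2, h // 2), (w // 4, h // 4, w, h)]

def score_proposals(image, boxes):
    crops = []
    for x0, y0, x1, y1 in boxes:
        crop = image[:, y0:y1, x0:x1].unsqueeze(0)
        # Warp every proposal to the fixed CNN input size (227x227 in R-CNN).
        crops.append(F.interpolate(crop, size=(227, 227), mode="bilinear"))
    logits = cnn(torch.cat(crops))
    return logits.softmax(dim=1)

image = torch.rand(3, 480, 640)
print(score_proposals(image, propose_regions(image)).shape)   # (2, 21)
```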