Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher at Carnegie Mellon University who has contributed to research topics including motion estimation and image processing. He has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings ArticleDOI
06 May 2013
TL;DR: Urban operation of unmanned aerial vehicles (UAVs) demands a high level of autonomy in cluttered environments, while a limited payload permits only low-grade sensors for state estimation and control.
Abstract: Urban operation of unmanned aerial vehicles (UAVs) demands a high level of autonomy for tasks presented in a cluttered environment. While fixed-wing UAVs are well suited for long-endurance missions at high altitude, navigating them inside an urban area poses greater challenges in motion planning and control. Their inability to hover and their low agility make it harder to plan a feasible path through a compact region, and a limited payload permits only low-grade sensors for state estimation and control.

5 citations

Proceedings Article
22 Aug 1983
TL;DR: In this article, the authors propose the wrap-up rate as a measure of grasp stability for multijointed fingers and, based on simulation results, present an appropriate arrangement of phalanx lengths for a finger with three joints.
Abstract: An appropriate arrangement of finger joints is very important in designing multijointed fingers, since the stability of grasping an object greatly depends on that arrangement. Multijointed fingers can grasp an object with many points of contact, each of which is pressed against the object as if wrapping it up. The amount of the wrapped-up area and the form of the finger when an object is grasped are therefore important factors in determining the stability of the grasp. We propose the wrap-up rate, based on these factors, for evaluating the stability of grasping. We consider twenty-eight models of a finger having three joints and simulate their ability to grasp various shapes stably. Based on the simulation results, an appropriate arrangement of phalanx lengths for a multijointed finger is presented.
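The wrap-up idea above can be illustrated with a toy calculation. The function below is only a hypothetical stand-in for the paper's wrap-up rate (whose actual definition combines wrapped-up area and finger form): it scores a grasp by the fraction of total finger length pressed against the object.

```python
def wrap_up_rate(phalanx_lengths, in_contact):
    """Toy wrap-up score: fraction of the finger's total length whose
    phalanges are pressed against the object. Both arguments are
    hypothetical inputs, one entry per phalanx."""
    assert len(phalanx_lengths) == len(in_contact)
    total = sum(phalanx_lengths)
    contacted = sum(l for l, c in zip(phalanx_lengths, in_contact) if c)
    return contacted / total

# A three-jointed finger whose two proximal phalanges touch the object.
rate = wrap_up_rate([3.0, 2.0, 1.0], [True, True, False])  # 5/6
```

Under this toy definition, a finger that wraps more of its length around the object scores closer to 1.0, matching the intuition that a larger wrapped-up region means a more stable grasp.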

5 citations

Proceedings ArticleDOI
13 Jan 2004
TL;DR: This paper describes a method for robustly detecting and efficiently recognizing daily human behavior in the real world, combining real-world sensorization with ultrasonic tags to robustly observe behavior and virtual sensorization of virtualized objects to quickly register the handling of objects in the real world and efficiently recognize specific human behavior.
Abstract: This paper describes a method for robustly detecting and efficiently recognizing daily human behavior in the real world. The proposed method involves real-world sensorization, using ultrasonic tags to robustly observe behavior; real-world virtualization, creating a virtual environment by modeling real objects with a stereovision system; and virtual sensorization of the virtualized objects, in order to quickly register the handling of objects in the real world and efficiently recognize specific human behavior. A behavior-to-speech system built on this recognition method is also presented as a new application of the technology.

5 citations

Proceedings ArticleDOI
07 Mar 2014
TL;DR: This talk will describe a new DMD-based design for a headlight that can be programmed to perform several tasks simultaneously and that can sense, react and adapt quickly to any environment with the goal of increasing safety for all drivers on the road.
Abstract: The primary goal of a vehicular headlight is to improve safety in low-light and poor weather conditions. The typical headlight, however, has very limited flexibility - switching between high and low beams, turning off beams toward the opposing lane, or rotating the beam as the vehicle turns - and is not designed for all driving environments. Thus, despite decades of innovation in light source technology, more than half of vehicular accidents still happen at night, even with much less traffic on the road. We will describe a new DMD-based design for a headlight that can be programmed to perform several tasks simultaneously and that can sense, react, and adapt quickly to any environment, with the goal of increasing safety for all drivers on the road. For example, it will be possible to drive with high beams without causing glare for any other driver, and to see better during rain and snowstorms, when the road is most treacherous. The headlight can also increase the contrast of lanes, markings, and sidewalks, and can alert drivers to sudden obstacles. In this talk, we will lay out the engineering challenges in building this headlight and share our experiences with the prototypes developed over the past two years.

5 citations

Posted Content
TL;DR: This work proposes an Ensemble of Robust Constrained Local Models, each comprising a deformable shape model and a local landmark appearance model and reasoning over binary occlusion labels, for alignment of faces in the presence of significant occlusions and of any unknown pose and expression.
Abstract: We propose an Ensemble of Robust Constrained Local Models for alignment of faces in the presence of significant occlusions and of any unknown pose and expression. To account for partial occlusions we introduce Robust Constrained Local Models, which comprise a deformable shape model and a local landmark appearance model and reason over binary occlusion labels. Our occlusion reasoning proceeds by a hypothesize-and-test search over occlusion labels. Hypotheses are generated by Constrained Local Model based shape fitting over randomly sampled subsets of landmark detector responses and are evaluated by the quality of the resulting face alignment. To span the entire range of facial pose and expression variations, we adopt an ensemble of independent Robust Constrained Local Models that searches over a discretized representation of pose and expression. We perform extensive evaluation on a large number of face images, both occluded and unoccluded. We find that our face alignment system, trained entirely on facial images captured "in-the-lab", exhibits a high degree of generalization to facial images captured "in-the-wild". Our results are accurate and stable over a wide spectrum of occlusion, pose, and expression variations, resulting in excellent performance on many real-world face datasets.
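The hypothesize-and-test search described above is structurally similar to RANSAC. The sketch below is not the authors' fitter; it assumes a deliberately simplified "shape model" (the centroid of the sampled landmarks) purely to show the sample-fit-score loop over random subsets of detector responses.

```python
import random

def fit_shape(points):
    # Toy "shape model": the centroid of the sampled landmark points.
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def alignment_error(shape, points):
    # Mean squared distance from the fitted shape to all detections.
    cx, cy = shape
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / len(points)

def hypothesize_and_test(detections, n_hypotheses=50, subset_size=4, seed=0):
    """Fit to random subsets of landmark detections and keep the
    hypothesis that best explains all of them; detections far from
    the winning fit would be flagged as occluded."""
    rng = random.Random(seed)
    best_shape, best_err = None, float("inf")
    for _ in range(n_hypotheses):
        subset = rng.sample(detections, subset_size)
        shape = fit_shape(subset)
        err = alignment_error(shape, detections)
        if err < best_err:
            best_shape, best_err = shape, err
    return best_shape, best_err
```

In the paper, the fit is a Constrained Local Model shape fit and the score is alignment quality; the loop structure (sample, fit, evaluate, keep the best) is the part this sketch preserves.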

5 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning with convolutional neural networks is reviewed and compared against other methods on a standard handwritten digit recognition task, and a graph transformer network (GTN) is proposed for training multimodule recognition systems globally.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules, including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
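As a minimal illustration of gradient-based learning (not the paper's CNN or GTN), the sketch below trains a single sigmoid unit by gradient descent on a toy 1-D classification task; the data and hyperparameters are invented for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=500, lr=0.5):
    """Gradient descent on cross-entropy loss for one sigmoid unit."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w * x + b)
            grad = p - y  # d(loss)/d(pre-activation) for cross-entropy
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# Two linearly separable classes on the real line.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train(data)
```

Back-propagation extends this same update to multilayer networks by applying the chain rule through each layer; the GTN paradigm extends it further, propagating gradients through an entire multimodule document-recognition pipeline.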

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, a deep convolutional neural network architecture, achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
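The core HOG computation described above (fine orientation binning of gradient magnitudes) can be sketched for a single cell; block normalization and the sliding-window detector are omitted, and the cell layout here is an assumption for illustration.

```python
import math

def hog_cell(cell, n_bins=9):
    """Unsigned-orientation gradient histogram for one cell, given as a
    2-D list of intensities; central differences skip the border pixels."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    return hist

# A cell containing a vertical edge: all gradient energy lands in the first bin.
hist = hog_cell([[0, 0, 10, 10] for _ in range(4)])
```

A full descriptor would concatenate such histograms over a grid of cells and apply the local contrast normalization in overlapping blocks that the paper identifies as important for good results.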

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN combines high-capacity CNNs with bottom-up region proposals to localize and segment objects, and shows that when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
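The R-CNN pipeline in the abstract (region proposals, CNN features, per-region classification) can be reduced to a skeleton. The `propose`, `featurize`, and `classify` callables below are placeholders, not the paper's actual components (selective search, the pre-trained CNN, and class-specific classifiers).

```python
def rcnn_detect(image, propose, featurize, classify, threshold=0.5):
    """Skeleton of the R-CNN flow: for each bottom-up region proposal,
    extract features and keep classifications above a score threshold.
    A real system would warp each box's crop to the CNN's input size."""
    detections = []
    for box in propose(image):
        score, label = classify(featurize(image, box))
        if score >= threshold:
            detections.append((box, label, score))
    return detections

# Toy stand-ins for the three stages, just to exercise the skeleton.
proposals = lambda img: [(0, 0, 4, 4), (5, 5, 9, 9)]
features = lambda img, box: [float(sum(box))]
scorer = lambda feat: (0.9, "object")
dets = rcnn_detect(None, proposals, features, scorer)  # two detections
```

The paper's second insight, supervised pre-training followed by fine-tuning, lives inside the `featurize` stage: the CNN is first trained on an auxiliary task and then adapted to the detection domain.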

21,729 citations