Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology & Hitachi.


Papers
Patent
06 Apr 2001
TL;DR: In this paper, computer-assisted orthopaedic surgery planning software is presented that enables the generation of 3D (three-dimensional) solid bone models from 2D (two-dimensional) radiological images of a patient's bone.
Abstract: The invention relates to computer-assisted orthopaedic surgery planning software that enables the generation of 3D (three-dimensional) solid bone models from 2D (two-dimensional) radiological images of a patient's bone. The software reconstructs the bone contours by starting from a 3D template bone model and deforming it so that it closely fits the geometry of the patient's bone. A surgical simulation and planning module of the software generates a simulated surgical plan showing an animation of the bone-distraction process, the type and size of the fixation structure to be attached to the patient's bone, the attachment schedule for that structure, and the location of the osteotomy/corticotomy site; the surgical plans are presented as 3D graphics on a computer screen so as to give the surgeon realistic preoperative assistance. Postoperative surgical data can be fed back into the planning software so that the previously specified bone-distraction trajectory can be revised according to any difference between the preoperative plan data and the current, postoperative data.
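To make the reconstruction idea concrete, here is a minimal sketch of fitting a deformable template contour to patient contour points; the affine deformation model, the toy contours, and the function names are illustrative assumptions, not the patented software.

```python
# Minimal sketch (not the patented implementation): deform a template bone
# contour so it fits patient contour points extracted from a 2D radiograph.
# All names and data here are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

def deform_template(template_pts, patient_pts):
    """Fit an affine deformation (A, t) so A @ template + t ~= patient."""
    def residuals(params):
        A = params[:4].reshape(2, 2)
        t = params[4:]
        return ((template_pts @ A.T + t) - patient_pts).ravel()

    x0 = np.concatenate([np.eye(2).ravel(), np.zeros(2)])  # start from the identity
    fit = least_squares(residuals, x0)
    A, t = fit.x[:4].reshape(2, 2), fit.x[4:]
    return template_pts @ A.T + t  # deformed template contour

# Toy usage: a circular template deformed toward an elliptical "patient" contour.
theta = np.linspace(0, 2 * np.pi, 100)
template = np.stack([np.cos(theta), np.sin(theta)], axis=1)
patient = np.stack([1.3 * np.cos(theta), 0.8 * np.sin(theta)], axis=1)
fitted = deform_template(template, patient)
print(np.abs(fitted - patient).max())  # ~0 for this affine-reachable case
```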
01 Nov 1988
TL;DR: The development of a prototype modular manipulator is discussed, as well as the implementation of a configuration-independent manipulator kinematics algorithm used for path planning in the prototype.
Abstract: Modular manipulator designs have long been considered for use as research tools and as the basis for easily modified industrial manipulators. In these manipulators, the links and joints are discrete and modular components that can be assembled into a desired manipulator configuration. As hardware advances have made actual modular manipulators practical, various capabilities of such manipulators have gained interest. Particularly desirable is the ability to rapidly reconfigure such a manipulator in order to custom-tailor it to specific tasks. Reconfiguration greatly enhances the capability of a given amount of manipulator hardware. The development of a prototype modular manipulator is discussed, as well as the implementation of a configuration-independent manipulator kinematics algorithm used for path planning in the prototype.
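As a rough illustration of a configuration-independent kinematics computation, the sketch below composes one homogeneous transform per module, so the same routine handles any module ordering; the Denavit-Hartenberg parameterization and the example module values are assumptions for illustration, not the prototype's algorithm.

```python
# Hedged sketch of configuration-independent forward kinematics: the end-effector
# pose is the product of per-module transforms, so the same routine works for any
# assembly of joint/link modules. Module parameters below are illustrative.
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Standard Denavit-Hartenberg homogeneous transform for one module."""
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(modules, joint_angles):
    """modules: list of (a, alpha, d) per module; joint_angles: one theta per module."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(modules, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T  # 4x4 base-to-tool transform

# Example: a 3-module planar arm with 0.3 m links.
modules = [(0.3, 0.0, 0.0)] * 3
print(forward_kinematics(modules, [0.1, 0.2, -0.3])[:3, 3])  # tool position
```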
Journal ArticleDOI
TL;DR: In this article, the authors propose a method that requires neither a reference signal nor frequency analysis to obtain the stress amplitude distribution, with accuracy comparable to or higher than that obtained using self-correlation lock-in thermography.
Abstract: Background: In self-correlation lock-in thermography for thermoelastic stress analysis (TSA), the acquisition position of the reference signal affects the accuracy of the obtained stress amplitude distribution. When the reference signal is not large enough compared to the noise, the stress amplitude distribution may be incorrect. Objective: This study proposes a method that requires neither a reference signal nor frequency analysis to obtain the stress amplitude distribution with accuracy comparable to or higher than that obtained using self-correlation lock-in thermography. Methods: An observation matrix is generated from the temporal variation across all thermographic pixels to describe the thermal fluctuations due to stress. The stress amplitude distribution and the original load signal are then extracted from the observation matrix using singular value decomposition (SVD). The proposed method is called SVD thermo-component analysis. To investigate its effectiveness, the reconstructed load signal and stress distribution are obtained from thermal images captured for a specimen under a sinusoidal load. Results: The stress amplitude distribution obtained using the proposed method is equivalent to that obtained using conventional lock-in thermography with the original load signal as the reference signal. In addition, the reconstructed load signal obtained using the proposed method successfully represents the original load signal. Conclusions: SVD thermo-component analysis does not require prior knowledge of the evaluated mechanical structure to select a suitable reference-signal acquisition position, as self-correlation lock-in thermography does. Therefore, the proposed TSA method reduces analysis failures compared to the conventional method.
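A minimal sketch of the SVD step described in the abstract, assuming the thermal video is available as a (T, H, W) array; the synthetic data and variable names are illustrative, not the authors' implementation.

```python
# Minimal sketch of the SVD idea described above (not the authors' code): stack the
# thermal video into an observation matrix, remove the per-pixel mean, and take the
# leading singular component as (load signal, stress amplitude map).
import numpy as np

def svd_thermo_component(frames):
    """frames: array of shape (T, H, W) of thermal images under cyclic load."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)
    X -= X.mean(axis=0, keepdims=True)           # remove the static temperature field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    load_signal = U[:, 0] * s[0]                 # temporal part: reconstructed load
    amplitude_map = np.abs(Vt[0]).reshape(H, W)  # spatial part: relative stress amplitude
    return load_signal, amplitude_map

# Synthetic check: a sinusoidal load modulating a known spatial pattern, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
pattern = rng.random((32, 32))
frames = (np.sin(2 * np.pi * 5 * t)[:, None, None] * pattern
          + 0.01 * rng.normal(size=(200, 32, 32)))
load, amp = svd_thermo_component(frames)
print(np.corrcoef(amp.ravel(), pattern.ravel())[0, 1])  # close to 1
```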
Journal Article
TL;DR: A vision-based monitoring system that classifies targets (vehicles and humans) based on shape appearance, estimates their colors, and detects special targets from images of color video cameras set up facing a street.
Abstract: This paper describes a vision-based monitoring system which (1) classifies targets (vehicles and humans) based on shape appearance, (2) estimates their colors, and (3) detects special targets, from images of color video cameras set up facing a street. The target categories were {human, sedan, van, truck, mule (golf cart for workers), and others}, and their colors were classified into the groups {red-orange-yellow, green, blue-lightblue, white-silver-gray, darkblue-darkgray-black, and darkred-darkorange}. For the detection of special targets, a test was carried out with {FedEx van, UPS van, police car} as targets and yielded the desired results. The system tracks each target, independently conducts category classification and color estimation, extracts the result with the largest probability over the tracking sequence, and provides that result as the final decision. For classification and special-target detection, we cooperatively used a stochastic linear discrimination method (linear discriminant analysis: LDA) and a nonlinear decision rule (k-nearest-neighbor rule: k-NN).
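The LDA-plus-k-NN combination can be sketched as follows with scikit-learn, using synthetic feature vectors in place of the real shape-appearance features; the class list, feature dimensions, and parameters are assumptions for illustration rather than the paper's setup.

```python
# Illustrative sketch of an LDA + k-NN classifier: LDA projects features into a
# low-dimensional discriminant space, and k-NN is the nonlinear decision rule
# applied in that space. The data here is synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

classes = ["human", "sedan", "van", "truck", "mule", "other"]
rng = np.random.default_rng(0)
# Fake shape-appearance features: one Gaussian cluster per class.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(50, 20)) for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), 50)

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X, y)
# A query near cluster 2 should come back as "van" in this toy setup.
print(classes[clf.predict(rng.normal(loc=2, scale=1.0, size=(1, 20)))[0]])
```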
Journal ArticleDOI
06 Jun 2008
TL;DR: A robust model-based 3D tracking system accelerated by programmable graphics hardware so that it runs online at frame rate during operation of a humanoid robot and auto-initializes efficiently; it recovers the full 6-degree-of-freedom pose of viewable objects relative to the robot.
Abstract: We have accelerated a robust model-based 3D tracking system by programmable graphics hardware to run online at frame-rate during operation of a humanoid robot and to efficiently auto-initialize. The tracker recovers the full 6 degree-of-freedom pose of viewable objects relative to the robot. Leveraging the computational resources of the GPU for perception has enabled us to increase our tracker’s robustness to the significant camera displacement and camera shake typically encountered during humanoid navigation. We have combined our approach with a footstep planner and a controller capable of adaptively adjusting the height of swing leg trajectories. The resulting integrated perception-planning-action system has allowed an HRP-2 humanoid robot to successfully and rapidly localize, approach and climb stairs, as well as to avoid obstacles during walking.
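One core ingredient of model-based 6-DoF tracking, refining a pose so that projected model points match image observations, can be sketched as below; the camera intrinsics, cube model, and optimizer choice are illustrative assumptions and do not reflect the GPU implementation described above.

```python
# Hedged sketch of pose refinement for model-based tracking: minimize the
# reprojection error of known 3D model points against detected image points.
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])  # illustrative camera intrinsics

def project(points, pose):
    """pose = (rx, ry, rz, tx, ty, tz): rotation vector + translation."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = points @ R.T + pose[3:]
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def track(model_points, observed_uv, pose_guess):
    """One tracking update: refine the previous pose against new observations."""
    def residuals(pose):
        return (project(model_points, pose) - observed_uv).ravel()
    return least_squares(residuals, pose_guess).x

# Toy example: a unit cube observed at a known pose, tracked from a perturbed guess.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
true_pose = np.array([0.1, -0.2, 0.05, 0.2, -0.1, 4.0])
uv = project(cube, true_pose)
print(track(cube, uv, true_pose + 0.05))  # converges back toward true_pose
```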

Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a new learning paradigm called graph transformer networks (GTN) is proposed, which allows multi-module recognition systems to be trained globally using gradient-based methods; convolutional neural networks are shown to synthesize complex decision surfaces that classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
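For readers unfamiliar with the architecture family, here is a minimal LeNet-style convolutional network in PyTorch; the layer sizes follow the classic LeNet-5 layout and are assumptions, not the exact model evaluated in the paper.

```python
# A minimal LeNet-style CNN for 32x32 grayscale digit images: two conv/pool stages
# followed by fully connected layers, as a sketch of the architecture family.
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # 32x32 -> 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),  # 14x14 -> 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Shape check on a dummy batch of 32x32 grayscale "digits".
print(LeNetLike()(torch.zeros(4, 1, 32, 32)).shape)  # torch.Size([4, 10])
```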

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, as described in this paper, is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.
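The multi-branch idea behind the Inception module can be sketched as follows in PyTorch; the channel counts are illustrative and the code is not the authors' implementation.

```python
# Hedged sketch of an Inception-style block: parallel 1x1, 3x3, and 5x5 convolutions
# (with 1x1 reductions) plus a pooled branch, concatenated along the channel axis.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(),
                                nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(),
                                nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU())
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU())

    def forward(self, x):
        # Every branch preserves spatial size, so the outputs can be concatenated.
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

block = InceptionBlock(192, 64, 96, 128, 16, 32, 32)
print(block(torch.zeros(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```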

40,257 citations

Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; at first it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
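A minimal sketch of the HOG-plus-linear-SVM recipe, using scikit-image and scikit-learn; the synthetic windows and the HOG parameters (common defaults) are assumptions, not the paper's exact configuration.

```python
# Sketch of the pipeline: compute HOG descriptors over fixed-size detection windows
# and train a linear SVM on them. Synthetic data stands in for pedestrian windows.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(window):
    """window: 2D grayscale detection window, e.g. 128x64 pixels."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

rng = np.random.default_rng(0)
# Stand-in data: "positive" windows carry a strong vertical edge, negatives are noise.
pos = [rng.random((128, 64)) * 0.2 + (np.arange(64) > 32) for _ in range(20)]
neg = [rng.random((128, 64)) for _ in range(20)]
X = np.array([hog_features(w) for w in pos + neg])
y = np.array([1] * 20 + [0] * 20)

clf = LinearSVC(C=0.01).fit(X, y)  # linear SVM over the HOG block descriptors
print(clf.score(X, y))             # training accuracy on the toy data
```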

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
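The R-CNN recipe (warp each region proposal to a fixed size, extract CNN features, score with per-class linear classifiers) can be sketched as follows; the backbone, proposals, and data here are illustrative stand-ins rather than the authors' released system.

```python
# Hedged sketch of an R-CNN-style pipeline: crop and warp proposals, extract CNN
# features, and score them with a linear classifier. Everything here is a stand-in.
import numpy as np
import torch
import torchvision
from sklearn.svm import LinearSVC

# weights=None keeps the sketch self-contained; in practice a pretrained backbone
# (e.g. weights="DEFAULT") would be used before fine-tuning.
backbone = torchvision.models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()   # expose the 512-d pooled features
backbone.eval()

def region_features(image, boxes):
    """image: HxWx3 uint8 array; boxes: list of (x1, y1, x2, y2) proposals."""
    crops = []
    for x1, y1, x2, y2 in boxes:
        crop = image[y1:y2, x1:x2]
        t = torch.from_numpy(crop).permute(2, 0, 1).float() / 255.0
        # Warp every proposal to a fixed input size, as R-CNN does.
        crops.append(torch.nn.functional.interpolate(t[None], size=(224, 224))[0])
    with torch.no_grad():
        return backbone(torch.stack(crops)).numpy()

# Toy usage with a random "image" and two proposals; in practice proposals would come
# from a method such as selective search and classifiers would be trained per class.
rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
boxes = [(10, 10, 110, 210), (300, 200, 400, 400)]
feats = region_features(image, boxes)
clf = LinearSVC().fit(feats, [0, 1])   # stand-in for per-class SVM scoring
print(clf.decision_function(feats))
```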

21,729 citations