Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include National Institute of Advanced Industrial Science and Technology & Hitachi.


Papers
01 Jan 2001
TL;DR: This thesis presents a system for recovering the position and orientation of the target anatomy in 3D space by iteratively comparing 2D planar radiographs with preoperative CT data: X-ray images acquired at the time of treatment are compared with synthetic images, known as Digitally Reconstructed Radiographs (DRRs), to estimate the position and orientation of the target anatomy.
Abstract: Recent years have seen exciting advances in Computer Assisted Surgery (CAS). CAS systems are currently in use which provide data to the surgeon, provide passive feedback and motion constraint, and even automate parts of the surgery by manipulating cutters and endoscopic cameras. For most of these systems, accurate registration between the patient's anatomy and the CAS system is crucial: if the position of the surgical target is not known with sufficient accuracy, therapies cannot be applied precisely, and treatment efficacy falls. This thesis presents a system for recovering the position and orientation of the target anatomy in 3D space based on iterative comparison of 2D planar radiographs with preoperative CT data. More specifically, this system uses X-ray images acquired at the time of treatment, and iteratively compares them with synthetic images, known as Digitally Reconstructed Radiographs (DRRs), in order to estimate the position and orientation of the target anatomy. An intermediate data representation called a Transgraph is presented. The Transgraph is similar to the Lumigraph, or Light Field, and extends the computer graphics field called image-based rendering to transmission imaging. This representation speeds up computation of DRRs by over an order of magnitude compared to ray-casting techniques, without the use of special graphics hardware. A hardware-based volume rendering technique is also presented. This approach is based on new texture mapping techniques which enable DRR generation using off-the-shelf, consumer-grade computer graphics hardware. These techniques permit computation of full-resolution (512 x 512) DRRs based on 256 x 256 x 256 CT data in roughly 70 ms. The registration system is evaluated for application to frameless stereotactic radiosurgery, and phantom studies are presented demonstrating accuracy comparable to state-of-the-art immobilization systems. Additional phantom studies are presented in which the registration system is used to measure implant orientation following total hip replacement surgery, improving on current practice by a factor of two.
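
As a rough, hedged sketch of the iterative 2D/3D registration loop described above (not the thesis's Transgraph or hardware-accelerated DRR generator), the Python code below approximates a DRR with a parallel-beam projection of a rotated CT volume and optimizes three rotation angles against a measured radiograph using normalized cross-correlation. The function names, the SciPy optimizer, and the toy random volume are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.optimize import minimize

def drr_parallel(ct, angles_deg):
    """Very simplified DRR: rotate the CT volume and sum attenuation along
    one axis (a parallel-beam approximation, not the cone-beam geometry or
    the Transgraph/texture-mapping acceleration described in the thesis)."""
    vol = rotate(ct, angles_deg[0], axes=(0, 1), reshape=False, order=1)
    vol = rotate(vol, angles_deg[1], axes=(0, 2), reshape=False, order=1)
    vol = rotate(vol, angles_deg[2], axes=(1, 2), reshape=False, order=1)
    return vol.sum(axis=0)

def neg_ncc(a, b):
    """Negative normalized cross-correlation (lower is better)."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return -np.mean(a * b)

def register(ct, xray, initial_angles=(0.0, 0.0, 0.0)):
    """Estimate the rotation angles that best align the synthetic DRR
    with the measured radiograph."""
    cost = lambda p: neg_ncc(drr_parallel(ct, p), xray)
    return minimize(cost, np.asarray(initial_angles), method="Powell").x

# Toy usage: try to recover a known 5-degree rotation of a random "CT" volume.
ct = np.random.rand(32, 32, 32)
xray = drr_parallel(ct, (5.0, 0.0, 0.0))
print(register(ct, xray))
```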

83 citations

Patent
21 Aug 1996
TL;DR: In this article, a method for merging real and synthetic images in real time is proposed, comprising the steps of providing a first signal containing depth and image information per pixel about a real image.
Abstract: A method for merging real and synthetic images in real time comprises the steps of providing a first signal containing depth and image information per pixel about a real image. A second signal containing depth and image information per pixel about a synthetic image is provided. The depth information corresponding to the real image and the depth information corresponding to the synthetic image are compared for each pixel. Based on the comparison, either the image information corresponding to the real image or the image information corresponding to the synthetic image is selected and combined. Because the image information is compared based on depth, any interaction such as occluding, shadowing, reflecting, or colliding can be determined and an appropriate output generated.
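
The per-pixel depth comparison at the heart of the claimed method amounts to a z-buffer merge. The NumPy sketch below illustrates that comparison on made-up images and depth maps; it is not the patented real-time implementation, and the array shapes and names are assumptions.

```python
import numpy as np

def merge_by_depth(real_rgb, real_depth, synth_rgb, synth_depth):
    """Per-pixel z-buffer merge: at each pixel keep whichever source is
    closer to the camera, so real and synthetic objects occlude each
    other correctly in the combined image."""
    closer_real = real_depth <= synth_depth              # H x W boolean mask
    return np.where(closer_real[..., None], real_rgb, synth_rgb)

# Toy usage with random images and depth maps.
h, w = 4, 4
real_rgb, real_depth = np.random.rand(h, w, 3), np.random.rand(h, w)
synth_rgb, synth_depth = np.random.rand(h, w, 3), np.random.rand(h, w)
print(merge_by_depth(real_rgb, real_depth, synth_rgb, synth_depth).shape)
```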

83 citations

Book ChapterDOI
26 Sep 2004
TL;DR: Three users used each of three PFS prototype concepts to cut a faceted shape in wax, and the results of this experiment were analyzed to identify the largest sources of error.
Abstract: The Precision Freehand Sculptor (PFS) is a compact, handheld, intelligent tool to assist the surgeon in accurately cutting bone. A retractable rotary blade on the PFS allows a computer to control what bone is removed. Accuracy is ensured even though the surgeon uses the tool freehand. The computer extends or retracts the blade based on data from an optical tracking camera. Three users used each of three PFS prototype concepts to cut a faceted shape in wax. The results of this experiment were analyzed to identify the largest sources of error.
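
A hedged sketch of the blade-control idea (the computer extends the blade only while the tracked tool tip is still above the planned cut, and retracts it otherwise): the signed-distance surface, the clipping rule, and the millimeter limit below are illustrative assumptions, not the actual PFS control law.

```python
import numpy as np

def blade_extension(tip_pos, signed_dist_to_plan, max_extend_mm=10.0):
    """Extend the retractable blade no further than the signed distance
    from the tracked tool tip to the planned bone surface; retract fully
    (0 mm) once the tip reaches or passes the planned cut."""
    d = signed_dist_to_plan(tip_pos)   # > 0 above the planned cut, < 0 past it
    return float(np.clip(d, 0.0, max_extend_mm))

# Toy usage: the planned cut is the plane z = 0 (remove material above it).
plane = lambda p: p[2]
print(blade_extension(np.array([0.0, 0.0, 4.2]), plane))   # 4.2 mm of blade
print(blade_extension(np.array([0.0, 0.0, -1.0]), plane))  # 0.0 (retracted)
```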

83 citations

Proceedings ArticleDOI
01 Dec 1984
TL;DR: The concept of customizing the Newton-Euler (N-E) algorithm for specific manipulator designs is proposed, reducing the computational requirements of the general-purpose algorithm for real-time applications by exploiting kinematic and dynamic parameter simplifications.
Abstract: Real-time control of fast manipulators requires efficient control algorithms to achieve high sampling rates. Practical implementation of the inverse dynamics to achieve high sampling rates demands an efficient algorithm which utilizes the capabilities of modern digital hardware. To reduce the computational requirements of the Newton-Euler (N-E) algorithm for real-time applications, we propose the concept of customizing the algorithm for specific manipulator designs. We analyze the computational requirements of the algorithm for designs incorporating kinematic and dynamic parameter simplifications. We illustrate our approach by customizing the N-E algorithm for the CMU DD Arm II (the second version of the CMU direct drive arm). The customized algorithm reduces the computational requirements of the general-purpose algorithm by 56 percent. We also describe the hardware system for real-time control of the CMU DD Arm II and the implementation of our customized algorithm on a Marinco processor, and highlight its impact on manipulator control.
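
To make the idea of customizing the inverse dynamics concrete, the sketch below hard-codes the closed-form dynamics of a generic planar two-link arm, the kind of specialized expression a customized N-E routine reduces to once vanishing parameters are dropped. It is an illustrative stand-in with made-up parameters, not the CMU DD Arm II model or the 56-percent-reduced algorithm from the paper.

```python
import numpy as np

def two_link_inverse_dynamics(q, qd, qdd,
                              m1=1.0, m2=1.0, l1=1.0, lc1=0.5, lc2=0.5,
                              I1=0.1, I2=0.1, g=9.81):
    """Closed-form inverse dynamics of a planar 2-link arm: the general
    recursion specialized so that terms tied to vanishing kinematic and
    dynamic parameters have already dropped out."""
    q1, q2 = q
    qd1, qd2 = qd
    qdd1, qdd2 = qdd
    c2, s2 = np.cos(q2), np.sin(q2)
    # Inertia matrix entries
    d11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    d12 = m2*(lc2**2 + l1*lc2*c2) + I2
    d22 = m2*lc2**2 + I2
    # Coriolis/centrifugal factor and gravity terms
    h = -m2*l1*lc2*s2
    g1 = (m1*lc1 + m2*l1)*g*np.cos(q1) + m2*lc2*g*np.cos(q1 + q2)
    g2 = m2*lc2*g*np.cos(q1 + q2)
    tau1 = d11*qdd1 + d12*qdd2 + h*(2*qd1*qd2 + qd2**2) + g1
    tau2 = d12*qdd1 + d22*qdd2 - h*qd1**2 + g2
    return np.array([tau1, tau2])

# Toy usage: joint torques for a commanded acceleration starting at rest.
print(two_link_inverse_dynamics([0.3, 0.5], [0.0, 0.0], [1.0, -1.0]))
```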

83 citations

Journal ArticleDOI
01 Apr 1989
TL;DR: It is shown that the computed-torque scheme outperforms the independent joint control scheme as long as there is no torque saturation in the actuators, and the importance of compensating for the nonlinear Coriolis and centrifugal forces even at low speeds of operation is established.
Abstract: Experimental results on the real-time performance of model-based control algorithms are presented. The computed-torque scheme, which utilizes the complete dynamics model of the manipulator, was compared to the independent joint control scheme, which assumes a decoupled and linear model of the manipulator dynamics. The two manipulator control schemes have been implemented on the Carnegie-Mellon University DD (direct-drive) Arm II with a sampling period of 2 ms. The authors discuss the design of controller gains for both the computed-torque and the independent joint control schemes and establish a framework for comparing their trajectory-tracking performances. It is shown that the computed-torque scheme outperforms the independent joint control scheme as long as there is no torque saturation in the actuators. Based on the experimental results, the authors conclusively establish the importance of compensating for the nonlinear Coriolis and centrifugal forces even at low speeds of operation.
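
The two control laws being compared can be sketched as follows. This is a generic textbook formulation with placeholder gains and an invented diagonal model, assuming callable M, C, and gravity terms; it is not the CMU DD Arm II implementation.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g_vec, Kp, Kd):
    """Computed-torque law: feed back the full dynamics model to linearize
    the manipulator, then apply PD gains on the tracking error:
        tau = M(q)(qdd_des + Kd (qd_des - qd) + Kp (q_des - q)) + C(q, qd) qd + g(q)
    """
    v = qdd_des + Kd @ (qd_des - qd) + Kp @ (q_des - q)   # outer-loop acceleration
    return M(q) @ v + C(q, qd) @ qd + g_vec(q)

def independent_joint_pd(q, qd, q_des, qd_des, Kp, Kd):
    """Independent joint control: each joint treated as a decoupled linear
    system; coupling, Coriolis, and centrifugal effects are ignored."""
    return Kp @ (q_des - q) + Kd @ (qd_des - qd)

# Toy usage with an invented diagonal model (purely illustrative).
M = lambda q: np.diag([2.0, 1.0])
C = lambda q, qd: np.zeros((2, 2))
G = lambda q: np.array([9.8, 4.9])
Kp, Kd = np.diag([100.0, 100.0]), np.diag([20.0, 20.0])
q, qd = np.zeros(2), np.zeros(2)
q_des, qd_des, qdd_des = np.array([0.5, -0.5]), np.zeros(2), np.zeros(2)
print(computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, G, Kp, Kd))
print(independent_joint_pd(q, qd, q_des, qd_des, Kp, Kd))
```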

83 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, graph transformer networks (GTNs) are proposed to allow multimodule document recognition systems to be trained globally with gradient-based methods, which can synthesize complex decision surfaces that classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
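
For concreteness, a minimal LeNet-style convolutional network of the kind the paper evaluates can be written in a few lines of PyTorch. The layer sizes below are illustrative rather than the exact LeNet-5 configuration, PyTorch itself is an assumption, and the graph transformer network machinery is not shown.

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """LeNet-style convolutional network for 28x28 grayscale digit images."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(),   # 28x28 -> 28x28
            nn.MaxPool2d(2),                                        # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),             # -> 10x10
            nn.MaxPool2d(2),                                        # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Toy usage: one forward pass on a batch of fake digit images.
logits = SmallConvNet()(torch.randn(8, 1, 28, 28))
print(logits.shape)   # torch.Size([8, 10])
```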

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.
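
The Inception building block itself, four parallel branches concatenated along the channel axis, can be sketched as below. The channel counts, the use of PyTorch, and the ReLU placement are illustrative assumptions; the auxiliary classifiers and the full 22-layer GoogLeNet stack are omitted.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Inception-style block: parallel 1x1, 3x3, and 5x5 convolutions (with
    1x1 dimension-reduction layers) plus a pooled branch, concatenated
    along the channel dimension."""
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(),
                                nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(),
                                nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU())
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Toy usage: 192 input channels -> 64 + 128 + 32 + 32 = 256 output channels.
block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(block(torch.randn(1, 192, 28, 28)).shape)   # torch.Size([1, 256, 28, 28])
```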

40,257 citations

Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at the time: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
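
The descriptor settings the paper reports as working well (9 orientation bins, 8x8-pixel cells, 2x2-cell blocks with overlap, and strong local normalization) can be reproduced with scikit-image's hog function, used here as a stand-in for the authors' own implementation; the random 64x128 detection window is toy data.

```python
import numpy as np
from skimage.feature import hog

# Toy 128x64 detection window (the canonical pedestrian window size).
window = np.random.rand(128, 64)

descriptor = hog(window,
                 orientations=9,          # fine orientation binning
                 pixels_per_cell=(8, 8),  # relatively coarse spatial binning
                 cells_per_block=(2, 2),  # overlapping descriptor blocks
                 block_norm='L2-Hys')     # strong local contrast normalization
print(descriptor.shape)   # (3780,) -- the feature vector fed to a linear SVM
```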

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
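
The overall pipeline structure (external region proposals, each warped to a fixed size, pushed through a CNN, and scored by per-class linear classifiers) can be sketched as below. Selective search, fine-tuning, SVM training, and bounding-box regression are omitted, and the untrained AlexNet backbone, hard-coded boxes, and 21-way linear scorer are placeholders rather than the released R-CNN model.

```python
import torch
import torch.nn.functional as F
import torchvision

# Feature extractor: an (untrained) AlexNet with its 1000-way head removed,
# so each warped region yields a 4096-dimensional feature vector.
cnn = torchvision.models.alexnet(weights=None)
cnn.classifier = cnn.classifier[:-1]
cnn.eval()

num_classes = 21                                 # e.g. PASCAL VOC classes + background
scorer = torch.nn.Linear(4096, num_classes)      # stand-in for the per-class SVMs

image = torch.rand(3, 480, 640)                          # toy image
proposals = [(50, 40, 250, 240), (300, 100, 460, 300)]   # placeholder (x1, y1, x2, y2) boxes

features = []
with torch.no_grad():
    for x1, y1, x2, y2 in proposals:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        warped = F.interpolate(crop, size=(224, 224), mode='bilinear',
                               align_corners=False)      # warp region to CNN input size
        features.append(cnn(warped))
scores = scorer(torch.cat(features))             # one score vector per proposal
print(scores.shape)                              # torch.Size([2, 21])
```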

21,729 citations