Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher at Carnegie Mellon University. He has contributed to research in topics including motion estimation and image processing. He has an h-index of 147 and has co-authored 799 publications, which have received 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
01 Jan 1998
TL;DR: This paper presents a stereo algorithm for obtaining disparity maps with explicitly detected occlusions, and demonstrates it on synthetic and real image pairs with comparisons to other methods.
Abstract: This paper presents a stereo algorithm for obtaining disparity maps with explicitly detected occlusion. To produce smooth and detailed disparity maps, two assumptions that were originally proposed by Marr and Poggio are adopted: uniqueness and continuity. That is, the disparity maps have unique values and are continuous almost everywhere. A volumetric approach is taken to utilize these assumptions. A 3D array of match likelihood values is constructed with each value corresponding to a pixel in an image and a disparity relative to another image. An iterative algorithm updates the match likelihood values by diffusing support among neighboring values and inhibiting others. After the values have converged, the region of occlusion is explicitly detected. To demonstrate the effectiveness of the algorithm we present the processing results from synthetic and real image pairs, with comparison to results by other methods. The resulting disparity maps are smooth and detailed with occlusions detected. Disparity values in areas of repetitive texture are also found correctly.
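The iterative update described above (support diffused among neighbouring values, competing disparities inhibited) can be sketched as follows. This is a simplified illustration of the volumetric idea, not the paper's exact update rule; the array shape, support weight, and normalisation-based inhibition are assumptions.

```python
import numpy as np

def diffuse_disparity(likelihood, iterations=10, support=0.5):
    """Refine a 3D match-likelihood array L[y, x, d].

    Continuity: each value gains support from its spatial neighbours
    at the same disparity.  Uniqueness: values at the same pixel but
    different disparities inhibit one another (here via normalisation).
    """
    L = likelihood.astype(float).copy()
    for _ in range(iterations):
        # spatial support: average of the 4-neighbourhood in each disparity slice
        padded = np.pad(L, ((1, 1), (1, 1), (0, 0)), mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        L = (1 - support) * L + support * neigh
        # uniqueness: inhibit competing disparities at each pixel
        L /= L.sum(axis=2, keepdims=True) + 1e-12
    return L

def disparity_map(L):
    """Winner-take-all disparity after the values have converged."""
    return L.argmax(axis=2)
```

After convergence, occluded pixels could be flagged as those whose winning likelihood stays low, though that step is omitted here.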

18 citations

Journal ArticleDOI
23 Nov 2011-Langmuir
TL;DR: A microfluidic device with chemically patterned stripes of the cell adhesion molecule P-selectin was designed and the behavior of HL-60 cells rolling under flow was analyzed using a high-resolution visual tracking system.
Abstract: Cell separation technology is a key tool for biological studies and medical diagnostics that relies primarily on chemical labeling to identify particular phenotypes. An emergent method of sorting cells based on differential rolling on chemically patterned substrates holds potential benefits over existing technologies, but the underlying mechanisms being exploited are not well characterized. In order to better understand cell rolling on complex surfaces, a microfluidic device with chemically patterned stripes of the cell adhesion molecule P-selectin was designed. The behavior of HL-60 cells rolling under flow was analyzed using a high-resolution visual tracking system. This behavior was then correlated to a number of established predictive models. The combination of computational modeling and widely available fabrication techniques described herein represents a crucial step toward the successful development of continuous, label-free methods of cell separation based on rolling adhesion.

18 citations

Proceedings ArticleDOI
08 Nov 2000
TL;DR: The authors develop a method to reconstruct the specimen's optical properties over a three-dimensional volume which uses hierarchical representations of the specimen and data and test their algorithm by reconstructing the optical properties of known specimens.
Abstract: Differential Interference Contrast (DIC) microscopy is a powerful visualization tool used to study live biological cells. Its use, however, has been limited to qualitative observations. The inherent nonlinear relationship between the object properties and the image intensity makes quantitative analysis difficult. Towards quantitatively measuring optical properties of objects from DIC images, the authors develop a method to reconstruct the specimen's optical properties over a three-dimensional volume. The method is a nonlinear optimization which uses hierarchical representations of the specimen and data. As a necessary tool, the authors have developed and validated a computational model for the DIC image formation process. They test their algorithm by reconstructing the optical properties of known specimens.

18 citations

Proceedings ArticleDOI
14 Mar 2004
TL;DR: This paper proposes a system for quickly realizing a function for robustly detecting daily human activity events in handling objects in the real world and evaluates the robustness by comparing RANSAC with a least-squares optimization method.
Abstract: This paper proposes a system for quickly realizing a function for robustly detecting daily human activity events in handling objects in the real world. The system has four functions: 1) robustly measuring 3D positions of the objects; 2) quickly calibrating a system for measuring 3D positions of the objects; 3) quickly registering target activity events; and 4) robustly detecting the registered events in real time. As for 1), the system realizes robust measurement of 3D positions of the objects using an ultrasonic 3D tag system, which is a kind of location sensor, and a robust estimation algorithm known as random sample consensus (RANSAC). The paper evaluates the robustness by comparing RANSAC with a least-squares optimization method. As for 2), the system realizes quick calibration by a calibrating device having three or more ultrasonic transmitters. Quick calibration enables the system to be portable. As for 3), quick registration of target activity events is realized by a stereoscopic camera with ultrasonic 3D tags and interactive software for creating a 3D shape model, creating virtual sensors based on the 3D shape model, and associating the virtual sensors with the target events. The system makes it possible to quickly create object-shaped sensors to which a new function for detecting activity events is added while maintaining the original functions of the objects.
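The RANSAC-versus-least-squares comparison mentioned above can be illustrated with a minimal sketch. Here the "measurements" are a hypothetical stand-in for repeated, outlier-contaminated 3D tag readings of one object (the minimal model is a single sample point); the threshold and trial count are assumptions, not the paper's parameters.

```python
import numpy as np

def ransac_point(measurements, threshold=0.05, trials=100, seed=0):
    """Robustly estimate one 3D position from repeated readings.

    Each trial hypothesises that a randomly chosen reading is correct,
    counts readings within `threshold` of it (the consensus set), and
    the largest consensus set wins.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(measurements, float)
    best_inliers = None
    for _ in range(trials):
        candidate = pts[rng.integers(len(pts))]
        dist = np.linalg.norm(pts - candidate, axis=1)
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit: least-squares estimate (the mean) over the inliers only
    return pts[best_inliers].mean(axis=0)
```

A plain least-squares fit over all readings (`pts.mean(axis=0)` for this model) is pulled toward the outliers; restricting the fit to the consensus set is what makes the estimate robust.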

18 citations

Book ChapterDOI
26 Oct 2005
TL;DR: With this system clinicians can easily map the shape of the motile left atrium and see where the catheter is inside it, thereby greatly improving the efficiency of the ablation operation.
Abstract: In this paper, we present a sensor-guided ablation procedure for the highly motile left atrium. It uses a system that automatically registers the 4D heart model with the position sensor on the catheter and visualizes the heart model and the position of the catheter together in real time. With this system clinicians can easily map the shape of the motile left atrium and see where the catheter is inside it, thereby greatly improving the efficiency of the ablation operation.

18 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a new learning paradigm called graph transformer networks (GTN) is proposed, which allows multimodule document recognition systems to be trained globally using gradient-based methods; convolutional neural networks are shown to outperform all other techniques on handwritten character recognition.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
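The weight-sharing operation at the heart of the convolutional networks discussed above can be sketched in a few lines. This is a generic single-channel "valid" 2D convolution for illustration, not LeNet or the GTN machinery itself.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' 2-D convolution.

    The same small kernel is applied at every spatial position, so the
    layer's parameters are shared across the image -- the property that
    makes convolutional networks robust to shifts of 2D shapes.
    """
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * flipped).sum()
    return out
```

A network stacks many such filtered maps, passes them through nonlinearities and subsampling, and learns the kernel entries by back-propagation.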

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which at first seems an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
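The first stages of the descriptor described above (fine-scale gradients followed by fine orientation binning within spatial cells) can be sketched as follows. Block-level contrast normalisation, which the abstract also identifies as important, is omitted for brevity; the cell size and bin count follow the paper's typical settings but are assumptions here.

```python
import numpy as np

def hog_cells(image, cell=8, bins=9):
    """Per-cell histograms of oriented gradients (a bare-bones sketch)."""
    img = image.astype(float)
    # centred 1-D derivative masks [-1, 0, 1]
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    cy, cx = h // cell, w // cell
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros((cy, cx, bins))
    for i in range(cy):
        for j in range(cx):
            # magnitude-weighted vote of each pixel into its orientation bin
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                     minlength=bins)
    return hist
```

A full descriptor would then L2-normalise these histograms over overlapping blocks of cells and concatenate them into the feature vector fed to the linear SVM.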

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training on an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations