Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher at Carnegie Mellon University. He has contributed to research in topics including motion estimation and image processing. He has an h-index of 147 and has co-authored 799 publications that have received 103,237 citations. His previous affiliations include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Book
01 Jan 2009
TL;DR: This paper considers the web service business protocol synthesis problem, i.e., the automated construction of a new target protocol by reusing some existing protocols as templates.
Abstract: In this paper, we consider the web service business protocol synthesis problem, i.e., the automated construction of a new target protocol by reusing some existing ones. We review recent research and challenges and discuss the associated computational problems in both bounded and unbounded settings.
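In the protocol synthesis literature this entry surveys, a business protocol is commonly modeled as a finite-state machine over a message alphabet, and a target protocol is assembled by composing existing ones. The sketch below illustrates that modeling choice only; the Protocol class, the interleaved_product helper, and the tiny order/payment protocols are hypothetical examples, not constructions from the paper.

```python
# A minimal sketch, assuming business protocols are modeled as finite-state
# machines over message alphabets (a common formalization in this area; the
# example protocols and helper names are hypothetical, not from the paper).

from itertools import product


class Protocol:
    def __init__(self, states, initial, finals, transitions):
        self.states = states            # set of state names
        self.initial = initial          # initial state
        self.finals = finals            # set of final (accepting) states
        self.transitions = transitions  # dict: (state, message) -> next state

    def step(self, state, message):
        return self.transitions.get((state, message))


def interleaved_product(p, q):
    """Compose two protocols by interleaving their messages; the result is a
    candidate target protocol built by reusing p and q."""
    states = set(product(p.states, q.states))
    transitions = {}
    for (sp, sq) in states:
        for (s, m), t in p.transitions.items():
            if s == sp:
                transitions[((sp, sq), m)] = (t, sq)
        for (s, m), t in q.transitions.items():
            if s == sq:
                transitions[((sp, sq), m)] = (sp, t)
    finals = {(a, b) for a in p.finals for b in q.finals}
    return Protocol(states, (p.initial, q.initial), finals, transitions)


# Hypothetical example: a one-step ordering protocol and a one-step payment
# protocol composed into a target protocol that supports both messages.
order = Protocol({"s0", "s1"}, "s0", {"s1"}, {("s0", "order"): "s1"})
pay = Protocol({"t0", "t1"}, "t0", {"t1"}, {("t0", "pay"): "t1"})
target = interleaved_product(order, pay)
print(target.step(("s0", "t0"), "order"))  # ('s1', 't0')
```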

7 citations

Book ChapterDOI
01 Jan 2002
TL;DR: This final chapter investigates how much extra information is actually added by having more than one image for super-resolution and proposes a super-resolution algorithm which uses a completely different source of information, in addition to the reconstruction constraints.
Abstract: A variety of super-resolution algorithms have been described in this book. Most of them, however, are based on the same source of information: that the super-resolution image, when appropriately warped and down-sampled to model image formation, should generate the lower resolution input images. (This information is usually incorporated into super-resolution algorithms in the form of reconstruction constraints, which are frequently combined with a smoothness prior to regularize their solution.) In this final chapter, we first investigate how much extra information is actually added by having more than one image for super-resolution. In particular, we derive a sequence of analytical results which show that the reconstruction constraints provide far less useful information as the decimation ratio increases. We validate these results empirically and show that for large enough decimation ratios any smoothness prior leads to overly smooth results with very little high-frequency content, however many (noiseless) low resolution input images are used. In the second half of this chapter, we propose a super-resolution algorithm which uses a completely different source of information, in addition to the reconstruction constraints. The algorithm recognizes local “features” in the low resolution images and then enhances their resolution in an appropriate manner, based on a collection of high- and low-resolution training samples. We call such an algorithm a hallucination algorithm.
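The reconstruction constraints referred to above can be made concrete with a small forward model: each low-resolution input should be reproduced when the super-resolution estimate is warped and down-sampled. The sketch below is a minimal illustration under simplifying assumptions (integer-translation warps and box-average decimation); the function names and the residual form are illustrative rather than the chapter's exact formulation.

```python
# Minimal sketch of the reconstruction constraint: a low-resolution input
# should equal the super-resolution image after warping and down-sampling.
# Assumes integer-translation warps and box-average decimation, which is a
# simplification of the general image-formation model in the chapter.

import numpy as np


def warp(image, shift):
    """Translate the image by an integer (dy, dx) shift."""
    return np.roll(image, shift, axis=(0, 1))


def downsample(image, ratio):
    """Box-average decimation by an integer ratio along each axis."""
    h, w = image.shape
    cropped = image[:h - h % ratio, :w - w % ratio]
    return cropped.reshape(h // ratio, ratio, w // ratio, ratio).mean(axis=(1, 3))


def reconstruction_residual(sr_estimate, low_res_inputs, shifts, ratio):
    """Sum of squared violations of the reconstruction constraints."""
    residual = 0.0
    for low_res, shift in zip(low_res_inputs, shifts):
        predicted = downsample(warp(sr_estimate, shift), ratio)
        residual += np.sum((predicted - low_res) ** 2)
    return residual


# As the decimation ratio grows, many different sr_estimate images yield a
# near-zero residual, which is the ill-conditioning analyzed in the chapter.
```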

7 citations

Book ChapterDOI
TL;DR: This paper proposes a new approach to distributed quadtree processing based on a task queue mechanism, discusses dynamic load balancing and related issues in that context, and provides possible solutions.
Abstract: Quadtrees have been widely used in computer vision, spatial databases, and related areas due to their compactness and regularity. It has long been claimed that quadtree-related algorithms are suitable for parallel and distributed implementation, but little work has been done to justify this claim. The simple input partitioning method used in low-level image processing cannot be applied directly to distributed quadtree processing, since it suffers from load imbalance. Load balancing is one of the most crucial issues in distributed processing. In the context of distributed quadtree processing, it appears at various stages of processing in different forms, each of which requires its own solution. The diversity of approaches to load balancing is further multiplied by differences in the characteristics of the types of data represented by, and the spatial operations performed on, quadtrees. In this paper, we propose a new approach to distributed quadtree processing using a task queue mechanism. We discuss dynamic load balancing and related issues in the context of distributed quadtree processing and provide possible solutions. The proposed algorithms have been implemented on the Nectar system (currently being developed at Carnegie Mellon). Experimental results are also included in the paper.
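To make the task-queue idea concrete, the sketch below shows a shared work queue from which worker threads pull quadtree nodes and push children back, so the load spreads dynamically wherever the tree happens to be deep. It is a single-machine illustration of the mechanism only; the class and function names are hypothetical, and it is not the Nectar implementation evaluated in the paper.

```python
# Minimal sketch of task-queue-style quadtree processing with dynamic load
# balancing: workers pull nodes from a shared queue and push children back,
# so densely subdivided regions do not pile up on a single worker.

import queue
import threading


class QuadNode:
    def __init__(self, value=None, children=None):
        self.value = value              # payload for leaf nodes
        self.children = children or []  # up to four child quadrants


def process_leaf(node, results, lock):
    with lock:
        results.append(node.value)      # stand-in for a real spatial operation


def worker(tasks, results, lock):
    while True:
        node = tasks.get()
        if node is None:                # sentinel: no more work
            tasks.task_done()
            return
        if node.children:
            for child in node.children:
                tasks.put(child)        # re-queue children for any worker
        else:
            process_leaf(node, results, lock)
        tasks.task_done()


def process_quadtree(root, num_workers=4):
    tasks, results, lock = queue.Queue(), [], threading.Lock()
    tasks.put(root)
    threads = [threading.Thread(target=worker, args=(tasks, results, lock))
               for _ in range(num_workers)]
    for t in threads:
        t.start()
    tasks.join()                        # wait until every queued node is done
    for _ in threads:
        tasks.put(None)                 # release the workers
    for t in threads:
        t.join()
    return results
```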

7 citations

Journal Article
TL;DR: This work proposes a method to distinguish object types using structure-based features described by a Gaussian mixture model, and demonstrates that higher classification performance is obtained when using both conventional and structure-based features together than when using either alone.
Abstract: Current feature-based object type classification methods use texture- and shape-based information derived from image patches. Generally, input features, such as the aspect ratio, are derived from rough characteristics of the entire object. In contrast, we derive input features from a parts-based representation of the object. We propose a method to distinguish object types using structure-based features described by a Gaussian mixture model. This approach uses Gaussian fitting onto foreground pixels detected by background subtraction to segment an image patch into several sub-regions, each of which is related to a physical part of the object. The object is modeled as a graph, where the nodes contain SIFT (Scale-Invariant Feature Transform) information obtained from the corresponding segmented regions, and the edges contain information on the distance between two connected regions. By calculating the distance between the reference and input graphs, we can use a k-NN-based classifier to classify an object as one of the following: single human, human group, bike, or vehicle. We demonstrate that we obtain higher classification performance when using both conventional and structure-based features together than when using either alone.
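The first two steps of the described pipeline (Gaussian fitting on foreground pixels to obtain part-like sub-regions, followed by a k-NN decision) can be sketched with scikit-learn as below. The per-part feature used here (component mean and covariance) is a stand-in for the paper's SIFT-per-part graph representation, and the helper names are hypothetical.

```python
# Minimal sketch: fit a Gaussian mixture to foreground pixel coordinates to
# split an object patch into part-like regions, then classify with k-NN.
# The SIFT-per-part graph matching of the paper is omitted; the per-part
# feature used here (mean position and covariance) is a stand-in.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier


def part_features(foreground_mask, n_parts=3):
    """Fit a GMM to foreground pixel coordinates and return a flat feature."""
    ys, xs = np.nonzero(foreground_mask)
    coords = np.column_stack([ys, xs]).astype(float)
    gmm = GaussianMixture(n_components=n_parts, covariance_type="full",
                          random_state=0).fit(coords)
    # Sort parts top-to-bottom so features are comparable across patches.
    order = np.argsort(gmm.means_[:, 0])
    feats = [np.concatenate([gmm.means_[k], gmm.covariances_[k].ravel()])
             for k in order]
    return np.concatenate(feats)


def train_classifier(masks, labels, k=3):
    X = np.stack([part_features(m) for m in masks])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)


# Usage: clf = train_classifier(train_masks, train_labels)
#        prediction = clf.predict([part_features(test_mask)])
```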

7 citations

01 Jan 1999
TL;DR: This work represents anatomical variations in the form of statistical models and embeds these statistics into a 3-D digital brain atlas; the models are built by registering a training set of brain MRI volumes with the atlas.
Abstract: Registration between 3-D images of human anatomies enables cross-subject diagnosis. However, innate differences in the appearance and location of anatomical structures between individuals make accurate registration difficult. We characterize such anatomical variations to achieve accurate registration. We represent anatomical variations in the form of statistical models, and embed these statistics into a 3-D digital brain atlas which we use as a reference. These models are built by registering a training set of brain MRI volumes with the atlas. This associates each voxel in the atlas with multi-dimensional distributions of variations in intensity and geometry of the training set. We evaluate statistical properties of these distributions to build a statistical atlas. When we register the statistical atlas with a particular subject, the embedded statistics function as prior knowledge to guide the deformation process. This allows the deformation to tolerate variations between individuals while retaining discrimination between different structures. This method gives an overall voxel mis-classification rate of 2.9% on 40 test cases; this is a 34% error reduction over the performance of our previous algorithm without using anatomical knowledge. Besides achieving accurate registration, statistical models of anatomical variations also enable quantitative study of anatomical differences between populations.
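A minimal version of the atlas-building step can be sketched as follows: given training volumes already registered to the atlas grid, record per-voxel intensity statistics and use them as a prior when evaluating a new registered volume. This is an illustration of the idea only; it models intensity variation with an independent Gaussian per voxel and omits the geometric-variation statistics described in the abstract.

```python
# Minimal sketch of building a statistical atlas: stack training MRI volumes
# that have already been registered to the atlas grid, record per-voxel mean
# and variance of intensity, and use them to score how typical a new
# registered volume is. Geometric-variation statistics are omitted here.

import numpy as np


def build_statistical_atlas(registered_volumes):
    """registered_volumes: array-like of shape (n_subjects, X, Y, Z)."""
    volumes = np.asarray(registered_volumes, dtype=float)
    mean = volumes.mean(axis=0)
    var = volumes.var(axis=0) + 1e-6    # small floor to avoid division by zero
    return mean, var


def voxelwise_log_prior(volume, mean, var):
    """Per-voxel Gaussian log-likelihood of a new registered volume.

    During registration this term can guide the deformation: voxels that
    deviate strongly from the training distribution are penalized."""
    return -0.5 * (np.log(2 * np.pi * var) + (volume - mean) ** 2 / var)
```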

7 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning with convolutional neural networks is shown to outperform other techniques for handwritten character recognition, and graph transformer networks (GTNs) are proposed for training multi-module recognition systems globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multi-module systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
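The gradient-based convolutional approach the abstract describes can be sketched compactly in PyTorch: a small convolutional network for 28x28 digit images trained with back-propagation. This is an illustrative stand-in, not the paper's exact LeNet-5 architecture, and it omits the graph transformer network machinery entirely.

```python
# Minimal sketch of a small convolutional digit classifier trained with
# back-propagation, in the spirit of the paper (not the exact LeNet-5
# architecture, and omitting the graph transformer network machinery).

import torch
from torch import nn


class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 28x28 -> 28x28
            nn.Tanh(), nn.AvgPool2d(2),                  # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),             # -> 10x10
            nn.Tanh(), nn.AvgPool2d(2),                  # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


def train_step(model, images, labels, optimizer):
    """One gradient-based update on a batch of 1x28x28 digit images."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()


model = SmallConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```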

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
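The building block behind GoogLeNet is the Inception module: parallel 1x1, 3x3, and 5x5 convolution branches plus a pooled branch, with 1x1 convolutions used to reduce channel counts and keep the computational budget in check. A minimal PyTorch sketch of one such module follows; the channel sizes in the usage example are illustrative rather than the published GoogLeNet configuration.

```python
# Minimal sketch of an Inception-style module: parallel 1x1, 3x3, and 5x5
# convolution branches plus a pooled branch, with 1x1 convolutions reducing
# channel counts. Channel sizes in the example are illustrative only.

import torch
from torch import nn


class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, c1, kernel_size=1), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_reduce, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_reduce, c3, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_reduce, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_reduce, c5, kernel_size=5, padding=2),
            nn.ReLU(inplace=True))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the four branches along the channel dimension.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)


x = torch.randn(1, 192, 28, 28)
module = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(module(x).shape)  # torch.Size([1, 256, 28, 28])
```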

40,257 citations

Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: A reflection on i, the square root of minus one, which at first seemed an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition, adopting linear SVM-based human detection as a test case. After reviewing existing edge- and gradient-based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
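The detection pipeline the abstract describes (dense HOG descriptors over a fixed-size window, classified by a linear SVM) can be sketched with scikit-image and scikit-learn as below. The parameter values approximate the paper's defaults, and the multi-scale sliding-window scan and non-maximum suppression are omitted.

```python
# Minimal sketch of HOG + linear SVM window classification: compute a grid
# of histograms of oriented gradients per window and train a linear SVM on
# the resulting descriptors. The sliding-window scan is omitted.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC


def hog_descriptor(window):
    """window: a 128x64 grayscale array (the paper's detection window size)."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")


def train_detector(windows, labels):
    """windows: iterable of 128x64 arrays; labels: 1 = person, 0 = background."""
    X = np.stack([hog_descriptor(w) for w in windows])
    return LinearSVC(C=0.01).fit(X, labels)


def score_window(detector, window):
    """Signed distance to the SVM hyperplane; higher means more person-like."""
    return detector.decision_function([hog_descriptor(window)])[0]
```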

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines bottom-up region proposals with CNNs to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
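The per-region pipeline described above (warp each bottom-up proposal to a fixed size, extract CNN features, score it with a classifier) can be sketched in PyTorch as follows. The proposal boxes are assumed to be given (selective search is not shown), and the small backbone and 21-way linear head below are illustrative stand-ins for the pretrained network and per-class SVMs used in R-CNN.

```python
# Minimal sketch of the R-CNN per-region pipeline: warp each bottom-up region
# proposal to a fixed size, extract CNN features, and score it with a
# classifier head. Proposal boxes are assumed given, and the small backbone
# stands in for the pretrained network used in the paper.

import torch
from torch import nn
from torch.nn.functional import interpolate

backbone = nn.Sequential(                       # stand-in feature extractor
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> 64-dim feature per region
)
classifier = nn.Linear(64, 21)                  # e.g. 20 VOC classes + background


def classify_regions(image, boxes, size=224):
    """image: 3xHxW tensor; boxes: list of (x1, y1, x2, y2) proposals."""
    scores = []
    for x1, y1, x2, y2 in boxes:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)           # 1x3xhxw
        warped = interpolate(crop, size=(size, size),
                             mode="bilinear", align_corners=False)
        scores.append(classifier(backbone(warped)))
    return torch.cat(scores)                    # one score vector per proposal


image = torch.rand(3, 480, 640)
proposals = [(10, 20, 210, 220), (300, 100, 500, 400)]
print(classify_regions(image, proposals).shape)  # torch.Size([2, 21])
```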

21,729 citations