Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include National Institute of Advanced Industrial Science and Technology & Hitachi.


Papers
Dissertation
01 Jan 2011
TL;DR: In this thesis, the author reduces the effect of water drops in videos via spatio-temporal frequency analysis and, in real life, by using a projector to illuminate everything except the drops.
Abstract: Water drops are present throughout our daily lives. Microscopic droplets create fog and mist, and large drops fall as rain. Because of their shape and refractive properties, water drops exhibit a wide variety of visual effects. If not directly illuminated by a light source, they are difficult to see. But if they are directly illuminated, they can become the brightest objects in the environment. This thesis has two main components. First, we will show how to create two- and three-dimensional displays using water drops and a projector. Water drops act as tiny spherical lenses, refracting light into a wide angle. To a person viewing an illuminated drop, it will appear that the drop is the same color as the incident light ray. Using a valve assembly, we will fill a volume with non-occluding water drops. At any instant in time, no ray from the projector will intersect with two drops. Using a camera, we will detect the drops' locations, then illuminate them with the projector. The final result is a programmable, dynamic, and three-dimensional display. Second, we will show how to reduce the effect of water drops in videos via spatio-temporal frequency analysis, and in real life, by using a projector to illuminate everything except the drops. To remove rain (and snow) from videos, we will use a streak model in frequency space to find the frequencies corresponding to rain and snow in the video. These frequencies can then be suppressed to reduce the effect of rain and snow. We will also suppress the visual effect of water drops by selectively "missing" them by not illuminating them with a projector. In light rain, this can be performed by tracking individual drops. This kind of drop-avoiding light source could be used for many nighttime applications, such as car headlights.
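The frequency-suppression idea can be sketched in a few lines. This is a toy reduction, not the thesis's actual streak model: it assumes rain shows up as brief per-pixel intensity spikes and simply attenuates high temporal frequencies, with an illustrative cutoff.

```python
import numpy as np

def suppress_temporal_spikes(video, keep_fraction=0.25):
    """video: array of shape (T, H, W); returns a low-pass-filtered video.

    Rain streaks are transient, so their energy sits at high temporal
    frequencies; zeroing those frequencies reduces their visual effect
    while the static scene (the DC component) passes through unchanged.
    """
    T = video.shape[0]
    spectrum = np.fft.fft(video, axis=0)          # per-pixel temporal FFT
    freqs = np.fft.fftfreq(T)
    mask = np.abs(freqs) <= keep_fraction * 0.5   # keep only low frequencies
    spectrum[~mask] = 0.0
    return np.fft.ifft(spectrum, axis=0).real

# Synthetic example: a constant scene plus a one-frame "rain" spike.
video = np.ones((32, 4, 4))
video[5, 1, 1] += 5.0                             # transient bright streak
clean = suppress_temporal_spikes(video)
```

Because the DC component is kept, the average brightness of every pixel is preserved while the spike is spread out and flattened.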

5 citations

Book
03 Jan 1992
TL;DR: In this paper, the Dichromatic Reflection Model is used to describe how highlights and color are related in images of dielectrics such as plastic and painted surfaces, giving rise to a mathematical relationship in color space that separates highlights from object color.
Abstract: Research in early (low-level) vision, both for machines and humans, has traditionally been based on the study of idealized images or image patches such as step edges, gratings, flat fields, and Mondrians. Real images, however, exhibit much richer and more complex structure, whose nature is determined by the physical and geometric properties of illumination, reflection, and imaging. By understanding these physical relationships, a new kind of early vision analysis is made possible. In this paper, we describe a progression of models of imaging physics that present a much more complex and realistic set of image relationships than are commonly assumed in early vision research. We begin with the Dichromatic Reflection Model, which describes how highlights and color are related in images of dielectrics such as plastic and painted surfaces. This gives rise to a mathematical relationship in color space to separate highlights from object color. Perceptions of shape, surface roughness/texture, and illumination color are readily derived from this analysis. We next show how this can be extended to images of several objects, by deriving local color variation relationships from the basic model. The resulting method for color image analysis has been successfully applied in machine vision experiments in our laboratory. Yet another extension is to account for inter-reflection among multiple objects. We have derived a simple model of color inter-reflection that accounts for the basic phenomena, and report on this model and how we are applying it. In general, the concept of illumination for vision should account for the entire "illumination environment", rather than being restricted to a single light source. This work shows that the basic physical relationships give rise to very structured image properties, which can be a more valid basis for early vision than the traditional idealized image patterns.
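The color-space relationship at the heart of the model can be sketched minimally: each pixel of a dielectric surface is a non-negative mix of a body (object) color and an interface (illuminant) color, so pixels lie on a plane spanned by the two in RGB space. In this sketch both colors are assumed known; estimating them from the image is the harder part of the published method, and the specific color values are illustrative.

```python
import numpy as np

c_body = np.array([0.8, 0.2, 0.1])   # assumed object (body) color
c_illum = np.array([1.0, 1.0, 1.0])  # assumed illuminant (highlight) color

def separate_highlight(pixel):
    """Solve pixel = m_b*c_body + m_s*c_illum for the two magnitudes.

    m_b is the diffuse (body) reflection magnitude, m_s the specular
    (interface) magnitude; the highlight-free pixel is m_b*c_body.
    """
    A = np.column_stack([c_body, c_illum])
    (m_b, m_s), *_ = np.linalg.lstsq(A, pixel, rcond=None)
    return m_b, m_s

# A matte pixel vs. the same pixel with a specular highlight added.
matte = 0.5 * c_body
shiny = 0.5 * c_body + 0.3 * c_illum
mb1, ms1 = separate_highlight(matte)
mb2, ms2 = separate_highlight(shiny)
```

Both pixels recover the same body magnitude; only the specular component differs, which is what lets the analysis strip the highlight while keeping the object color.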

4 citations

Journal Article
TL;DR: This work illustrates the point that sensors really become smart when the tight integration of sensing and processing results in an adaptive sensing system that can react to environmental conditions and consistently deliver useful measurements to a robotic system even under the harshest conditions.
Abstract: When a sensor device is packaged together with a CPU, it is called a "smart sensor." Sensors really become smart when the tight integration of sensing and processing results in an adaptive sensing system that can react to environmental conditions and consistently deliver useful measurements to a robotic system even under the harshest conditions. We illustrate this point with an example from our recent work on an illumination-adaptive algorithm for dynamic range compression that is well suited for an on-chip implementation, resulting in a truly smart image sensor. Our method decides on the tonal mapping for each pixel based on the signal content in the pixel's local neighborhood.
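A generic local-neighborhood tone-mapping sketch illustrates the idea (the paper's on-chip algorithm is not reproduced here, and the window size is an assumption): each pixel is compressed relative to the mean of its local window, so bright and dark regions are both brought into range while local contrast survives.

```python
import numpy as np

def local_tone_map(img, radius=1, eps=1e-3):
    """Map each pixel to its ratio against the local-window mean.

    A global curve would crush either the dim or the bright region of a
    high-dynamic-range scene; the per-pixel local normalization adapts
    the mapping to the signal content around each pixel.
    """
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            local_mean = img[y0:y1, x0:x1].mean()
            out[y, x] = img[y, x] / (local_mean + eps)  # ratio to surround
    return out

# Toy high-dynamic-range image: a dim region (1.0) next to a bright one (100.0).
img = np.ones((4, 8))
img[:, 4:] = 100.0
mapped = local_tone_map(img)
```

After mapping, the interiors of both regions sit near 1.0 despite a 100:1 input range; only the boundary pixels deviate, which is where the local contrast is.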

4 citations


Cited by
Journal Article
01 Jan 1998
TL;DR: In this article, convolutional neural networks are shown to outperform all other techniques on handwritten character recognition, and a new learning paradigm, graph transformer networks (GTN), is proposed that allows multimodule document recognition systems to be trained globally using gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
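The convolutional idea the abstract relies on can be shown in miniature. This is a bare cross-correlation, not the paper's trained LeNet architecture: the same small kernel slides over the whole input, so a pattern is detected wherever it appears (weight sharing is what gives the shift tolerance that suits variable 2D shapes).

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Valid-mode 2D cross-correlation of img with kernel."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y+kh, x:x+kw] * kernel)
    return out

# A vertical-edge kernel responds at the edge wherever it occurs.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])
img = np.zeros((4, 6))
img[:, 3:] = 1.0                      # vertical edge between columns 2 and 3
response = conv2d_valid(img, kernel)
```

The response map is zero everywhere except the column where a window straddles the edge; sliding the same weights everywhere is exactly what a convolutional layer does before any learning enters the picture.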

42,067 citations

Proceedings Article
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal Article

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
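The stages the abstract names (per-pixel gradients, fine orientation binning within a cell, block normalization) can be reduced to a short sketch. The 9-bin unsigned-orientation layout follows the paper's common configuration, but this is an illustrative single cell and a two-cell "block", not the full overlapping-block descriptor.

```python
import numpy as np

def cell_histogram(patch, n_bins=9):
    """Histogram of gradient orientations for one cell, weighted by magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    hist = np.zeros(n_bins)
    bin_width = 180.0 / n_bins
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // bin_width) % n_bins] += m      # hard bin assignment
    return hist

def normalize_block(hists, eps=1e-6):
    """L2-normalize the concatenated cell histograms of a block."""
    v = np.concatenate(hists)
    return v / np.sqrt(np.sum(v**2) + eps)

# A patch whose intensity ramps left-to-right has all gradient energy in
# one orientation bin, so the cell histogram is maximally peaked.
patch = np.tile(np.arange(8.0), (8, 1))
hist = cell_histogram(patch)
desc = normalize_block([hist, hist])
```

The block normalization is what the paper credits for robustness to local illumination changes: scaling the patch intensities scales every histogram entry equally, and the L2 norm divides that factor back out.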

31,952 citations

Proceedings Article
23 Jun 2014
TL;DR: R-CNN as discussed by the authors combines CNNs with bottom-up region proposals to localize and segment objects, and when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
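The mAP figures above rest on the standard PASCAL VOC matching criterion: a detection counts as correct when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold (0.5 in the standard protocol). A minimal IoU sketch, with box coordinates chosen for illustration:

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2) with x2 > x1, y2 > y1."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)    # intersection area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)         # divide by union

# A proposal shifted by half its width overlaps the ground truth by 1/3,
# so it would fail the 0.5-IoU test despite covering half the object.
gt = (0, 0, 10, 10)
proposal = (5, 0, 15, 10)
overlap = iou(gt, proposal)
```

This is why precise localization of the bottom-up region proposals matters as much as the CNN's classification accuracy for the reported mAP.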

21,729 citations