Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher at Carnegie Mellon University. He has contributed to research in topics including motion estimation and image processing, has an h-index of 147, and has co-authored 799 publications receiving 103,237 citations. His previous affiliations include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings ArticleDOI
01 Dec 2011
TL;DR: This work proposes a data-driven approach to model the cell growth process and predict cell confluency levels, signaling times to subculture. The approach has great potential as a tool for adaptive real-time control of subculturing and can be integrated with robotic cell culture systems to achieve complete automation.
Abstract: Stem cell expansion culture aims to generate a sufficient number of clinical-grade cells for cell-based therapies. One challenge for ex vivo expansion is deciding the appropriate time to perform subculture. Traditionally, this decision has relied on human estimation of cell confluency and prediction of when confluency will approach a desired threshold. However, the use of human operators results in highly subjective decision-making and is prone to inter- and intra-operator variability. Using a real-time cell image analysis system, we propose a data-driven approach to model the cell growth process and predict cell confluency levels, signaling times to subculture. This approach has great potential as a tool for adaptive real-time control of subculturing, and it can be integrated with robotic cell culture systems to achieve complete automation.
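
The paper itself includes no code; the following is a minimal sketch of the general idea under an assumed model, fitting a logistic growth curve to confluency readings and inverting it to estimate when a subculture threshold will be crossed. The synthetic readings, the threshold value, and the logistic form are illustrative assumptions, not the paper's actual method or data.

```python
# Minimal sketch: fit a logistic growth curve to confluency measurements
# and predict when confluency will cross a subculture threshold.
# Illustrative only; the paper's actual model and data differ.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: confluency saturates at carrying capacity K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical confluency readings (%) from an image-analysis system,
# sampled every 12 hours.
t_obs = np.array([0, 12, 24, 36, 48, 60, 72], dtype=float)
c_obs = np.array([5.0, 8.0, 14.0, 24.0, 38.0, 55.0, 68.0])

(K, r, t0), _ = curve_fit(logistic, t_obs, c_obs, p0=[100.0, 0.05, 48.0])

# Invert the fitted curve to find when confluency reaches the threshold
# (assumes the fitted capacity K exceeds the threshold).
threshold = 80.0  # % confluency at which subculture is triggered
t_subculture = t0 - np.log(K / threshold - 1.0) / r
print(f"Predicted time to {threshold:.0f}% confluency: {t_subculture:.1f} h")
```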

4 citations

Book
01 Jan 2009
TL;DR: This work introduces the design and evaluation of ADAM, a system that applies machine learning to network metadata derived from the sandboxed execution of webpage content, detecting malicious webpages and identifying the type of vulnerability with a simple set of features.
Abstract: Malicious webpages are a prevalent and severe threat in the Internet security landscape. This fact has motivated numerous static and dynamic techniques to alleviate the threat. Building on this existing literature, this work introduces the design and evaluation of ADAM, a system that uses machine learning over network metadata derived from the sandboxed execution of webpage content. ADAM aims to detect malicious webpages and identify the type of vulnerability using a simple set of features. Machine-trained models are not novel in this problem space; rather, the dynamic network artifacts (and their subsequent feature representations) collected during rendering are the greatest contribution of this work. Using a real-world operational dataset that includes different types of malicious behavior, our results show that cheap dynamic network artifacts can be used effectively to detect most types of vulnerabilities, achieving an accuracy of 96%. The system was also able to identify the type of a detected vulnerability with high accuracy, achieving an exact match in 91% of cases. We identify the main vulnerabilities that require improvement and suggest directions for extending this work to practical contexts.
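
As a rough illustration of the classification step (not ADAM's actual feature set or model), here is a minimal sketch that trains an off-the-shelf random forest on hypothetical network-metadata features; the feature names, the synthetic data, and the choice of classifier are all assumptions.

```python
# Minimal sketch of the classification step: a supervised model trained on
# network-metadata features extracted from sandboxed page rendering.
# Feature names and data are hypothetical; ADAM's real feature set is richer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Each row: [num_requests, num_distinct_domains, bytes_downloaded,
#            num_redirects, num_executable_responses]
X = rng.random((500, 5))
y = rng.integers(0, 2, 500)  # 0 = benign, 1 = malicious (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Near chance on random labels; informative features would do far better.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```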

4 citations

Proceedings Article
01 Jan 2007
TL;DR: Ethanol and/or acetaldehyde are produced by reacting methanol with synthesis gas at elevated temperature and pressure in the presence of a catalyst comprising a metal complex, in which the metal is a metal of Group VIII of the Periodic Table other than iron, palladium, and platinum and the ligand is derived from cyclopentadiene or a substituted cyclopentadiene, together with an iodide or bromide promoter.
Abstract: Ethanol and/or acetaldehyde are produced by reacting methanol with synthesis gas at elevated temperature and pressure in the presence of a catalyst comprising a metal complex in which the metal is a metal of Group VIII of the Periodic Table other than iron, palladium and platinum and the ligand is derived from cyclopentadiene or a substituted cyclopentadiene, and a promoter which is an iodide or a bromide. Optionally there is also added as a co-promoter a compound of formula X(A)(B)(C) wherein X is nitrogen, phosphorus, arsenic, antimony or bismuth and A, B and C are individually C1 to C20 monovalent hydrocarbyl groups which are free from aliphatic carbon-carbon unsaturation and are bound to the X atom by a carbon/X bond, or X is phosphorus, arsenic, antimony or bismuth and any two of A, B and C together form an organic divalent cyclic ring system bonded to the X atom, or X is nitrogen and all of A, B and C together form an organic trivalent cyclic ring system bonded to the X atom, e.g. triphenylphosphine.

3 citations

Journal ArticleDOI
TL;DR: The papers in this special section examine the concept of automated face analysis (AFA), which has received special attention from the computer vision and pattern recognition communities.
Abstract: The papers in this special section examine the concept of automated face analysis (AFA). AFA has received special attention from the computer vision and pattern recognition communities. Research progress often gives the impression that problems such as face recognition and face detection are solved, at least for some scenarios. Several aspects of face analysis nevertheless remain open problems, including large-scale face recognition and detection for in-the-wild images, emotion recognition, micro-expression analysis, and others. The community keeps making rapid progress on these topics, with continual improvement of current methods and the creation of new ones that push the state of the art. Applications are countless, including security and video surveillance, human-computer/robot interaction, communication, entertainment, and commerce, while having an important social impact in assistive technologies for education and health. The importance of face analysis, together with the vast amount of work on the subject and the latest developments in the field, motivated us to organize a special section on this theme. The scope of the compilation comprises all aspects of face analysis from a computer vision perspective, including, but not limited to: recognition, detection, alignment, and reconstruction of faces; pose estimation; gaze analysis; estimation of age, emotion, gender, and facial attributes; and applications, among others.

3 citations

Proceedings ArticleDOI
05 Jan 2015
TL;DR: This work proposes an efficient two-stage process: an intuitively constructed, edge-detection-based algorithm to actively adjust facial contour landmark points, and a data-driven validation system to filter out erroneous adjustments.
Abstract: Achieving sub-pixel accuracy with face alignment algorithms is a difficult task given the diversity of appearance in real-world facial profiles. To capture variations in perspective, occlusion, and illumination with adequate precision, current face alignment approaches rely on detecting facial landmarks and iteratively adjusting deformable models that encode prior knowledge of facial structure. However, these methods involve optimization in latent sub-spaces, where user-specific face shape information is easily lost after dimensionality reduction. Retaining this information to capture the full range of variation requires a large training distribution, which is difficult to obtain without high computational complexity. Consequently, many face alignment methods lack the pixel-level accuracy necessary to satisfy the aesthetic requirements of tasks such as face de-identification, face swapping, and face modeling. In many such applications, the primary source of aesthetic inadequacy is a misaligned jaw line or facial contour. In this work, we explore the idea of an image-based refinement method to fix the landmark points of a misaligned facial contour. We propose an efficient two-stage process: an intuitively constructed, edge-detection-based algorithm to actively adjust facial contour landmark points, and a data-driven validation system to filter out erroneous adjustments. Experimental results show that state-of-the-art face alignment combined with our proposed post-processing method yields improved overall performance over multiple face image datasets.
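
A minimal sketch of what the first stage might look like, assuming one snaps each contour landmark to the strongest gradient magnitude along a precomputed normal; the function, its parameters, and the acceptance threshold are illustrative, and the paper's learned validation stage is not reproduced here.

```python
# Minimal sketch of the first stage: nudge each facial-contour landmark
# toward the strongest edge along its local normal direction. The paper's
# actual algorithm and its data-driven validation stage are more involved.
import cv2
import numpy as np

def refine_contour(gray, points, normals, search_px=10, min_edge=30.0):
    """Snap each landmark to the edge-magnitude peak along its normal.

    gray:    uint8 grayscale face image
    points:  (N, 2) float array of contour landmarks (x, y)
    normals: (N, 2) unit normals at each landmark (assumed precomputed)
    """
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    h, w = gray.shape

    refined = points.copy()
    for i, (p, n) in enumerate(zip(points, normals)):
        # Keep the original point unless a sufficiently strong edge is found.
        best, best_mag = p, min_edge
        for step in range(-search_px, search_px + 1):
            q = p + step * n
            x, y = int(round(q[0])), int(round(q[1]))
            if 0 <= x < w and 0 <= y < h and mag[y, x] > best_mag:
                best, best_mag = q, mag[y, x]
        refined[i] = best
    return refined
```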

3 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, graph transformer networks (GTNs) are proposed to train multimodule recognition systems globally using gradient-based methods; convolutional neural networks are shown to synthesize complex decision surfaces that classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTNs), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
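
For readers unfamiliar with the convolutional architecture this paper builds on, here is a minimal LeNet-style network sketched in PyTorch; the layer sizes follow the classic LeNet-5 shape for 28x28 digit images but are illustrative rather than a faithful reproduction of the paper's system.

```python
# Minimal sketch of a LeNet-style convolutional network for 28x28 digit
# images, in the spirit of the architecture this paper made standard.
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = LeNetLike()(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```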

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, as proposed in this paper, is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
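
The core building block is easy to sketch: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, concatenated along the channel axis, with 1x1 reductions keeping the computational budget in check. Below is a minimal PyTorch sketch using the channel counts of the paper's inception (3a) module; the module class itself is our own illustrative construction, not the authors' code.

```python
# Minimal sketch of an Inception module: parallel 1x1, 3x3, 5x5, and pooled
# branches concatenated along the channel axis, with 1x1 reductions.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_reduce, c3, 3, padding=1), nn.ReLU(inplace=True))
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_reduce, c5, 5, padding=2), nn.ReLU(inplace=True))
        self.bp = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # All branches preserve spatial size, so outputs concatenate cleanly.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Channel counts of the paper's inception (3a) module.
m = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(m(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```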

40,257 citations

Journal ArticleDOI
08 Dec 2001 - BMJ
TL;DR: A reflection on i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality, whose sense of the surreal only intensifies with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
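
The descriptor is straightforward to compute with standard tooling; here is a minimal sketch via scikit-image, using the parameter choices the paper identifies as important (9 orientation bins, 8x8-pixel cells, 2x2-cell overlapping blocks with L2-Hys normalization). The random array stands in for a real 64x128 detection window.

```python
# Minimal sketch: compute HOG descriptors with the parameters the paper
# found important (fine orientation binning, relatively coarse spatial
# cells, overlapping block normalization), here via scikit-image.
import numpy as np
from skimage.feature import hog

# Stand-in for a 128x64 (rows x cols) grayscale detection window,
# the person-detection window size used in the paper.
window = np.random.rand(128, 64)

descriptor = hog(
    window,
    orientations=9,           # fine orientation binning
    pixels_per_cell=(8, 8),   # relatively coarse spatial binning
    cells_per_block=(2, 2),   # overlapping blocks for local normalization
    block_norm='L2-Hys',      # the normalization scheme from the paper
)
print(descriptor.shape)  # (3780,) for a 128x64 window with these settings
```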

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as proposed in this paper, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
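
To make the recipe concrete, here is a minimal sketch of the proposal-then-classify loop; the boxes are placeholders rather than selective-search output, a modern torchvision ResNet-18 stands in for the paper's AlexNet-style network, and the paper's fine-tuning and per-class SVM scoring are omitted.

```python
# Minimal sketch of the R-CNN recipe: warp each bottom-up region proposal
# to a fixed size and score it with a CNN. Proposals here are placeholder
# boxes; the paper uses selective search plus fine-tuning and per-class
# SVMs, none of which this sketch reproduces.
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
cnn = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()  # resize/crop/normalize for this model

image = Image.new("RGB", (640, 480))  # stand-in for a real image
proposals = [(30, 40, 200, 220), (300, 100, 480, 360)]  # hypothetical boxes

with torch.no_grad():
    for (x0, y0, x1, y1) in proposals:
        crop = image.crop((x0, y0, x1, y1))  # one region proposal
        scores = cnn(preprocess(crop).unsqueeze(0))
        print("top class:", scores.argmax(dim=1).item())
```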

21,729 citations