Author

Sergio Guadarrama

Bio: Sergio Guadarrama is an academic researcher from Google. The author has contributed to research on topics including fuzzy logic and fuzzy set operations. The author has an h-index of 34 and has co-authored 68 publications receiving 35,677 citations. Previous affiliations of Sergio Guadarrama include the Technical University of Madrid and the University of California, Berkeley.


Papers
Proceedings ArticleDOI
31 Aug 2012
TL;DR: It is claimed that a new epistemological model based on fuzzy logic can broaden the scope for understanding sex in a non-dichotomous, non-fixed way.
Abstract: The idea that sex (the male/female binary) is a clear-cut category distinction is well established in our culture. However, people born with ambiguous genitalia or with a mixture of male and female anatomy have always existed, originally known as hermaphrodites and, later on, as intersex. Over the centuries, human cultures have classified sexual differences in different ways, though the dichotomous model of the two sexes has been the most prevalent. The main objective of this paper is to show how different epistemological models or frameworks have a direct impact on the way of thinking about sexual difference, and we claim that a new epistemological model based on fuzzy logic can broaden the scope for understanding sex in a non-dichotomous, non-fixed way.
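To make the contrast concrete, here is a minimal sketch of the fuzzy-set representation the paper advocates: category membership becomes a degree in [0, 1] rather than a forced binary label. The threshold, the normalized trait score, and the category names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the fuzzy-set view the paper advocates: category
# membership is a degree in [0, 1] rather than a crisp binary label.
# The trait score, threshold, and category names are illustrative only.

def crisp_classify(value: float, threshold: float = 0.5) -> str:
    """Classical dichotomous model: every case is forced into one of two bins."""
    return "A" if value >= threshold else "B"

def fuzzy_membership(value: float) -> dict:
    """Fuzzy model: a case can belong to both categories to different degrees."""
    mu_a = max(0.0, min(1.0, value))  # degree of membership in category A
    mu_b = 1.0 - mu_a                 # degree of membership in category B
    return {"A": mu_a, "B": mu_b}

# An intermediate case: the crisp model erases the ambiguity,
# while the fuzzy model preserves it.
print(crisp_classify(0.55))    # -> "A"
print(fuzzy_membership(0.55))  # -> {"A": 0.55, "B": 0.45}
```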

2 citations

Proceedings Article
25 Jan 2015
TL;DR: In this article, the Optimal Roundness Criterion (ORC) is proposed as a novel stopping criterion for Sparse Filtering; the criterion is related to pre-processing procedures such as Statistical Whitening, and it is shown to make image classification with Sparse Filtering considerably faster and more accurate.
Abstract: Sparse Filtering is a popular feature learning algorithm for image classification pipelines. In this paper, we connect the performance of Sparse Filtering with spectral properties of the corresponding feature matrices. This connection provides new insights into Sparse Filtering; in particular, it suggests early stopping of Sparse Filtering. We therefore introduce the Optimal Roundness Criterion (ORC), a novel stopping criterion for Sparse Filtering. We show that this stopping criterion is related to pre-processing procedures such as Statistical Whitening and demonstrate that it can make image classification with Sparse Filtering considerably faster and more accurate.
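A hedged sketch of the idea, under stated assumptions: Sparse Filtering normalizes a soft-absolute feature matrix across rows and columns and minimizes its L1 norm, and an early-stopping rule can track a spectral measure of that matrix each iteration. The paper's exact ORC is not reproduced here; the roundness measure below (ratio of smallest to largest singular value) and all names in the snippet are assumptions.

```python
# Hedged sketch of early stopping for Sparse Filtering via a spectral
# "roundness" measure of the feature matrix. The exact Optimal Roundness
# Criterion in the paper may differ; roundness() below is an assumption
# (sigma_min / sigma_max, where 1.0 is perfectly "round").
import numpy as np

def sparse_filtering_features(W, X, eps=1e-8):
    """Sparse Filtering forward pass: soft-absolute activations, then
    row (per-feature) and column (per-example) L2 normalization."""
    F = np.sqrt((W @ X) ** 2 + eps)                               # soft absolute value
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)      # normalize rows
    F = F / (np.linalg.norm(F, axis=0, keepdims=True) + eps)      # normalize columns
    return F

def roundness(F):
    """Assumed spectral roundness of the feature matrix."""
    s = np.linalg.svd(F, compute_uv=False)
    return s[-1] / s[0]

# Illustrative loop: stop as soon as roundness stops improving.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 1000))       # 64-dim inputs, 1000 examples
W = rng.standard_normal((32, 64)) * 0.01  # 32 learned features
best = -np.inf
for step in range(100):
    r = roundness(sparse_filtering_features(W, X))
    if r <= best:  # roundness peaked: stop early
        break
    best = r
    # ... one gradient step on the Sparse Filtering L1 objective would go here ...
```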

2 citations

Book ChapterDOI
01 Jan 2006
TL;DR: This paper tries to stimulate reflection on extending the current theories of fuzzy sets to wider areas of language, with the objective of reaching a better understanding of the links between language and its representation by means of fuzzy sets, where possible.
Abstract: This paper tries to stimulate some reflection on extending the current theories of fuzzy sets to wider areas of language, with the objective of reaching a better knowledge of the links between language and its representation by means of fuzzy sets, when possible. This would serve the progress of computing with words, which, sooner or later, will pose the theoretically challenging, and practically important, problem of the linguistic credit, or soundness in language, of the theories of fuzzy sets. This is a problem that fuzzy logic cannot avoid if it is to become a basic representational tool for computing with words. To this end, the strategy of reconsidering the current knowledge of fuzzy set theories does not seem out of place.
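As a concrete illustration of what representing language by means of fuzzy sets looks like in the computing-with-words tradition, here is a minimal sketch of a linguistic term modeled as a membership function. The term, the breakpoints, and the squaring hedge for "very" are standard textbook devices, not taken from this paper.

```python
# A classic computing-with-words illustration: the linguistic term "tall"
# as a fuzzy set over heights. The breakpoints (160 cm, 190 cm) are
# illustrative assumptions.

def tall(height_cm: float) -> float:
    """Membership degree of a height in the fuzzy set 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0  # linear ramp between the breakpoints

def very_tall(height_cm: float) -> float:
    """'very' as the standard concentration hedge: square the membership."""
    return tall(height_cm) ** 2

print(tall(175), very_tall(175))  # -> 0.5 0.25
```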

1 citation

Proceedings Article
01 Jan 2009
TL;DR: To evaluate premises, consequences and hypotheses, relevance and support ratios are defined for each of them, making it possible to distinguish consequences by the number of premises that support them and to reduce the set of premises while maintaining the same consequences.
Abstract: To evaluate premises, consequences and hypotheses, in this paper relevance and support ratios are defined for each of them. This makes it possible to distinguish consequences based on the number of premises that support them, and also to reduce the set of premises while maintaining the same consequences. Since the relation between premises and hypotheses is, in some sense, similar to the relation between consequences and premises, analogous ratios are defined for hypotheses and premises. Keywords: Conjectures, Consequences, Hypotheses, Relevance, Support.
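A hedged sketch of the kind of ratio the abstract describes, under a simple possible-worlds model: statements are sets of worlds, a premise entails a consequence when its worlds are a subset of the consequence's, and a consequence's support ratio is the fraction of premises that entail it. The paper's exact definitions may differ; every name and value below is an assumption for illustration.

```python
# Toy model: statements as sets of possible worlds; a premise entails a
# consequence when its worlds are a subset of the consequence's worlds.
# support_ratio() is an assumed form of the paper's support ratio.

premises = {
    "p1": {1, 2},
    "p2": {1},
    "p3": {3},
}
consequences = {
    "c1": {1, 2, 3},  # entailed by p1, p2 and p3
    "c2": {1, 2},     # entailed by p1 and p2 only
}

def entails(p: set, c: set) -> bool:
    return p <= c  # subset: every world of p is a world of c

def support_ratio(c: set) -> float:
    """Fraction of the premises that entail the consequence."""
    supporting = sum(entails(p, c) for p in premises.values())
    return supporting / len(premises)

for name, c in consequences.items():
    print(name, support_ratio(c))  # c1 -> 1.0, c2 -> ~0.67
```

The same ratio, read in the other direction, can be applied between hypotheses and premises, which is the analogy the abstract draws.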

1 citation


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; an ensemble of the resulting residual nets won first place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
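The core reformulation is that a block learns a residual F(x) and outputs F(x) + x, so the identity mapping is trivially representable and very deep stacks stay optimizable. A minimal PyTorch sketch of such a block follows; it is simplified relative to the paper's full architecture (no striding or projection shortcuts).

```python
# Minimal sketch of a residual block: the output is F(x) + x, where F is
# a small stack of conv/batch-norm layers. Simplified versus the paper
# (no downsampling or projection shortcuts).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(residual + x)  # the shortcut: output = F(x) + x

block = ResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))  # same shape in, same shape out
```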

123,388 citations

Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is broad consensus that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512×512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
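The contracting/expanding structure with skip connections is the architectural point here, so a heavily reduced PyTorch sketch may help. It is far smaller than the paper's network and uses padded convolutions for simplicity, where the original uses unpadded ones; all layer sizes are illustrative.

```python
# Hedged sketch of the U-Net idea: a contracting path for context, an
# expanding path for localization, and a skip connection that concatenates
# encoder features into the decoder. Far smaller than the paper's network.
import torch
import torch.nn as nn

def double_conv(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)  # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                            # contracting path
        e2 = self.enc2(self.pool(e1))                # bottom of the "U"
        d1 = self.up(e2)                             # expanding path
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # skip connection
        return self.head(d1)

net = TinyUNet()
logits = net(torch.randn(1, 1, 64, 64))  # -> (1, 2, 64, 64) per-pixel scores
```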

49,590 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers (8× deeper than VGG nets) but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; the book surveys applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection spanning hundreds of object categories and millions of images; the challenge has been run annually from 2010 to the present, attracting participation from more than fifty institutions.
Abstract: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
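For readers unfamiliar with how the classification track is scored, a short sketch of the standard top-5 error metric follows. This is the conventional ILSVRC metric, not code from the paper; the array shapes and values are illustrative.

```python
# Sketch of the top-5 error metric used to score ILSVRC classification:
# an image counts as correct if the true label is among the model's
# five highest-scoring classes.
import numpy as np

def top5_error(scores: np.ndarray, labels: np.ndarray) -> float:
    """scores: (n_images, n_classes) class scores; labels: (n_images,) true ids."""
    top5 = np.argsort(scores, axis=1)[:, -5:]       # 5 best classes per image
    hits = (top5 == labels[:, None]).any(axis=1)    # true label among them?
    return 1.0 - hits.mean()

rng = np.random.default_rng(0)
scores = rng.standard_normal((100, 1000))           # 100 images, 1000 classes
labels = rng.integers(0, 1000, size=100)
print(f"top-5 error: {top5_error(scores, labels):.2%}")  # ~99.5% for random scores
```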

30,811 citations