Topic

Normalization (statistics)

About: Normalization (statistics) is a research topic. Over its lifetime, 11,224 publications have been published within this topic, receiving 357,932 citations.


Papers

Open access · Proceedings Article · DOI: 10.1109/CVPR.2005.177
Navneet Dalal, Bill Triggs (1 institution)
20 Jun 2005 - CVPR
Abstract: We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.


Topics: Histogram of oriented gradients (62%), Local binary patterns (57%), GLOH (56%)

28,803 Citations
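
A rough sketch of the block-level contrast normalization the abstract describes: each descriptor block's concatenated cell histograms are rescaled to unit L2 norm. The function name, block shape, and the use of plain L2 rather than the paper's L2-Hys variant are illustrative assumptions, not the authors' code.

```python
import numpy as np

def l2_normalize_block(block_histograms, eps=1e-5):
    """L2-normalize the concatenated cell histograms of one HOG descriptor block."""
    v = np.asarray(block_histograms, dtype=float)
    # Small eps keeps the division stable for near-empty blocks.
    return v / np.sqrt(np.sum(v ** 2) + eps ** 2)

# Illustrative block: 2x2 cells with 9 orientation bins each -> 36 values.
block = np.random.rand(36)
normalized = l2_normalize_block(block)
```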


Open access · Proceedings Article
Sergey Ioffe, Christian Szegedy (1 institution)
06 Jul 2015
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.


23,723 Citations
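
The per-mini-batch normalization the abstract refers to can be sketched as a training-time forward pass over a batch of activations. The NumPy code below is a minimal illustration under assumed array shapes; it omits the running statistics used at inference and is not the authors' implementation.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch x of shape (N, D) feature-wise, then scale and shift."""
    mu = x.mean(axis=0)                        # per-feature mean over the mini-batch
    var = x.var(axis=0)                        # per-feature variance over the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)      # zero mean, unit variance per feature
    return gamma * x_hat + beta                # learned scale (gamma) and shift (beta)

# Illustrative usage: a mini-batch of 64 activations with 128 features.
x = np.random.randn(64, 128)
gamma, beta = np.ones(128), np.zeros(128)
y = batch_norm_forward(x, gamma, beta)
```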


Open access · Posted Content
Sergey Ioffe, Christian Szegedy (1 institution)
11 Feb 2015 - arXiv: Learning
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.


17,151 Citations


Open access · Journal Article · DOI: 10.1186/GB-2002-3-7-RESEARCH0034
Jo Vandesompele, Katleen De Preter, Filip Pattyn, Bruce Poppe +3 more (1 institution)
18 Jun 2002 - Genome Biology
Abstract: Gene-expression analysis is increasingly important in biological research, with real-time reverse transcription PCR (RT-PCR) becoming the method of choice for high-throughput and accurate expression profiling of selected genes. Given the increased sensitivity, reproducibility and large dynamic range of this methodology, the requirements for a proper internal control gene for normalization have become increasingly stringent. Although housekeeping gene expression has been reported to vary considerably, no systematic survey has properly determined the errors related to the common practice of using only one control gene, nor presented an adequate way of working around this problem. We outline a robust and innovative strategy to identify the most stably expressed control genes in a given set of tissues, and to determine the minimum number of genes required to calculate a reliable normalization factor. We have evaluated ten housekeeping genes from different abundance and functional classes in various human tissues, and demonstrated that the conventional use of a single gene for normalization leads to relatively large errors in a significant proportion of samples tested. The geometric mean of multiple carefully selected housekeeping genes was validated as an accurate normalization factor by analyzing publicly available microarray data. The normalization strategy presented here is a prerequisite for accurate RT-PCR expression profiling, which, among other things, opens up the possibility of studying the biological relevance of small expression differences.


Topics: Reference genes (62%), Normalization (statistics) (60%), Housekeeping gene (58%)

16,784 Citations
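
The normalization factor the abstract validates is the geometric mean of several stably expressed reference genes, computed per sample. The sketch below assumes an array of relative expression quantities; the gene-stability ranking used to select the reference genes is not shown, and the target-gene values are hypothetical.

```python
import numpy as np

def normalization_factor(reference_expression):
    """Per-sample geometric mean of reference-gene expression values,
    for an array of shape (n_samples, n_reference_genes)."""
    expr = np.asarray(reference_expression, dtype=float)
    return np.exp(np.mean(np.log(expr), axis=1))

# Illustrative data: 4 samples, 3 reference genes.
reference = np.array([[1.0, 0.8, 1.2],
                      [2.0, 1.9, 2.2],
                      [0.5, 0.6, 0.4],
                      [1.1, 1.0, 0.9]])
nf = normalization_factor(reference)       # one factor per sample
target = np.array([3.0, 6.2, 1.4, 2.9])    # hypothetical gene-of-interest values
normalized_target = target / nf            # expression normalized per sample
```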


Reference Entry · DOI: 10.1002/0470013192.BSA501
Ian T. Jolliffe (1 institution)
15 Oct 2005
Abstract: When large multivariate datasets are analyzed, it is often desirable to reduce their dimensionality. Principal component analysis is one technique for doing this. It replaces the p original variables by a smaller number, q, of derived variables, the principal components, which are linear combinations of the original variables. Often, it is possible to retain most of the variability in the original variables with q very much smaller than p. Despite its apparent simplicity, principal component analysis has a number of subtleties, and it has many uses and extensions. A number of choices associated with the technique are briefly discussed, namely, covariance or correlation, how many components, and different normalization constraints, as well as confusion with factor analysis. Various uses and extensions are outlined. Keywords: dimension reduction; factor analysis; multivariate analysis; variance maximization


Topics: Principal component analysis (64%), Sparse PCA (62%), Dimensionality reduction (57%)

14,631 Citations
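
An SVD-based sketch of the analysis the entry summarizes, including the covariance-versus-correlation normalization choice it mentions (standardizing each variable before extracting components). The function signature and variable names are illustrative, not Jolliffe's notation.

```python
import numpy as np

def pca(data, n_components, use_correlation=False):
    """Return component scores, loading vectors, and explained variances
    for a (n_samples, p) data matrix."""
    X = np.asarray(data, dtype=float)
    X = X - X.mean(axis=0)                    # centre each variable
    if use_correlation:
        X = X / X.std(axis=0, ddof=1)         # unit variance -> correlation-based PCA
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]            # q linear combinations of the p variables
    scores = X @ components.T                 # derived variables for each sample
    explained_var = (s ** 2) / (X.shape[0] - 1)
    return scores, components, explained_var[:n_components]

# Reduce p = 5 original variables to q = 2 principal components.
data = np.random.randn(100, 5)
scores, components, var = pca(data, n_components=2, use_correlation=True)
```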


Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    8
2021    749
2020    822
2019    823
2018    706
2017    569

Top Attributes


Topic's top 5 most impactful authors

Ping Luo

13 papers, 345 citations

Srinivasan Umesh

12 papers, 159 citations

Rob van der Goot

10 papers, 120 citations

Eduardo Lleida

9 papers, 115 citations

Antonio Miguel

9 papers, 115 citations

Network Information
Related Topics (5)
Statistical model

19.9K papers, 904.1K citations

88% related
Smoothing

36.3K papers, 942.4K citations

88% related
Wavelet

78K papers, 1.3M citations

88% related
Filter (signal processing)

81.4K papers, 1M citations

87% related
Deep learning

79.8K papers, 2.1M citations

87% related