# Pooling

About: Pooling is a research topic. Over its lifetime, 5,583 publications have been published within this topic, receiving 161,394 citations.

##### Papers

Journal ArticleDOI
TL;DR: mice adds new functionality for imputing multilevel data, automatic predictor selection, data handling, post-processing imputed values, specialized pooling routines, model selection tools, and diagnostic graphs.

Abstract: The R package mice imputes incomplete multivariate data by chained equations. The software mice 1.0 appeared in the year 2000 as an S-PLUS library, and in 2001 as an R package. mice 1.0 introduced predictor selection, passive imputation and automatic pooling. This article documents mice, which extends the functionality of mice 1.0 in several ways. In mice, the analysis of imputed data is made completely general, whereas the range of models under which pooling works is substantially extended. mice adds new functionality for imputing multilevel data, automatic predictor selection, data handling, post-processing imputed values, specialized pooling routines, model selection tools, and diagnostic graphs. Imputation of categorical data is improved in order to bypass problems caused by perfect prediction. Special attention is paid to transformations, sum scores, indices and interactions using passive imputation, and to the proper setup of the predictor matrix. mice can be downloaded from the Comprehensive R Archive Network. This article provides a hands-on, stepwise approach to solve applied incomplete data problems.
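The "automatic pooling" the abstract refers to is the combination of per-dataset estimates after multiple imputation. As a hedged sketch (not the package's actual R code), Rubin's combining rules — the rules mice's pooling step is based on — can be written in a few lines: the pooled point estimate is the mean of the m estimates, and the total variance adds the average within-imputation variance to an inflated between-imputation variance.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool point estimates and variances from m imputed datasets
    using Rubin's combining rules (a sketch of what a pooling step
    like mice's does; function name and interface are illustrative).

    estimates, variances: length-m sequences, one entry per imputed dataset.
    """
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = len(est)
    qbar = est.mean()            # pooled point estimate
    w = var.mean()               # within-imputation variance
    b = est.var(ddof=1)          # between-imputation variance
    t = w + (1 + 1 / m) * b      # total variance of the pooled estimate
    return qbar, t

# Example: one coefficient estimated on m = 3 completed datasets
qbar, t = pool_rubin([1.9, 2.1, 2.0], [0.25, 0.30, 0.28])
```

The (1 + 1/m) factor inflates the between-imputation variance to account for using a finite number of imputations.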

7,115 citations

Journal ArticleDOI
01 Jan 1978, Econometrica

4,335 citations

Journal ArticleDOI
M. Hashem Pesaran, Ron Smith
Abstract: In panel data four procedures are widely used: pooling, aggregating, averaging group estimates, and cross-section regression. In the static case, if the coefficients differ randomly, all four procedures give unbiased estimates of coefficient means. In the dynamic case, when the coefficients differ across groups, pooling and aggregating give inconsistent and potentially highly misleading estimates of the coefficients, though the cross-section can provide consistent estimates of the long-run effects. The theoretical results on the properties of the four procedures are illustrated by UK labour demand functions for 38 industries over 30 years.
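Two of the four procedures the abstract contrasts — pooling the stacked data versus averaging group-by-group estimates (the "mean group" approach) — can be illustrated on simulated data. The sketch below is purely illustrative (the simulation setup is an assumption, not the paper's data); it uses the static case, where both procedures are unbiased for the mean coefficient, so the two estimates land close together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a static panel whose slopes differ randomly across groups,
# loosely mirroring the paper's setting of 38 industries over 30 years.
n_groups, t_obs = 38, 30
beta_mean = 2.0
x = rng.normal(size=(n_groups, t_obs))
betas = beta_mean + 0.5 * rng.normal(size=n_groups)   # heterogeneous slopes
y = betas[:, None] * x + rng.normal(size=(n_groups, t_obs))

# Pooling: a single OLS slope on the stacked data
pooled = (x * y).sum() / (x ** 2).sum()

# Averaging group estimates: run OLS per group, then take the mean
group_betas = (x * y).sum(axis=1) / (x ** 2).sum(axis=1)
mean_group = group_betas.mean()
```

In the dynamic case (a lagged dependent variable with heterogeneous coefficients), the paper's point is that the pooled estimator becomes inconsistent while the mean-group estimator remains consistent for the coefficient means.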

3,981 citations

Book ChapterDOI
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
06 Sep 2014
TL;DR: This work equips the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement, and develops a new network structure, called SPP-net, which can generate a fixed-length representation regardless of image size/scale.

Abstract: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.

3,854 citations

Journal ArticleDOI
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
TL;DR: This work equips the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement, and develops a new network structure, called SPP-net, which can generate a fixed-length representation regardless of image size/scale.

Abstract: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 $\times$ 224) input image. This requirement is “artificial” and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 $\times$ faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
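The mechanism that lets SPP-net accept arbitrary input sizes can be sketched in a few lines of NumPy (a simplified illustration, not the paper's implementation): each pyramid level splits the feature map into an n x n grid and max-pools every cell, so the concatenated output length depends only on the channel count and the pyramid levels, never on the spatial size.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length vector.

    Each level n partitions the map into an n x n grid and takes the max
    over every cell, so the output has length C * sum(n * n) regardless
    of H and W -- the core idea behind SPP-net's fixed-size representation.
    Pyramid levels (1, 2, 4) are an illustrative choice.
    """
    c, h, w = fmap.shape
    pooled = []
    for n in levels:
        # Cell boundaries span the whole map even when H, W aren't divisible by n
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Feature maps of different spatial sizes yield the same output length
v1 = spatial_pyramid_pool(np.random.rand(256, 13, 13))
v2 = spatial_pyramid_pool(np.random.rand(256, 10, 17))
```

With levels (1, 2, 4), both outputs have 256 * (1 + 4 + 16) = 5,376 entries, which is what allows a fixed-size fully connected layer to follow convolutional features of any resolution.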

3,685 citations

##### Network Information
###### Related Topics (5)
Ranking

18.7K papers, 434K citations

87% related
Bayesian probability

26.5K papers, 817.9K citations

86% related
Prior probability

14.8K papers, 428.9K citations

85% related
Exploit

3.1K papers, 56.5K citations

85% related
Decision tree

26.1K papers, 588.1K citations

84% related
##### Performance
###### Metrics
No. of papers in the topic in previous years:

| Year | Papers |
|------|--------|
| 2022 | 8 |
| 2021 | 563 |
| 2020 | 588 |
| 2019 | 522 |
| 2018 | 421 |
| 2017 | 342 |

###### Top Attributes

Topic's top 5 most impactful authors

Massimiliano Marcellino

16 papers, 505 citations

Enrique F. Schisterman

12 papers, 254 citations

Yudong Zhang

11 papers, 331 citations

Shuihua Wang

8 papers, 226 citations

Chunhua Shen

7 papers, 471 citations