
Showing papers on "Pooling published in 2014"


Book ChapterDOI
06 Sep 2014
TL;DR: This work equips the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement, and develops a new network structure, called SPP-net, which can generate a fixed-length representation regardless of image size/scale.
Abstract: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.

3,945 citations
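
A minimal NumPy sketch of the spatial pyramid pooling idea summarised in the SPP-net entry above: max-pooling a convolutional feature map over a pyramid of grids so that the output length is fixed regardless of the input size. The pyramid levels {1, 2, 4} and the per-channel max are illustrative choices, not the paper's exact configuration.

import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """feature_map: array of shape (H, W, C) with arbitrary H and W."""
    H, W, C = feature_map.shape
    pooled = []
    for n in levels:                                    # n x n grid at this pyramid level
        # bin edges chosen so every cell is non-empty for any input size
        h_edges = np.linspace(0, H, n + 1).astype(int)
        w_edges = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[h_edges[i]:max(h_edges[i + 1], h_edges[i] + 1),
                                   w_edges[j]:max(w_edges[j + 1], w_edges[j] + 1), :]
                pooled.append(cell.max(axis=(0, 1)))    # per-channel max pooling
    return np.concatenate(pooled)                       # length C * sum(n*n for n in levels)

# Two differently sized "feature maps" yield the same-length representation.
print(spatial_pyramid_pool(np.random.rand(13, 17, 8)).shape)   # (168,)
print(spatial_pyramid_pool(np.random.rand(40, 25, 8)).shape)   # (168,)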


Proceedings ArticleDOI
08 Apr 2014
TL;DR: A convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) is described that is adopted for the semantic modelling of sentences and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations.
Abstract: The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline.

3,476 citations
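
A minimal NumPy sketch of (dynamic) k-max pooling as described in the DCNN entry above: the k largest activations of each feature row are kept in their original order, so variable-length sentences map to a fixed-width output, and k for an intermediate layer is a function of the sentence length. The layer count and k_top below are illustrative values, not the paper's settings.

import numpy as np

def k_max_pool(features, k):
    """features: (d, s) matrix of d feature dimensions over a sentence of length s."""
    # indices of the k largest entries per row, re-sorted to preserve word order
    top_idx = np.argsort(features, axis=1)[:, -k:]
    top_idx = np.sort(top_idx, axis=1)
    return np.take_along_axis(features, top_idx, axis=1)   # shape (d, k)

def dynamic_k(layer, num_layers, sentence_len, k_top):
    # k for an intermediate layer scales with the remaining depth and sentence length
    return max(k_top, int(np.ceil((num_layers - layer) / num_layers * sentence_len)))

feats = np.random.randn(6, 23)                              # 6 feature dims, 23-word sentence
k = dynamic_k(layer=1, num_layers=3, sentence_len=23, k_top=4)   # -> 16 here
print(k_max_pool(feats, k).shape)                            # (6, 16)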


Book ChapterDOI
TL;DR: SPP-Net as mentioned in this paper proposes a spatial pyramid pooling strategy, which can generate a fixed-length representation regardless of image size/scale, and achieves state-of-the-art performance in object detection.
Abstract: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224x224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102x faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.

2,304 citations


Posted Content
TL;DR: A novel architecture which includes an efficient `position refinement' model that is trained to estimate the joint offset location within a small region of the image to achieve improved accuracy in human joint location estimation is introduced.
Abstract: Recent state-of-the-art performance on human-body pose estimation has been achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet architectures include pooling and sub-sampling layers which reduce computational requirements, introduce invariance and prevent over-training. These benefits of pooling come at the cost of reduced localization accuracy. We introduce a novel architecture which includes an efficient `position refinement' model that is trained to estimate the joint offset location within a small region of the image. This refinement model is jointly trained in cascade with a state-of-the-art ConvNet model to achieve improved accuracy in human joint location estimation. We show that the variance of our detector approaches the variance of human annotations on the FLIC dataset and outperforms all existing approaches on the MPII-human-pose dataset.

877 citations


Posted Content
TL;DR: This paper proposed the Dynamic Convolutional Neural Network (DCNN) which uses dynamic k-max pooling, a global pooling operation over linear sequences, to model sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations.
Abstract: The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline.

640 citations


Posted Content
TL;DR: The form of fractional max-pooling formulated is found to reduce overfitting on a variety of datasets: for instance, it improves on the state of the art for CIFAR-100 without even using dropout.
Abstract: Convolutional networks almost always incorporate some form of spatial pooling, and very often it is α×α max-pooling with α = 2. Max-pooling acts on the hidden layers of the network, reducing their size by an integer multiplicative factor α. The amazing by-product of discarding 75% of your data is that you build into the network a degree of invariance with respect to translations and elastic distortions. However, if you simply alternate convolutional layers with max-pooling layers, performance is limited due to the rapid reduction in spatial size, and the disjoint nature of the pooling regions. We have formulated a fractional version of max-pooling where α is allowed to take non-integer values. Our version of max-pooling is stochastic as there are lots of different ways of constructing suitable pooling regions. We find that our form of fractional max-pooling reduces overfitting on a variety of datasets: for instance, we improve on the state of the art for CIFAR-100 without even using dropout.

439 citations
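
A rough NumPy sketch of the fractional max-pooling idea from the entry above: pooling-region widths are a pseudorandom mix of 1s and 2s, so the spatial size shrinks by a non-integer factor alpha (here sqrt(2)). The boundary rule a_i = ceil(alpha * (i + u)) is one pseudorandom construction in the spirit of the paper; treat this as an illustration, not a faithful reimplementation.

import numpy as np

def fractional_boundaries(n_in, alpha, rng):
    n_out = int(np.floor(n_in / alpha))
    u = rng.uniform(0, 1)
    edges = np.ceil(alpha * (np.arange(n_out + 1) + u)).astype(int)
    edges = np.clip(edges - edges[0], 0, n_in)   # start at 0 and stay inside the axis
    edges[-1] = n_in                             # cover the whole axis
    return edges                                 # n_out + 1 region boundaries

def fractional_max_pool(x, alpha=np.sqrt(2), rng=np.random.default_rng(0)):
    """x: (H, W) single-channel map; output is roughly (H/alpha, W/alpha)."""
    he = fractional_boundaries(x.shape[0], alpha, rng)
    we = fractional_boundaries(x.shape[1], alpha, rng)
    out = np.empty((len(he) - 1, len(we) - 1))
    for i in range(len(he) - 1):
        for j in range(len(we) - 1):
            out[i, j] = x[he[i]:max(he[i + 1], he[i] + 1),
                          we[j]:max(we[j + 1], we[j] + 1)].max()
    return out

print(fractional_max_pool(np.random.rand(32, 32)).shape)   # (22, 22) here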


Journal ArticleDOI
TL;DR: The decision to calculate a summary estimate in a meta-analysis should be based on clinical judgment, the number of studies, and the degree of variation among studies, as well as on a random-effects model that incorporates study-to-study variability beyond what would be expected by chance.
Abstract: A primary goal of meta-analysis is to improve the estimation of treatment effects by pooling results of similar studies. This article discusses the problems associated with using the DerSimonian–Laird...

353 citations
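
For context, a small NumPy sketch of the standard DerSimonian–Laird random-effects pooling that the article above discusses: the between-study variance tau^2 is estimated by the method of moments and added to each study's within-study variance before inverse-variance weighting. The effect sizes and variances below are made-up toy values, not data from the article.

import numpy as np

def dersimonian_laird(y, v):
    """y: study effect estimates, v: their within-study variances."""
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)        # method-of-moments between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

y = np.array([0.35, 0.10, 0.52, 0.20, 0.41])       # hypothetical effect sizes
v = np.array([0.02, 0.03, 0.05, 0.01, 0.04])       # hypothetical within-study variances
print(dersimonian_laird(y, v))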


Book ChapterDOI
24 Oct 2014
TL;DR: A novel feature pooling method is proposed to regularize CNNs, which replaces the deterministic pooling operations with a stochastic procedure by randomly using the conventional max pooling and average pooling methods.
Abstract: Convolutional Neural Network (CNN) is a biologically inspired trainable architecture that can learn invariant features for a number of applications. In general, CNNs consist of alternating convolutional layers, non-linearity layers and feature pooling layers. In this work, a novel feature pooling method, named mixed pooling, is proposed to regularize CNNs; it replaces the deterministic pooling operations with a stochastic procedure that randomly uses the conventional max pooling and average pooling methods. The advantage of the proposed mixed pooling method lies in its ability to address the over-fitting problem encountered in CNN training. Experimental results on three benchmark image classification datasets demonstrate that the proposed mixed pooling method is superior to max pooling, average pooling and some other state-of-the-art works known in the literature.

328 citations
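
A minimal NumPy sketch of the mixed pooling rule summarised above, assuming non-overlapping 2x2 pooling regions and a per-operation coin flip between max and average pooling during training; the paper's test-time behaviour is not modelled here.

import numpy as np

def mixed_pool_2x2(x, rng=np.random.default_rng(0), training=True):
    """x: (H, W) map with even H and W; returns an (H/2, W/2) pooled map."""
    H, W = x.shape
    blocks = x.reshape(H // 2, 2, W // 2, 2)        # group pixels into 2x2 blocks
    if training and rng.random() < 0.5:             # randomly pick max or average pooling
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

print(mixed_pool_2x2(np.arange(16.0).reshape(4, 4)))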


Proceedings ArticleDOI
Naila Murray1, Florent Perronnin1
23 Jun 2014
TL;DR: This work proposes a novel pooling mechanism that achieves the same effect as max-pooling but is applicable beyond the BOV and especially to the state-of-the-art Fisher Vector -- hence the name Generalized Max Pooling (GMP).
Abstract: State-of-the-art patch-based image representations involve a pooling operation that aggregates statistics computed from local descriptors. Standard pooling operations include sum- and max-pooling. Sum-pooling lacks discriminability because the resulting representation is strongly influenced by frequent yet often uninformative descriptors, but only weakly influenced by rare yet potentially highly-informative ones. Max-pooling equalizes the influence of frequent and rare descriptors but is only applicable to representations that rely on count statistics, such as the bag-of-visual-words (BOV) and its soft- and sparse-coding extensions. We propose a novel pooling mechanism that achieves the same effect as max-pooling but is applicable beyond the BOV and especially to the state-of-the-art Fisher Vector -- hence the name Generalized Max Pooling (GMP). It involves equalizing the similarity between each patch and the pooled representation, which is shown to be equivalent to re-weighting the per-patch statistics. We show on five public image classification benchmarks that the proposed GMP can lead to significant performance gains with respect to heuristic alternatives.

203 citations
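
A minimal NumPy sketch of the Generalized Max Pooling idea described above: find a pooled vector phi whose dot product with every per-patch descriptor is (approximately) equal to one, by solving a ridge-regularised least-squares problem. The regularisation value below is an arbitrary placeholder, not a tuned setting.

import numpy as np

def generalized_max_pooling(X, lam=1.0):
    """X: (D, N) matrix whose columns are the N per-patch statistics."""
    D, N = X.shape
    # phi = argmin ||X^T phi - 1||^2 + lam * ||phi||^2
    phi = np.linalg.solve(X @ X.T + lam * np.eye(D), X @ np.ones(N))
    return phi

X = np.random.randn(64, 500)                  # 500 patch descriptors of dimension 64
phi = generalized_max_pooling(X)
print(phi.shape, (X.T @ phi).mean())          # per-patch similarities pushed towards 1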


Book ChapterDOI
06 Sep 2014
TL;DR: This work proposes a simple yet effective approach to the problem of pedestrian detection which outperforms the current state-of-the-art and directly optimises the partial area under the ROC curve (pAUC) measure, which concentrates detection performance in the range of most practical importance.
Abstract: We propose a simple yet effective approach to the problem of pedestrian detection which outperforms the current state-of-the-art. Our new features are built on the basis of low-level visual features and spatial pooling. Incorporating spatial pooling improves the translational invariance and thus the robustness of the detection process. We then directly optimise the partial area under the ROC curve (pAUC) measure, which concentrates detection performance in the range of most practical importance. The combination of these factors leads to a pedestrian detector which outperforms all competitors on all of the standard benchmark datasets. We advance state-of-the-art results by lowering the average miss rate from 13% to 11% on the INRIA benchmark, 41% to 37% on the ETH benchmark, 51% to 42% on the TUD-Brussels benchmark and 36% to 29% on the Caltech-USA benchmark.

163 citations
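
A small sketch of the partial AUC (pAUC) criterion that the entry above optimises: the area under the ROC curve restricted to a false-positive-rate range of practical interest. It is computed here with scikit-learn's roc_curve and the trapezoid rule; the [0, 0.1] FPR range and the toy scores are illustrative, and this shows only the evaluation measure, not the paper's structured learning of it.

import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, fpr_max=0.1):
    fpr, tpr, _ = roc_curve(y_true, scores)
    # restrict the ROC curve to [0, fpr_max], interpolating the right endpoint
    mask = fpr <= fpr_max
    fpr_c = np.append(fpr[mask], fpr_max)
    tpr_c = np.append(tpr[mask], np.interp(fpr_max, fpr, tpr))
    return np.trapz(tpr_c, fpr_c) / fpr_max      # normalised to [0, 1]

rng = np.random.default_rng(0)
y = np.concatenate([np.ones(100, dtype=int), np.zeros(900, dtype=int)])
s = np.concatenate([rng.normal(1.5, 1, 100), rng.normal(0, 1, 900)])
print(partial_auc(y, s))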


Journal ArticleDOI
01 Jan 2014-Genetics
TL;DR: An adaptive SPU (aSPU) test is proposed to approximate the most powerful SPU test for a given scenario, consequently maintaining high power and being highly adaptive across various scenarios.
Abstract: This article focuses on conducting global testing for association between a binary trait and a set of rare variants (RVs), although its application can be much broader to other types of traits, common variants (CVs), and gene set or pathway analysis. We show that many of the existing tests have deteriorating performance in the presence of many nonassociated RVs: their power can dramatically drop as the proportion of nonassociated RVs in the group to be tested increases. We propose a class of so-called sum of powered score (SPU) tests, each of which is based on the score vector from a general regression model and hence can deal with different types of traits and adjust for covariates, e.g., principal components accounting for population stratification. The SPU tests generalize the sum test, a representative burden test based on pooling or collapsing genotypes of RVs, and a sum of squared score (SSU) test that is closely related to several other powerful variance component tests; a previous study (Basu and Pan 2011) has demonstrated good performance of one, but not both, of the Sum and SSU tests in many situations. The SPU tests are versatile in the sense that one of them is often powerful, although its identity varies with the unknown true association parameters. We propose an adaptive SPU (aSPU) test to approximate the most powerful SPU test for a given scenario, consequently maintaining high power and being highly adaptive across various scenarios. We conducted extensive simulations to show superior performance of the aSPU test over several state-of-the-art association tests in the presence of many nonassociated RVs. Finally we applied the SPU and aSPU tests to the GAW17 mini-exome sequence data to compare its practical performance with some existing tests, demonstrating their potential usefulness.
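
A rough NumPy sketch of the sum-of-powered-score (SPU) idea described above: the statistic for power gamma is the sum of the powered components of the score vector U, and the adaptive aSPU test takes the minimum permutation p-value across a set of gamma values. This toy version uses the covariate-free score U = X^T (y - mean(y)), omits gamma = infinity (the max statistic), and uses a small permutation count, purely for illustration.

import numpy as np

def spu_statistics(U, gammas):
    # SPU(gamma) = sum_j U_j^gamma; gamma=1 is the burden/sum test, gamma=2 the SSU test
    return np.array([np.sum(U ** g) for g in gammas])

def aspu_test(X, y, gammas=(1, 2, 3, 4), n_perm=500, seed=0):
    """X: (n, p) genotype matrix, y: (n,) binary trait; no covariate adjustment."""
    rng = np.random.default_rng(seed)
    score = lambda yy: X.T @ (yy - yy.mean())          # score vector under the null
    T_obs = spu_statistics(score(y), gammas)
    T_perm = np.array([spu_statistics(score(rng.permutation(y)), gammas)
                       for _ in range(n_perm)])
    # permutation p-value for each SPU(gamma)
    p_spu = (1 + (np.abs(T_perm) >= np.abs(T_obs)).sum(axis=0)) / (n_perm + 1)
    # p-value of every *permuted* statistic among the permutations, per gamma
    ranks = (np.abs(T_perm)[:, None, :] >= np.abs(T_perm)[None, :, :]).sum(axis=0)
    min_p_perm = (ranks / n_perm).min(axis=1)
    # aSPU p-value: how often a permutation attains a smaller minimum p-value
    p_aspu = (1 + (min_p_perm <= p_spu.min()).sum()) / (n_perm + 1)
    return p_spu, p_aspu

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 30)).astype(float)   # toy rare-variant genotype counts
y = rng.integers(0, 2, size=200).astype(float)         # toy binary trait
print(aspu_test(X, y))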

Posted Content
TL;DR: In this article, a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations, was introduced.
Abstract: We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training.

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This work addresses the false response influence problem when learning and applying discriminative parts to construct the mid-level representation in scene classification by learning important spatial pooling regions along with their appearance and achieves state-of-the-art performance on several datasets.
Abstract: We address the false response influence problem when learning and applying discriminative parts to construct the mid-level representation in scene classification. It is often caused by the complexity of latent image structure when convolving part filters with input images. This problem makes mid-level representation, even after pooling, not distinct enough to classify input data correctly to categories. Our solution is to learn important spatial pooling regions along with their appearance. The experiments show that this new framework suppresses false response and produces improved results on several datasets, including MIT-Indoor, 15-Scene, and UIUC 8-Sport. When combined with global image features, our method achieves state-of-the-art performance on these datasets.

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This paper provides evidence for a positive answer for 3D pose estimation from RGB images by leveraging 2D human body part labeling in images, second-order label-sensitive pooling over dynamically computed regions resulting from a hierarchical decomposition of the body, and iterative structured-output modeling to contextualize the process based on3D pose estimates.
Abstract: Recently, the emergence of Kinect systems has demonstrated the benefits of predicting an intermediate body part labeling for 3D human pose estimation, in conjunction with RGB-D imagery. The availability of depth information plays a critical role, so an important question is whether a similar representation can be developed with sufficient robustness in order to estimate 3D pose from RGB images. This paper provides evidence for a positive answer, by leveraging (a) 2D human body part labeling in images, (b) second-order label-sensitive pooling over dynamically computed regions resulting from a hierarchical decomposition of the body, and (c) iterative structured-output modeling to contextualize the process based on 3D pose estimates. For robustness and generalization, we take advantage of a recent large-scale 3D human motion capture dataset, Human3.6M [18], that also has human body part labeling annotations available with images. We provide extensive experimental studies where alternative intermediate representations are compared and report a substantial 33% error reduction over competitive discriminative baselines that regress 3D human pose against global HOG features.

Journal ArticleDOI
TL;DR: This paper investigates the behaviors of single indicator models, forecast combinations and factor models, in a pseudo real-time framework, and discusses their performances for nowcasting the quarterly growth rate of the Euro area GDP and its components, using a very large set of monthly indicators.

Journal ArticleDOI
TL;DR: This work proposes a novel framework with spatial pooling of complementary features for combining texture and edge-based local features together at the descriptor extraction level and shows the superior performance of the algorithm over the state-of-the-art methods.
Abstract: In image classification tasks, one of the most successful algorithms is the bag-of-features (BoFs) model. Although the BoF model has many advantages, such as simplicity, generality, and scalability, it still suffers from several drawbacks, including the limited semantic description of local descriptors, lack of robust structures upon single visual words, and missing of efficient spatial weighting. To overcome these shortcomings, various techniques have been proposed, such as extracting multiple descriptors, spatial context modeling, and interest region detection. Though they have been proven to improve the BoF model to some extent, there still lacks a coherent scheme to integrate each individual module together. To address the problems above, we propose a novel framework with spatial pooling of complementary features. Our model expands the traditional BoF model on three aspects. First, we propose a new scheme for combining texture and edge-based local features together at the descriptor extraction level. Next, we build geometric visual phrases to model spatial context upon complementary features for midlevel image representation. Finally, based on a smoothed edgemap, a simple and effective spatial weighting scheme is performed to capture the image saliency. We test the proposed framework on several benchmark data sets for image classification. The extensive results show the superior performance of our algorithm over the state-of-the-art methods.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method significantly outperformed the widely used SIFT feature and the winner of the ICPR 2012 contest, and an encouraging 100% image-level accuracy was achieved on the SZU dataset.

Proceedings ArticleDOI
01 Oct 2014
TL;DR: Experimental results show that without training on human opinion scores the proposed method is comparable to state-of-the-art NR-VQA algorithms.
Abstract: In this paper, we propose a novel “Opinion Free” (OF) No-Reference Video Quality Assessment (NR-VQA) algorithm based on frame-level unsupervised feature learning and hysteresis temporal pooling. The system consists of three components: feature extraction with max-min pooling, frame quality prediction and temporal pooling. Frame level features are first extracted by unsupervised feature learning and used to train a linear Support Vector Regressor (SVR) for predicting quality scores frame by frame. Frame-level quality scores are then combined by temporal pooling to obtain a single video quality score. We tested the proposed method on the LIVE video quality database and experimental results show that without training on human opinion scores the proposed method is comparable to state-of-the-art NR-VQA algorithms.
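
A highly simplified sketch of the pipeline structure described above, under several assumptions: per-frame features are formed by concatenating max- and min-pooled local codes, a linear SVR maps frame features to frame-level quality scores, and temporal pooling is a plain mean rather than the hysteresis rule used in the paper. All data below are random placeholders, and the training targets merely stand in for whatever objective frame scores the method actually regresses against.

import numpy as np
from sklearn.svm import LinearSVR

def frame_features(local_feats):
    """local_feats: (n_patches, d) codes for one frame -> max-min pooled vector."""
    return np.concatenate([local_feats.max(axis=0), local_feats.min(axis=0)])

rng = np.random.default_rng(0)
train_X = np.stack([frame_features(rng.normal(size=(200, 32))) for _ in range(500)])
train_y = rng.uniform(0, 100, size=500)            # placeholder frame-quality targets
svr = LinearSVR().fit(train_X, train_y)            # frame-level quality predictor

video = [frame_features(rng.normal(size=(200, 32))) for _ in range(90)]   # 90 frames
frame_scores = svr.predict(np.stack(video))
video_score = frame_scores.mean()                  # simple temporal pooling of frame scores
print(video_score)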

Journal ArticleDOI
TL;DR: In this paper, the authors show that combining five of the leading forecasting models with equal weights dominates the strategy of selecting one model and using it for all horizons up to two years.

Journal ArticleDOI
TL;DR: The use of pooled samples is advantageous as it saves significantly on analytical costs, may reduce the time and resources required for recruitment and, in certain circumstances, allows quantification of samples approaching the limit of detection as mentioned in this paper.
Abstract: Biomonitoring has become the “gold standard” in assessing chemical exposures, and has an important role in risk assessment. The pooling of biological specimens—combining multiple individual specimens into a single sample—can be used in biomonitoring studies to monitor levels of exposure and identify exposure trends or to identify susceptible populations in a cost-effective manner. Pooled samples provide an estimate of central tendency and may also reveal information about variation within the population. The development of a pooling strategy requires careful consideration of the type and number of samples collected, the number of pools required and the number of specimens to combine per pool in order to maximise the type and robustness of the data. Creative pooling strategies can be used to explore exposure–outcome associations, and extrapolation from other larger studies can be useful in identifying elevated exposures in specific individuals. The use of pooled specimens is advantageous as it saves significantly on analytical costs, may reduce the time and resources required for recruitment and, in certain circumstances, allows quantification of samples approaching the limit of detection. In addition, the use of pooled samples can provide population estimates while avoiding ethical difficulties that may be associated with reporting individual results.

Journal ArticleDOI
TL;DR: The results challenge the widely held assumption that crowding can be wholly explained by compulsory pooling and are directly compared using an analytical approach sensitive to both alternatives.
Abstract: Visual perception is dramatically impaired when a peripheral target is embedded within clutter, a phenomenon known as visual crowding. Despite decades of study, the mechanisms underlying crowding remain a matter of debate. Feature pooling models assert that crowding results from a compulsory pooling (e.g., averaging) of target and distractor features. This view has been extraordinarily influential in recent years, so much so that crowding is typically regarded as synonymous with pooling. However, many demonstrations of feature pooling can also be accommodated by a probabilistic substitution model where observers occasionally report a distractor as the target. Here, we directly compared pooling and substitution using an analytical approach sensitive to both alternatives. In four experiments, we asked observers to report the precise orientation of a target stimulus flanked by two irrelevant distractors. In all cases, the observed data were well described by a quantitative model that assumes probabilistic substitution, and poorly described by a quantitative model that assumes that targets and distractors are averaged. These results challenge the widely held assumption that crowding can be wholly explained by compulsory pooling.

Journal ArticleDOI
TL;DR: In this paper, a trend centered pooling approach is proposed for regionalization in which pooling groups are created based on the form of trend found in the at-site data.

Patent
01 Jan 2014
TL;DR: In this paper, a method for taxi pooling, private-car sharing and hitchhiking is proposed, which comprises the steps of entering demand information, matching the demand information to obtain car pooling information, and finally generating a cost breakdown and a trip history automatically.
Abstract: The invention discloses a method for taxi pooling, private-car sharing and hitchhiking. The method comprises the steps of entering demand information; matching the demand information to produce car pooling information; displaying the car pooling information graphically on a client terminal; obtaining the taxi information and taking the taxi, or taking the private car, according to the car pooling information once the user confirms that the taxi or private car has set out on time; and finally generating a cost breakdown and a trip history automatically. Because taxi pooling, private-car sharing and hitchhiking can all be booked in advance, the user can conveniently plan trips ahead of time. Limiting conditions can be set in the demand information, so car pooling among acquaintances is possible, making car pooling safer and more reliable; the car pooling information is displayed graphically, making it vivid and intuitive; and many-to-many matching among users can be carried out conveniently. Finally, both the cost breakdown and the trip history are generated automatically, which simplifies user operation and improves the user experience. The invention also discloses a corresponding system for taxi pooling, private-car sharing and hitchhiking.

Book ChapterDOI
01 Jan 2014
TL;DR: In this article, the main concepts of logistics pooling and their applications to urban delivery services are presented, and an information systems-based framework for planning and evaluation is described, from which a set of indicators are identified.
Abstract: In logistics and freight transport, collaboration and pooling are popular strategies, in practice, that remain less explored in research. In recent years, collaborative transportation and pooling have become urban logistics alternatives to classical urban consolidation centres, but remain in a developmental stage. This chapter proposes a framework for urban logistics pooling, strategic planning and ex-ante evaluation. First, the main concepts of logistics pooling and their applications to urban delivery services are presented. Then, an information systems-based framework for planning and evaluation is described, from which a set of indicators are identified. To illustrate this framework, a case study from a French urban logistics pooling system is proposed.

Posted Content
TL;DR: The first face feature extraction method by directly pooling raw patches is proposed, which can capture strong structural information of individual faces and achieves state-of-the-art results on several face recognition datasets.
Abstract: We propose a very simple, efficient yet surprisingly effective feature extraction method for face recognition (about 20 lines of Matlab code), which is mainly inspired by spatial pyramid pooling in generic image classification. We show that features formed by simply pooling local patches over a multi-level pyramid, coupled with a linear classifier, can significantly outperform most recent face recognition methods. The simplicity of our feature extraction procedure is demonstrated by the fact that no learning is involved (except PCA whitening). We show that, multi-level spatial pooling and dense extraction of multi-scale patches play critical roles in face image classification. The extracted facial features can capture strong structural information of individual faces with no label information being used. We also find that, pre-processing on local image patches such as contrast normalization can have an important impact on the classification accuracy. In particular, on the challenging face recognition datasets of FERET and LFW-a, our method improves previous best results by more than 10% and 20%, respectively.

Journal ArticleDOI
TL;DR: In this paper, the authors applied the pooling concept to a collection of small and medium-sized western France food suppliers serving the same retail chain, and compared the efficiency of the existing transport organisation with various pooling scenarios.
Abstract: Supply chain pooling is an emergent strategy for improving logistical performance. The pooling concept consists in transferring the effort of coordination for consolidating independent operators' flows towards an ad hoc pooled system. This organisation results from the design of a pooled logistics network that merges different supply chains to share transport and logistics resources in order to improve logistics performance. In this case study, the pooling concept is applied to a collection of small and medium-sized western France food suppliers serving the same retail chain. In order to demonstrate the efficiency of pooling, the existing transport organisation was compared to various pooling scenarios. The methodology consisted in assessing the current situation through a survey of the flow of goods at one of the main distribution centres of the studied supply network, then comparing this situation with three other pooling scenarios. Using supply network optimisation models, these scenarios were assessed considering cost and CO2 emission levels. This study demonstrates the interest of transport pooling in the case of independent shipping networks of small and medium enterprises, compared to the partially known existing strategies adopted by logistics service providers for less-than-truckload shipments. Moreover, it suggests that there is no dominant supply organisation and that transport pooling is a new stimulus for network design. These results also bring new research perspectives for the generalisation of pooling and gain sharing within large coalitions.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a wireless M2M communication scenario with a massive number of MTCs and showed how to dimension the pool of resources available to each MTC to guarantee the desired reliability of the report delivery within a given deadline.
Abstract: This letter considers a wireless M2M communication scenario with a massive number of M2M devices. Each device needs to send its reports within a given deadline and with a certain reliability, e.g., 99.99%. A pool of resources available to all M2M devices is periodically available for transmission. The number of transmissions required by an M2M device within the pool is random for two reasons: a random number of reports has arrived since the last reporting opportunity, and retransmissions are requested due to random channel errors. We show how to dimension the pool of M2M-dedicated resources in order to guarantee the desired reliability of report delivery within the deadline. The fact that the pool of resources is used by a massive number of devices allows basing the dimensioning on the central limit theorem. The results are interpreted in the context of LTE, but they are applicable to any M2M communication system.
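
A back-of-the-envelope sketch of the CLT-based dimensioning described above, under the assumption that each device's required number of transmissions per pool period has a known mean and variance: the aggregate demand of many devices is approximately Gaussian, so the pool size N is chosen so that demand exceeds N only with probability 1 - reliability. The numbers below are illustrative, not taken from the letter.

import math
from scipy.stats import norm

M = 30000              # number of M2M devices sharing the pool (assumed)
mu, var = 0.05, 0.10   # per-device mean and variance of required transmissions (assumed)
reliability = 0.9999

mean_total = M * mu                         # mean of the aggregate demand
std_total = math.sqrt(M * var)              # standard deviation of the aggregate demand
N = math.ceil(mean_total + norm.ppf(reliability) * std_total)
print(N)                                    # pool size (transmission resources) per period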

Posted Content
TL;DR: In this article, the authors applied the pooling concept to a collection of small and medium-sized western France food suppliers serving the same retail chain, and compared the efficiency of the existing transport organisation with various pooling scenarios.
Abstract: Supply chains pooling is an emergent strategy for improving logistical performance. The pooling concept consists in transferring the effort of coordination for consolidating independent operators' flows towards an ad hoc pooled system. This organisation results from a design of a pooled logistics network by merging different supply chains to share transport and logistics resources in order to improve logistics performance. In this case study, the pooling concept is applied to a collection of small and medium-sized western France food suppliers serving the same retail chain. In order to demonstrate the efficiency of the pooling, the existing transport organisation was compared to various pooling scenarios. The methodology consisted in accessing a current situation through a survey of the flow of goods at one of the main distribution centre of the studied supply network, then comparing this situation with three other pooling scenarios. Using supply network optimisation models, these scenarios were assessed considering cost and CO2 emission levels. This study demonstrates the interest of transport pooling in the case independent shipping networks of Small and Medium Enterprises compared to the partially know existing strategies adopted by logistics service providers for less than truckload shipments. Moreover, it suggests that there is no dominant supply organisation and that transport pooling is a new stimulus for network design. These results also bring new research perspectives for generalisation of pooling and gain sharing within large coalitions.

Journal ArticleDOI
TL;DR: In this article, the authors consider a signaling model in which receivers observe both the sender's costly signal and a stochastic grade that is correlated with the receiver's type, and derive a necessary and sufficient condition that the grade is sufficiently informative relative to the dispersion of (marginal) signaling costs across types.

Journal ArticleDOI
TL;DR: A cost allocation rule is proposed that allocates the total costs proportionally to players' demand rates; this rule makes all individual participants and subgroups of participants better off, stimulates growth of the pool, and induces players to reveal their private information truthfully.