scispace - formally typeset

Showing papers on "Pooling published in 2006"


Journal ArticleDOI
TL;DR: A simulation approach was used to clarify the application of random effects under three common situations for telemetry studies; random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection.
Abstract: 1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and that resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, has not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection.
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
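The pooling pitfall described above can be seen with a toy calculation (hypothetical numbers, not the grizzly bear data): with unbalanced relocation counts, a naively pooled selection estimate is dominated by the most-sampled animal, whereas averaging per-animal estimates, roughly what a random intercept buys, treats animals equally.

```python
import numpy as np

# Hypothetical example: three animals with different true selection
# probabilities for a habitat type, tracked with unbalanced sample sizes.
p_true = np.array([0.2, 0.5, 0.8])   # per-animal selection probability
n_obs = np.array([900, 50, 50])      # unbalanced relocation counts

used = p_true * n_obs                        # expected "used" relocations
pooled_estimate = used.sum() / n_obs.sum()   # naive pooled proportion
per_animal_mean = p_true.mean()              # equal weight per animal

print(pooled_estimate)  # pulled toward the heavily sampled animal
print(per_animal_mean)
```

Here the pooled estimate (0.245) sits far from the average animal's selection (0.5) purely because of the sampling design.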

718 citations


Journal ArticleDOI
TL;DR: It is illustrated that resource selection models are part of a broader collection of statistical models called weighted distributions and recommend some promising areas for future development.
Abstract: We review 87 articles published in the Journal of Wildlife Management from 2000 to 2004 to assess the current state of practice in the design and analysis of resource selection studies. Articles were classified into 4 study designs. In design 1, data are collected at the population level because individual animals are not identified. Individual animal selection may be assessed in designs 2 and 3. In design 2, use by each animal is recorded, but availability (or nonuse) is measured only at the population level. Use and availability (or nonuse) are measured for each animal in design 3. In design 4, resource use is measured multiple times for each animal, and availability (or nonuse) is measured for each use location. Thus, use and availability measures are paired for each use in design 4. The 4 study designs were used about equally in the articles reviewed. The most commonly used statistical analyses were logistic regression (40%) and compositional analysis (25%). We illustrate 4 problem areas in resource selection analyses: pooling of relocation data across animals with differing numbers of relocations, analyzing paired data as though they were independent, tests that do not control experimentwise error rates, and modeling observations as if they were independent when temporal or spatial correlation occurs in the data. Statistical models that allow for variation in individual animal selection rather than pooling are recommended to improve error estimation in population-level selection. Some researchers did not select appropriate statistical analyses for paired data, or their analyses were not well described. Researchers using one-resource-at-a-time procedures often did not control the experimentwise error rate, so simultaneous inference procedures and multivariate assessments of selection are suggested. The time interval between animal relocations was often relatively short, but existing analyses for temporally or spatially correlated data were not used. For studies that used logistic regression, we identified the data type employed: single sample, case-control (used/unused), use-availability, or paired use-availability. It was not always clear whether studies intended to compare use to nonuse or use to availability. Despite the popularity of compositional analysis, we do not recommend it for multiple relocation data when use of one or more resources is low. We illustrate that resource selection models are part of a broader collection of statistical models called weighted distributions and recommend some promising areas for future development.

649 citations


Journal ArticleDOI
TL;DR: The authors describe the methods being used to summarize data on diet-cancer associations within the ongoing Pooling Project of Prospective Studies of Diet and Cancer, begun in 1991.
Abstract: With the growing number of epidemiologic publications on the relation between dietary factors and cancer risk, pooled analyses that summarize results from multiple studies are becoming more common. Here, the authors describe the methods being used to summarize data on diet-cancer associations within the ongoing Pooling Project of Prospective Studies of Diet and Cancer, begun in 1991. In the Pooling Project, the primary data from prospective cohort studies meeting prespecified inclusion criteria are analyzed using standardized criteria for modeling of exposure, confounding, and outcome variables. In addition to evaluating main exposure-disease associations, analyses are also conducted to evaluate whether exposure-disease associations are modified by other dietary and nondietary factors or vary among population subgroups or particular cancer subtypes. Study-specific relative risks are calculated using the Cox proportional hazards model and then pooled using a random- or mixed-effects model. The study-specific estimates are weighted by the inverse of their variances in forming summary estimates. Most of the methods used in the Pooling Project may be adapted for examining associations with dietary and nondietary factors in pooled analyses of case-control studies or case-control and cohort studies combined.
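The inverse-variance weighting step can be sketched in a few lines (illustrative numbers, not Pooling Project estimates; a random-effects version would add an estimated between-study variance to each entry of `var` before weighting):

```python
import numpy as np

# Study-specific log relative risks and their variances (hypothetical).
beta = np.array([0.10, 0.25, 0.18])
var = np.array([0.04, 0.09, 0.02])

w = 1.0 / var                        # inverse-variance weights
pooled = (w * beta).sum() / w.sum()  # summary log relative risk
se_pooled = np.sqrt(1.0 / w.sum())   # standard error of the summary
```

Note that the summary's standard error is smaller than that of the most precise single study, which is the precision gain pooling is after.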

289 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore the tradeoff between the benefits of labour pooling and the costs of labour poaching in a duopoly game and show that co-location is not in general the non-cooperative equilibrium outcome.

235 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: Three spatial pooling methods for the second stage of perceptual image quality assessment are investigated: Minkowski pooling, local quality/distortion-weighted pooling, and information content-weighted pooling.
Abstract: Many recently proposed perceptual image quality assessment algorithms are implemented in two stages. In the first stage, image quality is evaluated within local regions. This results in a quality/distortion map over the image space. In the second stage, a spatial pooling algorithm is employed that combines the quality/distortion map into a single quality score. While great effort has been devoted to developing algorithms for the first stage, little has been done to find the best strategies for the second stage (and a simple spatial average is often used). In this work, we investigate three spatial pooling methods for the second stage: Minkowski pooling, local quality/distortion-weighted pooling, and information content-weighted pooling. Extensive experiments with the LIVE database show that all three methods may improve the prediction performance of perceptual image quality measures, but the third method demonstrates the best potential to be a general and robust method that leads to consistent improvement over a wide range of image distortion types.
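A minimal sketch of the first two pooling rules on a toy quality map (hypothetical values; the paper's information-content weights come from a statistical model of the image, not shown here):

```python
import numpy as np

def minkowski_pool(qmap, p=2.0):
    # Minkowski pooling: p=1 is the plain spatial average; larger p
    # emphasizes the largest-magnitude local values.
    return float((np.abs(qmap) ** p).mean() ** (1.0 / p))

def weighted_pool(qmap, weights):
    # Local weighted pooling: weight each region's quality score,
    # e.g., by estimated information content or distortion severity.
    return float((weights * qmap).sum() / weights.sum())

qmap = np.array([[0.9, 0.8], [0.7, 1.0]])     # local quality scores
weights = np.array([[1.0, 2.0], [1.0, 1.0]])  # hypothetical weights
```

With p = 1 the Minkowski rule reduces to the simple spatial average criticized above, which makes the two-stage framing easy to test.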

207 citations


Journal ArticleDOI
TL;DR: In this paper, the sum capacity of the multiple-access uplink channel with multiple-input-multiple-output (MIMO) links and interference from other cells is investigated.
Abstract: Scaling results for the sum capacity of the multiple access, uplink channel are provided for a flat-fading environment, with multiple-input-multiple-output (MIMO) links, when there is interference from other cells. The classical MIMO scaling regime is considered in which the number of antennas per user and per base station grow large together. Utilizing the known characterizations of the limiting eigenvalue distributions of large random matrices, the asymptotic behavior of the sum capacity of the system is characterized for an architecture in which the base stations cooperate in the joint decoding process of all users (macrodiversity). This asymptotic sum capacity is compared with that of the conventional scenario in which the base stations only decode the users in their cells. For the case of base station cooperation, an interesting "resource pooling" phenomenon is observed: in some cases, the limiting performance of a macrodiversity multiuser network has the same asymptotic behavior as that of a single-user MIMO link with an equivalent amount of pooled received power. This resource pooling phenomenon allows us to derive an elegant closed-form expression for the sum capacity of a new version of Wyner's classical model of a cellular network, in which MIMO links are incorporated into the model.
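For intuition, the single-user MIMO benchmark against which the pooled system is compared is the familiar log-det capacity; a sketch with a synthetic i.i.d. Rayleigh channel (not the paper's cellular model; `n` and `snr` are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8       # antennas at each end
snr = 10.0  # total receive SNR, spread across the n transmit antennas

# Synthetic i.i.d. complex Gaussian channel matrix, unit-variance entries.
H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

# Single-user MIMO capacity C = log2 det(I + (snr/n) H H^*), bits/channel use.
C = float(np.log2(np.linalg.det(np.eye(n) + (snr / n) * H @ H.conj().T).real))
```

The "resource pooling" result says that, in some regimes, a macrodiversity network behaves asymptotically like this single link fed the equivalent pooled received power.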

149 citations


Journal ArticleDOI
TL;DR: In this paper, a piecewise linear RLT formulation is proposed and applied to the class of generalized pooling problems for a combinatorially complex industrial problem containing 156 bilinear terms and 55 binary variables, reducing the gap between upper and lower bounds to within 1.2%.
Abstract: Global optimization strategies are described for a generalization of the pooling problem that is important to the petrochemical, chemical, and wastewater treatment industries. The problem involves both discrete variables, modeling the structure of a flow network, and continuous variables, modeling flow rates, and stream attributes. The continuous relaxation of this mixed integer nonlinear programming problem is nonconvex because of the presence of bilinear terms in the constraint functions. We propose an algorithm to find the global solution using the principles of the reformulation-linearization technique (RLT). A novel piecewise linear RLT formulation is proposed and applied to the class of generalized pooling problems. Using this approach we verify the global solution of a combinatorially complex industrial problem containing 156 bilinear terms and 55 binary variables, reducing the gap between upper and lower bounds to within 1.2%. © 2005 American Institute of Chemical Engineers AIChE J, 2006

144 citations


01 Jan 2006
TL;DR: An interesting "resource pooling" phenomenon is observed: in some cases, the limiting performance of a macrodiversity multiuser network has the same asymptotic behavior as that of a single-user MIMO link with an equivalent amount of pooled received power.
Abstract: Scaling results for the sum capacity of the multiple access, uplink channel are provided for a flat-fading environment, with multiple-input-multiple-output (MIMO) links, when there is interference from other cells. The classical MIMO scaling regime is considered in which the number of antennas per user and per base station grow large together. Utilizing the known characterizations of the limiting eigenvalue distributions of large random matrices, the asymptotic behavior of the sum capacity of the system is characterized for an architecture in which the base stations cooperate in the joint decoding process of all users (macrodiversity). This asymptotic sum capacity is compared with that of the conventional scenario in which the base stations only decode the users in their cells. For the case of base station cooperation, an interesting "resource pooling" phenomenon is observed: in some cases, the limiting performance of a macrodiversity multiuser network has the same asymptotic behavior as that of a single-user MIMO link with an equivalent amount of pooled received power. This resource pooling phenomenon allows us to derive an elegant closed-form expression for the sum capacity of a new version of Wyner's classical model of a cellular network, in which MIMO links are incorporated into the model.

118 citations


Posted Content
TL;DR: In this article, the authors present a rational model of earnings management, where an informed manager, whose compensation is linked to the stock price, trades off the benefit of boosting the reported earnings against the costs of such manipulation, and the investors rationally interpret his actions and adjust the price accordingly.
Abstract: We present a rational model of earnings management. An informed manager, whose compensation is linked to the stock price, trades off the benefit of boosting the stock price by inflating the reported earnings against the costs of such manipulation. The investors rationally interpret his actions and adjust the price accordingly. When the distribution of true earnings and the compensation scheme are smooth, the conventional equilibrium in this signaling framework is also smooth and fully revealing. In this paper, we show that in the same "smooth" environment there exist equilibria in which kinks and discontinuities emerge endogenously in the distribution of reported earnings. The manager optimally chooses a partially pooling strategy, introducing endogenous noise into his report. The resulting vagueness enables the manager to reduce the average manipulation costs. The equilibrium has perfect revelation of earnings in the right and left tails of the distribution, while for intermediate earnings realizations we get one or more pools that manifest themselves as discontinuities in the distribution of reported earnings. We study the properties of these partially pooling equilibria and suggest applications to financial reporting.

101 citations


Journal ArticleDOI
TL;DR: This paper presents a new non-adaptive combinatorial pooling design, the shifted transversal design, which rests on two intuitive ideas: minimizing the co-occurrence of objects and constructing pools with constant-sized intersections, allowing unambiguous decoding of noisy experimental observations.
Abstract: In binary high-throughput screening projects where the goal is the identification of low-frequency events, beyond the obvious issue of efficiency, false positives and false negatives are a major concern. Pooling constitutes a natural solution: it reduces the number of tests, while providing critical duplication of the individual experiments, thereby correcting for experimental noise. The main difficulty consists in designing the pools in a manner that is both efficient and robust: few pools should be necessary to correct the errors and identify the positives, yet the experiment should not be too vulnerable to biological shakiness. For example, some information should still be obtained even if there are slightly more positives or errors than expected. This is known as the group testing problem, or pooling problem. In this paper, we present a new non-adaptive combinatorial pooling design: the "shifted transversal design" (STD). It relies on arithmetic, and rests on two intuitive ideas: minimizing the co-occurrence of objects, and constructing pools of constant-sized intersections. We prove that it allows unambiguous decoding of noisy experimental observations. This design is highly flexible, and can be tailored to function robustly in a wide range of experimental settings (i.e., numbers of objects, fractions of positives, and expected error rates). Furthermore, we show that our design compares favorably, in terms of efficiency, to previously described non-adaptive combinatorial pooling designs. This method is currently being validated by field-testing in the context of yeast two-hybrid interactome mapping, in collaboration with Marc Vidal's lab at the Dana-Farber Cancer Institute. Many similar projects could benefit from using the shifted transversal design.
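A sketch of the layered construction in its simplest setting (compression parameter Gamma = 1, so up to q^2 objects over a prime q; the published design generalizes to higher Gamma and appends extra layers). Any two distinct objects then share at most one layer's pool, so intersecting the positive pools across three layers pins down a single error-free positive:

```python
from functools import reduce

def std_design(n, q, layers):
    # Shifted transversal design, Gamma = 1: object x = x0 + q*x1 goes to
    # pool (x0 + j*x1) mod q in layer j.  design[j][i] is pool i of layer j.
    design = []
    for j in range(layers):
        pools = [set() for _ in range(q)]
        for x in range(n):
            pools[(x % q + j * (x // q)) % q].add(x)
        design.append(pools)
    return design

def decode_single_positive(design, positive_pools):
    # With one positive and no errors, it is the unique object lying in
    # the positive pool of every layer.
    return reduce(set.intersection,
                  (design[j][i] for j, i in enumerate(positive_pools)))

q, layers = 7, 3
design = std_design(q * q, q, layers)
secret = 23                                           # the lone positive
signature = [(secret % q + j * (secret // q)) % q for j in range(layers)]
```

Every pool has exactly q members (each pool collects one object per value of x1), which is the "constant-sized intersections" idea in miniature; extra layers buy error correction.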

91 citations


Journal ArticleDOI
TL;DR: In this paper, the authors generalize Eppen's analysis to arbitrary distributions and show that centralization or pooling of inventories is more valuable when demands are less positively dependent.
Abstract: Eppen (1979) showed that inventory costs in a centralized system increase with the correlation between multivariate normal product demands. Using multivariate stochastic orders, we generalize this statement to arbitrary distributions. We then describe methods to construct models with arbitrary dependence structure, using the copula of a multivariate distribution to capture the dependence between the components of a random vector. For broad classes of distributions with arbitrary marginals, we confirm that centralization or pooling of inventories is more valuable when demands are less positively dependent.
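The normal-demand case behind Eppen's original result makes the dependence effect concrete (illustrative safety factor and standard deviations; the paper's contribution is that the comparison survives for arbitrary marginals via the copula):

```python
import math

z = 1.645            # safety factor for roughly 95% service under normality
s1, s2 = 40.0, 30.0  # demand standard deviations at the two locations

separate = z * (s1 + s2)  # total safety stock when held at each location

def pooled_stock(rho):
    # Centralized safety stock; the pooled demand variance adds a
    # 2*rho*s1*s2 covariance term, so the benefit shrinks as rho grows.
    return z * math.sqrt(s1 ** 2 + s2 ** 2 + 2 * rho * s1 * s2)
```

`pooled_stock(rho)` increases with rho and only reaches `separate` at perfect positive correlation, mirroring the claim that pooling is more valuable when demands are less positively dependent.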

Journal ArticleDOI
TL;DR: In this paper, a critical review of the literature on pooling lead-time risk by simultaneously splitting replenishment orders among several suppliers is conducted and important and persistent limitations of current research are revealed that may make the policy uneconomical, in general.
Abstract: After more than 20 years of extensive study, the policy of pooling lead-time risk by simultaneously splitting replenishment orders among several suppliers continues to attract the attention of researchers. A critical review of the extant literature is conducted. Important and persistent limitations of current research are revealed that may make the policy uneconomical in general, and more promising research directions are suggested.

Posted Content
TL;DR: In this article, the authors show that the magnitude of the optimal pay-go program and the nature of the underlying risk sharing effects are very sensitive to the chosen combination of risk concepts and stochastic specification of long run aggregate wage income growth.
Abstract: A pay-as-you-go (paygo) pension program may provide intergenerational pooling of risks to individuals’ labor and capital income over the life cycle. By means of illuminating closed form solutions we demonstrate that the magnitude of the optimal paygo program and the nature of the underlying risk sharing effects are very sensitive to the chosen combination of risk concepts and stochastic specification of long run aggregate wage income growth. In an additive way we distinguish between the pooling of wage and capital risks within periods and two different intertemporal risk sharing mechanisms. For realistic parameter values, the magnitude of the optimal paygo program is largest when wage shocks are not permanent and individuals in any generation are considered from a pre-birth perspective, i.e. a “rawlsian risk sharing” perspective is adopted.

Journal ArticleDOI
TL;DR: This work applies four pooling designs to a public dataset and evaluates each method by determining how well the design criteria are met and whether the methods are able to find many diverse active compounds.
Abstract: Discovery of a new drug involves screening large chemical libraries to identify new and diverse active compounds. Screening efficiency can be improved by testing compounds in pools. We consider two criteria for designing pools: optimal coverage of the chemical space and minimal collision between compounds. We apply four pooling designs to a public dataset. We evaluate each method by determining how well the design criteria are met and whether the methods are able to find many diverse active compounds. One pooling design emerges as a winner, but all designed pools clearly outperform randomly created pools.

Proceedings ArticleDOI
06 Aug 2006
TL;DR: This paper demonstrates that the AQUAINT 2005 test collection exhibits bias caused by pools that were too shallow for the document set size despite having many diverse runs contribute to the pools, and suggests modifications to traditional pooling and evaluation methodology that may allow very large reusable test collections to be built.
Abstract: Modern retrieval test collections are built through a process called pooling in which only a sample of the entire document set is judged for each topic. The idea behind pooling is to find enough relevant documents such that when unjudged documents are assumed to be nonrelevant the resulting judgment set is sufficiently complete and unbiased. As document sets grow larger, a constant-size pool represents an increasingly small percentage of the document set, and at some point the assumption of approximately complete judgments must become invalid. This paper demonstrates that the AQUAINT 2005 test collection exhibits bias caused by pools that were too shallow for the document set size, despite having many diverse runs contribute to the pools. The existing judgment set favors relevant documents that contain topic title words even though relevant documents containing few topic title words are known to exist in the document set. The paper concludes with suggested modifications to traditional pooling and evaluation methodology that may allow very large reusable test collections to be built.
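Traditional depth-k pooling, the mechanism whose bias is at issue here, is just the union of each run's top-ranked documents (toy run data below; real pools draw on many runs with depths near 100):

```python
def build_pool(runs, depth):
    # Judge the union of the top-`depth` documents from every submitted
    # run; unjudged documents are later treated as nonrelevant.
    pool = set()
    for ranked_docs in runs:
        pool.update(ranked_docs[:depth])
    return pool

# Hypothetical ranked runs over document ids.
runs = [["d1", "d2", "d3", "d4"],
        ["d2", "d5", "d1", "d6"],
        ["d7", "d2", "d8", "d9"]]
```

As the collection grows, a fixed `depth` covers a vanishing fraction of it, which is exactly why the completeness assumption eventually fails.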

25 Sep 2006
TL;DR: This report investigates the conditions under which Lewis signaling games evolve to perfect signaling systems under various learning dynamics, and shows that with more than two states suboptimal pooling equilibria can evolve.
Abstract: David Lewis (1969) introduced sender-receiver games as a way of investigating how meaningful language might evolve from initially random signals. In this report I investigate the conditions under which Lewis signaling games evolve to perfect signaling systems under various learning dynamics. While the 2-state/2-term Lewis signaling game with basic urn learning always approaches a signaling system, I will show that with more than two states suboptimal pooling equilibria can evolve. Inhomogeneous state distributions increase the likelihood of pooling equilibria, but learning strategies with negative reinforcement or certain sorts of mutation can decrease the likelihood of, and even eliminate, pooling equilibria. Both Moran and APR learning strategies (Bereby-Meyer and Erev 1998) are shown to promote successful convergence to signaling systems. A model is presented that illustrates how a language that codes state-act pairs in an order-based grammar might evolve in the context of a Lewis signaling game. The terms, grammar, and the corresponding partitions of the state space co-evolve to generate a language whose structure appears to reflect canonical natural kinds. The evolution of these apparent natural kinds, however, is entirely in service of the rewards that accompany successful distinctions between the sender and receiver. Any metaphysics grounded on the structure of a natural language that evolved in this way would track arbitrary, but pragmatically useful distinctions.

Journal ArticleDOI
01 Jul 2006-Genetics
TL;DR: The two-stage design with DNA pooling as a screening tool offers an efficient strategy in genomewide association studies, even when the measurement errors associated with DNA pooling are nonnegligible.
Abstract: DNA pooling is a cost-effective approach for collecting information on marker allele frequency in genetic studies. It is often suggested as a screening tool to identify a subset of candidate markers from a very large number of markers to be followed up by more accurate and informative individual genotyping. In this article, we investigate several statistical properties and design issues related to this two-stage design, including the selection of the candidate markers for second-stage analysis, statistical power of this design, and the probability that truly disease-associated markers are ranked among the top after second-stage analysis. We have derived analytical results on the proportion of markers to be selected for second-stage analysis. For example, to detect disease-associated markers with an allele frequency difference of 0.05 between the cases and controls through an initial sample of 1000 cases and 1000 controls, our results suggest that when the measurement errors are small (0.005), ∼3% of the markers should be selected. For the statistical power to identify disease-associated markers, we find that the measurement errors associated with DNA pooling have little effect on its power. This is in contrast to the one-stage pooling scheme, where measurement errors may have a large effect on statistical power. As for the probability that the disease-associated markers are ranked among the top in the second stage, we show that there is a high probability that at least one disease-associated marker is ranked among the top when the allele frequency differences between the cases and controls are at least 0.05 for reasonably large sample sizes, even though the errors associated with DNA pooling in the first stage are not small. Therefore, the two-stage design with DNA pooling as a screening tool offers an efficient strategy in genomewide association studies, even when the measurement errors associated with DNA pooling are nonnegligible.
For any disease model, we find that all the statistical results essentially depend on the population allele frequency and the allele frequency differences between the cases and controls at the disease-associated markers. The general conclusions hold whether the second stage uses an entirely independent sample or includes both the samples used in the first stage and an independent set of samples.
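A sketch of a first-stage screening statistic of the kind analyzed here. The formula and numbers are illustrative assumptions, not the article's derivation: a two-sample allele-frequency z-score with an added variance term `tau`^2 for pooling measurement error in each pool:

```python
import math

def z_stat(p_case, p_ctrl, n_case, n_ctrl, tau=0.005):
    # Case/control allele frequencies estimated from pooled DNA; 2n
    # alleles per group of n individuals, plus measurement-error
    # variance tau**2 contributed by each of the two pools.
    diff = p_case - p_ctrl
    var = (p_case * (1 - p_case) / (2 * n_case)
           + p_ctrl * (1 - p_ctrl) / (2 * n_ctrl)
           + 2 * tau ** 2)
    return diff / math.sqrt(var)
```

Markers would be ranked by |z| and the top few percent forwarded to individual genotyping; note how a nonzero `tau` only modestly deflates the statistic at these sample sizes.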

Journal ArticleDOI
TL;DR: A method is proposed and examined for estimating AUC from pooled and unpooled samples when an LOD is in effect; pooling is concluded to be the most efficient cost-cutting strategy when the LOD affects less than 50% of the data.
Abstract: Frequently, epidemiological studies deal with two restrictions in the evaluation of biomarkers: cost and instrument sensitivity. Costs can hamper the evaluation of the effectiveness of new biomarkers. In addition, many assays are affected by a limit of detection (LOD), depending on the instrument sensitivity. Two common strategies used to cut costs include taking a random sample of the available samples and pooling biospecimens. We compare the two sampling strategies when an LOD effect exists. These strategies are compared by examining the efficiency of receiver operating characteristic (ROC) curve analysis, specifically the estimation of the area under the ROC curve (AUC) for normally distributed markers. We propose and examine a method to estimate AUC when dealing with data from pooled and unpooled samples where an LOD is in effect. In conclusion, pooling is the most efficient cost-cutting strategy when the LOD affects less than 50% of the data. However, when much more than 50% of the data are affected, utilization of the pooling design is not recommended.

Book ChapterDOI
01 Jan 2006
TL;DR: A general review of pooling experiments is given here, with additional details and discussion of issues and methods for two important application areas, namely, blood testing and drug discovery.
Abstract: Pooling experiments date as far back as 1915 and were initially used in dilution studies for estimating the density of organisms in some medium. These early uses of pooling were necessitated by scientific and technical limitations. Today, pooling experiments are driven by the potential cost savings and precision gains that can result, and they are making a substantial impact on blood screening and drug discovery. A general review of pooling experiments is given here, with additional details and discussion of issues and methods for two important application areas, namely, blood testing and drug discovery. The blood testing application is very old, from 1943, yet is still used today, especially for HIV antibody screening. In contrast, the drug discovery application is relatively new, with early uses occurring in the period from the late 1980s to early 1990s. Statistical methods for this latter application are still actively being investigated and developed through both the pharmaceutical industries and academic research. The ability of pooling to investigate synergism offers exciting prospects for the discovery of combination therapies.

Journal ArticleDOI
TL;DR: In this article, a nonparametric resampling technique is used to estimate the sampling variability for the target site, as well as for every site that is a potential member of the pooling group.
Abstract: In recent years, catchment similarity measures based on flood seasonality have become popular alternatives for identifying hydrologically homogeneous pooling groups used in regional flood frequency analysis. Generally, flood seasonality pooling measures are less prone to errors and are more robust than measures based on flood magnitude data. However, they are also subject to estimation uncertainty resulting from sampling variability. Because of sampling variability, catchment similarity in flood seasonality can significantly deviate from the true similarity. Therefore sampling variability should be directly incorporated in the pooling algorithm to decrease the level of pooling uncertainty. This paper develops a new pooling approach that takes into consideration the sampling variability of flood seasonality measures used as pooling variables. A nonparametric resampling technique is used to estimate the sampling variability for the target site, as well as for every site that is a potential member of the pooling group for the target site. The variability is quantified by Mahalanobis distance ellipses. The similarity between the target site and the potential site is then assessed by finding the minimum confidence interval at which their Mahalanobis ellipses intersect. The confidence intervals can be related to regional homogeneity, which allows the target degree of regional homogeneity to be set in advance. The approach is applied to a large set of catchments from Great Britain, and its performance is compared with the performance of a previously used pooling technique based on the Euclidean distance. The results demonstrate that the proposed approach outperforms the previously used approach in terms of the overall homogeneity of delineated pooling groups in the study area.
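The distance underlying those ellipses can be sketched directly (illustrative seasonality statistics; in the paper the covariance S comes from nonparametric resampling of each site's flood dates):

```python
import numpy as np

def mahalanobis(x, mu, S):
    # Distance of point x from center mu, scaled by covariance S; the
    # contours of constant distance are the ellipses used for pooling.
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(S, d)))

target = np.array([0.30, 0.55])    # e.g., flood-seasonality statistics
candidate = np.array([0.33, 0.51])
S = np.array([[0.0004, 0.0001],
              [0.0001, 0.0009]])   # sampling covariance (hypothetical)
```

With S equal to the identity this reduces to the Euclidean distance of the earlier pooling technique, which is why the comparison in the paper is natural.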

Posted Content
TL;DR: In this article, the standard model of optimum commodity taxation (Ramsey (1927) and Diamond-Mirrlees (1971)) is extended to a competitive economy in which some markets are inefficient due to asymmetric information.
Abstract: This paper extends the standard model of optimum commodity taxation (Ramsey (1927) and Diamond-Mirrlees (1971)) to a competitive economy in which some markets are inefficient due to asymmetric information. As in most insurance markets, consumers impose varying costs on suppliers, but firms cannot associate costs with customers and consequently all are charged equal prices. In a competitive pooling equilibrium, the price of each good is equal to average marginal costs weighted by equilibrium quantities. We derive modified Ramsey-Boiteux conditions for optimum taxes in such an economy and show that they include general-equilibrium effects which reflect the initial deviations of producer prices from marginal costs, and the response of equilibrium prices to the taxes levied. It is shown that a condition on the monotonicity of demand elasticities makes it possible to sign the deviations from the standard formula. The general analysis is applied to the optimum taxation of annuities and life insurance.

Journal ArticleDOI
TL;DR: In this article, a survey of saltwater angling in Alaska using split-sample rankings and ratings elicitation methods was used to assess the robustness of the results of the survey.
Abstract: Econometricians modeling Stated Preference (SP) data, and most other types of data, are confronted with the uncomfortable reality that our knowledge of the “true” model is limited, with only certain variables suggested by the application at hand and general classes of functional forms and error structures suggested by the literature. Accepting our limited knowledge, we pursue strategies for analyzing SP data that are more robust to uncertainties in our knowledge of the true model. These include non-parametric and parametric likelihood-based tests of pooling responses from different elicitation formats, and a frequentist-based model averaging approach for estimating willingness to pay functions. We argue that these strategies lead to increased econometric integrity and empower the ultimate users of models, such as policy decision-makers and even juries, to better assess the robustness of the results. We apply these approaches to an SP survey of saltwater angling in Alaska which utilized split-sample rankings and ratings elicitation methods. While an important goal of our paper is to develop practicable modeling strategies that will ultimately lead to more robust conclusions and more confidence by the users of SP results, an equally important goal is to engender a critical discussion of how we can make the analysis of SP data more robust.
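One of the strategies mentioned, a likelihood-based test of pooling responses from different elicitation formats, can be illustrated with a toy version. The sketch below assumes a deliberately simple i.i.d. normal model for each format's responses (the paper's actual stated-preference models are far richer): the unrestricted model fits each sample separately, the restricted model pools them, and the likelihood-ratio statistic is referred to a chi-square distribution.

```python
import numpy as np

def gaussian_loglik(x):
    """Maximized log-likelihood of an i.i.d. normal model (MLE plug-in):
    -n/2 * (log(2*pi*var_hat) + 1)."""
    n, var = len(x), np.var(x)
    return -0.5 * n * (np.log(2.0 * np.pi * var) + 1.0)

def pooling_lr_test(sample_a, sample_b):
    """Likelihood-ratio test of pooling two response samples into one model.
    Pooling imposes a common mean and variance (2 restrictions); for 2
    degrees of freedom the chi-square survival function is exp(-lr / 2)."""
    lr = 2.0 * (gaussian_loglik(sample_a) + gaussian_loglik(sample_b)
                - gaussian_loglik(np.concatenate([sample_a, sample_b])))
    return lr, float(np.exp(-lr / 2.0))
```

A small p-value is evidence against pooling the two formats; in the paper's setting the same logic is applied to full preference models rather than to marginal response distributions.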

Posted Content
TL;DR: In this paper, the authors explore two routes through which the pooling of reserves through the Latin American Reserve Fund (FLAR) could enhance stability and welfare in Latin America and conclude that neither initiative should be thought of in stand-alone terms.
Abstract: The unprecedented accumulation of international reserves by emerging markets raises the question of how best to utilize these funds; in particular, whether they should be held as a war chest to guard against the risk of financial crisis or used to recapitalize and strengthen weak banking systems. A third issue is whether the resource cost could be limited by pooling the holdings of different central banks and, if so, to what objectives this reserve pool should be put. This paper considers these questions with reference to the countries participating in the Latin American Reserve Fund (FLAR). It explores two routes through which the pooling of reserves through FLAR could enhance stability and welfare in Latin America. First, the reserve pool could be used for emergency lending in response to sudden stops. Insofar as the incidence of sudden stops differs across countries, pooling would allow the same reserves to support a larger volume of emergency lending. However, the paper discusses how such a scheme would face significant obstacles, due both to the temporal and regional bunching of sudden stops and to moral hazard problems, which are stronger in the case of self-regulating entities like FLAR. The second, more promising alternative that the paper analyses is the use of a portion of the reserve pool, along with borrowed funds, to purchase contingent debt securities issued by Latin American governments and corporations: domestic-currency inflation-indexed bonds, GDP-indexed bonds, and commodity-price-indexed bonds. This would help to solve the coordination/first-mover problem that limits the liquidity of markets in these instruments and hinders their acceptance by private investors. In any case, the idea is that neither initiative should be thought of in stand-alone terms.
In addition to creating an expanded reserve pool for use in emergency lending, governments could also push ahead with issuing contingent debt securities that reduce their vulnerability to disturbances.

Posted Content
TL;DR: In this paper, the authors extended the standard model of optimum commodity taxation to a competitive economy in which some markets are inefficient due to asymmetric information, and derived modified Ramsey-Boiteux conditions for optimum taxes in such an economy and showed that they include general-equilibrium effects which reflect the initial deviations of producer prices from marginal costs, and the response of equilibrium prices to the taxes levied.
Abstract: This paper extends the standard model of optimum commodity taxation (Ramsey (1927) and Diamond-Mirrlees (1971)) to a competitive economy in which some markets are inefficient due to asymmetric information. As in most insurance markets, consumers impose varying costs on suppliers, but firms cannot associate costs with customers and consequently all are charged equal prices. In a competitive pooling equilibrium, the price of each good is equal to average marginal costs weighted by equilibrium quantities. We derive modified Ramsey-Boiteux conditions for optimum taxes in such an economy and show that they include general-equilibrium effects which reflect the initial deviations of producer prices from marginal costs and the response of equilibrium prices to the taxes levied. It is shown that a condition on the monotonicity of demand elasticities makes it possible to sign the deviations from the standard formula. The general analysis is applied to the optimum taxation of annuities and life insurance.

Journal ArticleDOI
TL;DR: It is pointed out that pooling synergy insights can be translated to situations with controlled workloads by appropriately defining capacity groups in combination with suitable routeing decision rules.

Journal ArticleDOI
TL;DR: A Monte Carlo simulation model of pooling is developed and validated and used to screen a library of β‐galactosidase mutants randomized in the active site to increase their activity toward fucosides and it is shown that this model can successfully predict the number of highly improved mutants obtained via pooling and that pooling does increase thenumber of good mutants obtained.
Abstract: Following diversity generation in combinatorial protein engineering, a significant amount of effort is expended in screening the library for improved variants. Pooling, or combining multiple cells into the same assay well when screening, is a means to increase throughput and screen a larger portion of the library with less time and effort. We have developed and validated a Monte Carlo simulation model of pooling and used it to screen a library of beta-galactosidase mutants randomized in the active site to increase their activity toward fucosides. Here, we show that our model can successfully predict the number of highly improved mutants obtained via pooling and that pooling does increase the number of good mutants obtained. In unpooled conditions, we found a total of three mutants with higher activity toward p-nitrophenyl-beta-D-fucoside than that of the wild-type beta-galactosidase, whereas when pooling 10 cells per well we found a total of approximately 10 improved mutants. In addition, the number of "supermutants", those with the highest activity increase, was also higher when pooling was used. Pooling is a useful tool for increasing the efficiency of screening combinatorial protein engineering libraries.
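The core of a Monte Carlo pooling simulation like the one this abstract describes fits in a few lines. The sketch below is a stripped-down stand-in, not the authors' validated model: it draws `cells_per_well` cells per assay well at random from a library in which a small set of clones is "improved", and counts how many distinct improved clones land in at least one well (in practice a hit well would then be deconvoluted in a follow-up screen). All parameter values and names are illustrative.

```python
import random

def simulate_pooled_screen(library_size=100_000, n_improved=100,
                           n_wells=2000, cells_per_well=10, seed=0):
    """Monte Carlo sketch of a pooled library screen.  Clones with index
    0 .. n_improved-1 are the improved ones; the return value is the
    number of distinct improved clones sampled into at least one well."""
    rng = random.Random(seed)
    found = set()
    for _ in range(n_wells):
        for _ in range(cells_per_well):
            clone = rng.randrange(library_size)
            if clone < n_improved:
                found.add(clone)
    return len(found)
```

With these toy defaults, pooling 10 cells per well samples ten times as many clones for the same number of wells as unpooled screening, so more improved clones surface, mirroring the abstract's finding of roughly 10 versus 3 improved mutants.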

Posted Content
TL;DR: In this article, the effects of both urban and industrial agglomeration on men's and women's search behavior and on the efficiency of matching were analyzed empirically based on the Italian Labor Force Survey micro-data, which covers 520 randomly drawn Local Labor Market Areas over the four quarters of 2002.
Abstract: I analyze empirically the effects of both urban and industrial agglomeration on men's and women's search behavior and on the efficiency of matching. The analysis is based on the Italian Labor Force Survey micro-data, which covers 520 randomly drawn Local Labor Market Areas (66 per cent of the total) over the four quarters of 2002. I compute transition probabilities from non-employment to employment by jointly estimating the probability of searching and the probability of finding a job conditional on having searched, and I test whether these are affected by urbanization, industry localization, labor pooling and family network quality. The main results indicate that urbanization and labor pooling raise job seekers' chances of finding employment (conditional on having searched), while industry localization and family network quality raise only men's. Moreover, neither urban nor industrial agglomeration affects non-employed individuals' search behavior, although men with thicker family networks search more intensively.


Journal ArticleDOI
TL;DR: A new construction for transversal designs for pooling design is presented and this construction is extended to the error-tolerant case.
Abstract: The study of gene functions requires the development of a high-quality DNA library through extensive testing and screening. Pooling design is a mathematical tool for reducing the number of tests in DNA library screening. The transversal design is a special type of pooling design that is well suited to implementation. In this paper, we present a new construction for transversal designs. We also extend our construction to the error-tolerant case.
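The flavor of a transversal pooling design can be conveyed with a small sketch. The construction below is a standard textbook one over a prime q, offered only to illustrate the object the abstract is about; it is not the paper's new construction or its error-tolerant extension. Items are arranged on a q-by-q grid and each "layer" slices the grid along a different slope, so any two distinct items end up in a common pool at most once.

```python
def transversal_pools(q, layers):
    """Sketch of a transversal pooling design for n = q*q items, q prime.
    Item (a, b) has index a*q + b; in layer r it joins the pool labelled
    (b + a*r) % q.  For layers <= q, any two distinct items share at most
    one pool -- the disjointness property that makes decoding easy."""
    assert layers <= q, "more than q layers would repeat earlier layers"
    n = q * q
    pools = []  # each pool is the set of item indices assigned to it
    for r in range(layers):
        layer = [set() for _ in range(q)]
        for item in range(n):
            a, b = divmod(item, q)
            layer[(b + a * r) % q].add(item)
        pools.extend(layer)
    return pools
```

Each layer partitions all n items into q pools of size q, so screening q*layers pools covers every item `layers` times while keeping pairwise overlaps to at most one pool, which is what allows positive items to be identified from pool outcomes.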

Journal ArticleDOI
TL;DR: It is shown that averaging quality information into a summary report can enforce pooling in health insurance, and by choice of the right weights in the averaged report, a payer or regulator can induce first-best quality choices.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a model of an efficient market for liquidity, which allows multi-account optimization to ensure fairness, based on the principle that efficient markets are fair.
Abstract: Trading for several clients or accounts at the same time involves two challenges, one technical and one philosophical. The technical challenge is to take account of the market impact of aggregate trading in forming each client's individual portfolio. The philosophical challenge is to replace the traditional goal of maximizing utility for a single client with the goal of ensuring that all clients are treated fairly in the pursuit of their collective good. These challenges require a model of an efficient market for liquidity, which allows multi-account optimization to ensure fairness, based on the principle that efficient markets are fair.