Journal ArticleDOI

Ratings and rankings: voodoo or science?

01 Jun 2013 - Journal of the Royal Statistical Society: Series A (Statistics in Society) (Blackwell Publishing Ltd), Vol. 176, Iss. 3, pp. 609-634
TL;DR: In this article, the authors measure the importance of a given variable within existing composite indicators via Karl Pearson's "correlation ratio"; they call this measure the main effect, and they discuss to what extent the mapping from nominal weights to main effects can be inverted.
Abstract: Summary. Composite indicators aggregate a set of variables by using weights which are understood to reflect the variables’ importance in the index. We propose to measure the importance of a given variable within existing composite indicators via Karl Pearson's ‘correlation ratio’; we call this measure the ‘main effect’. Because socio-economic variables are heteroscedastic and correlated, relative nominal weights are hardly ever found to match relative main effects; we propose to summarize their discrepancy with a divergence measure. We discuss to what extent the mapping from nominal weights to main effects can be inverted. This analysis is applied to six composite indicators, including the human development index and two popular league tables of university performance. It is found that in many cases the declared importance of single indicators and their main effect are very different, and that the data correlation structure often prevents developers from obtaining the stated importance, even when modifying the nominal weights in the set of non-negative numbers with unit sum.
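To make the measure concrete, the following is a minimal numerical sketch (not taken from the paper): Pearson's correlation ratio of an input x_i on the composite y, Var(E[y | x_i]) / Var(y), is estimated by averaging y within equal-count bins of x_i over a Monte Carlo sample. The three-variable indicator, its nominal weights and its covariance matrix are invented purely for illustration.

```python
import numpy as np

def main_effect(x, y, n_bins=20):
    """Estimate Pearson's correlation ratio Var(E[y | x]) / Var(y)
    by averaging y within equal-count bins of x."""
    order = np.argsort(x)
    y_sorted = y[order]
    bins = np.array_split(y_sorted, n_bins)            # equal-count bins of x
    bin_means = np.array([b.mean() for b in bins])
    bin_sizes = np.array([b.size for b in bins])
    # variance of the conditional means, weighted by bin size
    var_cond_mean = np.average((bin_means - y.mean()) ** 2, weights=bin_sizes)
    return var_cond_mean / y.var()

rng = np.random.default_rng(0)

# Toy data: three correlated, heteroscedastic inputs (an assumption, not the paper's data)
n = 50_000
cov = np.array([[1.0, 0.6, 0.2],
                [0.6, 4.0, 0.5],
                [0.2, 0.5, 0.25]])
X = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=n)

weights = np.array([0.4, 0.4, 0.2])                     # declared (nominal) weights
y = X @ weights                                         # linear composite indicator

effects = np.array([main_effect(X[:, i], y) for i in range(3)])
print("nominal weights :", weights / weights.sum())
print("main effects    :", effects / effects.sum())
```

With correlated, heteroscedastic inputs the normalized main effects printed at the end generally do not match the nominal weights, which is exactly the discrepancy the paper proposes to summarize with a divergence measure.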


Citations
Journal ArticleDOI
TL;DR: In this article, the authors put composite indicators under the spotlight, examine the wide variety of methodological approaches in existence, and offer a more recent outlook on the advances made in this field over the past years.
Abstract: In recent times, composite indicators have gained astounding popularity in a wide variety of research areas. Their adoption by global institutions has further captured the attention of the media and policymakers around the globe, and their number of applications has surged ever since. This increase in their popularity has solicited a plethora of methodological contributions in response to the substantial criticism surrounding their underlying framework. In this paper, we put composite indicators under the spotlight, examining the wide variety of methodological approaches in existence. In this way, we offer a more recent outlook on the advances made in this field over the past years. Despite the large sequence of steps required in the construction of composite indicators, we focus particularly on two of them, namely weighting and aggregation. We find that these are where the paramount criticism appears and where a promising future lies. Finally, we review the last step of the robustness analysis that follows their construction, to which less attention has been paid despite its importance. Overall, this study aims to provide both academics and practitioners in the field of composite indices with a synopsis of the choices available alongside their recent advances.

468 citations

Journal ArticleDOI
TL;DR: Three tools are presented which help developers and users investigate the effects of weights in composite indicators; case studies related to sustainable development demonstrate their benefits.

256 citations

Journal ArticleDOI
TL;DR: A literature review of papers published after 2002 in leading international journals indexed in a recognised database (JCR) is conducted in order to identify the different MCDM methods used for aggregating single indicators into composite ones.
Abstract: Composite indicators are increasingly recognised as a useful tool in policy analysis and public communication. They provide simple comparisons of units that can be used to illustrate the complexity of our dynamic environment in wide-ranging fields, such as competitiveness, governance, environment, press, development, peacefulness, tourism, economy, universities, etc. Their construction has been dealt with from several angles. Some authors claim that MCDM techniques are highly suitable in multidimensional frameworks when aggregating single indicators into a composite one, since this process involves making choices when combining criteria of different natures, and it requires a number of steps in which decisions must be made. In this paper, we conduct a literature review of papers published after 2002 in leading international journals indexed in a recognised database (JCR), in order to identify the different MCDM methods used for aggregating single indicators into composite ones. They have been classified in five categories: the elementary methods, the value and utility based methods, the outranking relation approach, the data envelopment analysis based methods and the distance functions based methods. In general, our review has shown a clear tendency towards an increasing number of papers that use MCDM methods to construct composite indicators since 2014.

135 citations

Journal ArticleDOI
TL;DR: In this article, the effect of the single-walled carbon nanotube (SWCNT) radius, the temperature and the pulling velocity on the interfacial shear stress (ISS) is studied using molecular dynamics (MD) simulations.
Abstract: The effect of the single-walled carbon nanotube (SWCNT) radius, the temperature and the pulling velocity on the interfacial shear stress (ISS) is studied using molecular dynamics (MD) simulations. Based on our MD results, the mechanical output (ISS) is best characterized by the statistical Weibull distribution. Further, we also quantify the influence of the uncertain input parameters on the predicted ISS via sensitivity analysis (SA). First, partial derivatives in the context of averaged local SA are computed. For computational efficiency, the SA is based on surrogate models (polynomial regression, moving least squares (MLS) and a hybrid of quadratic polynomial and MLS regressions). Next, the elementary effects are determined on the mechanical model to identify the important parameters in the context of averaged local SA. Finally, approaches for ranking variables (SA based on coefficients of determination) and variance-based methods are carried out on the surrogate model in order to quantify the global SA. All stochastic methods predict that the key parameter influencing the ISS is the SWCNT radius, followed by the temperature and the pulling velocity, respectively.
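As a rough illustration of the simplest of the sensitivity approaches listed above, the sketch below fits a linear regression surrogate to sampled input-output pairs and ranks inputs by squared standardized regression coefficients. The input ranges and the toy response standing in for the MD-computed ISS are assumptions for demonstration, not the authors' data or model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder inputs standing in for SWCNT radius, temperature and pulling velocity
n = 2000
radius   = rng.uniform(0.4, 1.5, n)      # nm (illustrative range)
temp     = rng.uniform(250, 400, n)      # K
velocity = rng.uniform(1, 50, n)         # m/s
X = np.column_stack([radius, temp, velocity])

# Toy response in place of the MD-computed interfacial shear stress (ISS)
iss = 120 / radius - 0.05 * temp + 0.2 * velocity + rng.normal(0, 2, n)

# Linear surrogate via least squares on standardized variables
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (iss - iss.mean()) / iss.std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# Squared standardized regression coefficients approximate each input's share of Var(ISS),
# valid insofar as the surrogate's R^2 is close to 1 and the inputs are independent
for name, c in zip(["radius", "temperature", "velocity"], coef):
    print(f"{name:12s} SRC^2 = {c**2:.3f}")
```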

132 citations

Journal ArticleDOI
TL;DR: This paper provides tourism scholars and practitioners with a set of statistical guidelines to build composite indicators and with an operative scheme to assess indicators' effectiveness in empirical evaluations.

117 citations

References
Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, methods of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Book ChapterDOI
01 Jan 1985
TL;DR: Analytic Hierarchy Process (AHP) as mentioned in this paper is a systematic procedure for representing the elements of any problem hierarchically, which organizes the basic rationality by breaking down a problem into its smaller constituent parts and then guides decision makers through a series of pairwise comparison judgments to express the relative strength or intensity of impact of the elements in the hierarchy.
Abstract: This chapter provides an overview of Analytic Hierarchy Process (AHP), which is a systematic procedure for representing the elements of any problem hierarchically. It organizes the basic rationality by breaking down a problem into its smaller constituent parts and then guides decision makers through a series of pair-wise comparison judgments to express the relative strength or intensity of impact of the elements in the hierarchy. These judgments are then translated to numbers. The AHP includes procedures and principles used to synthesize the many judgments to derive priorities among criteria and subsequently for alternative solutions. It is useful to note that the numbers thus obtained are ratio scale estimates and correspond to so-called hard numbers. Problem solving is a process of setting priorities in steps. One step decides on the most important elements of a problem, another on how best to repair, replace, test, and evaluate the elements, and another on how to implement the solution and measure performance.
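To make the pairwise-comparison step concrete, here is a small self-contained sketch (not from the chapter) that derives AHP priorities as the principal eigenvector of a reciprocal comparison matrix and checks Saaty's consistency ratio; the judgment values are invented.

```python
import numpy as np

# Invented pairwise comparison matrix for three criteria (Saaty's 1-9 scale, reciprocal)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priorities = normalized principal right eigenvector
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = 0.58                                  # Saaty's random index for n = 3
cr = ci / ri
print("priorities:", np.round(w, 3))
print("consistency ratio:", round(cr, 3))  # CR < 0.1 is conventionally acceptable
```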

16,547 citations

17 Oct 2011
TL;DR: GDP is a measure of market capacity and not economic well-being; the authors point out that conflating the two can lead to misleading indications about how well-off people are and entail the wrong policy decisions.
Abstract: As GDP is a measure of market capacity and not economic well-being, this report has been commissioned to more accurately understand the social progress indicators of any given state. Gross domestic product (GDP) is the most widely used measure of economic activity. There are international standards for its calculation, and much thought has gone into its statistical and conceptual bases. But GDP mainly measures market production, though it has often been treated as if it were a measure of economic well-being. Conflating the two can lead to misleading indications about how well-off people are and entail the wrong policy decisions. One reason why money measures of economic performance and living standards have come to play such an important role in our societies is that the monetary valuation of goods and services makes it easy to add up quantities of a very different nature. When we know the prices of apple juice and DVD players, we can add up their values and make statements about production and consumption in a single figure. But market prices are more than an accounting device. Economic theory tells us that when markets are functioning properly, the ratio of one market price to another is reflective of the relative appreciation of the two products by those who purchase them. Moreover, GDP captures all final goods in the economy, whether they are consumed by households, firms or government. Valuing them with their prices would thus seem to be a good way of capturing, in a single number, how well-off society is at a particular moment. Furthermore, keeping prices unchanged while observing how quantities of goods and services that enter GDP move over time would seem like a reasonable way of making a statement about how society’s living standards are evolving in real terms. As it turns out, things are more complicated. First, prices may not exist for some goods and services (if for instance government provides free health insurance or if households are engaged in child care), raising the question of how these services should be valued. Second, even where there are market prices, they may deviate from society’s underlying valuation. In particular, when the consumption or production of particular products affects society as a whole, the price that individuals pay for those products will differ from their value to society at large. Environmental damage caused by production or consumption activities that is not reflected in market prices is a well-known example.

4,432 citations

Book
04 Feb 2008
TL;DR: In this book, the authors present methods for setting up uncertainty and sensitivity analyses, covering Monte Carlo and linear regression approaches, the elementary effects test, variance-based methods and Monte Carlo filtering (MCF), illustrated through a set of worked examples.
Abstract: Table of contents:
Preface.
1. Introduction to Sensitivity Analysis. 1.1 Models and Sensitivity Analysis. 1.1.1 Definition. 1.1.2 Models. 1.1.3 Models and Uncertainty. 1.1.4 How to Set Up Uncertainty and Sensitivity Analyses. 1.1.5 Implications for Model Quality. 1.2 Methods and Settings for Sensitivity Analysis - An Introduction. 1.2.1 Local versus Global. 1.2.2 A Test Model. 1.2.3 Scatterplots versus Derivatives. 1.2.4 Sigma-normalized Derivatives. 1.2.5 Monte Carlo and Linear Regression. 1.2.6 Conditional Variances - First Path. 1.2.7 Conditional Variances - Second Path. 1.2.8 Application to Model (1.3). 1.2.9 A First Setting: 'Factor Prioritization'. 1.2.10 Nonadditive Models. 1.2.11 Higher-order Sensitivity Indices. 1.2.12 Total Effects. 1.2.13 A Second Setting: 'Factor Fixing'. 1.2.14 Rationale for Sensitivity Analysis. 1.2.15 Treating Sets. 1.2.16 Further Methods. 1.2.17 Elementary Effect Test. 1.2.18 Monte Carlo Filtering. 1.3 Nonindependent Input Factors. 1.4 Possible Pitfalls for a Sensitivity Analysis. 1.5 Concluding Remarks. 1.6 Exercises. 1.7 Answers. 1.8 Additional Exercises. 1.9 Solutions to Additional Exercises.
2. Experimental Designs. 2.1 Introduction. 2.2 Dependency on a Single Parameter. 2.3 Sensitivity Analysis of a Single Parameter. 2.3.1 Random Values. 2.3.2 Stratified Sampling. 2.3.3 Mean and Variance Estimates for Stratified Sampling. 2.4 Sensitivity Analysis of Multiple Parameters. 2.4.1 Linear Models. 2.4.2 One-at-a-time (OAT) Sampling. 2.4.3 Limits on the Number of Influential Parameters. 2.4.4 Fractional Factorial Sampling. 2.4.5 Latin Hypercube Sampling. 2.4.6 Multivariate Stratified Sampling. 2.4.7 Quasi-random Sampling with Low-discrepancy Sequences. 2.5 Group Sampling. 2.6 Exercises. 2.7 Exercise Solutions.
3. Elementary Effects Method. 3.1 Introduction. 3.2 The Elementary Effects Method. 3.3 The Sampling Strategy and its Optimization. 3.4 The Computation of the Sensitivity Measures. 3.5 Working with Groups. 3.6 The EE Method Step by Step. 3.7 Conclusions. 3.8 Exercises. 3.9 Solutions.
4. Variance-based Methods. 4.1 Different Tests for Different Settings. 4.2 Why Variance? 4.3 Variance-based Methods: A Brief History. 4.4 Interaction Effects. 4.5 Total Effects. 4.6 How to Compute the Sensitivity Indices. 4.7 FAST and Random Balance Designs. 4.8 Putting the Method to Work: the Infection Dynamics Model. 4.9 Caveats. 4.10 Exercises.
5. Factor Mapping and Metamodelling. 5.1 Introduction. 5.2 Monte Carlo Filtering (MCF). 5.2.1 Implementation of Monte Carlo Filtering. 5.2.2 Pros and Cons. 5.2.3 Exercises. 5.2.4 Solutions. 5.2.5 Examples. 5.3 Metamodelling and the High-Dimensional Model Representation. 5.3.1 Estimating HDMRs and Metamodels. 5.3.2 A Simple Example. 5.3.3 Another Simple Example. 5.3.4 Exercises. 5.3.5 Solutions to Exercises. 5.4 Conclusions.
6. Sensitivity Analysis: from Theory to Practice. 6.1 Example 1: a Composite Indicator. 6.1.1 Setting the Problem. 6.1.2 A Composite Indicator Measuring Countries' Performance in Environmental Sustainability. 6.1.3 Selecting the Sensitivity Analysis Method. 6.1.4 The Sensitivity Analysis Experiment and its Results. 6.1.5 Conclusions. 6.2 Example 2: Importance of Jumps in Pricing Options. 6.2.1 Setting the Problem. 6.2.2 The Heston Stochastic Volatility Model with Jumps. 6.2.3 Selecting a Suitable Sensitivity Analysis Method. 6.2.4 The Sensitivity Analysis Experiment. 6.2.5 Conclusions. 6.3 Example 3: a Chemical Reactor. 6.3.1 Setting the Problem. 6.3.2 Thermal Runaway Analysis of a Batch Reactor. 6.3.3 Selecting the Sensitivity Analysis Method. 6.3.4 The Sensitivity Analysis Experiment and its Results. 6.3.5 Conclusions. 6.4 Example 4: a Mixed Uncertainty-Sensitivity Plot. 6.4.1 In Brief. 6.5 When to use What?
Afterword. Bibliography. Index.
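As an illustration of one of the screening techniques covered in the book, the sketch below implements a simplified radial one-at-a-time variant of the elementary effects test on the unit cube; the test function and all parameter choices are assumptions for demonstration, not the book's own examples.

```python
import numpy as np

def elementary_effects(f, k, r=50, delta=0.25, rng=None):
    """Simplified radial OAT estimate of elementary effects on [0, 1]^k.
    Returns mu_star (mean |EE|) and sigma (std of EE) for each input."""
    rng = rng or np.random.default_rng()
    ee = np.empty((r, k))
    for j in range(r):
        base = rng.uniform(0, 1 - delta, size=k)   # keep the step inside the unit cube
        f0 = f(base)
        for i in range(k):
            x = base.copy()
            x[i] += delta
            ee[j, i] = (f(x) - f0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy test function (an assumption): nonlinear in x0, additive in x1, x2 inert
def model(x):
    return np.sin(2 * np.pi * x[0]) ** 2 + 0.5 * x[1] + 0.0 * x[2]

mu_star, sigma = elementary_effects(model, k=3, r=200, rng=np.random.default_rng(2))
print("mu*  :", np.round(mu_star, 3))   # screens out the inert factor x2
print("sigma:", np.round(sigma, 3))     # large sigma flags nonlinearity or interaction in x0
```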

4,306 citations

Journal ArticleDOI
TL;DR: In this paper, the Analytic Hierarchy Process (AHP) is introduced as a method of measurement with ratio scales and illustrated with two examples, and the axioms and some of the central theoretical underpinnings of the theory are discussed.

2,875 citations