
Showing papers in "International Statistical Review in 1999"


Journal ArticleDOI
TL;DR: In this paper, a four-dimensional framework has been identified for statistical thinking in empirical enquiry, including an investigative cycle, an interrogative cycle, types of thinking and dispositions.
Abstract: Summary This paper discusses the thought processes involved in statistical problem solving in the broad sense from problem formulation to conclusions. It draws on the literature and in-depth interviews with statistics students and practising statisticians aimed at uncovering their statistical reasoning processes. From these interviews, a four-dimensional framework has been identified for statistical thinking in empirical enquiry. It includes an investigative cycle, an interrogative cycle, types of thinking and dispositions. We have begun to characterise these processes through models that can be used as a basis for thinking tools or frameworks for the enhancement of problem-solving. Tools of this form would complement the mathematical models used in analysis and address areas of the process of statistical investigation that the mathematical models do not, particularly areas requiring the synthesis of problem-contextual and statistical understanding. The central element of published definitions of statistical thinking is "variation". We further discuss the role of variation in the statistical conception of real-world problems, including the search for causes.

1,107 citations


Journal ArticleDOI
TL;DR: In this article, the interaction between new curricular goals for students and alternative methods of assessing student learning is described, and suggestions are offered for teachers of statistics who wish to re-examine their classroom assessment practices in light of these changes.
Abstract: Summary The interaction between new curricular goals for students and alternative methods of assessing student learning is described. Suggestions are offered for teachers of statistics who wish to re-examine their classroom assessment practices in light of these changes. Examples are offered of some innovative assessment approaches that have been used in introductory statistics courses, and current challenges to statistics educators are described.

133 citations



Journal ArticleDOI
TL;DR: In this paper, it was shown that the balanced sampling strategy appears preferable in terms of robustness and efficiency, but the randomized design has certain countervailing advantages, such as the simplicity of the selection process and an established public acceptance that randomization is "fair".
Abstract: Summary Early survey statisticians faced a puzzling choice between randomized sampling and purposive selection but, by the early 1950s, Neyman's design-based or randomization approach had become generally accepted as standard. It remained virtually unchallenged until the early 1970s, when Royall and his co-authors produced an alternative approach based on statistical modelling. This revived the old idea of purposive selection, under the new name of “balanced sampling”. Suppose that the sampling strategy to be used for a particular survey is required to involve both a stratified sampling design and the classical ratio estimator, but that, within each stratum, a choice is allowed between simple random sampling and simple balanced sampling; then which should the survey statistician choose? The balanced sampling strategy appears preferable in terms of robustness and efficiency, but the randomized design has certain countervailing advantages. These include the simplicity of the selection process and an established public acceptance that randomization is “fair”. It transpires that nearly all the advantages of both schemes can be secured if simple random samples are selected within each stratum and a generalized regression estimator is used instead of the classical ratio estimator.
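For readers comparing the two estimators named above, a minimal sketch in standard survey-sampling notation (the symbols are conventional, not taken from the paper): the classical ratio estimator and the generalized regression (GREG) estimator of a population total Y, given an auxiliary variable x with known population total X, are

$$\hat{Y}_{\mathrm{ratio}} = X\,\frac{\bar{y}_s}{\bar{x}_s}, \qquad \hat{Y}_{\mathrm{GREG}} = \sum_{i\in s}\frac{y_i}{\pi_i} + \hat{B}\left(X - \sum_{i\in s}\frac{x_i}{\pi_i}\right),$$

where $\pi_i$ are the sample inclusion probabilities and $\hat{B}$ is an estimated regression slope of y on x. The GREG estimator reduces to the ratio estimator under the ratio working model, which is how it can combine much of the model-based robustness of balanced sampling with a randomized design.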

58 citations


Journal ArticleDOI
TL;DR: It is argued that the role of statistician at the top decision-making level of the organisation requires skills in both advanced technical statistical modelling and analysis, and in statistical thinking, and also requires a preparedness to form an appreciation of the business and management imperatives faced by the leaders of an enterprise.
Abstract: Summary A systematic approach to measuring organisational performance is fundamental to the pursuit of business excellence. As such, the area of organisational performance measurement, and its use of data and analysis to inform business decisions, affords statisticians a potentially high value-adding opportunity. To be effective in this area, statisticians need to appreciate the differing requirements for statistical information in various management zones of an enterprise. This paper describes a strategy that seeks to link measurement to all facets of organisational performance, particularly to desired business outcomes, and also to mesh measurement with process improvement in a natural way. The use of statistics and statistical thinking is then discussed in this context, with particular focus on the opportunity for statisticians to have a key role at the top decision-making level of the organisation. We argue that the role requires skills in both advanced technical statistical modelling and analysis, and in statistical thinking. It also requires a preparedness to form an appreciation of the business and management imperatives faced by the leaders of an enterprise, and a willingness to work from this basis.

43 citations


Journal ArticleDOI
TL;DR: In this paper, the fractional Bayes factor (O'Hagan, 1995) is reviewed and general properties of the method are discussed, such as consistency and coherence, and some possible answers and directions for future research are outlined.
Abstract: Summary In the Bayesian approach to model selection and hypothesis testing, the Bayes factor plays a central role. However, the Bayes factor is very sensitive to prior distributions of parameters. This is a problem especially in the presence of weak prior information on the parameters of the models. The most radical consequence of this fact is that the Bayes factor is undetermined when improper priors are used. Nonetheless, extending the non-informative approach of Bayesian analysis to model selection/testing procedures is important both from a theoretical and an applied viewpoint. The need to develop automatic and robust methods for model comparison has led to the introduction of several alternative Bayes factors. In this paper we review one of these methods: the fractional Bayes factor (O'Hagan, 1995). We discuss general properties of the method, such as consistency and coherence. Furthermore, in addition to the original, essentially asymptotic justifications of the fractional Bayes factor, we provide further finite-sample motivations for its use. Connections and comparisons to other automatic methods are discussed and several issues of robustness with respect to priors and data are considered. Finally, we focus on some open problems in the fractional Bayes factor approach, and outline some possible answers and directions for future research.
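As a pointer for readers, the definition from O'Hagan (1995) can be stated compactly (standard notation, not quoted from this review): with likelihood $L_i(\theta_i; x)$ and prior $\pi_i(\theta_i)$ under model $M_i$, set

$$q_i(b, x) = \frac{\int \pi_i(\theta_i)\,L_i(\theta_i; x)\,d\theta_i}{\int \pi_i(\theta_i)\,L_i(\theta_i; x)^{b}\,d\theta_i}, \qquad B^{F}_{12}(b, x) = \frac{q_1(b, x)}{q_2(b, x)},$$

where $0 < b < 1$ is the fraction of the likelihood used to "train" the prior (often $b = m_0/n$ for a minimal training sample size $m_0$). Any improper-prior normalizing constant cancels between the numerator and denominator of each $q_i$, which is how the fractional Bayes factor avoids the indeterminacy described above.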

42 citations


Journal ArticleDOI
TL;DR: In this article, the authors emphasized the importance of using appropriate statistical methods, such as bootstrapping, for obtaining unbiased estimates of sensitivity and specificity based on linear discriminant analyses and suggested that Bayesian variable selection techniques might have a role to play in the next generation of software for nuclear morphometry.
Abstract: the coefficient for age remained about the same (and significant) whereas the coefficients for the two morphometric variables were close to zero (Breslow et al., 1999). The discussion emphasized the importance of using appropriate statistical methods, such as bootstrapping, for obtaining unbiased estimates of sensitivity and specificity based on linear discriminant analyses. More on the positive side, I suggested that Bayesian variable selection techniques might have a role to play in the next generation of software for nuclear morphometry, whose future as a diagnostic tool at this point is somewhat uncertain.
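To illustrate the kind of bootstrap correction the discussion advocates, here is a minimal sketch in Python; the optimism-correction scheme, variable names and data layout are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rates(model, X, y):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    pred = model.predict(X)
    return np.array([np.mean(pred[y == 1] == 1), np.mean(pred[y == 0] == 0)])

def optimism_corrected(X, y, n_boot=200, seed=0):
    """Bootstrap optimism correction: apparent (resubstitution) rates of an LDA
    classifier are biased upward; each replicate estimates that bias as the gap
    between performance on the bootstrap sample and on the original data."""
    rng = np.random.default_rng(seed)
    apparent = rates(LinearDiscriminantAnalysis().fit(X, y), X, y)
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    optimism = np.zeros(2)
    for _ in range(n_boot):
        # resample within each class so both classes are always present
        idx = np.concatenate([rng.choice(idx0, idx0.size, replace=True),
                              rng.choice(idx1, idx1.size, replace=True)])
        boot = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        optimism += rates(boot, X[idx], y[idx]) - rates(boot, X, y)
    return apparent - optimism / n_boot
```

The returned pair is the apparent sensitivity/specificity minus the average bootstrap optimism, which is a less biased estimate than the resubstitution rates alone.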

38 citations


Journal ArticleDOI
TL;DR: The German Federal Statistical Office has developed, based on theoretical considerations, a framework for an Environmental Economic Accounting System as discussed by the authors, with modules designed to quantify the external (environmental) effects of economic activities.
Abstract: Summary Since the “Earth Summit” in Rio de Janeiro in 1992, the term “sustainable development” has defined the third and current phase of environmental policy. A precise and commonly accepted definition of sustainable development (s.d.) is still missing. There are, nevertheless, some elements in the philosophy of sustainable development which, even if they are still vague, could be used as guidelines for a framework of “green” accounting and sustainability indicators. Based on theoretical considerations, the German Federal Statistical Office has developed a framework for an Environmental Economic Accounting System. The objective is to add meaningful modules to the traditional System of National Accounts which are designed to quantify the external (environmental) effects of economic activities. The framework has already been realised and published to an extent that is relevant for actual policy making in Germany.

37 citations


Journal ArticleDOI
TL;DR: In this paper, the U.S. Bureau of Labor Statistics (BLS) publishes measures of multifactor productivity, which are patterned after the Solow residual, and the rationale for these procedures is explored.
Abstract: Summary The U.S. Bureau of Labor Statistics (BLS) publishes measures of multifactor productivity, which are patterned after the Solow residual. Inputs of capital must be aggregated with inputs of labor. The theory requires a measure of the capital service flow, a rather abstract notion which is rarely observable. The BLS procedures for capital measurement are summarized, and the rationale for these procedures is explored. Implicit measures of capital services are derived from data on property income and data on historical investments which are detailed as to type of asset.
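For orientation, the Solow-residual pattern that such measures follow can be written in standard growth-accounting form (conventional notation, not the BLS's own symbols):

$$\Delta \ln A_t \;=\; \Delta \ln Y_t \;-\; s_K\,\Delta \ln K_t \;-\; s_L\,\Delta \ln L_t,$$

where $Y_t$ is output, $K_t$ the (rarely observable) capital service flow discussed in the paper, $L_t$ labor input, and $s_K$, $s_L$ the cost shares, with $s_K + s_L = 1$ under constant returns to scale; $\Delta \ln A_t$ is multifactor productivity growth.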

27 citations


Journal ArticleDOI
TL;DR: A brief history of the evolution of official and academic Statistics in India is given in this paper; it focuses mainly on the period 1930 to 1960 but traces origins to antiquity and recent history.
Abstract: Summary This is a brief history of the evolution of official and academic Statistics in India which focuses mainly on the period 1930 to 1960 but traces its origins to antiquity and recent history. We also comment on how Statistics has continued to evolve since the 1960s. This is a history of institutions, of the people who built and shaped them, and of ideas.

22 citations


Journal ArticleDOI
Alan G. White
TL;DR: In this paper, the authors present an up-to-date survey of the principal sources of measurement error or bias in the Consumer Price Index (CPI), including the commodity substitution bias, the outlet substitution bias and the elementary index bias.
Abstract: Summary The Consumer Price Index (CPI) measures the cost of purchasing a fixed basket of goods at a fixed sample of outlets over time, and can be thought of as a practical approximation to a "true" cost-of-living index, and as a measure of general inflation for the economy. In recent times, concerns have grown over the possibility that the U.S. CPI overstates the rate of inflation. Annual changes in the CPI are used to adjust social security benefits, and wage contracts are often indexed to CPI changes. To the extent that the CPI overstates the rate of inflation, individuals are being compensated for changes in the cost of living that have not occurred, with enormous implications for government fiscal budgets. This paper presents an up-to-date survey of the principal sources of measurement error or bias in the CPI. A number of sources of bias are examined, including the commodity substitution bias, the outlet substitution bias, and the elementary index bias. Traditional bilateral index number theory assumes that the number of goods remains constant over the pricing period and, furthermore, that the goods are of unchanging quality. Changes in either of these give rise to two further biases: the new goods bias and the quality bias.
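To make the substitution biases concrete, recall the fixed-basket (Laspeyres-type) form underlying a CPI, in standard index-number notation (an illustration, not the paper's own derivation):

$$P_L \;=\; \frac{\sum_i p_{i,t}\,q_{i,0}}{\sum_i p_{i,0}\,q_{i,0}},$$

where $p_{i,0}$, $q_{i,0}$ are base-period prices and quantities and $p_{i,t}$ are current prices. Because the basket $q_{i,0}$ is held fixed while consumers actually substitute toward goods whose relative prices fall, $P_L$ tends to exceed a true cost-of-living index; this gap is the commodity substitution bias, and analogous fixed-weight reasoning at the outlet and elementary-aggregate levels produces the other biases listed.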

Journal ArticleDOI
TL;DR: Gauss showed that least squares fails to produce a unique solution only when the problem is indeterminate; as discussed by the authors, this note considers his argument and the notion of indeterminacy underlying it, and relates the argument to twentieth-century discussions of estimability and identifiability.
Abstract: Summary Gauss showed that least squares fails to produce a unique solution only when the problem is indeterminate. This note considers his argument and the notion of indeterminacy underlying it. It also relates the argument to twentieth-century discussions of estimability and identifiability.
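In modern matrix notation (a restatement, not Gauss's own), the point can be put as follows: least squares solves the normal equations

$$X^{\top}X\,\hat{\beta} = X^{\top}y,$$

which have a unique solution exactly when $X$ has full column rank. When $\operatorname{rank}(X) < p$, the problem is indeterminate: infinitely many $\hat{\beta}$ give the same fitted values $X\hat{\beta}$, so individual coefficients are not estimable (identifiable), although certain linear combinations $c^{\top}\beta$ may still be.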

Journal ArticleDOI
TL;DR: In this article, the mismeasurement or difficult measurement of complex (multidimensional) and dynamic social phenomena is linked with the social science gap and with the need for mastering complexity and instability, separating the voluntary from the involuntary, the intended from the unintended, and opportunities from risks, and dominating the uncertain implications of social change.
Abstract: Summary Societal change, which takes a variety of directions and forms and in no way can be assimilated or reduced to a single dimension, is often accompanied by a perception of insufficient understanding and lack of control. There is a frustrated need for mastering complexity and instability, separating the voluntary from the involuntary, the intended from the unintended, and opportunities from risks, getting to the real causes, and dominating the uncertain implications of social change. Social change catches us unprepared and confused. In this context statistics are generally considered a fundamental instrument of knowledge, but also part of the problem! In the public debate and in the specialized literature, the ability to measure social phenomena through current statistics and indicators is increasingly questioned. Data, it is claimed, are lacking, particularly longitudinal data; their quality (accuracy, relevance, timeliness, comparability, etc.) should be improved; indicators do not provide early warning signals, policy performance evaluation, or a precise indication of outcomes. Statistics cannot be used as a reliable and timely basis for decision making by individuals, organizations and governments, or for understanding these decisions. In some cases, statistics have been accused of giving a misleading and false picture of reality: do we measure the real extent of social exclusion and unemployment? Do we fully capture the quality of life and the degradation of the environment? Mismeasurement has been deemed by some commentators to be responsible for the wrong focus in inflation and stabilization policies, science and technology, unemployment and poverty. The productivity paradox, the informal economy, and the failure to measure welfare and the quality of urban life are instances where statistics do not seem to provide complete and satisfactory answers to the demand for information and knowledge. Our paper illustrates how, quite independently of measurement techniques and data production processes, the inadequacy of the conceptual framework may explain mismeasurement in relation to complex (multidimensional) and dynamic social phenomena. It is then to social theories, explanations and interpretations that statisticians need to turn in order to come to grips with the new challenges in social measurement. We develop this thesis by looking at a few cases where measurement issues can be connected to both theoretical and empirical difficulties. The statistical gap which reveals itself in the mismeasurement or difficult measurement of social phenomena is closely interconnected with the social science gap. Only close collaboration between statisticians and social scientists can bring about continuous advancement in social science and quality improvement in social statistics.

Journal ArticleDOI
TL;DR: In this paper, the main results obtained in semi-and non-parametric Bayesian analysis of duration models are reviewed in line with Ferguson's pioneering papers, and a Bayesian semiparametric version of the proportional hazards model is considered.
Abstract: The object of this paper is to review the main results obtained in semi- and non-parametric Bayesian analysis of duration models. Standard nonparametric Bayesian models for independent and identically distributed observations are reviewed in line with Ferguson's pioneering papers. Recent results on the characterization of Dirichlet processes and on nonparametric treatment of censoring and of heterogeneity in the context of mixtures of Dirichlet processes are also discussed. The final section considers a Bayesian semiparametric version of the proportional hazards model.
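As background for the review, two standard ingredients it builds on can be stated compactly (textbook notation, not the paper's own): the Dirichlet process is conjugate under i.i.d. sampling,

$$G \sim \mathrm{DP}(\alpha, G_0),\quad x_1,\dots,x_n \mid G \overset{\text{iid}}{\sim} G \;\Longrightarrow\; G \mid x_{1:n} \sim \mathrm{DP}\!\left(\alpha + n,\ \frac{\alpha\,G_0 + \sum_{i=1}^{n}\delta_{x_i}}{\alpha + n}\right),$$

and the proportional hazards model specifies $h(t \mid z) = h_0(t)\exp(z^{\top}\beta)$; a Bayesian semiparametric version keeps the parametric regression part $\beta$ while placing a nonparametric process prior on the baseline hazard $h_0$.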

Journal ArticleDOI
TL;DR: This paper draws on experiences at the Australian Bureau of Statistics in the development and use of a central repository (the “Information Warehouse”) to manage data and metadata and makes reference to comparable initiatives at other national statistical agencies.
Abstract: Summary Faster and more versatile technology is fuelling user demand for statistical agencies to produce an ever wider range of outputs, and to ensure those outputs are consistent and mutually related to the greatest extent possible. Statistical integration is an approach for enhancing the information content of separate statistical collections conducted by an agency, and is necessary for consistency. It has two aspects, conceptual and physical, the former being a prerequisite for the latter. This paper focuses on methods for achieving statistical integration through better management of metadata. It draws on experiences at the Australian Bureau of Statistics in the development and use of a central repository (the “Information Warehouse”) to manage data and metadata. It also makes reference to comparable initiatives at other national statistical agencies. The main conclusions are as follows. First, a prototyping approach is required in developing new functionality to support statistical integration, as it is not clear in advance what tools are needed. Second, metadata from separate collections cannot easily be rationalised until they have been loaded into a central repository and are visible alongside one another so that their inconsistencies are evident. Third, to be effective, conceptual integration must be accompanied by physical integration. Fourth, there is great scope for partnerships and exchange of ideas between agencies. Finally, statistical integration must be built into the ongoing collection processes and viewed as a way of life.

Journal ArticleDOI
David S. Moore
TL;DR: In Smith (1997) it is argued that statisticians must recognise the limitations of statistical thinking and not make grandiose claims for the universality of their discipline based on the fact that variation is universal.
Abstract: is hard to justify. A common argument is to say that if the individual were drawn at random then the error would be random, but this is random sampling error, not random process error. It was this type of argument that caused problems for WP's students, and they were right to be troubled. If as statisticians we claim to have "correct" methods of model specification, then in many areas we will be ignored and our valuable contributions to design and measurement will be lost. In Smith (1997) I argue that we must recognise the limitations of statistical thinking and not make grandiose claims for the universality of our discipline based on the fact that variation is universal. The real challenge of this paper is to teachers. How can the ideas of scientific thinking be incorporated into the teaching of statistics? If problems are context specific, then would it be better to teach statistical thinking to applied scientists, who should be familiar with at least one context, than to mathematicians who work in a context-free zone? Certainly I find teaching engineers a more rewarding experience than teaching mathematicians, because they are problem driven. Perhaps mathematicians should be forced to study an applied science before they embark on a statistics


Journal ArticleDOI
TL;DR: In this paper, a systematic approach to evaluate the performance of national statistical systems is proposed, starting from the so-called Fundamental Principles of Official Statistics, which were adopted by the United Nations some time ago.
Abstract: Summary This article proposes a systematic approach to evaluating the performance of national statistical systems. Its starting point is the so-called Fundamental Principles of Official Statistics, which were adopted by the United Nations some time ago. The aim is to translate the principles into operational terms and concrete questions about ‘how we are measuring up’.


Journal ArticleDOI
TL;DR: In this paper, the authors develop a general model of statistical thinking, based on existing literature, but also on in-depth interviews they did with students and statisticians as well as on experience from working as consultants in statistics.
Abstract: It is a widely shared concern that students should learn to think statistically. However, it is still unclear what exactly we mean by "statistical thinking" and, secondly, it is unclear how we can organize adequate learning processes that support its development. Wild & Pfannkuch rightly point out that the advice "let the students do projects" is not enough: learning statistical thinking just by "osmosis" cannot be the answer. The authors are convinced that the "... cornerstone of teaching in any area is the development of a theoretical structure with which to make sense of experience, to learn from it and transfer insights to others." (p. 224). This is the background motivation for the authors' fresh look at the question "What is statistical thinking?" They develop a general model of statistical thinking, based on the existing literature, on in-depth interviews they conducted with students and statisticians, and on experience from working as consultants in statistics.

Journal ArticleDOI
TL;DR: Main features and key issues of the Chinese population census and the census data are examined and some fundamental considerations in building a computerized census data system and concerning the ways in which a system might be developed are discussed.
Abstract: China has conducted four population censuses since 1949. A large amount of important information about population, education, employment, migration and urbanization was collected in the most recent 1990 census. This paper will examine the main features and key issues of the Chinese population census and the census data. Some fundamental considerations in building a computerized census data system, and concerning the ways in which such a system might be developed, will be discussed. The main objectives and features of the on-going Population GIS of China project will also be examined.


Journal ArticleDOI
TL;DR: This paper describes international statistical standards in the context of the product life cycle, which is partitioned into four main “megaphases”: conception; development; delivery; and maturity and death.
Abstract: Summary This paper describes international statistical standards in the context of the product life cycle. To set the stage, the historical evolution of standardization is traced from the Xia Dynasty of China to the present. The transition from local standards geared for manufacturing to national and then international standards is highlighted with acceptance sampling standards. International statistical standards now cover a broad range of topics beyond acceptance sampling, so a scheme is needed to organize them into a coherent structure. The product life cycle provides just such a framework. The product life cycle (which is understood to include services as well) is partitioned into four main “megaphases”: conception; development; delivery; and maturity and death. Each megaphase is linked to relevant statistical methods in general and statistical standards in particular. A gap analysis identifies potential future directions for statistical standards development and the attendant role that statisticians can continue to play in this arena.
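For readers unfamiliar with the acceptance sampling standards used here as the running example, a minimal sketch of a single sampling plan and its operating characteristic (OC) curve; the plan parameters below are hypothetical, not drawn from any cited standard.

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Single sampling plan (n, c): inspect n items from a lot and accept it
    if at most c are defective. Under a binomial model with defect rate p,
    the acceptance probability is the binomial CDF evaluated at c."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

# Tracing P(accept) against p sketches the plan's OC curve.
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f}  P(accept) = {prob_accept(50, 2, p):.3f}")
```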



Journal ArticleDOI
TL;DR: The importance of variation is at the core of any understanding of "reality" whether it is the physical reality (Quantum Physics and statistical mechanics including thermodynamics), the biological reality (evolution), cognitive structures (e.g., memetics, see Dennett (1995)), or the social reality (agency theory, Giddens (1984) and the theory of social becoming, Sztompka (1991)).
Abstract: "The world is full of variation" the late quality expert Kaoru Ishikawa said (Ishikawa, 1985). Indeed, variation is at the core of any understanding of "reality" whether it is the physical reality (Quantum Physics and statistical mechanics including thermodynamics), the biological reality (evolution), cognitive structures (e.g., memetics, see Dennett (1995)), or the social reality (agency theory, Giddens (1984) and the theory of social becoming, Sztompka (1991)). The importance of variation was clear to Deming (1993) in his discussion of profound knowledge as well as to his idea historical mentors, Shewhart (1931, 1939) and Lewis (1929). Since everyone has to cope with variation in one way or another this is also the case with top management, only their realm of influence is wider than that of most others and failure to handle variation has more far-reaching consequences. It is crucial that variation is carefully considered in company decision making. Usually, this is also done, but then most often in an unreflective and inconsistent manner.