Institution

American Statistical Association

Alexandria, Virginia, United States
About: The American Statistical Association is an organization based in Alexandria, Virginia, United States. It is known for research contributions in the topics of Estimator and Normal distribution. The organization has 48 authors who have published 49 publications receiving 5,008 citations. It is also known as ASA and amstat.org.

Papers
Journal ArticleDOI
TL;DR: The American Statistical Association (ASA) developed a policy statement on p-values and statistical significance, approved in January 2016, motivated by the ASA Board's concern with the reproducibility and replicability of scientific conclusions.
Abstract: Cobb’s concern was a long-worrisome circularity in the sociology of science based on the use of bright lines such as p < 0.05: “We teach it because it’s what we do; we do it because it’s what we teach.” This concern was brought to the attention of the ASA Board. The ASA Board was also stimulated by highly visible discussions over the last few years. For example, ScienceNews (Siegfried 2010) wrote: “It’s science’s dirtiest secret: The ‘scientific method’ of testing hypotheses by statistical analysis stands on a flimsy foundation.” A November 2013 article in Phys.org Science News Wire (2013) cited “numerous deep flaws” in null hypothesis significance testing. A ScienceNews article (Siegfried 2014) on February 7, 2014, said “statistical techniques for testing hypotheses...have more flaws than Facebook’s privacy policies.” A week later, statistician and “Simply Statistics” blogger Jeff Leek responded. “The problem is not that people use P-values poorly,” Leek wrote, “it is that the vast majority of data analysis is not performed by people properly trained to perform data analysis” (Leek 2014). That same week, statistician and science writer Regina Nuzzo published an article in Nature entitled “Scientific Method: Statistical Errors” (Nuzzo 2014). That article is now one of the most highly viewed Nature articles, as reported by altmetric.com (http://www.altmetric.com/details/2115792#score). Of course, it was not simply a matter of responding to some articles in print. The statistical community has been deeply concerned about issues of reproducibility and replicability of scientific conclusions. Without getting into definitions and distinctions of these terms, we observe that much confusion and even doubt about the validity of science is arising. Such doubt can lead to radical choices, such as the one taken by the editors of Basic and Applied Social Psychology, who decided to ban p-values (null hypothesis significance testing) (Trafimow and Marks 2015).
Misunderstanding or misuse of statistical inference is only one cause of the “reproducibility crisis” (Peng 2015), but to our community, it is an important one. When the ASA Board decided to take up the challenge of developing a policy statement on p-values and statistical significance, it did so recognizing this was not a lightly taken step. The ASA has not previously taken positions on specific matters of statistical practice. The closest the association has come to this is a statement on the use of value-added models (VAM) for educational assessment (Morganstein and Wasserstein 2014) and a statement on risk-limiting post-election audits (American Statistical Association 2010). However, these were truly policy-related statements. The VAM statement addressed a key educational policy issue, acknowledging the complexity of the issues involved, citing limitations of VAMs as effective performance models, and urging that they be developed and interpreted with the involvement of statisticians. The statement on election auditing was also in response to a major but specific policy issue (close elections in 2008), and said that statistically based election audits should become a routine part of election processes. By contrast, the Board envisioned that the ASA statement on p-values and statistical significance would shed light on an aspect of our field that is too often misunderstood and misused in the broader research community and, in the process, provide the community a service. The intended audience would be researchers, practitioners, and science writers who are not primarily statisticians. Thus, this statement would be quite different from anything previously attempted. The Board tasked Wasserstein with assembling a group of experts representing a wide variety of points of view. On behalf of the Board, he reached out to more than two dozen such people, all of whom said they would be happy to be involved.
Several expressed doubt about whether agreement could be reached, but those who did said, in effect, that if there was going to be a discussion, they wanted to be involved. Over the course of many months, group members discussed what format the statement should take, tried to more concretely visualize the audience for the statement, and began to find points of agreement. That turned out to be relatively easy to do, but it was just as easy to find points of intense disagreement. The time came for the group to sit down together to hash out these points, and so in October 2015, 20 members of the group met at the ASA Office in Alexandria, Virginia. The 2-day meeting was facilitated by Regina Nuzzo, and by the end of the meeting, the group had developed a good set of points around which the statement could be built. The next 3 months saw multiple drafts of the statement, reviewed by group members, by Board members (in a lengthy discussion at the November 2015 ASA Board meeting), and by members of the target audience. Finally, on January 29, 2016, the Executive Committee of the ASA approved the statement. The statement development process was lengthier and more controversial than anticipated. For example, there was considerable discussion about how best to address the issue of multiple potential comparisons (Gelman and Loken 2014). We debated at some length the issues behind the words “a p-value near 0.05 taken by itself offers only weak evidence against the null hypothesis.”

4,361 citations

Journal ArticleDOI
Abstract: Some of you exploring this special issue of The American Statistician might be wondering if it’s a scolding from pedantic statisticians lecturing you about what not to do with p-values, without offering any real ideas of what to do about the very hard problem of separating signal from noise in data and making decisions under uncertainty. Fear not. In this issue, thanks to 43 innovative and thought-provoking papers from forward-looking statisticians, help is on the way.

1,761 citations

Journal ArticleDOI
TL;DR: Survey researchers have long been aware that asking people to participate in surveys, whether through interviews in person or on the telephone or through a self-administered questionnaire, might entail a sacrifice of time as well as some psychological discomfort, depending on the nature of the inquiry.
Abstract: "RESPONDENT BURDEN" is a relatively recent concern for the survey profession, at least in the term's specific reference to the presumed hardships entailed in being a survey participant. Of course, survey researchers have long been aware that asking people to participate in surveys, through interviews in person or on the telephone, or through a self-administered questionnaire, might entail a sacrifice of time as well as some psychological discomfort, depending on the nature of the inquiry. In fact, warnings against overly long questionnaires or interviews surfaced as far back as the 1920s (e.g., Chapin, 1920), and continued to appear sporadically during the following decades (e.g., Young, 1939; Ruch, 1941). Despite these concerns, however, the profession generally felt that if a survey were competently fielded, with pleasant and tactful interviewers…

175 citations

Book ChapterDOI
01 Jan 2020
TL;DR: In February 2014, George Cobb, Professor Emeritus of Mathematics and Statistics at Mount Holyoke College, posed these questions to an ASA discussion forum.
Abstract: In February 2014, George Cobb, Professor Emeritus of Mathematics and Statistics at Mount Holyoke College, posed these questions to an ASA discussion forum:

61 citations

Journal ArticleDOI
Abstract: Its aim was to stop the misuse of statistical significance testing. But Robert Matthews argues that little has changed in the 12 months since the ASA's intervention.

57 citations


Network Information
Related Institutions (5)
Yale University
220.6K papers, 12.8M citations

71% related

University of Washington
305.5K papers, 17.7M citations

71% related

Johns Hopkins University
249.2K papers, 14M citations

70% related

Duke University
200.3K papers, 10.7M citations

70% related

Columbia University
224K papers, 12.8M citations

70% related

Performance Metrics
No. of papers from the Institution in previous years
Year  Papers
2023  1
2021  4
2020  3
2019  5
2018  2
2017  5