Institution

University of Virginia

Education · Charlottesville, Virginia, United States
About: University of Virginia is an education organization based in Charlottesville, Virginia, United States. It is known for research contributions in the topics of Population and Poison control. The organization has 52,543 authors who have published 113,268 publications receiving 5,220,506 citations. The organization is also known as U of V and UVa.


Papers
Journal ArticleDOI
TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
Abstract: A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.

5,683 citations
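The abstract above argues that low power not only makes true effects harder to detect but also lowers the probability that a statistically significant result reflects a true effect. The Python sketch below illustrates that relationship with Bayes' rule; the power, significance threshold, and prior values are assumptions chosen for the example, not numbers from the paper.

```python
# Illustration of how low power reduces the chance that a significant
# result reflects a true effect. All numbers below are assumptions for
# the sake of the example, not values from the paper.

def positive_predictive_value(power: float, alpha: float, prior: float) -> float:
    """P(effect is real | result is significant), by Bayes' rule."""
    true_positives = power * prior            # real effects that reach significance
    false_positives = alpha * (1.0 - prior)   # null effects that reach significance
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    alpha = 0.05   # conventional significance threshold
    prior = 0.2    # assumed share of tested hypotheses that are actually true
    for power in (0.8, 0.5, 0.2):
        ppv = positive_predictive_value(power, alpha, prior)
        print(f"power={power:.1f}  ->  PPV={ppv:.2f}")
    # Under these assumptions, dropping power from 0.8 to 0.2 lowers the
    # probability that a significant result is real from about 0.80 to 0.50.
```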

Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013), as discussed by the authors, provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.

5,668 citations

Posted Content
TL;DR: UTAUT provides a useful tool for managers needing to assess the likelihood of success for new technology introductions and helps them understand the drivers of acceptance in order to proactively design interventions targeted at populations of users that may be less inclined to adopt and use new systems.
Abstract: Information technology (IT) acceptance research has yielded many competing models, each with different sets of acceptance determinants. In this paper, we: (1) review user acceptance literature and discuss eight prominent models, (2) empirically compare the eight models and their extensions, (3) formulate a unified model that integrates elements across the eight models, and (4) empirically validate the unified model. The eight models reviewed are the theory of reasoned action, the technology acceptance model, the motivational model, the theory of planned behavior, a model combining the technology acceptance model and the theory of planned behavior, the model of PC utilization, the innovation diffusion theory, and the social cognitive theory. Using data from four organizations over a six-month period with three points of measurement, the eight models explained between 17 percent and 53 percent of the variance in user intentions to use information technology. Next, a unified model, called the Unified Theory of Acceptance and Use of Technology (UTAUT), was formulated, with four core determinants of intention and usage, and up to four moderators of key relationships. UTAUT was then tested using the original data and found to outperform the eight individual models (adjusted R2 of 69 percent). UTAUT was then confirmed with data from two new organizations with similar results (adjusted R2 of 70 percent). UTAUT thus provides a useful tool for managers needing to assess the likelihood of success for new technology introductions and helps them understand the drivers of acceptance in order to proactively design interventions (including training, marketing, etc.) targeted at populations of users that may be less inclined to adopt and use new systems. The paper also makes several recommendations for future research including developing a deeper understanding of the dynamic influences studied here, refining measurement of the core constructs used in UTAUT, and understanding the organizational outcomes associated with new technology use.

5,658 citations
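The UTAUT abstract above reports model fit as adjusted R² (17–53 percent for the individual models versus about 69–70 percent for the unified model). As a reminder of what that statistic adjusts for, here is a minimal sketch of the standard adjusted-R² formula; the sample sizes and predictor counts are illustrative assumptions, not the study's data.

```python
# Adjusted R-squared penalizes R-squared for the number of predictors,
# which matters when comparing models of different complexity.
# The inputs below are illustrative, not taken from the UTAUT study.

def adjusted_r2(r2: float, n_obs: int, n_predictors: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1.0 - (1.0 - r2) * (n_obs - 1) / (n_obs - n_predictors - 1)

if __name__ == "__main__":
    # A richer model only "wins" if its extra predictors add enough fit
    # to offset the complexity penalty.
    print(f"{adjusted_r2(r2=0.53, n_obs=200, n_predictors=4):.2f}")    # ~0.52
    print(f"{adjusted_r2(r2=0.70, n_obs=200, n_predictors=12):.2f}")   # ~0.68
```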

Journal ArticleDOI
TL;DR: This article shows that ethnic diversity helps explain cross-country differences in public policies and other economic indicators, and that Sub-Saharan Africa's high ethnic fragmentation explains a significant part of most of its growth-impeding characteristics.
Abstract: Explaining cross-country differences in growth rates requires not only an understanding of the link between growth and public policies, but also an understanding of why countries choose different public policies. This paper shows that ethnic diversity helps explain cross-country differences in public policies and other economic indicators. In the case of Sub-Saharan Africa, economic growth is associated with low schooling, political instability, underdeveloped financial systems, distorted foreign exchange markets, high government deficits, and insufficient infrastructure. Africa's high ethnic fragmentation explains a significant part of most of these characteristics.

5,648 citations
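Ethnic diversity in this literature is commonly summarized by a fractionalization index: the probability that two randomly selected individuals belong to different ethnic groups. The sketch below computes such an index from group population shares; the shares are invented for illustration and are not drawn from the paper's data.

```python
# Ethnic fractionalization index: probability that two randomly drawn
# individuals belong to different groups, i.e. 1 - sum of squared shares.
# The group shares below are invented for illustration only.

def fractionalization(shares: list[float]) -> float:
    total = sum(shares)
    return 1.0 - sum((s / total) ** 2 for s in shares)

if __name__ == "__main__":
    homogeneous = [0.9, 0.1]                  # one dominant group -> low index
    diverse = [0.3, 0.25, 0.2, 0.15, 0.1]     # several similar-sized groups -> high index
    print(f"homogeneous: {fractionalization(homogeneous):.2f}")  # 0.18
    print(f"diverse:     {fractionalization(diverse):.2f}")      # about 0.78
```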

Journal ArticleDOI
28 Aug 2015-Science
TL;DR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.

5,532 citations
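Two of the replication criteria mentioned in the abstract above are whether the original effect size falls inside the replication's 95% confidence interval and whether the combined original-plus-replication estimate remains statistically significant. The sketch below illustrates both checks with a normal approximation and a fixed-effect (inverse-variance) combination; all effect sizes and standard errors are hypothetical.

```python
# Replication checks of the kind described above, on invented numbers:
# (1) is the original estimate inside the replication's 95% CI?
# (2) does the pooled (fixed-effect) estimate still exclude zero?

import math

def confidence_interval(estimate: float, std_error: float, z: float = 1.96):
    """Normal-approximation 95% CI around an effect estimate."""
    return estimate - z * std_error, estimate + z * std_error

def fixed_effect_combine(estimates, std_errors):
    """Inverse-variance weighted (fixed-effect) combination of estimates."""
    weights = [1.0 / se ** 2 for se in std_errors]
    combined = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    combined_se = math.sqrt(1.0 / sum(weights))
    return combined, combined_se

if __name__ == "__main__":
    original_effect, original_se = 0.50, 0.15        # hypothetical original study
    replication_effect, replication_se = 0.25, 0.10  # hypothetical replication

    lo, hi = confidence_interval(replication_effect, replication_se)
    print(f"replication 95% CI: ({lo:.2f}, {hi:.2f})")
    print("original inside replication CI:", lo <= original_effect <= hi)

    pooled, pooled_se = fixed_effect_combine(
        [original_effect, replication_effect], [original_se, replication_se]
    )
    # Significant at the 5% level if the pooled CI excludes zero.
    print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
```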


Authors

Showing all 53083 results

Name                  H-index   Papers   Citations
Joan Massagué         189       408      149,951
Michael Rutter        188       676      151,592
Gordon B. Mills       187       1,273    186,451
Ralph Weissleder      184       1,160    142,508
Gonçalo R. Abecasis   179       595      230,323
Jie Zhang             178       4,857    221,720
John R. Yates         177       1,036    129,029
John A. Rogers        177       1,341    127,390
Bradley Cox           169       2,150    156,200
Mika Kivimäki         166       1,515    141,468
Hongfang Liu          166       2,356    156,290
Carl W. Cotman        165       809      105,323
Ralph A. DeFronzo     160       759      132,993
Elio Riboli           158       1,136    110,499
Dan R. Littman        157       426      107,164
Network Information

Related Institutions (5)
Columbia University: 224K papers, 12.8M citations (96% related)
University of Pennsylvania: 257.6K papers, 14.1M citations (96% related)
University of Michigan: 342.3K papers, 17.6M citations (96% related)
University of Washington: 305.5K papers, 17.7M citations (96% related)
Stanford University: 320.3K papers, 21.8M citations (96% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year   Papers
2023   189
2022   783
2021   5,566
2020   5,600
2019   5,001
2018   4,586