Open Access · Journal Article · DOI

Are university rankings useful to improve research? A systematic review

TL;DR
Utilizing a combined approach of the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may provide institutions with more effective feedback for research improvement.
Abstract
Introduction: Concerns about the reproducibility and impact of research urge improvement initiatives. Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems when examining quality and outcomes is unclear. The purpose of this study was to evaluate the usefulness of ranking systems and identify opportunities to support research quality and performance improvement.

Methods: A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. Eligibility requirements included: ranking at least 100 doctoral-granting institutions; being currently produced on an ongoing basis; including both global and US universities; publishing the rank-calculation methodology in English; and calculating ranks independently. Ranking systems also had to include some measures of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Aggregation methods, the validity of research and academic quality indicators, and suitability for quality improvement within ranking systems were also explored.

Results: A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are 100% focused on research performance. For those reporting weighting, 76% of the total ranks are attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no generally accepted academic quality indicators in ranking systems.

Discussion: No single ranking system provides a comprehensive evaluation of research and academic quality.
Utilizing a combined approach of the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may provide institutions with more effective feedback for research improvement. Rankings that rely extensively on subjective reputation and “luxury” indicators, such as award-winning faculty or alumni who are high-ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore measurement of university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more university ranking systems are used in efforts to improve academic prominence and research performance.
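The paper does not specify how ranks from the three recommended systems should be combined. As one naive illustration only, a simple mean of an institution's per-system ranks could be computed; the institutions, rank values, and the averaging rule below are all hypothetical assumptions, not the authors' method.

```python
# Hypothetical ranks for two illustrative institutions across the three
# systems suggested in the paper; a plain mean is one naive way to combine
# them (lower combined value = better overall standing).
ranks = {
    "Univ X": {"Leiden": 12, "Reuters Innovative": 30, "SCImago": 18},
    "Univ Y": {"Leiden": 25, "Reuters Innovative": 10, "SCImago": 22},
}

def combined_rank(per_system):
    """Average an institution's ranks across ranking systems."""
    return sum(per_system.values()) / len(per_system)

combined = {name: combined_rank(r) for name, r in ranks.items()}
print(combined)  # {'Univ X': 20.0, 'Univ Y': 19.0}
```

Real aggregation would need to handle systems with different coverage and scales (e.g. normalizing by the number of ranked institutions), which this sketch deliberately omits.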



Citations
Journal Article · DOI

Web of Science (WoS) and Scopus: The Titans of Bibliographic Information in Today’s Academic World

Raminta Pranckutė
12 Mar 2021
TL;DR: In this paper, the authors present a comprehensive description of the two main bibliographic databases (DBs), gathering in one place the findings presented in the most recent literature and the information provided by the DBs' owners.
Journal Article · DOI

Measuring the academic reputation through citation networks via PageRank

TL;DR: In this paper, the authors apply the PageRank algorithm to five citation networks to measure a quantitative and reliable proxy of an institution's academic reputation, and compare their findings with well-established impact indicators and academic rankings.
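The idea of running PageRank over a citation network can be sketched with plain power iteration. The toy graph below is hypothetical (the paper's five citation networks, damping factor, and node granularity are not reproduced here); an edge points from a citing node to the node it cites, so rank flows toward frequently cited institutions.

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank.

    links: dict mapping each node to the list of nodes it cites.
    Returns a dict of node -> rank score (scores sum to 1).
    """
    nodes = set(links) | {v for outs in links.values() for v in outs}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            outs = links.get(u, [])
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:
                # Dangling node (cites nothing): spread its rank evenly.
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

# Hypothetical toy citation graph: B cites A; C cites A and B; A cites nothing.
citations = {"B": ["A"], "C": ["A", "B"], "A": []}
ranks = pagerank(citations)
# A, being cited most, receives the highest rank.
```

The dangling-node handling and uniform teleportation term follow the standard PageRank formulation; production work would typically use a library implementation such as NetworkX's `pagerank` rather than this sketch.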
Journal Article · DOI

A longitudinal analysis of university rankings

TL;DR: In this paper, the authors present a methodology for assessing the quality of higher education institutions using university rankings, which have become an important tool for this purpose.
Journal Article · DOI

Scientific output scales with resources. A comparison of US and European universities

TL;DR: It is demonstrated empirically that international rankings are by and large richness measures and, therefore, can be interpreted only by introducing a measure of resources.
Journal Article · DOI

Experimental, algorithmic, and theoretical analyses for selecting an optimal laboratory method to evaluate working fluid damage in coal bed methane reservoirs

TL;DR: In this paper, the average and relative formation damage in coal bed methane reservoirs were calculated and analyzed using a simple ranking method, a statistical screening algorithm, and a theoretical method. The results showed, first, that there is a regular distribution of absolute reservoir damage measured by the laboratory methods; and second, that the application priority of permeability measurement methods for formation damage evaluation in coal bed methane reservoirs is as follows: cuttings pulse decay method, constant flow rate method, nuclear magnetic resonance method, plunger pulse decay method, and pressure oscillation method.
References
Journal Article · DOI

Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement

TL;DR: Moher et al. introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Journal Article

Preferred reporting items for systematic reviews and meta-analyses: the PRISMA Statement.

TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Journal Article · DOI

Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement

TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.
Journal Article · DOI

An index to quantify an individual's scientific research output

TL;DR: The index h, defined as the number of papers with citation number ≥h, is proposed as a useful index to characterize the scientific output of a researcher.
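The definition above translates directly into code: sort a researcher's citation counts in descending order and find the largest h for which the h-th paper still has at least h citations. This is a straightforward implementation of Hirsch's definition, with illustrative citation counts.

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Example: 4 papers have >= 4 citations, but only 4 have >= 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the h-index ignores both the total citation count above the threshold and the long tail of lightly cited papers, which is part of why ranking systems pair it with other indicators.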
Book

Reliability and Validity Assessment

TL;DR: The paper shows how reliability is assessed by the retest method, alternative-forms procedure, split-halves approach, and internal consistency method.