Open Access · Journal Article · DOI

Appropriate Use of Metrics in Research Assessment of Autonomous Academic Institutions

Henk F. Moed
Vol. 2, Iss. 1, p. 1
TLDR
This article criticizes a "desktop" model for the use of metrics in the assessment of academic research performance and proposes a series of alternatives to that desktop application model: combine metrics and expert knowledge; assess research groups rather than individuals; use indicators to define minimum standards; and use funding formulas that reward promising, emerging research groups.
Abstract
Policy highlights

• This paper criticizes a "quick-and-dirty" desktop model for the use of metrics in the assessment of academic research performance and proposes a series of alternatives.
• It considers frequently used indicators: publication and citation counts, university rankings, journal impact factors, and social media-based metrics.
• It argues that research output and impact are multi-dimensional concepts, and that these indicators suffer from severe limitations when used to assess individuals and groups: metrics for individual researchers suggest a "false precision"; university rankings are semi-objective and semi-multidimensional; informetric evidence for the validity of journal impact measures is thin; and social media-based indicators should at best be used as complementary measures.
• The paper proposes alternatives to the desktop application model: combine metrics and expert knowledge; assess research groups rather than individuals; use indicators to define minimum standards (see the illustrative sketch after this abstract); and use funding formulas that reward promising, emerging research groups.
• It proposes a two-level model in which institutions develop their own assessment and funding policies, combining metrics with expert and background knowledge, while at the national level a meta-institutional agency marginally tests the institutions' internal assessment processes.
• Under this model, it is an inappropriate use of metrics for a meta-institutional agency to concern itself directly with the assessment of individuals or groups within an institution.
• The proposed model is not politically neutral: its normative assumption is the autonomy of academic institutions. The meta-institutional entity acknowledges that quality control is the primary responsibility of the institutions themselves.
• Rather than having one meta-national agency define what research quality is and how it should be measured, the proposed model lets each institution define its own quality criteria and internal policy objectives, and make these public.
• This freedom of institutions is accompanied by a series of obligations: as a necessary condition, institutions should conceptualize and implement their internal quality control and funding procedures.
• Although a meta-institutional agency may help to improve an institution's internal processes, a repeatedly negative outcome of a marginal test may have negative consequences for the institution's research funding.

This paper discusses a subject as complex as the assessment of scientific-scholarly research for evaluative purposes. It focuses on the use of informetric or bibliometric indicators in academic research assessment, proposes a series of analytical distinctions, and draws conclusions about the validity and usefulness of indicators frequently used in the assessment of individual scholars, scholarly institutions, and journals. The paper criticizes a so-called desktop application model, based on a set of simplistic, poorly founded assumptions about the potential of indicators and the essence of research evaluation, and proposes a more reflexive, theoretically founded, two-level model for the use of metrics in academic research assessment.
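To make the "minimum standards" alternative concrete, here is a minimal Python sketch. It is not taken from the paper: all indicator names and thresholds are hypothetical, and under the proposed two-level model each institution would set its own. The point it illustrates is that metrics flag groups falling below a floor for closer expert review, rather than producing a ranking or a verdict.

    # Hypothetical minimum standards; a real institution would set its own
    # thresholds per field, consistent with the paper's two-level model.
    MIN_STANDARDS = {
        "publications_per_fte": 1.0,     # assumed indicator name
        "external_funding_share": 0.10,  # assumed indicator name
    }

    def below_minimum(group_metrics, standards=MIN_STANDARDS):
        """Return the indicators on which a research group falls short.
        A non-empty result triggers expert review, not an automatic verdict."""
        return [name for name, floor in standards.items()
                if group_metrics.get(name, 0.0) < floor]

    # Example: this group is flagged on one indicator only.
    group = {"publications_per_fte": 0.8, "external_funding_share": 0.25}
    print(below_minimum(group))  # ['publications_per_fte']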



Citations

The San Francisco Declaration on Research Assessment

TL;DR: The San Francisco Declaration on Research Assessment (DORA) was initiated by the American Society for Cell Biology (ASCB) together with a group of editors and publishers of scholarly journals, in recognition of the need to improve the ways in which the outputs of scientific research are evaluated.
Journal Article · DOI

The new research assessment reform in China and its implementation

TL;DR: In this paper, the authors provide constructive ideas, based on international experience, for implementing China's new research assessment policy. Suggested solutions include a farewell to "SCI worship" and a new priority on local relevance: the optimal balance between globalization and local relevance must be allowed to differ by type and field of research.
Journal Article · DOI

The h-index formalism

TL;DR: This article provides an overview of the development of the h-index formalism, beginning with Hirsch's original formulation and moving on to the latest versions, omitting generalizations such as the g-index and applications in research assessment.
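As background for this entry (an illustrative addition, not part of the indexed article): Hirsch's original formulation defines a researcher's index as the largest h such that h of their papers have received at least h citations each. A minimal Python sketch:

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations each
        (Hirsch's original formulation)."""
        ranked = sorted(citations, reverse=True)  # most-cited papers first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the paper at this rank still has enough citations
            else:
                break
        return h

    # Five papers with 10, 8, 5, 2 and 1 citations give h = 3: three papers
    # have at least 3 citations each, but there are not four with at least 4.
    print(h_index([10, 8, 5, 2, 1]))  # 3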
Journal Article · DOI

Two Decades of Research Assessment in Italy. Addressing the Criticisms

TL;DR: In this paper, the authors address the most important criticisms raised against Italy's research assessment initiatives, check the critics' arguments against empirical evidence, and also address the controversial issue of unintended and negative consequences of research assessment.
References
Journal Article · DOI

Evaluation practices and effects of indicator use : a literature review

TL;DR: A review of the international literature on evaluation systems, evaluation practices, and metrics (mis)uses was written as part of a larger review commissioned by the Higher Education Funding Co....
Journal Article · DOI

Grand challenges in altmetrics: heterogeneity, data quality and dependencies

TL;DR: In this paper, the authors focus on the current challenges for altmetrics and identify three major issues: heterogeneity, data quality and particular dependencies, with an emphasis on past developments in bibliometrics.
Journal Article · DOI

Usage bibliometrics

TL;DR: A review of the state of the art in usage-based informetrics, i.e., the use of usage data to study the scholarly process, can be found in this article.
Posted Content

The Journal Impact Factor: A brief history, critique, and discussion of adverse effects

TL;DR: This chapter provides a brief history of the indicator and highlights well-known limitations, such as the asymmetry between the numerator and the denominator, differences across disciplines, the insufficient citation window, and the skewness of the underlying citation distributions.
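For orientation (an illustrative addition, not drawn from the chapter): the two-year impact factor of a journal for year Y is the number of citations received in Y by items the journal published in Y-1 and Y-2, divided by the number of citable items (articles and reviews) it published in those two years. The asymmetry mentioned above arises because the numerator counts citations to all items, including editorials and letters, while the denominator counts only citable ones. A minimal sketch with hypothetical numbers:

    def two_year_impact_factor(citations_in_y, citable_items):
        """Two-year JIF for year Y: citations received in Y to anything the
        journal published in Y-1 and Y-2, divided by the citable items
        (articles, reviews) from those years. Numerator and denominator are
        counted over different item types, hence the asymmetry."""
        return citations_in_y / citable_items

    # Hypothetical journal: 600 citations in year Y to the previous two
    # years' content, which comprised 200 citable items -> JIF = 3.0.
    print(two_year_impact_factor(600, 200))  # 3.0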