Journal Article · DOI

Reframing the Debate on Quality vs Quantity in Research Assessment

11 Feb 2021 · DESIDOC Journal of Library & Information Technology (Defence Scientific Information and Documentation Centre) · Vol. 41, Iss. 1, pp. 70-71
TL;DR: In this paper, the potency of the combined metric for quality of publications (QP) in India's National Institutional Ranking Framework (NIRF) exercise of 2020 was evaluated.
Abstract: The debate on quality versus quantity persists for methodological reasons: the two approaches contrast sharply in their epistemology and run contrary to each other. A single composite indicator that reasonably captures both quality and quantity would be a significant step toward measuring performance. This paper evaluates the potency of the combined metric for quality of publications (QP) in India's National Institutional Ranking Framework (NIRF) exercise of 2020. It also suggests a potential improvement to the quality measurement so that the rankings can be obtained more rationally, with finer tuning.
Citations
Journal Article
TL;DR: Understanding research productivity is a quintessential need for performance evaluation in the realm of evaluative scientometrics, even though establishing benchmarks in research evaluation and implementing all-factor productivity is almost impossible.
Abstract: Research productivity is realized through the combination of a variety of inputs (both tangible and intangible) that enable numerous outputs in varying degrees. Selecting appropriate metrics and translating them into practice through empirical design is a cumbersome task. A single indicator cannot work well in all situations, yet selecting the 'most suitable' one from dozens of indicators is very confusing. Moreover, establishing benchmarks in research evaluation and implementing all-factor productivity is almost impossible. Understanding research productivity is, therefore, a quintessential need for performance evaluation in the realm of evaluative scientometrics. Many enterprises evaluate research performance with little understanding of the dynamics of research and its counterparts. Evaluative scientometrics endorses measures that emerge during the decision-making process through relevant metrics and indicators expressing organizational dynamics. Evaluation processes governed by counting, weighting, normalizing, and then comparing seem trustworthy.
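The closing sentence describes evaluation as a counting, weighting, normalizing, and comparing pipeline. The sketch below is a generic, hypothetical illustration of that pipeline only; the units, citation counts, and field baselines are invented and do not come from the paper.

```python
# Hypothetical sketch of a counting -> weighting/normalizing -> comparing
# evaluation pipeline. All numbers and field baselines are invented.

papers = [
    {"unit": "A", "citations": 12, "field_baseline": 6.0},
    {"unit": "A", "citations": 3,  "field_baseline": 6.0},
    {"unit": "B", "citations": 30, "field_baseline": 15.0},
]

scores = {}
for p in papers:
    normalized = p["citations"] / p["field_baseline"]          # weight/normalize by field
    scores[p["unit"]] = scores.get(p["unit"], 0.0) + normalized  # count (accumulate)

# Compare: rank units by their accumulated normalized scores.
for unit, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(unit, round(score, 2))
```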

4 citations

References
Journal Article · DOI
TL;DR: There are significant differences in citation ageing between research fields, document types, total citation counts, and publication months, but within-group differences are more striking; many papers in the slowest-ageing field may still age faster than many papers in the fastest-ageing field.
Abstract: This paper aims to inform the choice of citation time window for research evaluation by answering three questions: (1) How accurate is it to use citation counts in short time windows to approximate total citations? (2) How does citation ageing vary by research field, document type, publication month, and total citations? (3) Can field normalization improve the accuracy of using short citation time windows? We investigate the 31-year lifetime non-self-citation processes of all Thomson Reuters Web of Science journal papers published in 1980. The correlation between non-self-citation counts in each time window and total non-self-citations across all 31 years is calculated; it is lower for more highly cited papers than for less highly cited ones. There are significant differences in citation ageing between research fields, document types, total citation counts, and publication months. However, within-group differences are more striking; many papers in the slowest-ageing field may still age faster than many papers in the fastest-ageing field. Furthermore, field normalization cannot improve the accuracy of using short citation time windows. Implications and recommendations for choosing adequate citation time windows are discussed.
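The study's central measurement, correlating short-window citation counts with long-run totals, is easy to emulate on synthetic data. The sketch below is a hypothetical illustration only: the cohort size, lognormal impact distribution, exponential ageing profile, and the use of Spearman rank correlation are all assumptions made here, not the study's actual data or method.

```python
# Sketch: how well do short-window citation counts approximate 31-year totals?
# Synthetic data; the original study used the real 1980 Web of Science cohort.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_papers, n_years = 1000, 31

ageing = np.exp(-np.arange(n_years) / 8.0)                 # assumed citation decay profile
rates = rng.lognormal(mean=0.0, sigma=1.0, size=n_papers)  # assumed paper-level impact
yearly = rng.poisson(np.outer(rates, ageing))              # papers x years citation counts

totals = yearly.sum(axis=1)                                # 31-year totals
for window in (1, 2, 3, 5, 10):
    short = yearly[:, :window].sum(axis=1)                 # counts within the short window
    rho, _ = spearmanr(short, totals)
    print(f"window = {window:2d} years: Spearman rho vs. total = {rho:.3f}")
```

Under these assumptions, the rank correlation rises steadily with window length, which is consistent with the abstract's conclusion that larger windows approximate total citations more accurately.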

302 citations


"Reframing the Debate on Quality vs ..." refers background in this paper

  • ...As a comprehensive insight, Wang (2013) showed that a larger window produces a far more accurate result; even field normalization cannot compensate for the use of a short-term citation window in research evaluations....


Journal Article · DOI
TL;DR: This comment draws on key issues raised by the French Academy of Sciences report on bibliometric methods for evaluating individual researchers, as well as on its recommendations for integrating quality assessment.
Abstract: Evaluating individual research performance is a complex task that ideally examines productivity, scientific impact, and research quality, a task that metrics alone have been unable to accomplish. In January 2011, the French Academy of Sciences published a report on current bibliometric (citation-metric) methods for evaluating individual researchers, together with recommendations for integrating quality assessment. Here, I draw on the key issues raised by this report and comment on its suggestions for improving existing research evaluation practices.

87 citations


"Reframing the Debate on Quality vs ..." refers background in this paper

  • ...Sahel (2011) also drew on key issues of the 'quality versus quantity' debate in measuring the performance of individual researchers....


Journal Article · DOI
TL;DR: It is argued that, in addition to quantity and quality, a third attribute, termed "consistency" (ν), has to be introduced for a complete three-dimensional (3D) evaluation of the information production process.
Abstract: Dear Sir, In his recent paper, Vinkler (2013, p. 1085) comes to the conclusion that "substantial theoretical work and several case studies are needed to arrive at a widely acceptable solution concerning the characterization of the eminence of publications of scientists and teams, both qualitatively and quantitatively, by a single indicator." The quantity part is represented by the number of papers, P, in the publication set (for a team or individual scientist), and the quality part is represented by the impact, i = C/P, where C is the total number of citations received by the P papers. I argue that, in addition to quantity and quality, a third attribute, which I shall call "consistency," ν, has to be introduced for a complete three-dimensional (3D) evaluation of the information production process.

There is an interesting parallel with the "3Vs" metaphor of Laney (2011) on 3D data management. One can think of P as indicating volume, the impact i as indicating the velocity with which the ideas in the P papers are communicated through citations C, and the consistency ν as indicating the variety (variation) in the quality of the individual papers in the portfolio.

For a definition of these terms, let us start with Prathap (2011a, 2011b). Let ci, i = 1 to P, represent the citation sequence of all P papers of any portfolio (say, of an individual scientist or team). Then C = Σci is the total number of citations, and the impact i = C/P becomes a proxy for quality (velocity). Prathap (2011a) showed that it is possible to define second-order energy terms such as E = Σci² and X = iC. P itself serves as a measure of the quantity (volume) of effort and is a performance indicator of the zeroth order. One can think of i and P as two orthogonal components of a 3D performance evaluation protocol; then C = iP can be considered a performance indicator of the first order (Prathap, 2011a).

If citation sequences are rearranged in monotonically decreasing order, very high skews are often seen because of a possibly huge variation in the quality of each paper in the publication set. Thus, two different sets can have the same C: one could have achieved it with far fewer papers and a higher quality of overall performance, or with the same number of papers (i.e., the same quality) but a higher degree of consistency or evenness. This suggests that C by itself may not be the final word on the measurement of performance. The product X = iC = i²P becomes a higher-order measure; it is a robust second-order performance indicator (Prathap, 2011a, 2011b). Apart from X, the indicator E = Σci² also appears as a second-order indicator.

The coexistence of X and E allows us to introduce a third attribute that is neither quantity nor quality. In the context of 3D data management, the attribute variety is introduced as a third component (Laney, 2011); in a bibliometric context, the appellation "consistency" may be more meaningful. The simple ratio of X to E can be viewed as the third component of performance, namely the consistency term ν = X/E. Perfect consistency (ν = 1, i.e., when X = E) is a case of absolutely uniform performance; that is, all papers in the set have the same number of citations, ci = c = i. The greater the skew, the larger the concentration of the best work in a very few papers of extraordinary impact, so the inverse of consistency becomes a measure of concentration. Thus, for a complete 3D evaluation of publication activity, we need P, i, and ν. These are the three components of a quantity–quality–consistency or volume–velocity–variety landscape.
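To make the P, i, ν protocol concrete, here is a minimal worked sketch using exactly the definitions above (C = Σci, i = C/P, E = Σci², X = iC, ν = X/E); the two citation sequences are invented for illustration.

```python
# Two portfolios with the same total citations C = 20 but different skew,
# profiled with the letter's definitions. The sequences are hypothetical.

def three_d_profile(citations):
    P = len(citations)                  # quantity (volume)
    C = sum(citations)                  # total citations
    i = C / P                           # impact: proxy for quality (velocity)
    E = sum(c * c for c in citations)   # second-order energy term, E = sum(c_i^2)
    X = i * C                           # second-order performance indicator
    nu = X / E                          # consistency (variety); nu = 1 is uniform
    return P, i, nu

even = [5, 5, 5, 5]      # perfectly uniform: expect nu = 1
skewed = [17, 1, 1, 1]   # same C and P, best work concentrated in one paper

for label, seq in (("even", even), ("skewed", skewed)):
    P, i, nu = three_d_profile(seq)
    print(f"{label:6s}: P = {P}, i = {i:.2f}, nu = {nu:.3f}")
```

Both portfolios share P = 4 and i = 5, so only ν separates them (1.000 versus roughly 0.342), matching the letter's point that C alone is not the final word on performance.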

30 citations

Journal Article · DOI
TL;DR: This work starts from metaphysical considerations and proposes a new name, quasity, for those quantity terms which incorporate a degree of quality and best measure the output; the product of quality and quasity becomes an energy term which serves as a performance indicator.
Abstract: Quality, quantity, performance? An unresolved challenge in performance evaluation, in a very general context that goes beyond scientometrics, has been to determine a single indicator that can combine the quality and quantity of output or outcome. Toward this end, we start from metaphysical considerations and propose a new name, quasity, to describe those quantity terms which incorporate a degree of quality and best measure the output. The product of quality and quasity then becomes an energy term which serves as a performance indicator. Lessons from kinetics, bibliometrics, and sportometrics are used to build up this theme.
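The abstract gives no formulas, but read alongside the consistency letter above, one natural identification is quality = i = C/P with the role of quasity played by the citation total C, making the energy-like performance term X = iC = C²/P. That mapping is an assumption made here for illustration, not Prathap's stated notation; the sketch below simply shows how such a product rewards quality at a fixed citation total.

```python
# Hypothetical illustration of a quality x quasity "energy" product.
# Assumption (not from the abstract): quality i = C/P, quasity = C, X = i*C.

def performance(P, C):
    i = C / P          # quality: citations per paper
    return i * C       # energy-like performance term, X = C**2 / P

# Two invented portfolios with the same total citations C = 100:
print(performance(P=50, C=100))   # 200.0  -- many papers, diluted quality
print(performance(P=10, C=100))   # 1000.0 -- fewer papers, higher quality
```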

29 citations


"Reframing the Debate on Quality vs ..." refers background in this paper

  • ...Prathap (2011) proposed the term 'quasity' as a measure of performance that incorporates certain attributes of both quantity and quality....


Journal Article · DOI
TL;DR: In this article, the authors investigate the sensitivity of individual researchers' productivity rankings to the time of citation observation and show that, as the evaluation citation window varies, the rate of inaccuracy varies across researchers' disciplines.

26 citations