Journal Article

Understanding research productivity in the realm of evaluative scientometrics

26 Jun 2020-Annals of Library and Information Studies (CSIR-NISCAIR, New Delhi)-Vol. 67, Iss: 1, pp 67-69

TL;DR: Understanding research productivity is a quintessential need for performance evaluation in the realm of evaluative scientometrics, and a prerequisite for establishing benchmarks in research evaluation and implementing all-factor productivity.

Abstract: A variety of inputs, both tangible and intangible, combine in varying degrees to produce the outputs that constitute research productivity. Selecting appropriate metrics and translating them into practice through empirical design is a cumbersome task. A single indicator cannot work well across different situations, yet selecting the 'most suitable' one from dozens of indicators is confusing. Moreover, establishing benchmarks in research evaluation and implementing all-factor productivity is almost impossible. Understanding research productivity is therefore a quintessential need for performance evaluation in the realm of evaluative scientometrics. Many enterprises evaluate research performance with little understanding of the dynamics of research and its counterparts. Evaluative scientometrics endorses measures that emerge during the decision-making process, expressed through metrics and indicators that reflect organizational dynamics. Evaluation processes governed by counting, weighting, normalizing, and then comparing appear trustworthy.
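The counting-weighting-normalizing-comparing pipeline mentioned above can be made concrete with a small sketch. The snippet below is an illustration, not the paper's method: it computes a simple field-normalized citation score (observed citations divided by a field baseline) over invented records, one common way such normalization is operationalized.

```python
# Illustrative sketch (not from the paper): a field-normalized citation score,
# one common way "counting, weighting, normalizing, and comparing" is done.
from statistics import mean

# Hypothetical records: (paper_id, field, citations)
papers = [
    ("p1", "physics", 12), ("p2", "physics", 3),
    ("p3", "library_science", 5), ("p4", "library_science", 1),
]

# Field baselines: mean citations per paper within each field.
by_field = {}
for _, field, c in papers:
    by_field.setdefault(field, []).append(c)
baselines = {f: mean(cs) for f, cs in by_field.items()}

# Normalized score: observed citations divided by the field baseline,
# so papers from fields with different citation cultures become comparable.
for pid, field, c in papers:
    print(pid, round(c / baselines[field], 2))
```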



Citations
Journal Article
TL;DR: In this paper, the potency of the combined metric for quality assessment of publications (QP) in India's National Institutional Ranking Framework (NIRF) exercise of 2020 is evaluated.
Abstract: The debate on quality versus quantity persists on methodological grounds. The two approaches contrast sharply in their epistemology and run contrary to each other. A single composite indicator that reasonably captures both quality and quantity would be a significant step toward measuring performance. This paper evaluates the potency of the combined metric for quality assessment of publications (QP) in India's National Institutional Ranking Framework (NIRF) exercise of 2020. It also suggests a potential improvement in quality measurement to obtain rankings more rationally, with finer tuning.
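As an illustration of what a single composite quality-plus-quantity indicator might look like, here is a hedged sketch. The composite_score function, its inputs, and its weights are hypothetical assumptions for exposition; this is not the NIRF QP formula evaluated in the paper.

```python
# Hypothetical composite indicator blending quantity and quality
# (an assumption for illustration; not the NIRF QP formula itself).
def composite_score(pub_count, citations_per_pub, top_quartile_share,
                    w_quantity=0.4, w_quality=0.6):
    """Blend a quantity signal with two quality signals.

    pub_count           -- number of publications in the window
    citations_per_pub   -- average citations per publication
    top_quartile_share  -- fraction of papers in top-quartile venues (0..1)
    Weights are illustrative and would need calibration in practice.
    """
    quality = 0.5 * citations_per_pub + 0.5 * 100 * top_quartile_share
    return w_quantity * pub_count + w_quality * quality

# Example: 120 papers, 8.5 citations/paper, 30% in top-quartile venues.
print(round(composite_score(120, 8.5, 0.30), 1))
```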

1 citation

Journal Article
TL;DR: In this paper, a scientometric analysis of the global research output on media literacy during the last 30 years is presented; the 1038 documents produced in this period received an average of 15.37 citations per item.
Abstract: The study presents a scientometric analysis of the global research output on media literacy during the last 30 years. These 30 years produced 1038 documents on media literacy, which received an average of 15.37 citations per item. Most of the articles were published during the 2017-2020 block. Multiple co-authorship has been the trend in media literacy research. Primack, B (18), Austin, E. W. (17) and Hobbs, K. (14) are identified as the most prolific authors. Communicar, with 96 publications, is the most productive journal. Korea, South Africa and Norway registered the highest multiple collaboration ratio (MCR). The USA, the United Kingdom and Australia are the leading countries in terms of citations received. The co-authorship network comprises 175 clusters of authors who came together to contribute to media literacy research. Co-occurrence analysis based on author keywords shows that 'media literacy' had a total link strength (TLS) of 729 across 329 documents.
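Two of the descriptive figures above can be reproduced mechanically. The sketch below uses toy records (not the study's dataset) and assumes MCR is taken as the share of documents whose authors span more than one country; definitions of the multiple collaboration ratio vary, so treat that reading as an assumption.

```python
# Descriptive sketch under stated assumptions:
# citations per item = total citations / number of documents, and MCR taken
# here as the share of documents with authors from more than one country.
records = [
    # (citations, countries_of_authors) -- toy data, not the study's dataset
    (10, {"IN"}), (4, {"US", "GB"}), (0, {"KR", "NO", "ZA"}), (7, {"US"}),
]

citations_per_item = sum(c for c, _ in records) / len(records)
mcr = sum(1 for _, countries in records if len(countries) > 1) / len(records)

print(f"citations per item: {citations_per_item:.2f}")
print(f"multiple collaboration ratio: {mcr:.2f}")
```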

1 citation


References
Journal Article
TL;DR: In this paper, the authors show that the distribution of productivity becomes increasingly unequal as a cohort of scientists ages, and that this increase is highly associated with a changing distribution of time spent on research.
Abstract: The highly skewed distributions of productivity among scientists can be partly explained by a process of accumulative advantage. Because of feedback through recognition and resources, highly productive scientists maintain or increase their productivity, while scientists who produce very little produce even less later on. A major implication of accumulative advantage is that the distribution of productivity becomes increasingly unequal as a cohort of scientists ages. Cross-sectional survey data support this hypothesis for chemists, physicists, and mathematicians, who show strong linear increases in inequality with increasing career age. This increase is highly associated with a changing distribution of time spent on research. Another implication of accumulative advantage is also corroborated: the association among productivity, resources and esteem increases as career age increases.
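The accumulative-advantage mechanism lends itself to a toy simulation. The sketch below is an illustration under assumed parameters, not the paper's model: each scientist's annual output grows with accumulated output plus noise, and a Gini coefficient tracks how inequality in cumulative productivity rises with career age.

```python
# Toy simulation of accumulative advantage (illustration, not the paper's model):
# expected yearly output is proportional to cumulative past output (feedback),
# so inequality across a cohort should rise with career age.
import random

def gini(xs):
    """Gini coefficient of a list of non-negative values."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n if total else 0.0

random.seed(1)
cohort = [1.0] * 500          # cumulative output per scientist
for year in range(1, 31):
    for i in range(len(cohort)):
        # multiplicative noise stands in for chance, recognition, and resources
        cohort[i] += 0.1 * cohort[i] * random.lognormvariate(0, 0.8)
    if year % 10 == 0:
        print(f"career age {year:2d}: Gini = {gini(cohort):.2f}")
```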

517 citations

Journal Article
TL;DR: In this paper, the authors critically assess research productivity through publication among scientists, scrutinizing the literature on the correlates and determinants of publication productivity.
Abstract: This article is a critical assessment of research productivity through publication among scientists. The article scrutinizes the literature on correlates and determinants of publication productivity...

444 citations

Journal Article
TL;DR: There are significant differences in citation ageing between different research fields, document types, total citation counts, and publication months, but within-group differences are more striking; many papers in the slowest ageing field may still age faster than many papers in the fastest ageing field.
Abstract: This paper aims to inform the choice of citation time window for research evaluation by answering three questions: (1) How accurate is it to use citation counts in short time windows to approximate total citations? (2) How does citation ageing vary by research field, document type, publication month, and total citations? (3) Can field normalization improve the accuracy of using short citation time windows? We investigate the 31-year lifetime non-self-citation processes of all Thomson Reuters Web of Science journal papers published in 1980. The correlation between non-self-citation counts in each time window and total non-self-citations over all 31 years is calculated, and it is lower for more highly cited papers than for less highly cited ones. There are significant differences in citation ageing between research fields, document types, total citation counts, and publication months. However, the within-group differences are more striking; many papers in the slowest ageing field may still age faster than many papers in the fastest ageing field. Furthermore, field normalization cannot improve the accuracy of using short citation time windows. Implications and recommendations for choosing adequate citation time windows are discussed.
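The window-versus-total comparison can be sketched in a few lines. The per-paper counts below are invented, and Spearman rank correlation is used as one reasonable choice of correlation measure, taken here as an assumption rather than the paper's exact procedure.

```python
# Sketch of the window-vs-total comparison (illustrative data, not the study's):
# correlate citations accumulated in a short window with the long-run total.
from scipy.stats import spearmanr

# Toy per-paper counts: citations in the first 3 years vs. over 31 years.
window_3y = [2, 0, 5, 11, 1, 4, 0, 7]
total_31y = [9, 1, 20, 60, 3, 15, 2, 18]

rho, _ = spearmanr(window_3y, total_31y)
print(f"3-year window vs. 31-year total: rho = {rho:.2f}")
```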

250 citations

Journal Article
TL;DR: In this paper, the authors operationalize the economic concept of productivity for research activity, show the limits of the commonly accepted definition, and propose a measure of research productivity called Fractional Scientific Strength (FSS), in keeping with the microeconomic theory of production; they then compare the ranking lists of Italian universities under the two definitions of productivity.
Abstract: Productivity is the quintessential indicator of efficiency in any production system. It seems it has become a norm in bibliometrics to define research productivity as the number of publications per researcher, distinguishing it from impact. In this work we operationalize the economic concept of productivity for the specific context of research activity and show the limits of the commonly accepted definition. We then propose a measurable form of research productivity through the indicator "Fractional Scientific Strength" (FSS), in keeping with the microeconomic theory of production. We present the methodology for measuring FSS at various levels of analysis: individual, field, discipline, department, institution, region and nation. Finally, we compare the ranking lists of Italian universities under the two definitions of research productivity.
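For reference, one published formulation of FSS at the level of the individual researcher is, roughly, a cost-weighted sum of field-normalized citations credited fractionally to the researcher. The notation below paraphrases the authors' related work and should be checked against the original before reuse.

```latex
% Rough paraphrase of one FSS formulation at the researcher level (assumed notation):
%   w_R      : average yearly cost (salary) of researcher R
%   N        : publications by R in the observed period
%   c_i      : citations received by publication i
%   \bar{c}  : average citations of publications of the same year and field
%   f_i      : fractional contribution of R to publication i
\[
  FSS_R \;=\; \frac{1}{w_R} \sum_{i=1}^{N} \frac{c_i}{\bar{c}}\, f_i
\]
```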

170 citations

Journal Article
TL;DR: This paper argued that the role of metrics is as a trigger to the recognition of anomalies, rather than as a straight replacement for peer review, and that peer review must be retained as a central element in any research assessment exercise.
Abstract: The use of quantitative performance measures to assess the quality of university research is being introduced in Australia and the UK. This paper presents the case for maintaining a balanced approach. It argues that ‘metrics’ have their place, and can make the process more efficient and cost-effective, but that peer review must be retained as a central element in any research assessment exercise. The role of metrics is as ‘a trigger to the recognition of anomalies’, rather than as a straight replacement for peer review.

110 citations