Journal ArticleDOI

How do you define and measure research productivity?

01 Nov 2014-Scientometrics (Springer Netherlands)-Vol. 101, Iss: 2, pp 1129-1144
TL;DR: In this paper, the authors propose a measure of research productivity called fractional scientific strength (FSS), in keeping with the microeconomic theory of production, and compare the ranking lists of Italian universities by the two definitions of productivity and show the limits of the commonly accepted definition.
Abstract: Productivity is the quintessential indicator of efficiency in any production system. It seems to have become a norm in bibliometrics to define research productivity as the number of publications per researcher, distinguishing it from impact. In this work we operationalize the economic concept of productivity for the specific context of research activity and show the limits of the commonly accepted definition. We then propose a measurable form of research productivity through the indicator "Fractional Scientific Strength (FSS)", in keeping with the microeconomic theory of production. We present the methodology for measurement of FSS at various levels of analysis: individual, field, discipline, department, institution, region and nation. Finally, we compare the ranking lists of Italian universities under the two definitions of research productivity.
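The abstract does not reproduce the FSS formula itself. Assuming it takes a fractional, cost-weighted form in the spirit described (field-normalized citation impact, weighted by the researcher's fractional author share and divided by labor cost), a minimal illustrative sketch might look like the following; all names here (`Publication`, `fss`, `field_avg_citations`) are illustrative, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Publication:
    citations: float             # citations received by the paper
    field_avg_citations: float   # average citations of same-field, same-year papers
    author_share: float          # fractional contribution of the researcher (0..1]

def fss(publications, labor_cost):
    """Hypothetical Fractional Scientific Strength of one researcher.

    Assumed form: (1 / labor_cost) * sum of field-normalized citation
    impact, weighted by fractional authorship. The exact published
    formula may differ; this is an illustrative reconstruction.
    """
    total = sum(p.citations / p.field_avg_citations * p.author_share
                for p in publications)
    return total / labor_cost

pubs = [Publication(citations=10, field_avg_citations=5, author_share=0.5),
        Publication(citations=3, field_avg_citations=6, author_share=1.0)]
print(fss(pubs, labor_cost=1.0))  # (10/5*0.5 + 3/6*1.0) / 1.0 = 1.5
```

Note how, unlike a raw publications-per-researcher count, such an indicator folds in output value (normalized citations), co-author credit sharing, and input cost, which is the microeconomic sense of productivity the paper argues for.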
Citations
Journal ArticleDOI
Ludo Waltman1
TL;DR: In this paper, an in-depth review of the literature on citation impact indicators is provided, focusing on the selection of publications and citations to be included in the calculation of citation impact indicators.

774 citations

Posted Content
Ludo Waltman1
TL;DR: An in-depth review of the literature on citation impact indicators with recommendations for future research on normalization for field differences and counting methods for dealing with co-authored publications.
Abstract: Citation impact indicators nowadays play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.

469 citations
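Two of the reviewed topics, field normalization and counting methods for co-authored publications, can be illustrated together: a paper's citation count is divided by the expected count for its field and year, and credit for the paper is split fractionally across its authors. A toy sketch (the field baseline is assumed given; function names are illustrative):

```python
def normalized_citation_score(citations, field_baseline):
    """Field-normalized citation score: actual citations divided by the
    expected (average) citations of same-field, same-year publications."""
    return citations / field_baseline

def fractional_credit(n_authors):
    """Fractional counting: each of n co-authors receives 1/n credit."""
    return 1.0 / n_authors

# A paper with 12 citations in a field averaging 4 citations, 3 authors:
score = normalized_citation_score(12, 4)   # 3.0: three times the field average
credit = fractional_credit(3)              # 1/3 of the paper per author
print(score * credit)                      # 1.0 normalized credit per author
```

Field normalization makes scores comparable across disciplines with very different citation densities, and fractional counting prevents heavily co-authored fields from inflating per-author output.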

Book
08 Sep 2017
TL;DR: This chapter presents an introduction to the book: it starts with an overview of the value and limits of the use of informetric indicators in research assessment, and presents a short history of the field.
Abstract: This chapter presents an introduction to the book. It starts with an overview of the value and limits of the use of informetric indicators in research assessment, and presents a short history of the field. It continues with the book’s main assumptions, scope and structure. Finally, it clarifies the terminology used in the book.

106 citations

Journal ArticleDOI
TL;DR: It is shown that only collaboration at the intramural and domestic levels has a positive effect on research productivity, while all forms of collaboration are in turn positively affected by research productivity.

100 citations

Journal ArticleDOI
TL;DR: This paper uses a large dataset of scholarly publications, disambiguated at the individual level, to create a map of science where links connect pairs of fields based on the probability that an individual has published in both of them.
Abstract: In recent years scholars have built maps of science by connecting the academic fields that cite each other, are cited together, or that cite a similar literature. But since scholars cannot always publish in the fields they cite, or that cite them, these science maps are only rough proxies for the potential of a scholar, organization, or country to enter a new academic field. Here we use a large dataset of scholarly publications disambiguated at the individual level to create a map of science, or research space, where links connect pairs of fields based on the probability that an individual has published in both of them. We find that the research space is a significantly more accurate predictor of the fields that individuals and organizations will enter in the future than citation-based science maps. At the country level, however, the research space and citation-based science maps are equally accurate. These findings show that data on career trajectories, the set of fields that individuals have previously published in, provide more accurate predictors of future research output for more focalized units, such as individuals or organizations, than citation-based science maps.

73 citations
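The link definition in the abstract above, the probability that an individual has published in a pair of fields, can be sketched from author-level publication records. In this toy version (the paper's actual estimator may differ) the weight of a field pair is simply the fraction of authors who have published in both:

```python
from itertools import combinations
from collections import Counter

def research_space(author_fields):
    """Link weight between two fields = fraction of authors who have
    published in both (a simple proxy for the co-publication
    probability described in the abstract)."""
    n_authors = len(author_fields)
    pair_counts = Counter()
    for fields in author_fields:
        # each unordered pair of distinct fields this author spans
        for a, b in combinations(sorted(set(fields)), 2):
            pair_counts[(a, b)] += 1
    return {pair: c / n_authors for pair, c in pair_counts.items()}

# fields each author has published in
authors = [
    ["physics", "math"],
    ["physics", "math", "cs"],
    ["biology"],
    ["cs", "math"],
]
links = research_space(authors)
print(links[("math", "physics")])  # 0.5: two of four authors span both fields
```

A dense link between two fields then predicts that a unit active in one field is likely to enter the other, which is the forecasting use the paper evaluates against citation-based maps.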

References
Journal ArticleDOI
TL;DR: A nonlinear (nonconvex) programming model provides a new definition of efficiency for evaluating the activities of not-for-profit entities participating in public programs, together with methods for objectively determining weights by reference to observational data on the multiple outputs and multiple inputs that characterize such programs.

25,433 citations

Journal ArticleDOI
TL;DR: The CCR ratio form introduced by Charnes, Cooper and Rhodes as part of their Data Envelopment Analysis approach comprehends both technical and scale inefficiencies via the optimal value of the ratio form, obtained directly from the data without requiring a priori specification of weights or explicit delineation of assumed functional forms relating inputs and outputs.
Abstract: In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale in multiple input and multiple output situations. The results are discussed and related not only to classical single output economics but also to more modern versions of economics which are identified with "contestable market theories."

14,941 citations
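In the special case of one input and one output under constant returns to scale, the CCR efficiency of each decision-making unit reduces to its output/input ratio divided by the best observed ratio, with no linear program needed (the general multi-input, multi-output case requires solving the mathematical programs described in the abstract). A toy sketch of that special case:

```python
def ccr_efficiency_1x1(units):
    """CCR (constant-returns-to-scale) efficiency in the single-input,
    single-output case: each decision-making unit's output/input ratio,
    normalized by the best ratio observed in the sample."""
    ratios = {name: y / x for name, (x, y) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# hypothetical (input, output) pairs for three university departments
units = {"A": (2.0, 2.0), "B": (4.0, 2.0), "C": (3.0, 1.5)}
print(ccr_efficiency_1x1(units))  # A is efficient (1.0); B and C score 0.5
```

Units scoring 1.0 lie on the efficient frontier; a score of 0.5 means the same output could, by comparison with the best performer, be produced with half the input.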

Journal ArticleDOI
TL;DR: The index h, defined as the number of papers with citation number ≥h, is proposed as a useful index to characterize the scientific output of a researcher.
Abstract: I propose the index h, defined as the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher.

8,996 citations
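Hirsch's definition above, the largest h such that h of the researcher's papers have at least h citations each, translates directly into code:

```python
def h_index(citation_counts):
    """h-index: the largest h such that the researcher has h papers
    with at least h citations each (Hirsch's definition)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:   # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: one very highly cited paper cannot raise h alone
print(h_index([]))                # 0
```

The second example shows why the h-index is robust to outliers: a single paper with 25 citations contributes no more to h than one with 5.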

Book
TL;DR: This book surveys citation analysis in research assessment: assessing basic science research departments and scientific journals, the ISI Citation Indexes, the social sciences and humanities, accuracy and theoretical aspects, the relationship between citation analysis and peer review, macro studies, and new developments.
Abstract: Preface.- Executive Summary.- Part 1 General Introduction and Main Conclusions.- Part 2 Empirical and Theoretical Chapters.- Part 2.1 Assessing Basic Science Research Departments and Scientific Journals.- Part 2.2 The ISI Citation Indexes.- Part 2.3 Assessing Social Sciences and Humanities.- Part 2.4 Accuracy Aspects.- Part 2.5 Theoretical Aspects.- Part 2.6 Citation Analysis and Peer Review.- Part 2.7 Macro Studies.- Part 2.8 New Developments.- References.- Index of Keywords, Cited Works and Cited Authors.

1,366 citations