

Professional and citizen bibliometrics: complementarities
and ambivalences in the development and use
of indicators—a state-of-the-art report
Loet Leydesdorff¹ · Paul Wouters² · Lutz Bornmann³
Received: 15 September 2016 / Published online: 3 October 2016
© The Author(s) 2016. This article is published with open access at Springerlink.com
Abstract Bibliometric indicators such as journal impact factors, h-indices, and total
citation counts are algorithmic artifacts that can be used in research evaluation and
management. These artifacts have no meaning by themselves, but receive their meaning
from attributions in institutional practices. We distinguish four main stakeholders in these
practices: (1) producers of bibliometric data and indicators; (2) bibliometricians who
develop and test indicators; (3) research managers who apply the indicators; and (4) the
scientists being evaluated with potentially competing career interests. These different
positions may lead to different and sometimes conflicting perspectives on the meaning and
value of the indicators. The indicators can thus be considered as boundary objects which
are socially constructed in translations among these perspectives. This paper proposes an
analytical clarification by listing an informed set of (sometimes unsolved) problems in
bibliometrics which can also shed light on the tension between simple but invalid indicators
that are widely used (e.g., the h-index) and more sophisticated indicators that are not
used or cannot be used in evaluation practices because they are not transparent for users,
cannot be calculated, or are difficult to interpret.
Corresponding author: Loet Leydesdorff, loet@leydesdorff.net
Paul Wouters, p.f.wouters@cwts.leidenuniv.nl
Lutz Bornmann, bornmann@gv.mpg.de

¹ Amsterdam School of Communication Research (ASCoR), University of Amsterdam, P.O. Box 15793, 1001 NG Amsterdam, The Netherlands
² Centre for Science and Technology Studies CWTS, Leiden University, P.O. Box 905, 2300 AX Leiden, The Netherlands
³ Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, 80539 Munich, Germany
Scientometrics (2016) 109:2129–2150
DOI 10.1007/s11192-016-2150-8

Keywords: Evaluative bibliometrics · Scientometric indicators · Validity · Boundary object
Introduction
In Toward a Metric of Science: The Advent of Science Indicators (Elkana et al. 1978), the
new field of science indicators and scientometrics was welcomed by a number of authors
from the history and philosophy of science, the sociology of science (among them Robert
K. Merton), and other fields. As the Preface states: "Despite our reservations and despite
the obviously fledgling state of 'science indicator studies,' the conference was an intellectual
success. Discussion was vigorous both inside and outside the formal sessions." The
conference on which the volume was based was organized in response to the first
appearance of the Science Indicators of the US National Science Board in 1972, which in
turn was made possible by the launch of the Science Citation Index in 1964 (de Solla Price
1965; Garfield 1979a).
The reception of scientometrics and scientometric indicators in the community of the
sciences has remained ambivalent during the four decades since then. Science indicators
are in demand as our economies have become increasingly knowledge-based and the
sciences have become capital-intensive and organized on a large scale. In addition to input
indicators for funding schemes (OECD 1963, 1976), output indicators (such as publications
and citations)¹ are nowadays abundantly used to inform research-policy and management
decisions. It is still an open question, however, whether the emphasis on measurement and
transparency in S&T policies and R&D management is affecting the research process
intellectually or only the social forms in which research results are published and
communicated (van den Daele and Weingart 1975). Is the divide between science as a social
institution (the "context of discovery") and as intellectually organized (the "context of
justification") transformed by these changes in the research system? Gibbons et al. (1994),
for example, proposed to consider a third "context of application" as a "transdisciplinary"
framework encompassing the other contexts emerging since the ICT (information and
communication technology) revolution. Dahler-Larsen (2011) characterizes the new
regime as "the evaluation society."
In addition to these macro-level developments, research evaluation is increasingly
anticipated in scientific research and scholarly practices of publishing as well as in the
management of universities and research institutes, both intellectually (in terms of peer
review of journals) and institutionally (in terms of grant competition). Thus, management
is no longer external with respect to the shaping of research agendas, and scientometric
indicators are used as a management instrument in these interventions. Responses from
practicing scientists have varied from outright rejection of the application of performance
indicators to an eager celebration of them as a means to open up the "old boys' networks"
of peers; but this varies among disciplines and national contexts. The recent debate in the
UK on the use of research metrics in the Research Excellence Framework provides an
illustration of this huge variation within the scientific and scholarly communities (Wilsdon
et al. 2015). The variety can be explained by a number of factors, such as short-term
interests, disciplinary traditions, the training and educational background of the researchers
involved, and the extent to which researchers are familiar with quantitative methodologies
(Hargens and Schuman 1990).

¹ These indicators are sometimes named output indicators, performance indicators, scientometric indicators, or bibliometric indicators. We shall use the last term, which focuses on the textual dimension of the output.
Publication and citation scores have become ubiquitous instruments in hiring and
promotion policies. Applicants and evaluees can respond by submitting and pointing at
other possible scores, such as those based on Google Scholar (GS) or even in terms of the
disparities between Web of Science (WoS, Thomson Reuters) and Scopus (Elsevier) as two
alternative indicator sources (Mingers and Leydesdorff 2015). In other words, it is not the
intrinsic quality of publications but the schemes for measuring this quality that have
become central to the discussion. Increasingly, the very notion of scientific quality makes
sense only in the context of quality control and quality measurement systems. Moreover,
these measurement systems can be commodified. Research universities in the United States
and the United Kingdom, for example, use services such as Academic Analytics that
provide customers with business intelligence data and solutions at the level of the individual
faculty. This information is often not accessible to evaluees and third parties, so it
cannot be controlled for possible sources of bias or technical error.
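To make the stakes of such score comparisons concrete, the sketch below (illustrative only; the function name and the citation counts are invented for this example and are not taken from the paper or from any real database) computes the h-index as defined by Hirsch (2005)—the largest h such that h of an author's papers each have at least h citations—for the same ten hypothetical papers as they might be indexed by a narrower and by a broader citation source. The indicator shifts with the database rather than with the underlying publications, which is exactly the kind of discrepancy evaluees can point to.

```python
def h_index(citation_counts):
    # Hirsch (2005): the largest h such that h papers have at least h citations each.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for the same ten papers as covered by a more
# selective source (WoS/Scopus-like) and a broader one (Google Scholar-like).
selective = [34, 21, 18, 11, 9, 7, 7, 2, 1, 0]
broad = [51, 33, 27, 19, 15, 12, 11, 9, 9, 4]

print(h_index(selective))  # 7
print(h_index(broad))      # 9
```

Nothing in this sketch reflects the actual coverage of any particular database; it only illustrates why the choice of data source itself becomes an object of negotiation in evaluations.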
We argue that the ambivalences around the use of bibliometric indicators are not
accidental but inherent to evaluation practices (Rushforth and de Rijcke 2015). In a recent
attempt to specify the "rules of the game" for research metrics, Hicks et al.
(2015) proposed "ten principles to guide research evaluation," but also warned against
"morphing the instrument into the goal" (p. 431). We argue that "best practices" are based
on compromises, but tend to conceal the underlying analytical assumptions and epistemic
differences. From this perspective, bibliometric indicators can be considered as "boundary
objects" that have different implications in different contexts (Gieryn 1983; Star and
Griesemer 1989). Four main groups of actors can be distinguished, each developing its own
perspective on indicators:
1. Producers: The community of indicator producers in which industries (such as
Thomson Reuters and Elsevier) collaborate and exchange roles with small enterprises
(e.g., ScienceMetrix in Montreal; VantagePoint in Atlanta) and dedicated university
centers (e.g., the Expertise Center ECOOM in Leuven; the Center for Science and
Technology Studies CWTS in Leiden). The orientation of the producers is toward the
development and sales of bibliometric products and advice;
2. Bibliometricians: An intellectual community of information scientists (specialized in
"bibliometrics") in which the pros and cons of indicators are discussed, and
refinements are proposed and tested. The context of bibliometricians is theoretically
and empirically driven research on bibliometric questions, sometimes in response to,
related to, or even in combination with commercial services;
3. Managers: Research management periodically and routinely orders bibliometric
assessments from the (quasi-)industrial centers of production. The context of these
managers is the competition for resources among research institutes and groups.
Examples of the use of bibliometrics by managers are provided by Kosten (2016);
4. Scientists: The scientists under study who can be the subject of numerous evaluations.
Nowadays, many of them keep track of their citation records and the value of their
performance indicators such as the h-index. Practicing scientists are usually not
interested in bibliometric indicators per se, but driven by the necessity to assess and
compare research performance quantitatively in the competition for reputation and
resources.
The public discourse about research evaluation and performance indicators is mainly
shaped by translation processes in the communications among these four groups. The
translations may move the discussion from a defensive one (e.g., "one cannot use these
indicators in the humanities") to a specification of the conditions under which assessments
can be accepted as valid, and the purposes for which indicators might legitimately be used.
Under what conditions—that is, on the basis of which answers to questions—is the use of
certain types of indicators justifiable in practice? However, from the perspective of
developing bibliometrics as a specialty, one can also ask: under what conditions is the use
of specific indicators conducive to the creation of new knowledge or innovative developments?
If one instead zooms in on the career structures in science, the question might be:
what indicators make the work of individual researchers visible or invisible, and what kind
of stimulus does this give to individual scholars?
In the following, we list a number of important ambivalences around the use of bibliometric
indicators in research evaluation. The examples are categorized using two major
topics: the data and the indicators used in scientometrics. In relation to these topics,
tensions among the four groups acting in various contexts can be specified. The main
tension can be expected between bibliometric assessments that can be used by management
with potentially insufficient transparency, versus the evaluees who may wish to use
qualitative schemes for the evaluation. Evaluees may feel unfairly treated when they can
show what they consider as "errors" or "erroneous assumptions" in the evaluations (e.g.,
Spaan 2010). However, evaluation is necessarily based on assumptions. These may seem
justified from one perspective, whereas they may appear erroneous from a different one. In
other words, divergent evaluations are always possible. A clarification of some of the
technical assumptions and possible sources of error may contribute to a more informed
discussion about the limitations of research evaluation.
Ambivalences around the data
Despite the considerable efforts of the database providers to deliver high-quality and
disambiguated data, a number of ambivalences with respect to bibliometric data have
remained.
Sources of bibliometric data
WoS and Scopus are the two established literature databases for bibliometric studies. Both
databases are transparent in the specification of the publications and cited references that
are routinely included. Producers (group 1) and bibliometricians (group 2) claim that both
databases can be used legitimately for evaluative purposes in the natural and life sciences,
but may be problematic in many social sciences and humanities (Larivière et al. 2006;
Nederhof 2006). The publication output in these latter domains would not be sufficiently
covered. However, both providers make systematic efforts to cover more literature (notably
books) in the social sciences and humanities.
With the advent of Google Scholar (GS) in 2004, the coverage problem may have
seemed to be solved for all disciplines. GS has become increasingly popular among
managers and scientists (groups 3 and 4). Important reasons for using GS are that it is
freely available and comprehensive. Conveniently, the citation scores retrieved from GS
are always higher than those from WoS and Scopus because the coverage is larger by an
order of magnitude. However, it has remained largely unknown on which set of documents

References

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1994). The new production of knowledge: The dynamics of science and research in contemporary societies. London: Sage.

Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569–16572.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.

Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, "translations" and boundary objects: Amateurs and professionals in Berkeley's Museum of Vertebrate Zoology, 1907–39. Social Studies of Science, 19(3), 387–420.