Franceschini, F., Galetto, M., Maisano, D.A.F., Mastrogiacomo, L. (2012). The success-index: an alternative approach to the h-index for evaluating an individual's research output. Scientometrics, 92(3), 621–641. DOI: 10.1007/s11192-011-0570-z
The success-index: an alternative approach to the h-index for evaluating an individual's research output

Fiorenzo Franceschini¹, Maurizio Galetto, Domenico Maisano, Luca Mastrogiacomo

¹ fiorenzo.franceschini@polito.it

Politecnico di Torino, DISPEA (Department of Production Systems and Business Economics), Corso Duca degli Abruzzi 24, 10129, Torino (Italy)
Abstract
Among the most recent bibliometric indicators for normalizing the differences among fields of science in terms of citation behaviour, Kosmulski (Journal of Informetrics 5(3):481–485, 2011) proposed the NSP (Number of Successful Paper) index. According to the authors, NSP deserves much attention for its great simplicity and immediate meaning – equivalent to those of the h-index – while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance.
In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although requiring more computing effort. Next, we present a detailed analysis of the success-index in terms of its operational properties, together with a comparison with those of the h-index. Particularly interesting is the examination of the success-index's scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis – e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, (4) scientific journals, etc.
Keywords: Successful paper, NSP-index, Field normalization, Reference practices, Operational
properties, Hirsch index.
1. Introduction
Defining bibliometric impact measures that allow cross-field normalization without being influenced by the different propensity to cite is an age-old problem, still much debated [Garfield, 1979a]. Many indicators, mostly related to scientific journals, have been proposed over the years. Not surprisingly, a common feature of these indicators is that they are based on the comparison between (1) the number of citations received by the group of publications under examination and (2) a comparison term given by the expected number (or another indicator of central tendency) of the citations received or made by analogous publications in the specific discipline(s) of interest. At the risk of oversimplifying, the cross-field normalized impact indicators proposed in the literature differ in three main features:

1. A first distinction concerns how the comparison term is determined. In the so-called target or cited-side normalization, this term is given by the number of citations received, on average, by a reference sample of publications within the discipline(s) of interest. In the so-called source or citing-side normalization, it is obtained from the average number of citations made (i.e., bibliographic references) by a reference sample of publications, which is a widely accepted estimator of the "field's propensity to cite" [Moed, 2010a; 2010b].
2. A second distinction concerns the "moment" at which the normalization is performed. A first option is to build an indicator based on the citations received by a group of reference publications and subsequently normalize it (a posteriori normalization). Another solution is to normalize each citation immediately, before combining them into an aggregate indicator (a priori normalization or fractional counting) [Pinski and Narin, 1976; Zitt and Small, 2008; Leydesdorff and Opthof, 2010; Glänzel et al., 2011].
3. Another distinctive feature is the selection of the reference sample of publications used to determine the comparison term, based on the citations received or made by the selected publications. This is probably one of the trickiest and most controversial aspects. Some techniques are classification-dependent, since they rely on a superimposed delineation of the fields of science, e.g., the one provided by ISI-Thomson Reuters or other bibliometric databases [Moed, 2010b; Leydesdorff and Shin, 2011]. Other techniques are more "adaptive", since the sample is determined by considering the "neighbourhood" of the publication(s) of interest – typically the set of publications citing or being cited by them [Jackson and Rogers, 2007; Waltman et al., 2011a; 2011b].
In addition, the normalization approaches can be differentiated by secondary aspects, such as (1) the
size of the time-window for counting the citations received (or made) by the publications examined,
(2) the way of calculating the comparison term, (3) the way of determining the neighbourhood of a
publication (e.g., given by the “neighbour” publications only or by the “neighbours of the
neighbours”, etc.).
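
As a rough numerical illustration of the cited-side vs. citing-side distinction in point 1 above, consider the following minimal Python sketch; the sample data and the plain averaging are hypothetical and purely illustrative, not the recipe of any specific indicator.

# Hypothetical reference sample from the discipline of interest:
# each pair is (citations_received, references_made) for one publication.
sample = [(12, 30), (4, 45), (25, 38), (0, 22), (7, 41)]

# Cited-side (target) normalization: mean citations RECEIVED by the sample.
cited_side_term = sum(c for c, _ in sample) / len(sample)

# Citing-side (source) normalization: mean citations MADE (references),
# an estimator of the field's propensity to cite.
citing_side_term = sum(r for _, r in sample) / len(sample)

paper_citations = 18  # citations received by the publication under evaluation

# In both cases the normalized impact is the ratio between the paper's
# citations and the field's comparison term.
print(paper_citations / cited_side_term)   # target-normalized impact
print(paper_citations / citing_side_term)  # source-normalized impact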
Despite the apparent differentiation among the approaches to normalization, we believe that – if examined carefully – their structures are not so dissimilar. For example, even the most sui generis normalized impact measures, such as those based on fractional counting, can be reduced to the ratio between the number of citations received by one or more publications of interest and a comparison term represented by a central-tendency indicator (specifically, the harmonic mean) associated with the citations received or made by a sample of "external" publications within the field concerned [Small and Sweeney, 1985; Zitt and Small, 2008; Guns and Rousseau, 2010; Zhou and Leydesdorff, 2011].
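
To see why the harmonic mean arises, here is a short reconstruction of the argument (our own sketch, under the standard fractional-counting convention, not taken verbatim from the sources above): each citation coming from a citing publication with $r_j$ bibliographic references is weighted $1/r_j$, so a publication receiving $c$ citations obtains the fractional score

$$\tilde{c} = \sum_{j=1}^{c} \frac{1}{r_j} = \frac{c}{H}\,, \qquad \text{where } H = \frac{c}{\sum_{j=1}^{c} 1/r_j}$$

is the harmonic mean of the citing publications' reference counts. The fractionally counted impact is thus, once again, a ratio between a raw citation count and a comparison term.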
Most of the normalized indicators have historically been defined at the journal level, with the aim of enabling cross-field comparisons [Garfield, 1979b]. However, with appropriate adaptations, they
can be applied to the publications of individual scientists, enabling comparisons among scientists
from different disciplines.
It is worth noting that cross-field normalization can also be applied at the level of a scientist's individual publications. It is not so rare for a scientist to be involved in research topics at the boundary between fields with different propensities to cite. Plenty of examples can be found: e.g., Energetics and Chemistry, Biomedical Engineering and Medicine, Social Sciences and Statistics, etc.
Despite its relatively recent introduction, the h-index is probably the most in vogue among the indicators used for evaluating the performance of individual scientists [Hirsch, 2005]. One of the most important merits behind its success and popularity is its great simplicity and immediately intuitive meaning [Franceschini and Maisano, 2010a]. An important drawback, however, is that it does not allow comparisons among scientists from different disciplines. Several attempts to overcome this limitation have been made, like those of Iglesias and Pecharromán (2007) and Batista et al. (2006), but the original simplicity and immediacy of h is often undermined by the corrections introduced for its normalization [Franceschini and Maisano, 2010a; 2010c; 2011]. Moreover, the proposed normalizations rest on the assumption that the whole scientific production of a scientist is "homogeneous", i.e., that all the publications fall within the same discipline.
In a recent paper in the Journal of Informetrics, Kosmulski (2011) introduced an interesting new indicator called Number of Successful Paper (NSP). Similarly to h, NSP makes it possible to "isolate" a subset of publications, defined as "successful papers", within a group of publications under examination – e.g., those associated with a scientist or a journal. Precisely, a publication is classified as successful when it has received more citations than the references it makes.
In other words, a score is associated with each ($i$-th) of the ($P$) publications of interest:

$$score_i = \begin{cases} 1 & \text{when } c_i > r_i \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

where $c_i$ are the citations received and $r_i$ the citations made by the $i$-th publication. NSP is defined as:

$$NSP = \sum_{i=1}^{P} score_i \,. \qquad (2)$$
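
As a minimal illustrative sketch of Eqs. (1)-(2), the following Python fragment computes NSP for a hypothetical portfolio of publications (the (c_i, r_i) pairs are invented for illustration):

# NSP counts the "successful" papers, i.e., those that received more
# citations (c_i) than the references they make (r_i).
papers = [(35, 28), (10, 42), (55, 31), (3, 19), (27, 27)]  # (c_i, r_i) pairs

def score(c_i, r_i):
    # Eq. (1): 1 when c_i > r_i, 0 otherwise
    return 1 if c_i > r_i else 0

nsp = sum(score(c, r) for c, r in papers)  # Eq. (2)
print(nsp)  # 2: only (35, 28) and (55, 31) are "successful"

Note that the tied pair (27, 27) is not counted: Eq. (1) requires strictly more citations received than references made.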
What is most fascinating about NSP is the simplicity and immediateness of its meaning, comparable to those of the h-index.
The purpose of this paper is to make explicit some reflections on Kosmulski's NSP-index, whose structure is particularly interesting among the current normalized impact measures. The authors show that NSP appears superior to h in terms of some operational properties and has the potential to replace it in certain contexts.
The remainder of this paper is organised as follows. Section 2 contains a short digression on the practice of publishing and citing in the scientific literature. This will be helpful for understanding the similarities and potential limitations of normalized impact measures in general, and will prepare the ground for the subsequent discussion of NSP. Section 3 reports a critical analysis of the NSP-index, suggesting some modifications and a new variant (the success-index). Section 4 proposes a structured comparison between the success-index and the h-index on the basis of their major operational properties, focusing on the relevant pros and cons.
Finally, the conclusions are given, emphasizing the potential of the success-index and summarising the original contribution of the paper.
2. Different disciplines with different propensities to cite
Assuming that we look into the whole scientific literature, including the totality of the journal
publications, monographs, book chapters, conference proceedings and other types of publications
ever issued, we may represent this “universe” – from the citational point of view – as a graph
formed by a dense network of interconnected nodes (see the simplified representation of Fig. 1).
Each ($i$-th) node represents a scientific paper ($P_i$), while arcs represent the relevant citations. Precisely, incoming arrows are the ($c_i$) citations received from other papers and outgoing arrows are the ($r_i$) citations made (i.e., bibliographic references) to other papers. From the viewpoint of a generic node/paper, the practice of citing can therefore be seen either actively or passively.
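
To make this graph reading concrete, here is a small Python sketch on a purely hypothetical four-paper network, in which $c_i$ and $r_i$ emerge as the in-degree and out-degree of each node:

# Toy citation network (hypothetical): node -> list of papers it cites.
# Outgoing arcs give the references r_i; incoming arcs give the citations c_i.
cites = {
    "P1": ["P2", "P3"],
    "P2": ["P3"],
    "P3": [],
    "P4": ["P1", "P3"],
}

r = {p: len(refs) for p, refs in cites.items()}  # out-degree: citations made
c = {p: 0 for p in cites}
for refs in cites.values():
    for cited in refs:
        c[cited] += 1                            # in-degree: citations received

print(c)  # {'P1': 1, 'P2': 1, 'P3': 3, 'P4': 0}
print(r)  # {'P1': 2, 'P2': 1, 'P3': 0, 'P4': 2}
# Every arc is counted once as a citation and once as a reference:
print(sum(c.values()) == sum(r.values()))  # True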
The meaning of citation, as an expression of interest in a (cited) publication by another (citing) publication, has been debated in the literature for many decades [Small, 2004]. Citation is traditionally considered in its passive sense, as a credit or acknowledgement received from other publications in the literature. In this sense, the most cited publications are likely to be those of greater impact.
For the determination of a comparison term that represents the propensity to cite, there is no
apparent difference between using the average citations received or made by a sample of scientific
papers. The following example clarifies this aspect.
Assuming that the totality of the (P) publications of the scientific literature is captured by an ideal "omni-covering" bibliometric database – i.e., one able to index every possible scientific publication – and that no citations from these publications are directed to "external resources" – i.e., non-scientific publications such as patents, websites, etc. – the total citations (C) will perfectly balance the total references (R) (see Eq. 3). In fact, they are exactly the same thing, i.e., the arcs of the graph.
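
Eq. 3 itself is not visible in this excerpt; based on the definitions above, the balance it expresses presumably reads

$$C = \sum_{i=1}^{P} c_i = \sum_{i=1}^{P} r_i = R \,. \qquad (3)$$

Every arc of the graph is counted exactly once among the citations received and exactly once among the references made, hence the two totals coincide.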

References

Garfield, E. (1979). Is citation analysis a legitimate evaluation tool? Scientometrics, 1(4), 359–375.
Garfield, E. (1979). Citation indexing: Its theory and application in science, technology, and humanities. New York: Wiley.
Hirsch, J.E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569–16572.
Jackson, M.O., Rogers, B.W. (2007). Meeting strangers and friends of friends: How random are social networks? American Economic Review, 97(3), 890–915.