Journal ArticleDOI

New seniority-independent Hirsch-type index

01 Oct 2009-Journal of Informetrics (Elsevier)-Vol. 3, Iss: 4, pp 341-347
TL;DR: In this paper, the authors define the following seniority-independent Hirsch-type index, which is suitable for comparing the scientific output of scientists at different career stages: a scientist has index hpd if hpd of his/her papers have at least hpd citations per decade each, and his/her other papers have fewer than hpd + 1 citations per decade each.
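The definition above can be computed directly: rescale each paper's citation count to citations per decade, then apply the usual Hirsch-style threshold. A minimal sketch in Python (the `(citations, age_in_years)` input layout and the `max(age, 1)` guard against zero-year-old papers are our assumptions, not the paper's):

```python
def hpd_index(papers):
    """Seniority-independent Hirsch-type index (hpd).

    papers: iterable of (citations, age_in_years) pairs, one per paper.
    Each paper's citations are rescaled to citations per decade, then
    the largest rank r with r-th best value >= r is returned.
    """
    cpd = sorted((10.0 * cites / max(age, 1) for cites, age in papers),
                 reverse=True)
    return sum(1 for rank, value in enumerate(cpd, start=1) if value >= rank)

# Four papers, all ten years old, with 30, 20, 10 and 2 citations:
print(hpd_index([(30, 10), (20, 10), (10, 10), (2, 10)]))  # -> 3
```

Because every paper's count is normalized per decade, an early-career researcher's recent papers are not penalized relative to a senior researcher's older, citation-accumulating ones.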
About: This article was published in the Journal of Informetrics on 2009-10-01. It has received 21 citations to date.
Citations
Journal ArticleDOI
TL;DR: This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculation in relation to what they are supposed to reflect and their ease of end-user application.
Abstract: An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute with objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance on individual author-level, and examines the complexity of their calculations in relation to what they are supposed to reflect and ease of end-user application. As such we provide a schematic overview of author-level indicators, where the indicators are broadly categorised into indicators of publication count, indicators that qualify output (on the level of the researcher and journal), indicators of the effect of output (effect as citations, citations normalized to field or the researcher's body of work), indicators that rank the individual's work and indicators of impact over time. Supported by an extensive appendix we present how the indicators are computed, the complexity of the mathematical calculation and demands to data-collection, their advantages and limitations as well as references to surrounding discussion in the bibliometric community. The Appendix supporting this study is available online as supplementary material.

181 citations

Journal ArticleDOI
TL;DR: Despite certain flaws and weaknesses, the h-index provides a better way to assess the long-term performance of articles or authors than a journal's impact factor, and it offers an alternative way to assess a journal's long-term ranking.
Abstract: There is considerable debate on the use and abuse of journal impact factors and on selecting the most appropriate indicator to assess research outcome for an individual or group of scientists. Internet searches using Web of Science and Scopus were conducted to retrieve citation data for an individual in order to calculate nine variants of Hirsch's h-index. Citations to articles published in a wide range of psychiatric journals in the periods 1995-99 and 2000-05 were analyzed using Web of Science. Comparisons were made between journal impact factor, h-index of citations from publication to 2008, and the proportion of articles cited at least 30 or 50 times. For up to 14 years post-publication, there was a strong positive relationship between journal impact factor and h-index for citations received. Journal impact factor was also compared to the percentage of articles cited at least 30 or 50 times-a comparison that showed wide variations between journals with similar impact factors. This study found that 40%-50% of the articles published in the top ten psychiatry journals ranked by impact factor acquire 30 to 50 citations within ten to fifteen years. Despite certain flaws and weaknesses, the h-index provides a better way to assess long-term performance of articles or authors than using a journal's impact factor, and it provides an alternative way to assess a journal's long-term ranking.

75 citations

Journal ArticleDOI
TL;DR: A relative bidimensional index is proposed that takes into account both the net production and the quality of it, as an attempt to provide a comprehensive and objective way to compare the research output of different institutions in a specific field, using journal contributions and citations.
Abstract: The problem of comparing academic institutions in terms of their research production is nowadays a priority issue. This paper proposes a relative bidimensional index that takes into account both the net production and its quality, as an attempt to provide a comprehensive and objective way to compare the research output of different institutions in a specific field, using journal contributions and citations. The proposed index is then applied, as a case study, to rank the top Spanish universities in the fields of Chemistry and Computer Science over the period 2000-2009. A comparison with the top 50 universities in the ARWU ranking is also made, showing that the proposed ranking is better suited to distinguishing among non-elite universities.

47 citations


Cites background from "New seniority-independent Hirsch-type index"

  • ...Other individual researcher indices have also been proposed extending the h-index, such as the g-index (Egghe 2006), q2-index (Cabrerizo et al. 2010), and others (Bornmann et al. 2010; Kosmulski 2009)....


Journal ArticleDOI
TL;DR: The assessment based on the number of SP produces comparable scores for scientists working in different disciplines of science, and in different countries.

42 citations

Journal ArticleDOI
TL;DR: The success-index is introduced, aimed at reducing the NSP-index's limitations (although it requires more computing effort), together with a detailed analysis of its operational properties and a comparison with those of the h-index.
Abstract: Among the most recent bibliometric indicators for normalizing the differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to the authors, NSP deserves much attention for its great simplicity and immediate meaning, equivalent to those of the h-index, while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although it requires more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties and a comparison with those of the h-index. Particularly interesting is the examination of the success-index's scale of measurement, which is much richer than the h-index's. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, (4) scientific journals, etc.

41 citations

References
Journal ArticleDOI
TL;DR: Owing to the availability and utility of the IF, promotion committees, funding agencies and scientists have taken to using it as a shorthand assessment of the quality of scientists or institutions, rather than only journals.
Abstract: How does one measure the quality of science? The question is not rhetorical; it is extremely relevant to promotion committees, funding agencies, national academies and politicians, all of whom need a means by which to recognize and reward good research and good researchers. Identifying high-quality science is necessary for science to progress, but measuring quality becomes even more important in a time when individual scientists and entire research fields increasingly compete for limited amounts of money. The most obvious measure available is the bibliographic record of a scientist or research institute, that is, the number and impact of their publications. Currently, the tool most widely used to determine the quality of scientific publications is the journal impact factor (IF), which is calculated by the scientific division of Thomson Reuters (New York, NY, USA) and is published annually in the Journal Citation Reports (JCR). The IF itself was developed in the 1960s by Eugene Garfield and Irving H. Sher, who were concerned that simply counting the number of articles a journal published in any given year would miss out small but influential journals in their Science Citation Index (Garfield, 2006). The IF is the average number of times articles from the journal published in the past two years have been cited in the JCR year and is calculated by dividing the number of citations in the JCR year (for example, 2007) by the total number of articles published in the two previous years (2005 and 2006). Owing to the availability and utility of the IF, promotion committees, funding agencies and scientists have taken to using it as a shorthand assessment of the quality of scientists or institutions, rather than only journals. As Garfield has noted, this use of the IF is often necessary, owing to time …
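The two-year impact factor described above is a simple ratio, which a short sketch makes concrete (the numbers below are hypothetical, not taken from the JCR):

```python
def impact_factor(citations_in_jcr_year, articles_prev_two_years):
    """Two-year journal impact factor: citations received in the JCR
    year to items published in the two preceding years, divided by the
    number of citable items published in those two years."""
    return citations_in_jcr_year / articles_prev_two_years

# Hypothetical journal: 200 citations in 2007 to its 2005-2006 articles,
# of which there were 80 in total.
print(impact_factor(200, 80))  # -> 2.5
```

Note that the ratio says nothing about the citation distribution of individual articles, which is exactly the gap the h-index comparisons above probe.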

373 citations


"New seniority-independent Hirsch-type index" refers background in this paper

  • ...Self-citations, multi-author papers, and comparison of scientists working in different fields are other problems, which have been widely discussed (Bornmann & Daniel, 2009; Egghe, 2008; Schreiber, 2007, 2008)....


Journal ArticleDOI
TL;DR: In this article, a modification of the Hirsch index is proposed that takes multiple co-authorship appropriately into account. The effect of this procedure is compared with other variants of the h-index and found to be superior to the fractionalised counting of citations and to normalization by the average number of authors in the h-core.

177 citations

Journal ArticleDOI
01 May 2007-EPL
TL;DR: In this paper, the authors propose to sharpen the index h, suggested by Hirsch as a useful index to characterize the scientific output of a researcher, by excluding the self-citations.
Abstract: I propose to sharpen the index h, suggested by Hirsch as a useful index to characterize the scientific output of a researcher, by excluding the self-citations. Performing a self-experiment and also discussing in detail two anonymous data sets, it is shown that self-citations can significantly reduce the h index in contrast to Hirsch's expectations. This result is confirmed by an analysis of 13 further data sets.
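Schreiber's sharpened index amounts to recomputing h after discarding self-citations. A rough sketch of that procedure (the data layout, a list of citing-author lists per paper, is our assumption for illustration, not the paper's own code):

```python
def h_index(citation_counts):
    """Standard Hirsch index from per-paper citation counts."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def sharpened_h(papers, author):
    """h-index after excluding self-citations.

    papers: list where each entry holds the author lists of the papers
    citing that paper.  A citation is a self-citation if `author`
    appears among the citing paper's authors.
    """
    counts = [sum(1 for citing_authors in citers if author not in citing_authors)
              for citers in papers]
    return h_index(counts)

# Two papers, each cited three times, each with one self-citation by "A":
papers = [[["A", "B"], ["C"], ["D"]],   # citing-author lists for paper 1
          [["E"], ["F"], ["A"]]]        # citing-author lists for paper 2
print(sharpened_h(papers, "A"))  # -> 2
```

In practice the exclusion can lower h noticeably, which is the paper's empirical point: self-citations reduced the h-index in the analyzed data sets, contrary to Hirsch's expectation that their effect would be minor.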

172 citations


"New seniority-independent Hirsch-type index" refers background in this paper

  • ...For example, among 16 citation records analyzed in (Schreiber, 2007), in 13 cases the ratio of the number of citations of the most cited paper to the number of papers published was in the range from 0.6 to 2.6, and the extreme values were 0.23, and 4.4 (a factor of 4, but still the same order of…...


  • ...Self-citations, multi-author papers, and comparison of scientists working in different fields are other problems, which have been widely discussed (Bornmann & Daniel, 2009; Egghe, 2008; Schreiber, 2007, 2008)....


Journal ArticleDOI
TL;DR: This study is focussed on a large-scale analysis of the citation history of all papers indexed in the 1980 annual volume of the Science Citation Index to analyse whether the share of delayed recognition papers is significant and whether such papers are typical of the work of their authors at that time.
Abstract: According to Garfield (1980), most scientists can name an example of an important discovery that had little initial impact on contemporary research, and he uses Mendel's work as a classical example. Delayed recognition is sometimes used by scientists as an argument against citation-based indicators based on citation windows defined for a short- or medium-term initial period beginning with the paper's publication year. This study is focussed on a large-scale analysis of the citation history of all papers indexed in the 1980 annual volume of the Science Citation Index. The objective is two-fold: to analyse whether the share of delayed recognition papers is significant and whether such papers are typical of the work of their authors at that time. In a first step, the background of advanced bibliometric models by Glanzel, Egghe, Rousseau and Burrell of stochastic citation processes and first-citation distributions is described briefly. The second part is devoted to the bibliometric analysis of first-citation statistics and of the phenomenon of citation delay. In a third step, finally, delayed reception publications have been studied individually. Their topics and the citation patterns of other papers by the same authors have been studied to uncover principles of regularity or exceptionality of delayed reception publications.

169 citations

Journal IssueDOI
Leo Egghe1
TL;DR: The h-index (Hirsch index) and the g-index of authors are studied for the case where authorship of the cited articles is counted fractionally.
Abstract: This article studies the h-index (Hirsch index) and the g-index of authors, in case one counts authorship of the cited articles in a fractional way. There are two ways to do this: One counts the citations to these papers in a fractional way or one counts the ranks of the papers in a fractional way as credit for an author. In both cases, we define the fractional h- and g-indexes, and we present inequalities (both upper and lower bounds) between these fractional h- and g-indexes and their corresponding unweighted values (also involving, of course, the coauthorship distribution). Wherever applicable, examples and counterexamples are provided. In a concrete example (the publication citation list of the present author), we make explicit calculations of these fractional h- and g-indexes and show that they are not very different from the unweighted ones. © 2008 Wiley Periodicals, Inc.
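Of the two fractional schemes described, the citation-counting variant is the simpler: each paper's citations are divided by its number of authors before the usual Hirsch threshold is applied. A sketch under that reading (our illustration of the idea, not Egghe's code):

```python
def fractional_h(papers):
    """Fractional h-index, citation-counting variant.

    papers: iterable of (citations, n_authors) pairs.  Each paper
    contributes citations / n_authors to the ranked list, so heavily
    co-authored papers weigh less.
    """
    frac = sorted((cites / n_authors for cites, n_authors in papers),
                  reverse=True)
    return sum(1 for rank, v in enumerate(frac, start=1) if v >= rank)

# (40 citations, 2 authors) -> 20; (30, 3) -> 10; (9, 1) -> 9; (4, 2) -> 2
print(fractional_h([(40, 2), (30, 3), (9, 1), (4, 2)]))  # -> 3
```

The second variant, which fractionalizes the ranks rather than the citation counts, leads to the bounds between the fractional and unweighted indexes that the abstract mentions.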

149 citations


"New seniority-independent Hirsch-type index" refers background in this paper

  • ...Self-citations, multi-author papers, and comparison of scientists working in different fields are other problems, which have been widely discussed (Bornmann & Daniel, 2009; Egghe, 2008; Schreiber, 2007, 2008)....
