scispace - formally typeset

Showing papers on "Citation impact published in 2008"


Journal IssueDOI
TL;DR: In this paper, the citation and publication practices of non-Institute of Scientific Information (ISI) journals are compared to those of specialist journals publishing in languages other than English.
Abstract: Aging of publications, percentage of self-citations, and impact vary from journal to journal within fields of science. The assumption that citation and publication practices are homogeneous within specialties and fields of science is invalid. Furthermore, the delineation of fields and specialties is fuzzy. Institutional units of analysis and persons may move between fields or span different specialties. The match between the citation index and institutional profiles varies among institutional units and nations. The respective matches may heavily affect the representation of the units. Non-Institute of Scientific Information (ISI) journals are increasingly cornered into “transdisciplinary” Mode-2 functions, with the exception of specialist journals publishing in languages other than English. An “externally cited impact factor” can be calculated for these journals. The citation impact of non-ISI journals is demonstrated using Science and Public Policy as the example. © 2008 Wiley Periodicals, Inc.
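The abstract does not spell out how the “externally cited impact factor” is computed. As a minimal sketch, assuming it mirrors the standard two-year impact-factor window but counts only citations arriving from external (e.g. ISI-covered) journals — the function name and the window are illustrative assumptions, not taken from the paper:

```python
def external_impact_factor(external_citations, citable_items):
    """Impact-factor-style ratio for a journal outside the citation index.

    external_citations: citations received in year Y from external
                        (index-covered) journals to items published in
                        years Y-1 and Y-2
    citable_items:      number of citable items published in Y-1 and Y-2
    """
    if citable_items == 0:
        raise ValueError("no citable items in the two-year window")
    return external_citations / citable_items

# Hypothetical figures: 120 external citations to 200 citable items
print(external_impact_factor(120, 200))  # 0.6
```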

228 citations


Journal ArticleDOI
TL;DR: The research access/impact problem arises because journal articles are not accessible to all of their would-be users and are therefore losing potential research impact. The solution is to make all articles open access (OA, i.e., accessible online, free for all).

213 citations


Journal ArticleDOI
TL;DR: An important corollary from this study is that Google Scholar’s wider coverage of Open Access (OA) web documents is likely to give a boost to the impact of OA research and the OA movement.
Abstract: For practical reasons, bibliographic databases can only contain a subset of the scientific literature. The ISI citation databases are designed to cover the highest impact scientific research journals as well as a few other sources chosen by the Institute for Scientific Information (ISI). Google Scholar also contains citation information, but includes a less quality controlled collection of publications from different types of web documents. We define Google Scholar unique citations as those retrieved by Google Scholar which are not in the ISI database. We took a sample of 882 articles from 39 open access ISI-indexed journals in 2001 from biology, chemistry, physics and computing and classified the type, language, publication year and accessibility of the Google Scholar unique citing sources. The majority of Google Scholar unique citations (70%) were from full-text sources and there were large disciplinary differences between types of citing documents, suggesting that a wide range of non-ISI citing sources, especially from non-journal documents, are accessible by Google Scholar. This might be considered to be an advantage of Google Scholar, since it could be useful for citation tracking in a wider range of open access scholarly documents and to give a broader type of citation impact. An important corollary from our study is that Google Scholar’s wider coverage of Open Access (OA) web documents is likely to give a boost to the impact of OA research and the OA movement.

175 citations


Journal ArticleDOI
Henk F. Moed1
TL;DR: A longitudinal analysis of UK science covering almost 20 years revealed, in the years prior to a Research Assessment Exercise (RAE 1992, 1996 and 2001), three distinct bibliometric patterns that can be interpreted in terms of scientists’ responses to the principal evaluation criteria applied in a RAE.
Abstract: A longitudinal analysis of UK science covering almost 20 years revealed, in the years prior to a Research Assessment Exercise (RAE 1992, 1996 and 2001), three distinct bibliometric patterns that can be interpreted in terms of scientists’ responses to the principal evaluation criteria applied in a RAE. When in the RAE 1992 total publication counts were requested, UK scientists substantially increased their article production. When a shift in evaluation criteria in the RAE 1996 was announced from ‘quantity’ to ‘quality’, UK authors gradually increased their number of papers in journals with a relatively high citation impact. And during 1997–2000, institutions raised their number of active research staff by stimulating their staff members to collaborate more intensively, or at least to co-author more intensively, although their joint paper productivity did not increase. This finding suggests that, along the way towards the RAE 2001, evaluated units in a sense shifted back from ‘quality’ to ‘quantity’. The analysis also observed a slight upward trend in overall UK citation impact, corroborating conclusions from an earlier study. The implications of the findings for the use of citation analysis in the RAE are briefly discussed.

169 citations


Journal ArticleDOI
TL;DR: In oncology, the WoS is a genuine subset of Scopus and tends to cover the best journals from it in terms of citation impact per paper, whereas Scopus, with its broader coverage, is more similar to large disciplinary literature databases.

165 citations


Journal ArticleDOI
TL;DR: Multivariate analyses demonstrated several strong predictors of impact, including first author eminence, having a more senior later author, journal prestige, article length, and number and recency of references.
Abstract: Factors contributing to citation impact in social-personality psychology were examined in a bibliometric study of articles published in the field’s three major journals. Impact was operationalized as citations accrued over 10 years by 308 articles published in 1996, and predictors were assessed using multiple databases and trained coders. Predictors included author characteristics (i.e., number, gender, nationality, eminence), institutional factors (i.e., university prestige, journal prestige, grant support), features of article organization (i.e., title characteristics, number of studies, figures and tables, number and recency of references), and research approach (i.e., topic area, methodology). Multivariate analyses demonstrated several strong predictors of impact, including first author eminence, having a more senior later author, journal prestige, article length, and number and recency of references. Many other variables — e.g., author gender and nationality, collaboration, university prestige, grant support, title catchiness, number of studies, experimental vs. correlational methodology, topic area — did not predict impact.

160 citations


Journal ArticleDOI
TL;DR: The authors show, using the mirror of science and technology indicators, that the triad model no longer holds in the 21st century.
Abstract: The US-EU race for world leadership in science and technology has become the favourite subject of recent studies. Studies issued by the European Commission reported the increase of the European share in the world’s scientific production and announced world leadership of the EU in scientific output at the end of the last century. In order to be able to monitor those types of global changes, the present study is based on the 15-year period 1991–2005. A set of bibliometric and technometric indicators is used to analyse activity and impact patterns in science and technology output. This set comprises publication output indicators such as (1) the share in the world total, (2) subject-based publication profiles, (3) citation-based indicators like journal- and subject-normalised mean citation rates, (4) international co-publications and their impact, as well as (5) patent indicators and publication-patent citation links (in both directions). The evolution of national bibliometric profiles, ‘scientific weight’ and science-technology linkage patterns is discussed as well. The authors show, using the mirror of science and technology indicators, that the triad model no longer holds in the 21st century. China is challenging the leading sciento-economic powers, and the time is approaching when this country will represent the world’s second largest potential in science and technology. China and other emerging scientific nations like South Korea, Taiwan, Brazil and Turkey are already changing the balance of power as measured by scientific production, as they are at least in part responsible for the relative decline of the former triad.

119 citations


Journal ArticleDOI
23 Jul 2008-PLOS ONE
TL;DR: Metrics are needed that measure the networking intensity of a single scientist or group of scientists, accounting for patterns of co-authorship; adopting such adjustments in scientific appraisals may offer incentives for more accountable co-authorship behaviour in published articles.
Abstract: Appraisal of the scientific impact of researchers, teams and institutions with productivity and citation metrics has major repercussions. Funding and promotion of individuals and survival of teams and institutions depend on publications and citations. In this competitive environment, the number of authors per paper is increasing and apparently some co-authors don't satisfy authorship criteria. Listing of individual contributions is still sporadic and also open to manipulation. Metrics are needed to measure the networking intensity for a single scientist or group of scientists accounting for patterns of co-authorship. Here, I define I(1) for a single scientist as the number of authors who appear in at least I(1) papers of the specific scientist. For a group of scientists or institution, I(n) is defined as the number of authors who appear in at least I(n) papers that bear the affiliation of the group or institution. I(1) depends on the number of papers authored N(p). The power exponent R of the relationship between I(1) and N(p) categorizes scientists as solitary (R>2.5), nuclear (R = 2.25-2.5), networked (R = 2-2.25), extensively networked (R = 1.75-2) or collaborators (R<1.75). R may be used to adjust for co-authorship networking the citation impact of a scientist. I(n) similarly provides a simple measure of the effective networking size to adjust the citation impact of groups or institutions. Empirical data are provided for single scientists and institutions for the proposed metrics. Cautious adoption of adjustments for co-authorship and networking in scientific appraisals may offer incentives for more accountable co-authorship behaviour in published articles.
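The I(1) index defined above can be computed as an h-index-style fixed point over co-author frequencies. A minimal sketch follows; note that the formula R = ln N(p) / ln I(1) is an assumption inferred from the abstract’s wording about “the power exponent R of the relationship between I(1) and N(p)”, not a formula quoted from the paper:

```python
import math
from collections import Counter

def i1_index(papers):
    """I(1): the largest k such that at least k co-authors each appear in
    at least k of the scientist's papers (an h-index over co-author counts).
    `papers` is a list of co-author lists (excluding the scientist)."""
    counts = sorted(Counter(a for p in papers for a in p).values(), reverse=True)
    return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

def networking_class(n_papers, i1):
    """Classify via the power exponent R, ASSUMING Np ~ I1**R, i.e.
    R = ln(Np) / ln(I1). Requires i1 >= 2 so the logarithm is nonzero."""
    r = math.log(n_papers) / math.log(i1)
    if r > 2.5:
        return "solitary"
    if r > 2.25:
        return "nuclear"
    if r > 2.0:
        return "networked"
    if r > 1.75:
        return "extensively networked"
    return "collaborator"

# Toy example: co-authors A and B each appear in >= 2 papers, so I(1) = 2
papers = [["A", "B"], ["A", "C"], ["A", "B"], ["D"]]
print(i1_index(papers))  # 2
```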

69 citations


Journal ArticleDOI
TL;DR: In this paper some new fields of application of Hirsch-related statistics are presented and so far unrevealed properties of the h-index are analysed in the context of rank-frequency and extreme-value statistics.
Abstract: In this paper some new fields of application of Hirsch-related statistics are presented. Furthermore, so far unrevealed properties of the h-index are analysed in the context of rank-frequency and extreme-value statistics.
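For reference, the h-index analysed above is the largest h such that h of an author’s papers have received at least h citations each; a minimal computation:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cs = sorted(citations, reverse=True)
    # In the descending list, position i qualifies while cs[i-1] >= i
    return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

print(h_index([10, 8, 5, 4, 3]))  # 4
```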

55 citations


Journal ArticleDOI
TL;DR: It is suggested that the emergence of Asian countries in the field Oncology has displaced European articles more strongly than articles from the USA, and that universities from Germany, and--to a lesser extent--those from Italy, the Netherlands, UK, and Sweden, dominate a ranking of European universities based on number of articles in oncology.

47 citations


Journal ArticleDOI
TL;DR: The citation performance of UK research units is calculated for each of three levels of article aggregation, and the correlation between average normalised citation impact and peer-reviewed grade does indeed vary according to the selected level of ‘zoom’.
Abstract: Bibliometric indicators are widely used to compare performance between units operating in different fields of science. For cross-field comparisons, article citation rates have to be normalised to baseline values because citation practices vary between fields, in respect of timing and volume. Baseline citation values vary according to the level at which articles are aggregated (journal, sub-field, field). Consequently, the normalised citation performance of each research unit will depend on the level of aggregation, or ‘zoom’, that was used when the baselines were calculated. Here, we calculate the citation performance of UK research units for each of three levels of article-aggregation. We then compare this with the grade awarded to that unit by external peer review. We find that the correlation between average normalised citation impact and peer-reviewed grade does indeed vary according to the selected level of zoom. The possibility that the level of ‘zoom’ will affect our assessment of relative...
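The normalisation described above can be sketched as follows. The function name and the mean-of-ratios aggregation are illustrative assumptions (bibliometric studies also use a ratio of sums); swapping the baseline table for journal-, sub-field- or field-level expected citation rates corresponds to changing the level of ‘zoom’:

```python
def normalized_impact(articles, baselines):
    """Mean of (citations / baseline mean) over a unit's articles.

    articles:  list of (citations, baseline_key) pairs
    baselines: maps a key (journal, sub-field or field, depending on the
               chosen 'zoom' level) to its expected citation rate
    """
    ratios = [c / baselines[key] for c, key in articles]
    return sum(ratios) / len(ratios)

# Hypothetical unit with three articles, normalised at field level
arts = [(10, "phys"), (2, "phys"), (6, "chem")]
base_field = {"phys": 4.0, "chem": 6.0}
print(normalized_impact(arts, base_field))  # ≈ 1.333
```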

Journal ArticleDOI
TL;DR: A literature review of the main research on the citation impact of Open Access journals indicates that the methodology of citation counts is fairly uniform and that there is substantial research on motivation for URL citations to LIS articles.
Abstract: Purpose – This literature review aims to provide a synthesis of available key information about the citation impact of Open Access journals in LIS and science in general. Citation impact is defined as a surrogate measure of citation counts. Design/methodology/approach – Based on a literature review, this paper discusses the methodology of the data collections for citation counts. The literature review is structured to address the literature about citation impact of Open Access journals. Findings – The literature review indicates that there is quite a uniform way about methodology of citation counts and substantial research about motivation for URL citations to LIS articles. Originality/value – This literature review is a comprehensive study of the main research about citation impact of Open Access journals, focused on LIS journals.

Journal ArticleDOI
TL;DR: This paper examines policy-relevant effects of a yearly public ranking of individual researchers and their institutes in economics by means of their publication output in international top journals. Limitations of ranking studies and of bibliometric monitoring in the field of economics are discussed.
Abstract: This paper examines policy-relevant effects of a yearly public ranking of individual researchers and their institutes in economics by means of their publication output in international top journals. In 1980, a grassroots ranking (‘Top 40’) of researchers in the Netherlands by means of their publications in international top journals started a competition among economists. The objective was to improve economics research in the Netherlands to an internationally competitive level. The ranking lists did stimulate output in prestigious international journals. Netherlands universities tended to perform well compared to universities elsewhere in the EU concerning volume of output in ISI source journals, but their citation impact was average. Limitations of ranking studies and of bibliometric monitoring in the field of economics are discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors examined publication, manuscript review, and citation statistics for journals that cover topics in economics and finance and found that lower acceptance rates are associated with higher citation count, citation impact factors, and survey-based rankings of journals.
Abstract: This article examines publication, manuscript review, and citation statistics for journals that cover topics in economics and finance. The authors found that lower acceptance rates are associated with higher citation count, citation impact factors, and survey-based rankings of journals. However, rankings for any given journal may be substantially different depending on the ranking method applied. Acceptance rate information is available for more journals than are covered by citation rate data or survey-based rankings. So the authors also examine the association of acceptance rates with other factors that are plausibly associated with journal quality to determine whether acceptance rates for the broader population of journals reflect quality. The authors found that lower acceptance rates are significantly associated with type of readership, higher circulation, lower rates of invited papers, and availability of reviewers’ comments.

Journal ArticleDOI
TL;DR: In this article, the first comparison of citation counts and mentoring impact (MPACT) indicators is provided, which serve to quantify the process of doctoral mentoring, with emphasis on differences between faculty ranks.

Journal ArticleDOI
TL;DR: This article used citation data drawn from ten heterodox and ten mainstream journals to identify and build on these gaps and pointed out that Woolley's article omits several important heterodox economic journals in her study, and offered a more critical evaluation of mainstream journals and economists relative to Feminist Economics and feminist economists.
Abstract: This essay is a comment on “The Citation Impact of Feminist Economics” by Frances Woolley, which appeared in Feminist Economics, Vol. 11, No. 3, November 2005. This contribution comments on Frances Woolley's recent Feminist Economics article, “The Citation Impact of Feminist Economics.” It points to two avenues through which Woolley's article could have better illuminated the extent of Feminist Economics' scholarly relationship with the communities of both heterodox and mainstream economists: first, she omits several important heterodox economic journals in her study, and second, she could have offered a more critical evaluation of mainstream journals and economists relative to Feminist Economics and feminist economists. This paper uses citation data drawn from ten heterodox and ten mainstream journals to identify and build on these gaps.

Journal Article
TL;DR: In this article, Goldstone and Leydesdorff discussed the import and export of the journal Cognitive Science in terms of aggregated journal-to-journal citations, and the main conclusion of the analysis was that the journal functions as an important intermediary between different disciplinary groups of journals that would be less directly connected if Cognitive Science did not exist.

Journal ArticleDOI
TL;DR: It is shown that a set of properly normalized indicators can serve as a basis of comparative assessment within and even among different clusters, provided that their profiles still overlap and such comparison is thus meaningful.
Abstract: A common problem in comparative bibliometric studies at the meso and micro level is the differentiation and specialization of research profiles of the objects of analysis at lower levels of aggregation. In this study, institutional profile clusters are used to examine which level of the hierarchical subject classification should preferably be used to build subject-normalized citation indicators. It is shown that a set of properly normalized indicators can serve as a basis of comparative assessment within and even among different clusters, provided that their profiles still overlap and such comparison is thus meaningful. Using the example of 24 European universities, a new version of relational charts is presented for the comparative assessment of citation impact.

Journal ArticleDOI
TL;DR: There is convincing evidence that impact factors do not tell us as much as some people think about the quality of the science that journals are publishing, including data showing conclusively that the IF is strongly influenced by a small minority of papers.
Abstract: When in 1955 Eugene Garfield conceived an index to measure scientific journals’ quality and guide decisions regarding the journals that would be covered by the Science Citation Index, he did not consider its possible misuse [2]. “At the beginning it did not occur to me that impact would one day become the subject of widespread controversy. It has been misused in many situations, especially in the evaluation of individual researchers. The term “impact factor” (IF) has gradually evolved, especially in Europe, to mean both journal and author impact. This ambiguity often causes problems. The use of journal IF’s instead of actual article citation counts for evaluating authors is probably the most controversial issue.” [5]. Later, when Garfield realized the danger resulting from misunderstanding the IF, he used every occasion to warn against the misuses of the index, for example in the following text: “Journal impact data have been grafted on to certain large scale studies of university departments and even individuals. Sometimes a journal’s impact is used as a substitute for the evaluation of recently published articles simply because it takes several years for the average article to be cited. However, a small percentage of articles will experience almost immediate and high citation. Using the journal’s average citation impact instead of the actual article impact is tantamount to grading by the prestige of the journal involved. While expedient, it is dangerous. Although journal assessments are important, evaluation of faculty is a much more important exercise that affects individual careers. Impact numbers should not be used as surrogates except in unusual circumstances.” [3]. It was noted quite long ago that there is only a weak and casual correlation between the “citedness” of individual articles and the value of a journal’s IF [10–12]. 
Many people naively expect that every journal may be characterized by its distribution of citation numbers, which is quite narrow and proportional to the journal’s IF, so publication of an article in a journal of high IF automatically guarantees that it receives a large number of citations (see Fig. 1). Hence the false assumption that a journal’s IF may be attributed to all articles within the journal and that it is a number useful in the evaluation of the individual authors. The real distribution of citations is, however, broad and very skewed. The number of citations in any journal, regardless of its IF value, exhibits an exponentially decreasing “background” and a “tail” of papers which have been cited many more times. It is a plausible hypothesis that the progress of science is due mainly (perhaps only?) to papers contributing to this “tail”. An example of real data is shown in Fig. 2, adapted from Redner [9]. Growing criticism of the use of journals’ IFs has been expressed by many authors [4, 6, 7, 8]. More recently, even the editors of journals have joined this criticism. An Editorial in Nature of June 23, 2005, included data showing conclusively that the IF is strongly influenced by a small minority of papers, and concluded that “Impact factors don’t tell us as much as some people think about the quality of the science that journals are publishing.” [1]. Thus there is convincing evidence that it makes little
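The skewness argument is easy to illustrate numerically: with hypothetical citation counts, a single highly cited paper in the “tail” can dominate an IF-style mean while the typical (median) article is cited far less:

```python
# Hypothetical citation counts for one journal's articles: a low
# "background" plus one highly cited paper in the "tail".
citations = [0, 0, 1, 1, 1, 2, 2, 3, 4, 86]

mean = sum(citations) / len(citations)            # what an IF-style average reflects
median = sorted(citations)[len(citations) // 2]   # what a typical article receives
top_share = max(citations) / sum(citations)       # share contributed by one paper

print(mean, median, top_share)  # 10.0 2 0.86
```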

Journal ArticleDOI
24 Oct 2008
TL;DR: The authors propose the first randomized controlled study of Open Access publishing to ascertain whether providing free access to scholarly articles leads to greater readership and increased citation impact.
Abstract: Open Access is a lightning rod for controversy in scholarly communication, attracting more opinions and rhetoric than hard data. The few empirical studies to date have used methodologies that do not adequately control for potential biases and competing explanations. We propose the first randomized controlled study of Open Access publishing to ascertain whether providing free access to scholarly articles leads to greater readership and increased citation impact. This experiment will involve seven publishers and 36 research journals (plus an additional twelve control journals), allowing greater generalizability over subject disciplines in the sciences, social sciences and humanities.

Journal ArticleDOI
TL;DR: In this article, a response is presented to “A Comment on the Citation Impact of Feminist Economics” by Frederic Lee, which appears in this issue of Feminist Economics.
Abstract: This essay is a response to “A Comment on the Citation Impact of Feminist Economics,” by Frederic Lee, which appears in this issue of Feminist Economics. Frederic Lee's comment is a valuable addition to our understanding of the intellectual interactions between feminist economics and other schools of heterodox thought, and demonstrates how much can be learned by studying citation patterns.

Journal Article
TL;DR: In this article, the authors analyzed the rapid increase of publications from Turkey, and of citations to them, for the period 1997-2006 using ISI's Essential Science Indicators (ESI) and bibliometric indicators.
Abstract: In this article, the rapid increase of publications and citations of publications from Turkey is analyzed for the period 1997-2006 using ISI’s Essential Science Indicators. Specialization and citation impact by scientific discipline, as well as internationally co-authored publications, are described using bibliometric indicators. The importance of bibliometric analysis for research evaluation and for science and technology indicators is also emphasized.

Journal Article
TL;DR: With the dawn of the 21st century, Open Access (OA) has posed a new challenge to scholarly communication and the publishing world.
Abstract: With the dawn of the 21st century, Open Access (OA) has posed a new challenge to scholarly communication and the publishing world. Rising journal prices and flat library budgets over the last two decades have made it difficult for libraries to maintain journal subscriptions at a level sufficient to support their research and development activities. In the meantime, publication of scholarly articles in the public domain through the Web has opened new avenues for the scholarly world. Various means of OA are discussed along with supporting business models. The authors review studies from the recent past on the impact of OA on the use and citation of scholarship and research. These studies show that OA helps to increase the citation impact of journals and makes scientific research more visible and accessible, thus profoundly affecting scientific communication. The authors expect OA to have a bright future.

Journal ArticleDOI
01 Jan 2008
TL;DR: Using this indicator, the dynamics of the citation impact environments of the journals Cognitive Science, Social Networks, and Nanotechnology are animated and assessed in terms of interdisciplinarity among the disciplines involved.
Abstract: Structural change—e.g., interdisciplinary development—is often an objective of government interventions in science and technology. However, the dynamic analysis of structural change in the organization of the sciences requires methodologically the integration of multivariate and time-series analysis. Recent developments in multidimensional scaling (MDS) enable us to distinguish the stress originating in each time-slice from the stress originating from the sequencing of time-slices, and thus to locally optimize the trade-offs between these two sources of variance in the animation. Furthermore, visualization programs like Pajek and Visone allow us to show not only the positions of the nodes, but also their relational attributes like betweenness centrality. Betweenness centrality in the vector space can be considered as an indicator of interdisciplinarity. Using this indicator, the dynamics of the citation impact environments of the journals Cognitive Science, Social Networks, and Nanotechnology are animated and assessed in terms of interdisciplinarity among the disciplines involved.
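Betweenness centrality, used above as an indicator of interdisciplinarity, counts how often a node lies on the shortest paths between other pairs of nodes. A self-contained sketch of Brandes' algorithm on a toy graph (the graph and labels are invented for illustration; the study itself uses Pajek/Visone on real citation environments):

```python
from collections import deque

def betweenness(graph):
    """Betweenness centrality for a small undirected, unweighted graph
    given as {node: set(neighbours)}, via Brandes' algorithm."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        sigma = {v: 0 for v in graph}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in graph}; dist[s] = 0
        preds = {v: [] for v in graph}
        order, queue = [], deque([s])
        while queue:                                   # BFS from s
            v = queue.popleft(); order.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):                      # dependency accumulation
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}           # undirected: halve

# Toy 'citation environment': journal C bridges {A, B} and {D, E}
g = {"A": {"C"}, "B": {"C"}, "C": {"A", "B", "D"}, "D": {"C", "E"}, "E": {"D"}}
bc = betweenness(g)
print(max(bc, key=bc.get))  # C
```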

Journal ArticleDOI
TL;DR: The extent to which the COUNTER guidelines on usage data have met their goals for recording and reporting usage data in a consistent, credible and compatible way is looked at.
Abstract: Updated from a paper presented at the UKSG annual conference, Torquay, April 2008. This paper looks at the extent to which the COUNTER guidelines on usage data have met their goals for recording and reporting usage data in a consistent, credible and compatible way. It compares the proposed Usage Factor with the Citation Impact Factor and highlights key differences between the act of downloading and the act of citation. Drawing on the experience of the Impact Factor, it then speculates on how measuring the literature can change the literature and how this Observer Effect might impact usage patterns.

01 Jan 2008
TL;DR: A novel way to evaluate and compare the performance of individual scientists, which combines information about quantity and quality with the use of citation analysis.
Abstract: To the Editor: Success in science is closely related to the quantity and the quality (usefulness) of a person's overall body of published work (1). Publishing articles in international scientific journals is a prerequisite for gaining recognition and standing in the scientific community (2). Counting the number of articles in a bibliography testifies to a person's productivity or research output but not the quality or utility of the published work. The usefulness of a published paper is strongly associated with the number of times the work is cited in articles published by other scientists, thus providing evidence of visibility and utility (2,3). Traditionally, the volume of published papers, the cumulative number of citations, and the citation impact (citations per article) have served as kudos when academic appointments are made or when research grants are awarded (2). In the past, much attention has also been given to the impact factors of journals where articles are published, but as explained elsewhere, forensic science and legal medicine journals generally have low values (4,5). The nine scientific journals within the subject category of "Medicine, Legal" had an average impact factor of 1.19 in 2006, and the range was from 0.45 to 2.62. The Thomson Institute for Scientific Information (ISI) has pioneered the use of citation analysis starting with the Science Citation Index, which was launched in 1964 (6). This database is now widely available through university and medical school libraries and is searchable online via the Web of Knowledge. This makes it a relatively easy task to check the number of times an article is cited or to evaluate the citation record of individual scientists. Other online options for counting citations to scientific articles include Google Scholar and SCOPUS (Elsevier), although neither of these can match Web of Science in terms of journal coverage, citation time-span, scope, and accuracy. 
A novel way to evaluate and compare the performance of individual scientists, which combines information about quantity of