
Showing papers on "Scientometrics published in 2016"


Journal ArticleDOI
14 Jan 2016-PLOS ONE
TL;DR: This study analyzes two datasets retrieved from the Web of Science with the aim of giving a scientometric description of what the concept of citizen science (CS) entails; it accounts for the concept's development over time, identifies the strands of research that have adopted CS, and assesses the scientific output achieved in CS-related projects.
Abstract: Context The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach to science are often heard in connection with large datasets and the possibilities of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy, and as a way of promoting political decision processes involving environment and health. Objective In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time and which strands of research have adopted CS, and give an assessment of the scientific output achieved in CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms. Results Results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation and ecology, and utilizes CS mainly as a methodology for collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is to be found in biology and conservation research. In absolute numbers, the number of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data.

405 citations


Journal ArticleDOI
TL;DR: It is illustrated how scientific workflows, and the Taverna workbench in particular, can be used in bibliometrics, and the specific capabilities of Taverna that make this software a powerful tool in this field are discussed.
Abstract: Scientific workflows organize the assembly of specialized software into an overall data flow and are particularly well suited for multi-step analyses using different types of software tools. They are also favorable in terms of reusability, as previously designed workflows can be made publicly available through the myExperiment community and then used in other workflows. We here illustrate how scientific workflows and the Taverna workbench in particular can be used in bibliometrics. We discuss the specific capabilities of Taverna that make this software a powerful tool in this field, such as automated data import via Web services, data extraction from XML by XPaths, and statistical analysis and visualization with R. The support of the latter is particularly relevant, as it allows integration of a number of recently developed R packages specifically for bibliometrics. Examples are used to illustrate the possibilities of Taverna in the fields of bibliometrics and scientometrics.

109 citations


Posted ContentDOI
TL;DR: It is found that it is feasible to depict an accurate representation of the current state of the Bibliometrics community using data from GSC (the most influential authors, documents, journals, and publishers), and a taxonomy of all the errors that may affect the reliability of the data contained in each of these platforms is presented.
Abstract: Following in the footsteps of the model of scientific communication, which has recently gone through a metamorphosis (from the Gutenberg galaxy to the Web galaxy), a change in the model and methods of scientific evaluation is also taking place. A set of new scientific tools are now providing a variety of indicators which measure all actions and interactions among scientists in the digital space, making new aspects of scientific communication emerge. In this work we present a method for "capturing" the structure of an entire scientific community (the Bibliometrics, Scientometrics, Informetrics, Webometrics, and Altmetrics community) and the main agents that are part of it (scientists, documents, and sources) through the lens of Google Scholar Citations (GSC). Additionally, we compare these author "portraits" to the ones offered by other profile or social platforms currently used by academics (ResearcherID, ResearchGate, Mendeley, and Twitter), in order to test their degree of use, completeness, reliability, and the validity of the information they provide. A sample of 814 authors (researchers in Bibliometrics with a public profile created in GSC) was subsequently searched in the other platforms, collecting the main indicators computed by each of them. The data collection was carried out in September 2015. The Spearman correlation (α = 0.05) was applied to these indicators (a total of 31), and a Principal Component Analysis was carried out in order to reveal the relationships among metrics and platforms as well as the possible existence of metric clusters. We found that it is feasible to depict an accurate representation of the current state of the Bibliometrics community using data from GSC (the most influential authors, documents, journals, and publishers). Regarding the number of authors found in each platform, GSC takes the first place (814 authors), followed at a distance by ResearchGate (543), which is currently growing at a vertiginous speed.
The number of Mendeley profiles is high, although 17.1% of them are basically empty. ResearcherID is also affected by this issue (34.45% of the profiles are empty), as is Twitter (47% of the Twitter accounts have published fewer than 100 tweets). Only 11% of our sample (93 authors) have created a profile in all the platforms analyzed in this study. From the PCA, we found two kinds of impact on the Web: first, all metrics related to academic impact; this first group can further be divided into usage metrics (views and downloads) and citation metrics. Second, all metrics related to connectivity and popularity (followers). ResearchGate indicators, as well as Mendeley readers, present a high correlation to all the indicators from GSC, but only a moderate correlation to the indicators in ResearcherID. Twitter indicators achieve only low correlations to the rest of the indicators, the highest of these being to GSC (0.42-0.46) and to Mendeley (0.41-0.46). Lastly, we present a taxonomy of all the errors that may affect the reliability of the data contained in each of these platforms, with a special emphasis on GSC, since it has been our main source of data. These errors alert us to the danger of blindly using any of these platforms for the assessment of individuals, without verifying the veracity and exhaustiveness of the data. In addition to this working paper, we have also made available a website where all the data obtained for each author and the results of the analysis of the most cited documents can be found: Scholar Mirrors.
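The study above correlates indicator pairs with Spearman's rank correlation. As a minimal, self-contained sketch (with invented indicator values, not the study's data), the coefficient can be computed from average ranks:

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical GSC citations vs. Mendeley readers for five authors;
# the two series are perfectly monotone, so rho = 1.0.
print(spearman([120, 85, 300, 40, 10], [95, 60, 210, 50, 5]))  # → 1.0
```

In practice one would use `scipy.stats.spearmanr` over all 31 indicators; the hand-rolled version above just makes the rank-based logic explicit.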

93 citations


Journal ArticleDOI
18 Apr 2016-PLOS ONE
TL;DR: Although the kinetics of journal self-cites is generally faster than that of foreign cites, it shows some field-specific characteristics; in particular, in information science journals the initial increase in the share of journal self-citations during post-publication year 0 was completely absent.
Abstract: Bibliometric indicators increasingly affect the careers, funding, and reputation of individuals, their institutions and journals themselves. In contrast to author self-citations, little is known about the kinetics of journal self-citations. Here we hypothesized that they may show a generalizable pattern within particular research fields or across multiple fields. We thus analyzed self-cites to 60 journals from three research fields (multidisciplinary sciences, parasitology, and information science). We also hypothesized that the kinetics of journal self-citations and citations received from other journals of the same publisher may differ from foreign citations. We analyzed journals published by the American Association for the Advancement of Science, Nature Publishing Group, and Editura Academiei Române. We found that although the kinetics of journal self-cites is generally faster than that of foreign cites, it shows some field-specific characteristics. Particularly in information science journals, the initial increase in the share of journal self-citations during post-publication year 0 was completely absent. Self-promoting journal self-citations of top-tier journals have rather indirect but negligible direct effects on bibliometric indicators, affecting just the immediacy index and marginally increasing the impact factor itself as long as the affected journals are well established in their fields. In contrast, other forms of journal self-citations and citation stacking may severely affect the impact factor or other citation-based indices.
We identified here a network consisting of three Romanian physics journals, Proceedings of the Romanian Academy, Series A, Romanian Journal of Physics, and Romanian Reports in Physics, which displayed low to moderate ratios of journal self-citations but recently multiplied their impact factors, and were mutually responsible for 55.9%, 64.7% and 63.3% of the citations within the impact factor calculation window to the three journals, respectively. They received hardly any network self-cites prior to the impact factor calculation window, and their network self-cites decreased sharply after it. Journal self-citations and citation stacking require increased attention and elimination from citation indices.
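The network shares reported above can be reproduced from a breakdown of the citations a journal receives inside its impact-factor window, grouped by citing journal. A minimal sketch follows; the journal names come from the abstract, but the citation counts are invented for illustration:

```python
# Hypothetical citation counts received by one journal within its
# two-year impact-factor window, broken down by citing journal.
cites_by_source = {
    "Romanian Journal of Physics": 180,            # network member
    "Romanian Reports in Physics": 150,            # network member
    "Proceedings of the Romanian Academy A": 90,   # the cited journal itself
    "Physical Review D": 40,                       # foreign citations
    "Other journals": 120,
}
network = {"Romanian Journal of Physics", "Romanian Reports in Physics",
           "Proceedings of the Romanian Academy A"}

total = sum(cites_by_source.values())
network_share = sum(c for j, c in cites_by_source.items() if j in network) / total
print(f"network share of IF-window citations: {network_share:.1%}")  # → 72.4%
```

A share of this size within the calculation window, combined with near-zero network cites outside it, is exactly the stacking signature the paper describes.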

87 citations


Journal ArticleDOI
TL;DR: The co-word occurrence maps drawn at different periods show the changes and stabilities in the concepts related to the field of Informetrics.
Abstract: Purpose – The purpose of this article is to investigate the use of the word co-occurrence analysis method in mapping scientific fields, with emphasis on the field of Informetrics. Design/methodology/approach – This is an applied study using scientometrics, co-word analysis and network analysis, and its steps are summarised as follows: collecting the data related to the Informetrics field indexed in the Web of Science (WOS) database, refining and standardising the keywords of the articles extracted from WOS and preparing a selected list of these keywords, drawing the word co-occurrence map of the Informetrics field, and analysing the results. Findings – Based on the resulting maps, concepts such as information science, library, bibliometric analysis, innovation and text mining are the most widely used topics in the field of Informetrics. The co-word occurrence maps drawn at different periods show the changes and stabilities in the concepts related to the field of Informetrics. A number of topics such as “bibl...
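The core of co-word analysis is a keyword co-occurrence matrix: two keywords are linked with a weight equal to the number of articles in which they appear together. A minimal sketch, using made-up keyword lists rather than the study's WOS data:

```python
from itertools import combinations
from collections import Counter

# Hypothetical standardised keyword lists, one per article.
articles = [
    ["informetrics", "bibliometric analysis", "citation"],
    ["informetrics", "text mining", "citation"],
    ["library", "information science", "bibliometric analysis"],
    ["text mining", "citation", "informetrics"],
]

cooc = Counter()
for kws in articles:
    # Sorted unique pairs so ("a", "b") and ("b", "a") count as one edge.
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

# The strongest links become the edges of the co-word map.
for pair, w in cooc.most_common(3):
    print(pair, w)
```

From a matrix like this, network-analysis tools (e.g. VOSviewer or Gephi) lay out the map and cluster the keywords into thematic groups.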

79 citations


Journal ArticleDOI
TL;DR: It is shown empirically that the measurement of “quality” in terms of citations can be qualified: short-term citation currency at the research front can be distinguished from longer-term processes of incorporation and codification of knowledge claims into bodies of knowledge.
Abstract: We argue that citation is a composed indicator: short-term citations can be considered as currency at the research front, whereas long-term citations can contribute to the codification of knowledge claims into concept symbols. Knowledge claims at the research front are more likely to be transitory and are therefore problematic as indicators of quality. Citation impact studies focus on short-term citation, and therefore tend to measure not epistemic quality, but involvement in current discourses in which contributions are positioned by referencing. We explore this argument using three case studies: (1) citations of the journal Soziale Welt as an example of a venue that tends not to publish papers at a research front, unlike, for example, JACS; (2) Robert Merton as a concept symbol across theories of citation; and (3) the Multi-RPYS (“Multi-Referenced Publication Year Spectroscopy”) of the journals Scientometrics, Gene, and Soziale Welt. We show empirically that the measurement of “quality” in terms of citations can further be qualified: short-term citation currency at the research front can be distinguished from longer-term processes of incorporation and codification of knowledge claims into bodies of knowledge. The recently introduced Multi-RPYS can be used to distinguish between short-term and long-term impacts.
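Multi-RPYS builds on referenced publication year spectroscopy: cited references are binned by their publication year, and peaks relative to a local (commonly five-year) median mark historically important works for the citing journal. A minimal sketch with invented reference years:

```python
from collections import Counter

# Hypothetical cited-reference publication years extracted from
# a journal's papers (the study used Scientometrics, Gene, Soziale Welt).
ref_years = [1962, 1973, 1973, 1973, 1980, 1981, 1981, 1995, 1996, 1996]

spectrum = Counter(ref_years)

# Deviation of each year's count from the median of its 5-year window;
# large positive deviations are the RPYS peaks.
for y in sorted(spectrum):
    window = [spectrum.get(y + d, 0) for d in range(-2, 3)]
    median = sorted(window)[2]
    print(y, spectrum[y], spectrum[y] - median)
```

In Multi-RPYS these spectra are computed separately for each citing publication year and stacked, so short-term citation currency and long-term codification show up as distinct patterns.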

79 citations


Journal ArticleDOI
TL;DR: It is argued that while the measures used in biodiversity research have evolved over time, the interdisciplinarity indicators can be mapped to a subset of biodiversity measures from the first and second generations.
Abstract: In bibliometrics, interdisciplinarity is often measured in terms of the "diversity" of research areas in the references that an article cites. The standard indicators used are borrowed mostly from other research areas, notably from ecology (biodiversity measures) and economics (concentration measures). This paper argues that while the measures used in biodiversity research have evolved over time, the interdisciplinarity indicators used in bibliometrics can be mapped to a subset of biodiversity measures from the first and second generations. We discuss the third generation of biodiversity measures and especially the Leinster–Cobbold diversity indices (LCDiv) (Leinster and Cobbold in Ecology 93(3):477–489, 2012). We present a case study based on the dataset of a previously published interdisciplinarity study in the field of bio-nano science (Rafols and Meyer in Scientometrics 82(2):263–287, 2010). We replicate the findings of this study to show that the various interdisciplinarity measures are in fact special cases of the LCDiv. The paper discusses some interesting properties of the LCDiv which make them more appealing in the study of disciplinary diversity than the standard interdisciplinary diversity indicators.
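The Leinster–Cobbold index for order q ≠ 1 is qD^Z(p) = (Σᵢ pᵢ (Zp)ᵢ^(q−1))^(1/(1−q)), where p holds the relative abundances of research areas in a reference list and Z is a between-area similarity matrix. A minimal sketch with invented abundances (not the bio-nano dataset):

```python
def lc_diversity(p, Z, q):
    """Leinster–Cobbold diversity qD^Z(p), for q != 1.
    p: relative abundances (summing to 1); Z: similarity matrix, Z[i][i] = 1."""
    Zp = [sum(Z[i][j] * p[j] for j in range(len(p))) for i in range(len(p))]
    s = sum(pi * zpi ** (q - 1) for pi, zpi in zip(p, Zp) if pi > 0)
    return s ** (1 / (1 - q))

# With the identity similarity matrix the LCDiv reduces to the Hill numbers;
# q = 2 then gives the inverse Simpson index familiar from
# interdisciplinarity studies.
p = [0.5, 0.3, 0.2]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(lc_diversity(p, I, 2))  # 1 / (0.25 + 0.09 + 0.04)
```

Varying Z is what makes the family general: with Z = I areas are treated as fully distinct, while a Z built from, say, citation similarity discounts diversity between closely related areas.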

64 citations


Journal ArticleDOI
TL;DR: A bibliometric analysis based on the Science Citation Index Expanded from Web of Science was conducted on global research activities in metal-organic frameworks (MOFs) from 1995 to 2014; Yaghi's group published the first article in the MOFs field and contributed eight of the top ten leading articles in 2014.

63 citations


Posted Content
TL;DR: In this paper, the authors argue that short-term citations can be considered as currency at the research front, whereas longterm citations contribute to the codification of knowledge claims into concept symbols.
Abstract: We argue that citation is a composed indicator: short-term citations can be considered as currency at the research front, whereas long-term citations can contribute to the codification of knowledge claims into concept symbols. Knowledge claims at the research front are more likely to be transitory and are therefore problematic as indicators of quality. Citation impact studies focus on short-term citation, and therefore tend to measure not epistemic quality, but involvement in current discourses in which contributions are positioned by referencing. We explore this argument using three case studies: (1) citations of the journal Soziale Welt as an example of a venue that tends not to publish papers at a research front, unlike, for example, JACS; (2) Robert Merton as a concept symbol across theories of citation; and (3) the Multi-RPYS ("Multi-Referenced Publication Year Spectroscopy") of the journals Scientometrics, Gene, and Soziale Welt. We show empirically that the measurement of "quality" in terms of citations can further be qualified: short-term citation currency at the research front can be distinguished from longer-term processes of incorporation and codification of knowledge claims into bodies of knowledge. The recently introduced Multi-RPYS can be used to distinguish between short-term and long-term impacts.

57 citations


Book
01 Oct 2016
TL;DR: This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment, and outlines practical strategies for obtaining many web indicators.
Abstract: In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard ...

56 citations


Journal ArticleDOI
TL;DR: It is found that at least one-third of Web of Science publications are actually published in the first quartile (high impact factor journals), and it is argued that Bornmann and Marx's claim that "One can expect that 25 % of a researcher's publications have been published in the first quartile" is not precise.
Abstract: As an alternative metric of journal impact factor (JIF), journal impact factor quartile is increasingly adopted to compare the research impact of journals within and across different domains. We adopt both optimistic and pessimistic approaches to illustrate the JIF distributions of journals listed in the 2015 Journal Citation Reports. We find that at least one-third of Web of Science publications are actually published in the first quartile (high impact factor journals). In comparison, at most 16.5 % of publications are published in the fourth quartile (low impact factor journals). We argue that Bornmann and Marx's (Scientometrics 98(1):487–509, 2014) claim that "One can expect that 25 % of a researcher's publications have been published in the first quartile" is not precise.
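The optimistic/pessimistic split arises because a journal can be listed in several JCR subject categories, each assigning its own quartile. A minimal sketch of the two assignment rules, with invented journal data:

```python
# Hypothetical journal-to-quartile data: each journal may sit in several
# JCR categories, each with its own quartile (1 = best, 4 = worst).
journal_quartiles = {
    "Journal A": [1, 2],   # Q1 in one category, Q2 in another
    "Journal B": [2, 3],
    "Journal C": [4],
}

def quartile(journal, optimistic=True):
    """Optimistic: take the journal's best quartile across categories;
    pessimistic: take its worst."""
    qs = journal_quartiles[journal]
    return min(qs) if optimistic else max(qs)

print(quartile("Journal A", optimistic=True))   # → 1
print(quartile("Journal A", optimistic=False))  # → 2
```

Counting publications under both rules brackets the true quartile distribution, which is how the paper obtains its "at least one-third in Q1" and "at most 16.5 % in Q4" bounds.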

Book
21 Mar 2016
TL;DR: This book is a much needed compilation by leading scholars in the field that gathers together the theories that guide the understanding of authorship, citing, and impact.
Abstract: Scientometrics have become an essential element in the practice and evaluation of science and research, including both the evaluation of individuals and national assessment exercises. This book brings together the theories that guide informetrics and scholarly communication research. It is a much needed compilation by leading scholars in the field that gathers together the theories that guide our understanding of authorship, citing, and impact.

Journal ArticleDOI
TL;DR: An overview of methods based on cited references is presented, and examples of some empirical results from studies are presented, indicating the great potential of the data source.
Abstract: Citation analyses normally investigate the number of citations of publications (e.g. by people, institutions or journals), where the information on times cited from the bibliographic databases (such as Scopus or Web of Science) is evaluated. But in recent years, a series of works have also been published which have undertaken a change of perspective and are based on the evaluation of the cited references. The cited references are the works cited in the publications which are used to calculate the times cited. Since these evaluations have led to important insights into science and into scientometric indicators, this paper presents an overview of methods based on cited references, and examples of some empirical results from studies are presented. Thus, the investigation of references allows general statements to be made on the precision of citation analyses, and offers alternatives for the normalization of citation numbers in the framework of research evaluation using citation impact. Via the analysis of references, the historical roots of research areas or the works of decisive importance in an area can be determined. References allow quantitative statements on the interdisciplinarity of research units and the overall growth of science. Selecting the references of publications from specific research areas makes it possible to measure citation impact in a target-oriented way (i.e. limited to these areas). As some empirical studies have shown, the identification of publications with a high creative content seems possible via the analysis of the cited references. The possibilities presented here for cited reference analysis indicate the great potential of the data source. We assume that there are additional possibilities for its application in scientometrics.

Journal ArticleDOI
TL;DR: There was no evidence that a lack of female collaboration causes females’ lower scientific success, and female researchers engage in more scientific collaborations, which makes gender differences in scientific success much harder to rationalize.
Abstract: In modern knowledge societies, scientific research is crucial, but expensive and often publicly financed. However, with regard to scientific research success, some studies have found gender differences in favor of men. To explain this, it has been argued that female researchers collaborate less than male researchers, and the current study examines this argument scientometrically. A secondary data analysis was applied to the sample of a recent scientometric publication (Konig et al. in Scientometrics 105:1931–1952, 2015. doi:10.1007/s11192-015-1646-y). The sample comprised 4234 (45 % female) industrial–organizational psychologists with their 46,656 publications (published from 1948 to 2013) and all of their approx. 100,000 algorithmically genderized collaborators (i.e., co-authors). Findings confirmed that (a) the majority of researchers' publications resulted from collaborations, and (b) their engagement in collaborations was related to their scientific success, although not as clearly as expected (and partly even negatively). However, there was no evidence that a lack of female collaboration causes females' lower scientific success. In fact, female researchers engage in more scientific collaborations. Our findings have important implications for science and society because they make gender differences in scientific success much harder to rationalize.

Journal ArticleDOI
TL;DR: Scientometrics as a field of science covers all aforementioned issues, and scientometric analysis is obligatory for quality assessment of the scientific validity of published articles and other type of publications.
Abstract: Performing scientific research is a process with several components: identifying the key research question(s), choosing the scientific approach for the study and data collection, analysing the data, and finally reporting the results. Generally, peer review is a series of procedures for evaluating a creative work or performance by other people who work in the same or a related field, with the aim of maintaining and improving the quality of work or performance in that field. The achievement of a scientist, and thus indirectly their reputation in the scientific community, is assessed through publications, especially journals, via the so-called impact factor index. The impact factor estimates how many annual citations an article may receive after its publication. Evaluation of the scientific productivity of researchers and scientists, and assessment of their published articles, can be made through the so-called H-index. The quality of the published results of scientific work largely depends on the knowledge sources used in their preparation, which means that their fitness for purpose and the relevance of the information used should be considered. Scientometrics as a field of science covers all the aforementioned issues, and scientometric analysis is obligatory for quality assessment of the scientific validity of published articles and other types of publications.
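The H-index mentioned above has a simple operational definition: the largest h such that h of the author's papers each have at least h citations. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cs = sorted(citations, reverse=True)
    h = 0
    while h < len(cs) and cs[h] >= h + 1:
        h += 1
    return h

# An author with papers cited 10, 8, 5, 4 and 3 times has h = 4:
# four papers have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
print(h_index([25, 8, 5, 3, 3]))  # → 3
```

The second example shows why the index is robust to a single highly cited paper: the 25-citation paper alone cannot raise h beyond what the rest of the record supports.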

Journal ArticleDOI
TL;DR: An overview of a relatively newly provided source of altmetrics data which could possibly be used for societal impact measurements in scientometrics, and recommends that the analysis of Web of Science publications with at least one policy-related mention is repeated regularly in order to check the usefulness of the data.
Abstract: In this short communication, we provide an overview of a relatively new source of altmetrics data which could possibly be used for societal impact measurements in scientometrics. Recently, Altmetric, a start-up providing publication-level metrics, started to make data available for publications which have been mentioned in policy-related documents. Using data from Altmetric, we study how many papers indexed in the Web of Science (WoS) are mentioned in policy-related documents. We find that less than 0.5% of the papers published in different subject categories are mentioned at least once in policy-related documents. Based on our results, we recommend that the analysis of WoS publications with at least one policy-related mention is repeated regularly (annually). Mentions in policy-related documents should not be used for impact measurement until new policy-related sites are tracked.

Journal ArticleDOI
TL;DR: In this article, the authors measure and characterize the university-industry-government (UIG) relationship in the research and innovation landscape of India and present useful output and analysis, and an informative account of the UIG collaboration network at present.
Abstract: Universities, industry and government organizations all play an important role in growth and development of knowledge-based economies in the modern era. These institutions also play a significant role in knowledge creation and its deployment to the benefit of society at large. In this article, we measure and characterize the university-industry-government (UIG) relationship in the research and innovation landscape of India. Research output data for 10 years (2005-14) obtained from Web of Science have been analysed to measure collaboration among different actors of the UIG collaboration network. We have also measured the collaboration variations across different disciplines and identified significant UIG institutional networks. The article presents useful output and analysis, and an informative account of the UIG collaboration network at present.

Journal ArticleDOI
TL;DR: This article explored US independent classic articles published by American scientists from 1900 to 2014 using the Science Citation Index Expanded in the Web of Science (WoS) and applied a bibliometric indicator, the Y-index, to assess the contributions of the authors of these articles.
Abstract: The present study explores US independent classic articles published by American scientists from 1900 to 2014. We examined those articles that had been cited at least 1000 times since publication to the end of 2014 using the Science Citation Index Expanded in the Web of Science (WoS). We also applied a bibliometric indicator, the Y-index, to assess the contributions of the authors of these articles. The results showed that 4909 classic articles were published between 1916 and 2013, and that the most productive categories from the WoS were multidisciplinary sciences, biochemistry and molecular biology, and general and internal medicine. Science published most of these articles, and the three most productive institutions were Harvard University, Massachusetts Institute of Technology, and Stanford University. The physicist Edward Witten was the most prolific author, and an article written by the biochemist Marion Bradford at the University of Georgia in 1976 had the highest number of citations. In addition, the article by Perdew, Burke and Ernzerhof at Tulane University had the highest number of citations in 2014.
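The Y-index combines an author's first-author papers (FP) and corresponding-author papers (RP). The exact definition varies across studies; as an assumption, the sketch below follows one common (j, h) formulation, where j = FP + RP measures publication potential and the polar angle h = atan(RP/FP) measures the balance between the two roles (h = π/4 indicates equal contribution). This should be checked against the paper's own definition:

```python
import math

def y_index(first_author_papers, corresponding_papers):
    """Y-index (j, h) in one common formulation (assumption, see lead-in):
    j = FP + RP; h = atan(RP / FP), computed with atan2 so FP = 0 is safe."""
    j = first_author_papers + corresponding_papers
    h = math.atan2(corresponding_papers, first_author_papers)
    return j, h

# A hypothetical author with 12 first-author and 12 corresponding-author
# classic articles: j = 24, h = pi/4 (perfectly balanced roles).
j, h = y_index(12, 12)
print(j, round(h, 3))  # → 24 0.785
```

Two authors with the same j but different h have equal output but different authorship profiles, which is what makes the indicator useful for comparing contribution patterns.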

Journal ArticleDOI
TL;DR: A review of publications that explore the role of editorial boards for scholarly journals is presented in this article, where the authors focus on the assessment of research performance at the country, organizational, or research-group level, research and publication ethics; journal quality; and internationalization of a scientific discipline.
Abstract: This paper presents a review of publications that explore the role of editorial boards for scholarly journals. Such studies are rather limited as the role of editorial boards of scientific periodicals has not yet been fully gauged by information experts as a subject of scientometric research. However, the study of the publication activities, as well as the geographic, linguistic, and gender distribution of editorial-board members offers a new perspective on several issues of relevance to scientometrics. Such issues include the assessment of research performance at the country, organizational, or research-group level; research and publication ethics; journal quality; and internationalization of a scientific discipline.


Journal ArticleDOI
TL;DR: In this paper, a metaphilosophical reflection on the emerging "Philosophy of interdisciplinarity" (PhID) is provided, which has the qualities of being broad and neutral as well as stemming from within the agenda of philosophy of science.
Abstract: Compared to the massive literature from other disciplinary perspectives on interdisciplinarity (such as those from sociology, education, management, scientometrics), philosophy of science is only slowly beginning to pay systematic attention to this powerful trend in contemporary science. The paper provides some metaphilosophical reflections on the emerging “Philosophy of Interdisciplinarity” (PhID). What? I propose a conception of PhID that has the qualities of being broad and neutral as well as stemming from within the (also broadly conceived) agenda of philosophy of science. It will investigate features of science that reveal themselves when scientific disciplines are viewed in comparison or in contact with one another. PhID will therefore generate two kinds of information: comparative and contactual. Comparative information is about the similarities and differences between disciplines, while contactual information is about what happens and why when disciplines get in contact with each other. Virtually all issues and resources within the philosophy of science can be mobilized to bear on the project, including philosophical accounts of models, explanations, justification, evidence, progress, values, demarcation, incommensurability, and so on. Given that scientific disciplines are institutional entities, resources available (and forthcoming) in social epistemology and social ontology will also have to be invoked. Why? Establishing PhID is presently an obvious step to take for several reasons, including the following two. First, ID is an increasingly powerful characteristic of contemporary science and its management, and so it would be inappropriate for an empirically informed philosophy of science to ignore it. 
Second, contemporary philosophy of science happens to be particularly well equipped for addressing issues of ID thanks to the recent massive work in the more specialized fields of philosophies of special disciplines (of biology, of cognitive science, of economics, of engineering, etc.). How? Given the breadth and heterogeneity of its domain and tasks, the practice of PhID must be heavily collective. It must mobilize multiple competences and it must keep elaborating a systematic agenda (or perhaps several overlapping agendas in case there will be rival ‘schools’ of PhID). While a lot of new conceptual work is needed, the approach is bound to be emphatically empirical, with a cumulative and mutually complementary series of case studies to be conducted. Among the methods to be employed, good old textual analysis of scientific publications will be supplemented with interviews, ‘experimental’ techniques, participant observation as well as various interventionist approaches. The published work in PhID will often be authored jointly by philosophers and other scholars in science studies as well as practitioners in various scientific disciplines.

Journal ArticleDOI
TL;DR: Three dimensions of scholarly capital (ideational influence, connectedness and venue representation) are identified in this paper as part of a scholarly capital model (SCM) and it is shown how one might use the measures to evaluate scholarly research activity.
Abstract: Assessing the research capital that a scholar has accrued is an essential task for academic administrators, funding agencies, and promotion and tenure committees worldwide. Scholars have criticized the existing methodology of counting papers in ranked journals and made calls to replace it (Adler & Harzing, 2009; Singh, Haddad, & Chow, 2007). In its place, some have called for assessing the uptake of a scholar's work instead of assessing "quality" (Truex, Cuellar, Takeda, & Vidgen, 2011a). In this paper, we identify three dimensions of scholarly capital as part of a scholarly capital model (SCM): ideational influence (who uses one's work?), connectedness (with whom does one work?), and venue representation (where does one publish one's work?). We develop measurement models for the three dimensions of scholarly capital and test the relationships in a path model. We show how one might use the measures to evaluate scholarly research activity.

Journal ArticleDOI
TL;DR: This article deals with a modern disease of academic science that consists of an enormous increase in the number of scientific publications without a corresponding advance of knowledge, an article-inflation phenomenon, a scientometric bubble that is most harmful for science and promotes an unethical and antiscientific culture among researchers.
Abstract: This article deals with a modern disease of academic science that consists of an enormous increase in the number of scientific publications without a corresponding advance of knowledge. Findings are sliced as thin as salami and submitted to different journals to produce more papers. If we consider academic papers as a kind of scientific ‘currency’ that is backed by gold bullion in the central bank of ‘true’ science, then we are witnessing an article-inflation phenomenon, a scientometric bubble that is most harmful for science and promotes an unethical and antiscientific culture among researchers. The main problem behind the scenes is that the impact factor is used as a proxy for quality. Therefore, not only for convenience, but also based on ethical principles of scientific research, we adhere to the San Francisco Declaration on Research Assessment when it emphasizes “the need to eliminate the use of journal-based metrics in funding, appointment and promotion considerations; and the need to assess research on its own merits rather than on the journal in which the research is published”. Our message is mainly addressed to the funding agencies and universities that award tenures or grants and manage research programmes, especially in developing countries. The message is also addressed to well-established scientists who have the power to change things when they participate in committees for grants and jobs.

Journal ArticleDOI
TL;DR: The results of the study showed that research activity in the field of CVD grew steadily during 2001-2010 and that the vast majority of scientific publications were produced in North America and Western Europe.
Abstract: Introduction: Heart disease, or cardiovascular disease (CVD), is a class of illness that involves the heart and/or blood vessels and affects people throughout the world. The major aim of the current study was to show the trend of global scientific activity in the field of CVD over the 10-year period 2001-2010. Methods: A scientometric analysis was carried out to capture worldwide scientific production in the field of CVD during this period. The Science Citation Index-Expanded (SCI-E) was used to extract all documents indexed under the topic of CVD throughout 2001-2010. Results: Analysis of the data showed that the number of publications in the field of cardiovascular disease increased steadily. The number of publications indexed in SCI-E in 2010 was three times greater than in 2001, rising from 5,080 documents in 2001 to 15,584 documents in 2010. English, accounting for 95% of total publications, was the dominant language. Based on Bradford's law of scattering, Circulation was the most prolific among the core journals. The USA, contributing 29.5% of the world's output in the field, was the most productive country. Harvard University was the most productive institution, followed by Brigham and Women's Hospital. Conclusion: The vast majority of scientific publications in the field of CVD were produced by authors from North America and Western Europe. The study concludes that research in CVD became a subject of growing interest to scientists during 2001-2010.

Journal ArticleDOI
TL;DR: In this paper, the authors examined how activating one's social network can contribute to the impact of academic research and what factors lead researchers to utilize their social networks, finding that women researchers, researchers originating from less economically advanced countries, and those working with fewer co-authors on a research project are more likely to utilize their social networks than their peers.

Journal ArticleDOI
TL;DR: The h-index and h5-index are considered, along with the world ranking of the top 25 Highly Cited Researchers (h > 100) and the ranking of 25 scientists at Hungarian institutions according to their Google Scholar Citations public profiles.
Abstract: Indexes in scientometrics are based on citations. In contrast to the journal impact factor, which only ranks scientific journals, scientometric indexes are suitable for ranking scientists, scientific journals, and countries. This paper considers the h-index, the h5-index, the world ranking of the top 25 Highly Cited Researchers (h > 100), and the ranking of 25 scientists at Hungarian institutions according to their Google Scholar Citations public profiles. The h5-index is applied to compile a list of the top 20 publication venues (journals and proceedings) in the field of Robotics. A world ranking of the top 50 countries by h-index in 2014 is also presented. Data are obtained from the SCImago portal.
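As a reminder of how the central index works: the h-index is the largest h such that an author has h papers with at least h citations each. A minimal sketch (illustrative, not code from the paper):

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Sorting in descending order makes the definition directly checkable: walk down the list until a paper's citation count falls below its rank.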

Journal ArticleDOI
01 Dec 2016
TL;DR: Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic, and Leiden University leads the ranking if fractional counting is used.
Abstract: As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top-1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers when addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
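The whole-number versus fractional counting distinction that drives the Harvard/Leiden flip can be sketched as follows; the paper lists and institution names in the demo data are hypothetical:

```python
from collections import defaultdict

def count_papers(papers, fractional=False):
    """papers: list of papers, each a list of the distinct institutions
    credited on that paper. Whole counting gives each institution 1 credit
    per paper; fractional counting splits the single credit equally."""
    scores = defaultdict(float)
    for insts in papers:
        credit = 1.0 / len(insts) if fractional else 1.0
        for inst in insts:
            scores[inst] += credit
    return dict(scores)

# Hypothetical data: appearing on many multi-institution papers can put an
# institution ahead on whole counts while it trails on fractional counts.
papers = [["Harvard", "A", "B"], ["Harvard", "C", "D"], ["Leiden"]]
```

With these hypothetical papers, whole counting gives Harvard 2.0 and Leiden 1.0, while fractional counting gives Harvard 2/3 and Leiden 1.0, reversing the order.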

Journal ArticleDOI
TL;DR: The extent to which research productivity varies with the number of collaborative partners for long term researchers within three Web of Science subject areas is assessed.
Abstract: Funding bodies have tended to encourage collaborative research because it is generally more highly cited than sole-author research. But a higher mean citation count for collaborative articles does not imply that collaborative researchers are in general more productive. This article assesses the extent to which research productivity varies with the number of collaborative partners for long-term researchers within three Web of Science subject areas: Information Science & Library Science, Communication, and Medical Informatics. Under the whole-number counting system, researchers who worked in groups of 2 or 3 were generally the most productive, producing the most papers and citations. Under fractional counting, however, researchers who worked in groups of 1 or 2 were generally the most productive. The findings should be interpreted cautiously, because authors who produce few academic articles within a field may publish in other fields, or may leave academia and contribute to society in other ways.

Journal ArticleDOI
TL;DR: A multidimensional ‘Quality–Quantity’ Composite Index for a group of institutions using bibliometric data, that can be used for ranking and for decision making or policy purposes at the national or regional level is proposed.
Abstract: It is now generally accepted that institutions of higher education and research, largely publicly funded, need to be subjected to some benchmarking process or performance evaluation. Currently there are several international ranking exercises that rank institutions at the global level, using a variety of performance criteria such as research publication data, citations, awards and reputation surveys etc. In these ranking exercises, the data are combined in specified ways to create an index which is then used to rank the institutions. These lists are generally limited to the top 500-1000 institutions in the world. Further, some criteria (e.g., the Nobel Prize), used in some of the ranking exercises, are not relevant for the large number of institutions that are in the medium range. In this paper we propose a multidimensional 'Quality-Quantity' Composite Index for a group of institutions using bibliometric data, that can be used for ranking and for decision making or policy purposes at the national or regional level. The index is applied here to rank Central Universities in India. The ranks obtained compare well with those obtained with the h-index and partially with the size-dependent Leiden ranking and University Ranking by Academic Performance. A generalized model for the index using other variables and variable weights is proposed.
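The general shape of such a weighted composite index can be illustrated generically; the min-max normalization and the default equal weighting below are illustrative assumptions, not the scheme used in the paper:

```python
def normalize(values):
    """Min-max normalize a list of indicator values to [0, 1];
    a constant column maps to 0 for every institution."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_scores(quality, quantity, w=0.5):
    """Combine a quality indicator (e.g. citations per paper) with a
    quantity indicator (e.g. paper count), one value per institution.
    w is the weight given to the quality dimension."""
    q, n = normalize(quality), normalize(quantity)
    return [w * qi + (1 - w) * ni for qi, ni in zip(q, n)]

# Two hypothetical institutions: one high-quality, one high-volume.
print(composite_scores([10.0, 5.0], [100, 200]))  # -> [0.5, 0.5]
```

Normalizing each dimension first keeps a high-volume indicator from swamping a quality indicator measured on a much smaller scale; shifting w then trades quantity against quality explicitly.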

Journal ArticleDOI
James Hartley1
TL;DR: The Flesch Reading Ease measure, widely used to measure the difficulty of text in various disciplines, is now outdated, used inappropriately, and unreliable.
Abstract: The Flesch Reading Ease measure is widely used to measure the difficulty of text in various disciplines, including Scientometrics. This letter/paper argues that the measure is now outdated, used inappropriately, and unreliable.
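For reference, the Flesch Reading Ease score is 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words), with higher scores indicating easier text. A minimal sketch using a crude vowel-group syllable heuristic (real implementations use pronunciation dictionaries, which is one source of the unreliability the letter notes):

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of vowel letters; minimum one."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```

Because the syllable count is heuristic and sentence splitting is naive, two tools can report different scores for the same text, illustrating the measurement problems the letter raises.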