
Showing papers in "Scientometrics in 1996"


Journal ArticleDOI
TL;DR: Discusses the relationship between collaboration and co-authorship and the nature of bibliometric data, and exemplifies how such data can be refined and used to analyse various aspects of collaboration.
Abstract: Scientific collaboration has become a major issue in science policy. The tremendous growth of collaboration among nations and research institutions witnessed during the last twenty years is a function of the internal dynamics of science as well as science policy initiatives. The need to survey and follow up the collaboration issue calls for statistical indicators sensitive enough to reveal the structure and change of collaborative networks. In this context, bibliometric analysis of co-authored scientific articles is one promising approach. This paper discusses the relationship between collaboration and co-authorship, the nature of bibliometric data, and exemplifies how they can be refined and used to analyse various aspects of collaboration.

623 citations


Journal ArticleDOI
TL;DR: A critique of citation analysis: citation analysis does not reflect the real influence of an author on a given scientist, and it is unlikely that citations can be used as indicators of quality.
Abstract: A critique of citation analysis: citation analysis does not reflect the real influence of an author on a given scientist. It is unlikely that citations can be used as indicators of quality.

444 citations


Journal ArticleDOI
TL;DR: An overview of the potentials and limitations of bibliometric methods for the assessment of strengths and weaknesses in research performance and for monitoring scientific developments, and of how both approaches can be combined into a broader and more powerful methodology to observe scientific advancement and the role of actors.
Abstract: This paper gives an overview of the potentials and limitations of bibliometric methods for the assessment of strengths and weaknesses in research performance, and for monitoring scientific developments. We distinguish two different methods. In the first application, research performance assessment, the bibliometric method is based on advanced analysis of publication and citation data. We show that the resulting indicators are very useful, and in fact an indispensable element next to peer review in research evaluation procedures. Indicators based on advanced bibliometric methods offer much more than 'only numbers'. They provide insight into the position of actors at the research front in terms of influence and specializations, as well as into patterns of scientific communication and processes of knowledge dissemination. After a discussion of technical and methodological problems, we present practical examples of the use of research performance indicators. In the second application, monitoring scientific developments, bibliometric methods based on advanced mapping techniques are essential. We discuss these techniques briefly and indicate their most important potentials, particularly their role in foresight exercises. Finally, we give a first outline of how both bibliometric approaches can be combined into a broader and more powerful methodology to observe scientific advancement and the role of actors.

370 citations


Journal ArticleDOI
TL;DR: It is argued that evaluations of basic research are best carried out using a range of indicators, and the method of converging partial indicators used in several SPRU evaluations is described.
Abstract: This paper argues that evaluations of basic research are best carried out using a range of indicators. After setting out the reasons why assessments of government-funded basic research are increasingly needed, we examine the multi-dimensional nature of basic research. This is followed by a conceptual analysis of what the different indicators of basic research actually measure. Having discussed the limitations of various indicators, we describe the method of converging partial indicators used in several SPRU evaluations. Yet although most of those who now use science indicators would agree that a combination of indicators is desirable, analysis of a sample of Scientometrics articles suggests that in practice many continue to use just one or two indicators. The paper also reports the results of a survey of academic researchers. They, too, are strongly in favour of research evaluations being based on multiple indicators combined with peer review. The paper ends with a discussion as to why multiple indicators are not used more frequently.

323 citations


Journal ArticleDOI
TL;DR: Applications of these three bibliometric types will be described within the framework of Weinberg's internal and external criteria, whether the work being done is good science, efficiently and effectively done, and whether it is important science from a technological viewpoint.
Abstract: Three different types of bibliometrics — literature bibliometrics, patent bibliometrics, and linkage bibliometrics — can all be used to address various government performance and results questions. Applications of these three bibliometric types will be described within the framework of Weinberg's internal and external criteria, whether the work being done is good science, efficiently and effectively done, and whether it is important science from a technological viewpoint. Within all bibliometrics the fundamental assumption is that the frequency with which a set of papers or patents is cited is a measure of the impact or influence of the set of papers. The literature bibliometric indicators are counts of publications and citations received in the scientific literature and various derived indicators including such phenomena as cross-sectoral citation, coauthorship and concentration within influential journals. One basic observation of literature bibliometrics, which carries over to patent bibliometrics, is that of highly skewed distributions — with a relatively small number of high-impact patents and papers, and large numbers of patents and papers of minimal impact. The key measure is whether an agency is producing or supporting highly cited papers and patents. The final set of data is in the area of linkage bibliometrics, looking at citations from patents to scientific papers. These are particularly relevant to the external criteria, in that it is quite obvious that institutions and supporting agencies whose papers are highly cited in patents are making measurable contributions to a nation's technological progress.

201 citations


Journal ArticleDOI
TL;DR: The set of core documents is analysed by journals, subfields and corporate addresses, and all countries which have published at least 20 core documents in 1992 are investigated in terms of their research profiles, their international collaboration patterns and their citation impact.
Abstract: In an earlier study the authors have shown that bibliographic coupling techniques can be used to identify hot research topics. The methodology is based on appropriate thresholds for both the number of related documents and the strength of bibliographic links. Those papers are called core documents that have more than 9 links of at least the strength 0.25 according to Salton's measure, provided they are articles, notes or reviews. This choice resulted in a selection of nearly one per cent of all papers of the above types recorded in the 1992 annual cumulation of the SCI. Core documents proved to be important nodes in the network of documented science communication. In the present study, the set of core documents is analysed by journals, subfields and corporate addresses. The latter analysis is conducted on both the national and the regional-institutional level. First, all countries which have published at least 20 core documents in 1992 are investigated in terms of their research profiles, their international collaboration patterns and their citation impact.

182 citations
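
A minimal sketch of the core-document criterion described in the abstract, assuming Salton's measure is the cosine overlap of two papers' reference lists; the function names and the toy data are illustrative, not taken from the paper:

```python
# Sketch of the bibliographic-coupling criterion described above, assuming
# Salton's measure is the cosine of the shared-reference overlap:
#   strength(A, B) = |refs(A) & refs(B)| / sqrt(|refs(A)| * |refs(B)|)
# Function names and toy data are illustrative, not from the paper.
from itertools import combinations
from math import sqrt

def salton_strength(refs_a: set, refs_b: set) -> float:
    """Salton's cosine measure of bibliographic coupling between two papers."""
    if not refs_a or not refs_b:
        return 0.0
    return len(refs_a & refs_b) / sqrt(len(refs_a) * len(refs_b))

def core_documents(papers: dict, min_links: int = 10, min_strength: float = 0.25):
    """Papers with more than 9 links (i.e. at least 10) of strength >= 0.25."""
    link_counts = {p: 0 for p in papers}
    for a, b in combinations(papers, 2):
        if salton_strength(papers[a], papers[b]) >= min_strength:
            link_counts[a] += 1
            link_counts[b] += 1
    return [p for p, n in link_counts.items() if n >= min_links]

# Toy usage: paper id -> set of cited references
papers = {"p1": {"r1", "r2", "r3"}, "p2": {"r1", "r2", "r4"}, "p3": {"r5"}}
print(core_documents(papers, min_links=1))
```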


Journal ArticleDOI
TL;DR: A positive correlation was found between productivity and international and domestic collaboration at the author level and international collaboration was linked to higher visibility documents.
Abstract: Collaboration practices and partners vary greatly per scientific area and discipline and influence scientific performance. Bibliometric indicators are used to analyse international, domestic and local collaboration in publications of Spanish authors in three biomedical subfields: Neurosciences, Gastroenterology and Cardiovascular System, as covered by the SCI database. Team size, visibility and the basic-applied level of research were analysed according to collaboration scope. International collaboration was linked to higher visibility documents. Cluster analysis of the most productive authors and centres provides a description of collaboration habits and actors in the three subfields. A positive correlation was found between productivity and international and domestic collaboration at the author level.

170 citations


Journal ArticleDOI
TL;DR: Methods of selecting reference standards and scaling procedures are surveyed in this study, and examples are given of their practical application.
Abstract: Comparative assessment of scientometric indicators is greatly hindered by the different standards valid in different science fields and subfields. Indicators pertaining to different fields can be compared only after first gauging them against a properly chosen reference standard; their relative standing can then be compared. Methods of selecting reference standards and scaling procedures are surveyed in this study, and examples are given of their practical application.

159 citations



Journal ArticleDOI
TL;DR: The need for standardisation in bibliometric research and technology is discussed in the context of failing communication within the scientific community, the unsatisfactory impact of bibliometrics research outside the community and the observed incompatibility ofbibliometric indicators produced by different institutes.
Abstract: The need for standardisation in bibliometric research and technology is discussed in the context of failing communication within the scientific community, the unsatisfactory impact of bibliometric research outside the community and the observed incompatibility of bibliometric indicators produced by different institutes. The development of bibliometric standards is necessary to improve the reliability of bibliometric results, to guarantee the validity of bibliometric methods and to make bibliometric data compatible. Both conceptual and technical questions are raised. The consequences of lacking standards are illustrated by typical examples. Finally, particular topics of standardisation are proposed, based on experience gained at ISSRU.

102 citations


Journal ArticleDOI
Henk F. Moed
TL;DR: The observations made in this paper illustrate the complexity of the process of ‘standardisation’ of bibliometric indicators, and provide possible explanations for divergence of results obtained in different studies.
Abstract: This contribution discusses basic technical-methodological issues with respect to data collection and the construction of bibliometric indicators, particularly at the macro or meso level. It focusses on the use of the Science Citation Index. Its aim is to highlight important decisions that have to be made in the process of data collection and the construction of bibliometric indicators. It illustrates differences in the methodologies applied by several important producers of bibliometric indicators: the Institute for Scientific Information (ISI); CHI Research, Inc.; the Information Science and Scientometrics Research Unit (ISSRU) at Budapest; and the Centre for Science and Technology Studies at Leiden University (CWTS). The observations made in this paper illustrate the complexity of the process of ‘standardisation’ of bibliometric indicators. Moreover, they provide possible explanations for divergence of results obtained in different studies. The paper concludes with a few general comments related to the need for ‘standardisation’ in the field of bibliometrics.

Journal ArticleDOI
TL;DR: It is shown that the Journal Impact Factor as published by ISI — an indicator increasingly used as a measure for the quality of scientific journals — is misleading when two leading journals in chemistry, Angew. Chem. and J. Am. Chem. Soc., are compared.
Abstract: It is shown that the Journal Impact Factor as published by ISI — an indicator increasingly used as a measure for the quality of scientific journals — is misleading when two leading journals in chemistry, Angew. Chem. and J. Am. Chem. Soc., are compared. A detailed analysis of the various kinds of publications in both journals over the period 1982–1994 shows that the overall impact factors based on publications and citations in two consecutive years for JACS communications (5.27 for 1993) are significantly higher than those of Angew. Chem. (3.26 for 1993). Even when all types of articles, i.e. including reviews, are included in the impact factors, JACS has a higher score than Angew. Chem. (5.07 vs. 4.03 in 1993). Critical and accurate analysis of citation figures is required when such data are used in science policy decisions, such as library subscriptions. It is proposed that when IF values for several journals are compared, only similar publication types should be considered.
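
As a brief illustration of why restricting the comparison to similar publication types matters, here is a sketch of the two-year impact factor computed for one document type at a time; the item and citation counts below are hypothetical (chosen only to reproduce the quoted 1993 values), not the paper's data:

```python
# Two-year impact factor, restricted to one publication type at a time.
# The counts are invented; only the resulting ratios echo the abstract.

def impact_factor(citations_in_year: int, items_prev_two_years: int) -> float:
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of such items."""
    return citations_in_year / items_prev_two_years if items_prev_two_years else 0.0

# Hypothetical counts for communications only
if_jacs_comm = impact_factor(citations_in_year=10540, items_prev_two_years=2000)
if_angew_comm = impact_factor(citations_in_year=3260, items_prev_two_years=1000)
print(round(if_jacs_comm, 2), round(if_angew_comm, 2))  # 5.27 3.26
```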

Journal ArticleDOI
TL;DR: It is argued that, if certain preconditions are met, the choice of citation rate is not critical, and that the citation record of research publications appearing in journals indexed by the Institute for Scientific Information (ISI) is a usable surrogate for the citation record within ISI journals of all modes of publication.
Abstract: In order to resolve questions frequently raised in the context of research evaluation about the citation rates of journal publications in relation to other types of publications, the total research output of substantial institutions or systems has to be brought under bibliographic control. That precondition has rarely been met: there are few published studies of the total range of publications of major research institutions, including books, book chapters, technical reports and published conference proceedings. The Research Evaluation and Policy Project (REPP) at the Australian National University (ANU) has established a database covering all the publications from the Institute of Advanced Studies (IAS), a full-time research institution at the ANU, and has examined in detail citations in the journal literature accruing to all types of publications. The database contains a significant number of publications, nearly 30 000 items, and covers the sciences as well as the social sciences and humanities. These data enable us to examine whether the citation record of research publications appearing in journals indexed by the Institute for Scientific Information (ISI) is a usable surrogate for the citation record within ISI journals of all modes of publication. We contend that, if certain preconditions are met, the choice of citation rate is not critical.

Journal ArticleDOI
TL;DR: This paper re-examines 205 Citation Classics commentaries from the 400 most-cited papers in the recent history of science and introduces a new approach to the study of serendipity in scientific discovery.
Abstract: The main sociological, philosophical and historical approaches ascribe only a relative importance to the role of chance, error, or accident in scientific progress. The literature on this topic tends to be anecdotal, sometimes hagiographic and rarely systematic. The main goal of this paper is to introduce a new approach to the study of serendipity in scientific discovery. This new approach is based on the study of highly cited papers obtained from the Citation Classics feature of Current Contents. This paper re-examines 205 Citation Classics commentaries from the 400 most-cited papers in the recent history of science. Authors of 17 Citation Classics commentaries (8.3%) mention some kind of serendipity in performing the research reported in the highly cited paper. Commentaries are classified and discussed in detail. In addition, I have examined the original papers identified above. In 5 of the original highly cited papers the authors explained, or gave enough hints about, the way the serendipitous discovery was made.

Journal ArticleDOI
TL;DR: The standard impact factor for particular fields of science (Ig) and the relative impact factor K for scientific journals are introduced and the results are discussed.
Abstract: The standard impact factor for particular fields of science (Ig) and the relative impact factor K for scientific journals are introduced. The technique of calculating the standard impact factor (Ig) for a field is an inherent part of a method which allows a cross-field evaluation of scientific journals. This method for evaluating scientific journals, elaborated in 1988, was aimed at the analysis of Russian journals covered by the SCI database; it was also used for chemical journals (more than 300) and for journals in the life sciences (more than 1000). The results are discussed.
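
The abstract does not spell out how Ig and K are defined, so the following is only a rough sketch under the assumption that Ig is an average impact over the journals of a field and K gauges a journal's impact factor against that field standard; names and numbers are illustrative:

```python
# Rough illustration only: the abstract does not define Ig and K, so this
# assumes Ig is the mean impact factor over a field's journals and K is a
# journal's impact factor divided by that field standard.

def field_standard_impact(journal_ifs: list[float]) -> float:
    """Assumed Ig: mean impact factor over the journals of one field."""
    return sum(journal_ifs) / len(journal_ifs)

def relative_impact(journal_if: float, ig: float) -> float:
    """Assumed K: journal impact factor gauged against the field standard."""
    return journal_if / ig if ig else 0.0

chemistry_ifs = [1.2, 0.8, 2.5, 0.4]           # hypothetical field values
ig = field_standard_impact(chemistry_ifs)
print(round(relative_impact(2.5, ig), 2))      # cross-field comparable score
```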

Journal ArticleDOI
TL;DR: By including the eminent scientists' gatekeeping roles, the explanation of their total, co-authored and foreign publications can be improved and the most important predictors of the elite's productivity are also qualificational and organizational variables but of a more selective nature.
Abstract: The empirical research on a sample of 385 eminent Croatian scientists was carried out in order to explore the patterns and factors of their scientific productivity. The study design made it possible to compare the results with those obtained in the 1990 survey on a sample of the research population. The average scientific productivity of the eminent researchers is not only several times higher, but also reflects more intensive scientific collaboration and a stronger orientation towards the international scientific arena. The most important predictors of the elite's productivity are also qualificational and organizational variables, but of a more selective nature. By including the eminent scientists' gatekeeping roles, the explanation of their total, co-authored and foreign publications can be improved.

Journal ArticleDOI
J. C. Korevaar
TL;DR: In conclusion, the experts' views on top publications or top journals correspond very well to bibliometric indicators based on citation counts.
Abstract: Bibliometric analyses of scientific publications provide quantitative information that enables evaluators to obtain a useful picture of a team's research visibility. In combination with peer judgements and other qualitative background knowledge, these analyses can serve as a basis for discussions about research performance quality. However, many mathematicians are not convinced that citation counts do in fact provide useful information in the field of mathematics. According to these mathematicians, citation and publication habits differ completely from those in scholarly fields such as chemistry or physics. Therefore, it is impossible to derive valid information regarding research performance from citation counts. The aim of this study is to obtain more insight into the significance of citation-based indicators in the field of mathematics. To what extent do citation scores mirror the opinions of experts concerning the quality of a paper or a journal? A survey was conducted to answer this question. Top journals, as qualified by experts, receive significantly higher citation rates than good journals. These good journals, in turn, have significantly higher scores than journals with the qualification less good. Top publications, recorded in the ISI database, receive on average 15 times more citations than the mean score within the field of mathematics as a whole. In conclusion, the experts' views on top publications or top journals correspond very well to bibliometric indicators based on citation counts.

Journal ArticleDOI
TL;DR: The Brazilian contribution to publications in science and humanities increased from 0.29% of the worldwide total in 1981 to 0.46% in 1993, and in science, but not in humanities, Brazilian publications tend to follow the world publication trend.
Abstract: The Brazilian contribution to publications in science and humanities increased from 0.29% of the worldwide total in 1981 to 0.46% in 1993. In science, but not in humanities, Brazilian publications tend to follow the world publication trend; thus, during the period 1981–1993, 57.9% of Brazilian publications were in life sciences, 35.4% in exact sciences, 3.9% in earth sciences and 2.9% in humanities. The ten institutions with the largest number of publications are universities, which account for half of all Brazilian publications. The total number of authors on the Brazilian 1981–1993 publications was 52,808. Among these, 57.8% appear in only one publication and 17.5% have their publications cited more than 10 times.

Journal ArticleDOI
TL;DR: It is shown that international collaboration was not developing similarly in the countries under study, and since 1990 an increasing scientific collaboration with highly developed countries can be observed in all five countries.
Abstract: International scientific collaboration is very sensitive to political and economic changes in a country or a geopolitical region. Collaboration in research is reflected by the corresponding co-authorship of the published results, which can be analysed with the help of bibliometric methods. Based on data from the Science Citation Index (SCI), the change of annual international co-authorship patterns of Bulgaria, Czechoslovakia, Hungary, Poland and Romania has been analysed for the periods 1981–1985 and 1984–1993, respectively. It is shown that international collaboration was not developing similarly in the countries under study. Whilst the scientific communities of Hungary and Poland were already opening up in the early 80s, the international collaboration of the other East-European countries was still dominated by COMECON relations till 1989. As expected, since 1990 an increasing scientific collaboration with highly developed countries can be observed in all five countries. At the same time, scientific collaboration with the former communist countries shows a clear decline. The large share of international co-authorship links in some countries reflects various tendencies, part of which are interpreted with the help of a cardiologic model.

Journal ArticleDOI
TL;DR: Standardization of subject classifications emerges as an important point in bibliometric studies in order to allow international comparisons, although flexibility is needed to meet the needs of local studies.
Abstract: The delimitation of a research field in bibliometric studies presents the problem of the diversity of subject classifications used in the sources of input and output data. Classification of documents according to thematic codes or keywords is the most accurate method, mainly used in specialised bibliographic or patent databases. Classification of journals into disciplines presents lower specificity, and some shortcomings such as the change over time of both journals and disciplines and the increasing interdisciplinarity of research. Differences in the criteria on which input and output data classifications are based make it necessary to aggregate data in order to match them. Standardization of subject classifications emerges as an important point in bibliometric studies in order to allow international comparisons, although flexibility is needed to meet the needs of local studies.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed six indicators of the SCI Journal Citation Reports (JCR) over a 19-year period: number of total citations, number of citations to the two previous years, the number of source items, impact factor, immediacy index and cited half-life.
Abstract: In this paper, we analysed six indicators of the SCI Journal Citation Reports (JCR) over a 19-year period: number of total citations, number of citations to the two previous years, number of source items, impact factor, immediacy index and cited half-life. The JCR seems to have become more or less an authority for evaluating scientific and technical journals, essentially through its impact factor. However, it is difficult to find one's way about in the impressive mass of quantitative data that JCR provides each year. We proposed the box plot method to aggregate the values of each indicator so as to obtain, at a glance, portrayals of the JCR population from 1974 to 1993. These images reflected the distribution of the journals into 4 groups designated low, central, high and extreme. The limits of the groups became a reference system with which, for example, it was rapidly possible to situate visually a given journal within the overall JCR population. Moreover, the box plot method, which gives a zoom effect, made it possible to visualize a large sub-population of the JCR usually overshadowed by the journals at the top of the rankings. These top level journals implicitly play the role of reference in evaluation processes. This often incites categorical judgements when the journals to be evaluated are not part of the top level. Our «rereading» of the JCR, which presented the JCR product differently, made it possible to qualify these judgements and cast new light on journals.
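
A small sketch of this kind of box-plot grouping of one JCR indicator; mapping the low/central/high/extreme groups to the quartiles and the 1.5 * IQR fence is an assumption on my part, not necessarily the authors' exact rule:

```python
# Sketch of a box-plot style grouping of one JCR indicator (e.g. impact factor)
# into "low", "central", "high" and "extreme" groups. The quartile and
# 1.5*IQR fence mapping is assumed, not taken from the paper.
import statistics

def box_plot_groups(values: list[float]) -> dict[str, list[float]]:
    q1, _, q3 = statistics.quantiles(values, n=4)
    upper_fence = q3 + 1.5 * (q3 - q1)
    groups = {"low": [], "central": [], "high": [], "extreme": []}
    for v in values:
        if v < q1:
            groups["low"].append(v)
        elif v <= q3:
            groups["central"].append(v)
        elif v <= upper_fence:
            groups["high"].append(v)
        else:
            groups["extreme"].append(v)
    return groups

impact_factors = [0.1, 0.3, 0.5, 0.8, 1.2, 2.0, 3.5, 30.0]  # hypothetical
print({k: len(v) for k, v in box_plot_groups(impact_factors).items()})
```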

Journal ArticleDOI
TL;DR: A set of scientometric indicators of interdisciplinary links between advancing fields of biomedicine is suggested and the combined usage of this method with co-classification and co-citation methodology can optimize interdisciplinarity evaluation and promotion.
Abstract: A set of scientometric indicators of interdisciplinary links between advancing fields of biomedicine is suggested. Twenty journals listed in the JCR of the SCI for 1988 are analyzed. An index of interdisciplinarity for a given journal is calculated as the sum of ratios between the numbers of journals from all other disciplines (except for general-scientific and miscellaneous journals) and from the same discipline cited by that journal or citing it, and of ratios between the numbers of citations to and by these journals. Some interdisciplinary patterns of 20 andrology journal articles are scientometrically assessed, too. The combined usage of this method with co-classification and co-citation methodology can optimize interdisciplinarity evaluation and promotion.
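
Read literally, the abstract's index is a sum of other-discipline to same-discipline ratios counted over journals and over citations, on both the citing and the cited side; the sketch below follows that reading with illustrative parameter names and counts, and the paper's exact aggregation may differ:

```python
# Hedged sketch of the interdisciplinarity index as described in the abstract:
# a sum of other-discipline/same-discipline ratios, counted over journals and
# over citations, on both the citing and the cited side. General-scientific
# journals are assumed to be excluded before counting.

def interdisciplinarity_index(other_journals_cited: int, same_journals_cited: int,
                              other_journals_citing: int, same_journals_citing: int,
                              citations_to_other: int, citations_to_same: int,
                              citations_from_other: int, citations_from_same: int) -> float:
    ratios = [
        other_journals_cited / same_journals_cited,
        other_journals_citing / same_journals_citing,
        citations_to_other / citations_to_same,
        citations_from_other / citations_from_same,
    ]
    return sum(ratios)

# Hypothetical counts for one journal
print(round(interdisciplinarity_index(30, 10, 25, 12, 400, 300, 350, 280), 2))
```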

Journal ArticleDOI
TL;DR: These investigations indicate that at the level of micro/meso studies high recall rates can be achieved by the use of appropriate clustering techniques limiting singletons and the enrichment of co-cited cores by medium-cited items, provided that careful trade-offs are sought between the extension and relevance of recall.
Abstract: Although co-citation techniques are very powerful structuring tools, the use of science policy indicators based on co-citation has often been criticized, especially on ISI research fronts. A major issue is the small fraction of literature retrieved, i.e. the “recall rate” problem. Our investigations indicate that at the level of micro/meso studies high recall rates can be achieved by (a) the use of appropriate clustering techniques limiting singletons and (b) the enrichment of cocited cores by medium-cited items. This combination of appropriate clustering and extension of recall proves to be efficient, provided that careful trade-offs are sought between the extension and relevance of recall. It leads to a reassessment of the performance of the co-citation approach for structuring scientific fields and providing related indicators not limited to the ‘leading edge’. It also opens new opportunities for comparison/combination with other relational methods such as co-word analysis.

Journal ArticleDOI
TL;DR: This paper shows how the refocusing of basic research is strongly linked to increasing competitive corporate pressures and increasing pressures from other government programs on the limited government budgets.
Abstract: This paper overviews the growth of basic research in the second half of the twentieth century, and its gradual recent transformation toward applied research. The paper shows how the refocusing of basic research is strongly linked to increasing competitive corporate pressures and increasing pressures from other government programs on limited government budgets. With these increasing competitive pressures comes the attendant increased call for accountability, and the substitution of strategic goals for unfettered opportunity-driven research. With modern-day computer technology, the accountability has a correspondingly stronger quantitative component, even though the fundamental difficulties of identifying and quantifying the full range of research impacts remain unresolved. This paper, and this special issue, examine the diverse quantitative measures under consideration for research evaluation and impact assessment, and identify the strengths and weaknesses of the various classes of measures. Each paper in this special issue is overviewed in the present introductory paper. The very important qualitative and retrospective methods used to support research accountability [1-3] are not included in this paper, and are not main themes of this focused special issue.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the possibilities and limitations of online citation analysis and used the online processing tools RANK, MAP and TARGET, provided by Dialog, to perform analyses of citations to and from isolated sets of documents as well as diachronic journal analyses.
Abstract: The paper investigates the possibilities and limitations of online citation analysis. The online processing tools RANK, MAP and TARGET, provided by Dialog, are used to perform analyses of citations to and from isolated sets of documents as well as to carry out diachronic journal analyses. These analyses further make it possible to determine journal impact factors of ISI journals. Measures of the scope of internationalisation of journals are proposed and demonstrated. By the combined application of the RANK and TARGET commands we demonstrate a hitherto overlooked possibility of working with bibliographic coupling online and mapping of scientific fields.

Journal ArticleDOI
TL;DR: In this article, a questionnaire was constructed to identify and assess the effects of various factors important for the productivity of the individual researcher as reflected in the number of papers and Ph.D.'s produced.
Abstract: This study explored the main factors influencing research production in the arts and humanities. A questionnaire was constructed to identify and assess the effects of various factors important for the productivity of the individual researcher, as reflected in the number of papers and Ph.D.'s produced. First, respondents were given the opportunity to list in their own words a number of important factors influencing research productivity. Secondly, they evaluated on rating scales the importance of a number of pre-selected factors (e.g. individual characteristics, organisational features, external factors) assumed to be important for research productivity. 50% of a sample of 256 researchers in the humanities responded. Ratings were grouped to produce a number of indices and these were subjected to multiple regression analyses. The main results showed that the production of papers was predicted by the number of Ph.D.'s produced and inversely related to the importance of organisational factors. The production of Ph.D.'s was dependent on the year of the Ph.D. and the position of the respondent, as well as on the number of papers s/he produced. A number of conclusions were drawn: a) there was support for the academic social position effect also in the humanities; b) organisational factors apparently played a smaller role in comparison to individual characteristics in the humanities than in the sciences; and c) the differences in productivity of papers were also related to gender, but not to size, area or language of publications. Implications for further studies were suggested.

Journal ArticleDOI
TL;DR: In this note, it is shown that the slope of the regression line of the impact as a function of the number of publications is positive if and only if the global impact is larger than the average impact of all journals.
Abstract: In this note we clarify some notions concerning citations, publications, and their quotients: impact and indifference (a measure of invisibility, introduced in this article). In particular, we show that the slope of the regression line of the impact as a function of the number of publications is positive if and only if the global impact, i.e. the impact of the set of all journals under consideration, is larger than the average impact of all journals.
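
The claim can be checked with the usual least-squares formula; writing p_i and c_i for the publications and citations of journal i, I_i = c_i/p_i for its impact, Ī for the mean of the I_i and I = Σc_i/Σp_i for the global impact (notation chosen here, not necessarily the paper's), the numerator of the slope of I_i against p_i becomes:

```latex
% Numerator of the least-squares slope of I_i regressed on p_i
\[
\sum_i (p_i - \bar{p})(I_i - \bar{I})
  = \sum_i p_i I_i - n\,\bar{p}\,\bar{I}
  = \sum_i c_i - \Big(\sum_i p_i\Big)\bar{I}
  = \Big(\sum_i p_i\Big)\bigl(I - \bar{I}\bigr).
\]
```

Since the denominator Σ(p_i − p̄)² and Σp_i are positive, the slope is positive exactly when the global impact I exceeds the mean journal impact Ī, which is the statement in the abstract.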

Journal ArticleDOI
TL;DR: A change in the publication patterns of the Middle Eastern physicists is found in the direction of decreasing isolation and increasing collaboration.
Abstract: I studied the publication efforts in physics in Egypt, Iran, Iraq, Jordan, Saudi Arabia, Syria, and Turkey in terms of a total number of 2368 papers from these countries in international journals for 1990–1994. I looked for the national contributions, main subjects of activity, journal preferences of authors, and co-authorship patterns. Comparisons show that physicists from Egypt and Turkey combined, produced 75% of the total publication output. Half of the Egyptian papers went only to 16% of a set of 115 journals that publish papers from this country. Such a high concentration of papers in a few journals was not the case for the rest of the countries. Condensed matter physics was found to be among the three most active subjects for the countries except Iran. Iranian authors tended to be more active in astrosciences, and nuclear science and technology. I found a change in the publication patterns of the Middle Eastern physicists in the direction of decreasing isolation and increasing collaboration.

Journal ArticleDOI
TL;DR: The Brazilian scientific production and its international impact increased considerably in the last 10 years, in spite of a reduction in the resources for science in the same period.
Abstract: The Brazilian scientific production and its international impact increased considerably in the last 10 years. This increase occurred in spite of a reduction in the resources for science in the same period. The data show that the explanation for this apparent paradox lies in the active process of international and national collaboration which increased in this same period. Collaborative work was supported by several programs of the Brazilian agencies. Advantages and possible drawbacks of the intensification of scientific collaboration for the Brazilian science are discussed.

Journal ArticleDOI
TL;DR: The Relative Subfield Citedness (Rw) indicator proved to be the most appropriate according to the criteria chosen and increases with the number of citations to the papers and, in contrast to other relative impact indicators, does not decrease if an author chooses to publish most of his papers in journals with large impact factors.
Abstract: A model experiment is presented for the quantitative selection of relative scientometric impact indicators used in evaluating the scientific impact of papers. The Relative Subfield Citedness (Rw) indicator proved to be the most appropriate according to the criteria chosen. Rw increases with the number of citations to the papers and, in contrast to other relative impact indicators, does not decrease if an author chooses to publish most of his papers in journals with large impact factors or if most of the citations to his papers are to the ones in journals with the largest impact factors.
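
A hedged sketch of a Relative Subfield Citedness style indicator, assuming Rw relates the citations a set of papers actually received to the citations expected from the subfield's mean citation rate; the paper's exact definition may differ, and the numbers are invented:

```python
# Hedged sketch of a Relative Subfield Citedness style indicator, assuming
# Rw = observed citations / (number of papers * mean subfield citation rate).
# The paper's exact definition may differ; names and numbers are illustrative.

def relative_subfield_citedness(citations_received: int,
                                paper_count: int,
                                subfield_mean_citations_per_paper: float) -> float:
    expected = paper_count * subfield_mean_citations_per_paper
    return citations_received / expected if expected else 0.0

# An author with 20 papers cited 150 times in a subfield averaging 4.2 cites/paper
print(round(relative_subfield_citedness(150, 20, 4.2), 2))  # > 1 means above par
```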