Journal ArticleDOI

Multidisciplinary bibliographic databases.

TL;DR: The originator of the idea, Eugene Garfield, formulated several critical points in bibliometrics that have shaped citation indexes: for example, libraries with limited funding should be selective about the journals they acquire, and a bibliography should selectively cover 'high quality' sources.
Abstract: The past five decades have witnessed the so-called data deluge and publication explosion across all branches of science (1). Numerous academic journals have been launched that use a systematic approach to the submission, peer review, and publishing of information. To facilitate the wide use of published sources, libraries across the world have expanded cataloguing and advanced literature search techniques. The first major step towards indexing academic journals and helping libraries acquire the most influential sources was made by the Institute for Scientific Information (ISI) in Philadelphia, USA, in 1960. The idea behind indexing and distributing information on published articles was to facilitate scientific communication between authors and readers (2). In other words, indexing was proposed as a tool for finding relevant sources of interest to the consumers. The originator of the idea, Eugene Garfield, also the founder of the ISI, formulated several critical points in bibliometrics that have shaped citation indexes: libraries with limited funding should be selective about the journals they acquire; most read and highly cited journals constitute 'quality' sources; highly cited articles influence science; citations from highly-cited journals are weighted more than those from low-cited ones; and a bibliography should selectively cover 'high quality' sources.


Citations

Journal ArticleDOI
TL;DR: This article overviews unethical publishing practices in connection with the pressure to publish more, and several measures are proposed to tackle the issue of predatory publishing.
Abstract: This article overviews unethical publishing practices in connection with the pressure to publish more. Both open-access and subscription publishing models can be abused by ‘predatory’ authors, editors, and publishing outlets. Relevant examples of ‘prolific’ scholars are viewed through the prism of the violation of ethical authorship in established journals and indiscriminately boosting publication records elsewhere. The instances of ethical transgressions by brokering editorial agencies and agents, operating predominantly in non-Anglophone countries, are presented to raise awareness of predatory activities. The scheme of predatory publishing activities is presented, and several measures are proposed to tackle the issue of predatory publishing. The awareness campaigns by professional societies, consultations with information facilitators, implementation of the criteria of best target journals, and crediting of scholars with use of integrative citation metrics, such as the h-index, are believed to make a difference.
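The h-index credited above can be computed from a list of per-paper citation counts: it is the largest h such that the author has at least h papers cited at least h times each. A minimal sketch in Python (illustrative only, not tied to any particular database's implementation):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still clears the bar
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with >= 4 citations)
print(h_index([25, 8, 5, 3, 3]))  # → 3 (one very highly cited paper does not raise h)
```

As the second example shows, the h-index rewards a sustained body of cited work rather than a single highly cited paper, which is why it is proposed as an integrative metric.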

61 citations


Cites background from "Multidisciplinary bibliographic dat..."

  • The choice of a bibliographic database for recording the h-index depends on the indexing status of journals in a given discipline, peculiarities of research environments and regional priorities, with Scopus viewed as the most comprehensive platform for authors from Europe and non-Anglophone countries (36).

Journal ArticleDOI
TL;DR: The study suggests that the intensified self-correction in biomedicine is due to the attention of readers and authors, who spot errors in their hub of evidence-based information.
Abstract: Aim: To analyze mistakes and misconduct in multidisciplinary and specialized biomedical journals.

59 citations

Journal ArticleDOI
TL;DR: It is found that WoS, INSPEC and Scopus provided better quality indexing and better bibliographic records in terms of accuracy, control and granularity of information, when compared to GS and DBLP.
Abstract: We compared general and specialized databases, by searching bibliographic information regarding journal articles in the computer science field, and by evaluating their bibliographic coverage and the quality of the bibliographic records retrieved. We selected a sample of computer science articles from an Italian university repository (AIR) to carry out our comparison. The databases selected were INSPEC, Scopus, Web of Science (WoS), and DBLP. We found that DBLP and Scopus indexed the highest number of unique articles (4.14 and 4.05 % respectively), that each of the four databases indexed a set of unique articles, that 12.95 % of the articles sampled were not indexed in any of the databases selected, that Scopus was better than WoS for identifying computer science publications, and that DBLP had a greater number of unique articles indexed (19.03 %), when compared to INSPEC (11.28 %). We also measured the quality of a set of bibliographic records, by comparing five databases: Scopus, WoS, INSPEC, DBLP and Google Scholar (GS). We found that WoS, INSPEC and Scopus provided better quality indexing and better bibliographic records in terms of accuracy, control and granularity of information, when compared to GS and DBLP. WoS and Scopus also provided more sophisticated tools for measuring trends of scholarly publications.

58 citations


Cites background from "Multidisciplinary bibliographic dat..."

  • Keywords: Web of Science, Scopus, DBLP, INSPEC, Google Scholar

  • Other studies also found that GS required extra analyses of the retrieved citing sources, to single out the irrelevant and non-scholarly materials (Gasparyan et al. 2013).

  • The research question was: is there a need of using multiple databases for searching computer science articles?

  • Over the past few years GS has significantly expanded its indexing of full texts of scholarly literature through agreements with publishers (like Elsevier), online libraries and repositories (Gasparyan et al. 2013).

References
Journal ArticleDOI
TL;DR: With some improvement in the research options, Google Scholar could become the leading bibliographic database in medicine and could be used alone for systematic reviews.
Abstract: In searches for clinical trials and systematic reviews, it is said that Google Scholar (GS) should never be used in isolation, but in addition to PubMed, Cochrane, and other trusted sources of information. We therefore performed a study to assess the coverage of GS specifically for the studies included in systematic reviews and evaluate if GS was sensitive enough to be used alone for systematic reviews. All the original studies included in 29 systematic reviews published in the Cochrane Database Syst Rev or in the JAMA in 2009 were gathered in a gold standard database. GS was searched for all these studies one by one to assess the percentage of studies which could have been identified by searching only GS. All the 738 original studies included in the gold standard database were retrieved in GS (100%). The coverage of GS for the studies included in the systematic reviews is 100%. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed. With some improvement in the research options, to increase its precision, GS could become the leading bibliographic database in medicine and could be used alone for systematic reviews.

256 citations


"Multidisciplinary bibliographic dat..." refers background in this paper

  • A study comparing Google Scholar with PubMed and Cochrane Library searches for coverage of the literature for top systematic reviews in medicine proved that searches through Google Scholar alone are sufficient for retrieving all the necessary sources (16).

Journal ArticleDOI
TL;DR: This paper compares and contrasts a variety of test searches in PubMed and Google Scholar to gain a better understanding of Google Scholar's searching capabilities.
Abstract: Google Scholar has been met with both enthusiasm and criticism since its introduction in 2004. This search engine provides a simple way to access “peer-reviewed papers, theses, books, abstracts, and articles from academic publishers' sites, professional societies, preprint repositories, universities and other scholarly organizations” [1]. An obvious strength of Google Scholar is its intuitive interface, as the main search engine interface consists of a simple query box. In contrast, databases, such as PubMed, utilize search interfaces that offer a greater variety of advanced features. These additional features, while powerful, often lead to a complexity that may require a substantial investment of time to master. It has been observed that Google Scholar may allow searchers to “find some resources they can use rather than be frustrated by a database's search screen” [2]. Some even feel that “Google Scholar's simplicity may eventually consume PubMed” [3]. Along with ease of use, Google Scholar carries the familiar “Google” brand name. As Kennedy and Price so aptly stated, “College students AND professors might not know that library databases exist, but they sure know Google” [4]. The familiarity of Google may allow librarians and educators to ease students into the scholarly searching process by starting with Google Scholar and eventually moving to more complex systems. Felter noted that “as researchers work with Google Scholar and reach limitations of searching capabilities and options, they may become more receptive to other products” [5]. Google Scholar is also thought to provide increased access to gray literature [2], as it retrieves more than journal articles and includes preprint archives, conference proceedings, and institutional repositories [6]. Google Scholar also includes links to the online collections of some academic libraries. 
Including these access points in Google Scholar retrieval sets may ultimately help more users reach more of their own institution's subscriptions [7]. While its advantages are substantial, Google Scholar is not without flaws. The shortcomings of the system and its search interface have been well documented in the literature and include lack of reliable advanced search functions, lack of controlled vocabulary, and issues regarding scope of coverage and currency. Table 1 summarizes some of the reported criticisms of Google Scholar. Vine found that while Google Scholar pulls in data from PubMed, many PubMed records are missing [20], and that Google Scholar also lacks features available in MEDLINE [12]. Others have noted that Google Scholar should not be the first or sole choice when searching for patient care information, clinical trials, or literature reviews [23,24]. Thorough review and testing of Google Scholar, being an approach similar to that used to evaluate licensed resources, is necessary to better understand its strengths and limitations. As Jacso states, "professional searchers must do sample test searches and correctly interpret the results to corroborate claims and get factual information about databases" [18]. This paper compares and contrasts a variety of test searches in PubMed and Google Scholar to gain a better understanding of Google Scholar's searching capabilities.

215 citations


"Multidisciplinary bibliographic dat..." refers methods in this paper

  • Despite its comprehensiveness, searches through Google Scholar may retrieve irrelevant and non-scholarly materials, making it mandatory to critically analyse each retrieved source and to perform additional searches through WoS, Scopus, or specialised databases (19).

Journal ArticleDOI
TL;DR: The findings suggest that the use of MEDLINE alone to identify CCTs is inadequate, and the use of two or more databases and hand searching of selected journals are needed to perform a comprehensive search.

176 citations


"Multidisciplinary bibliographic dat..." refers background in this paper

  • However, more extensive coverage does not necessarily mean more quality items, and this is why it is recommended that EMBASE is complemented by MEDLINE and/or other evidence-based databases (26).

  • Several studies have found that EMBASE covers controlled clinical trials more comprehensively than MEDLINE.

  • Prime examples are Medical Subject Headings (MeSH) and EMtree collections of keywords utilised by Medical Literature Analysis and Retrieval System Online (MEDLINE; US National Library of Medicine) and EMBASE (Elsevier), respectively.

  • As a good example, in a landmark study on bibliographic performance of rheumatology in MEDLINE, EMBASE, and BIOSIS, 45% of papers on hot topics in the field were found in non-rheumatology journals and each of these databases was capable of retrieving no more than 50% of the relevant citations (28).

  • EMBASE is the largest subscription-based biomedical and pharmacological abstracts database.

Journal ArticleDOI
TL;DR: Some newer alternative journal metrics such as SCImago Journal Rank and the h-index are presented and examples of their application in several subject categories are analyzed and misuses of JIF are discussed.
Abstract: The highly popular journal impact factor (JIF) is an average measure of the citations a journal as a whole receives in one year to the items it published in the two preceding years. It is widely used as a proxy of a journal's quality and scientific prestige. This article discusses misuses of JIF to assess impact of separate journal articles and the effect of several manuscript versions on JIF. It also presents some newer alternative journal metrics such as SCImago Journal Rank and the h-index and analyses examples of their application in several subject categories.
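The two-year JIF described above is a simple ratio; a minimal sketch with hypothetical figures (the function name and numbers are illustrative, not from the article):

```python
def impact_factor(citations_received, citable_items_prev_two_years):
    """Two-year JIF for year Y: citations received in Y to items the journal
    published in Y-1 and Y-2, divided by the number of citable items it
    published in Y-1 and Y-2."""
    return citations_received / citable_items_prev_two_years

# Hypothetical journal: 300 citations in 2014 to its 2012-2013 output,
# which comprised 150 citable items.
print(impact_factor(300, 150))  # → 2.0
```

Because the numerator counts citations to the whole journal, the JIF says little about any individual article in it, which is the misuse the abstract warns against.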

152 citations


"Multidisciplinary bibliographic dat..." refers methods in this paper

  • The Web of Knowledge platform aggregates information from another highly prestigious product of Thomson Reuters, namely Current Contents Connect® (CCC).

  • Citations in WoS are also used for the calculation of the h-index of individual researchers and are displayed at the ResearcherID author-identifying platform of Thomson Reuters (from 2008).

  • The results of the citation analyses through SCI-E, SSCI and AHCI are published annually in the Journal Citation Reports® (JCR), a product of Thomson Reuters, which includes the highly popular Journal Impact Factor (JIF) and other indices used for journal rankings in specific subject categories.

  • The Web of Science® (WoS) is the oldest subscription-based citation index for more than 250 disciplines, and it is provided by Thomson Reuters (formerly the Institute for Scientific Information, Philadelphia, USA).

  • The SCImago laboratory in Spain relies on citations in Scopus for regularly calculating open-access metrics such as the SCImago Journal Rank (SJR) and average citations per paper over a 2-yr period (Cites per Doc 2y), which are widely viewed as alternatives to the Eigenfactor score and JIF computed by Thomson Reuters (13).

Journal Article
TL;DR: Overall, EMBASE provides twice as many citations per search as MEDLINE and provides greater coverage of total retrieved citations, and more citations do not necessarily mean higher-quality citations.
Abstract: OBJECTIVE Many physicians access electronic databases to obtain up-to-date and reliable medical information. In North America, physicians typically use MEDLINE as their sole electronic database whereas in Europe, physicians typically use EMBASE. While MEDLINE and EMBASE are similar, their coverage of the published literature differs. Searching a single literature database (eg, MEDLINE or EMBASE) has been shown not to yield all available citations, and using two or more databases yields a greater percentage of these available citations. This difference has been demonstrated in a variety of disciplines and in family medicine using the term "family medicine," but differences have not been shown using specific diagnostic terms common in family medicine. We sought to determine whether searching EMBASE with terms for common family medicine diagnoses yields additional references beyond those found by using MEDLINE alone. DESIGN Literature search comparison. SETTING An academic medical centre in the United States. INTERVENTIONS Fifteen family medicine topics were selected based on common diagnoses in US primary care health visits as described in a National Health Care Survey on Ambulatory Care Visits. To promote relevance to family medicine physicians and researchers, the qualifiers "family medicine" and "therapy/therapeutics" were added. These topics were searched in EMBASE and MEDLINE. Searches were executed using the Ovid search engine and were limited to the years 1992 to 2003, the English language, and human subjects. Total, duplicated, and unique (ie, nonduplicated) citations were recorded for each search in each database. MAIN OUTCOME MEASURES Number of citations for the 15 topics. RESULTS EMBASE yielded 2246 (65%) of 3445 total citations, whereas MEDLINE yielded 1199 citations. Of the total citations, only 177 articles were cited in both databases. EMBASE had 2092 unique citations to MEDLINE's 999 unique citations.
EMBASE consistently found more unique citations in 14 of the 15 searches (P = .0005). CONCLUSION Overall, EMBASE provides twice as many citations per search as MEDLINE and provides greater coverage of total retrieved citations. More citations do not necessarily mean higher-quality citations. In a comprehensive search specific to family medicine, combined EMBASE and MEDLINE searches could yield more articles than MEDLINE could alone.

79 citations


"Multidisciplinary bibliographic dat..." refers background in this paper

  • However, more extensive coverage does not necessarily mean more quality items, and this is why it is recommended that EMBASE is complemented by MEDLINE and/or other evidence-based databases (26).