Author

Lorna Wildgaard

Other affiliations: Royal Library
Bio: Lorna Wildgaard is an academic researcher at the University of Copenhagen. The author has contributed to research in the topics Data steward and Work (electrical), has an h-index of 7, and has co-authored 15 publications receiving 351 citations. Previous affiliations of Lorna Wildgaard include the Royal Library.

Papers
Journal ArticleDOI
TL;DR: This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and ease of end-user application.
Abstract: An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute with objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance on individual author-level, and examines the complexity of their calculations in relation to what they are supposed to reflect and ease of end-user application. As such we provide a schematic overview of author-level indicators, where the indicators are broadly categorised into indicators of publication count, indicators that qualify output (on the level of the researcher and journal), indicators of the effect of output (effect as citations, citations normalized to field or the researcher's body of work), indicators that rank the individual's work and indicators of impact over time. Supported by an extensive appendix we present how the indicators are computed, the complexity of the mathematical calculation and demands to data-collection, their advantages and limitations as well as references to surrounding discussion in the bibliometric community. The Appendix supporting this study is available online as supplementary material.

181 citations
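To illustrate the kind of calculation such author-level indicators involve, here is a minimal sketch of the h-index, one of the established indicators this family of reviews covers (the citation counts are hypothetical):

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for one author's papers
print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with >= 4 citations each
```

Even this simple indicator already illustrates the review's point about data demands: the score depends entirely on which publications and citations the source database covers.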

Journal ArticleDOI
TL;DR: No clear gold standard for regional analgesia for VATS could be demonstrated, but a guide of factors to include in future studies on regional analgesia techniques is presented.
Abstract: Video-assisted thoracic surgery (VATS) is emerging as the standard surgical procedure for both minor and major oncological lung surgery. Thoracic epidural analgesia (TEA) and paravertebral block (PVB) are established analgesic gold standards for open surgery such as thoracotomy; however, there is no gold standard for regional analgesia for VATS. This systematic review aimed to assess different regional techniques with regard to effect on acute postoperative pain following VATS, with emphasis on VATS lobectomy. The systematic review of PubMed, The Cochrane Library and Embase databases yielded 1542 unique abstracts; 17 articles were included for qualitative assessment, of which three were studies on VATS lobectomy. The analgesic techniques included TEA, multilevel and single PVB, paravertebral catheter, intercostal catheter, interpleural infusion and long thoracic nerve block. Overall, the studies were heterogeneous with small numbers of participants. In comparative studies, TEA and especially PVB showed some effect on pain scores, but were often compared with an inferior analgesic treatment. Other techniques showed no unequivocal results. No clear gold standard for regional analgesia for VATS could be demonstrated, but a guide of factors to include in future studies on regional analgesia for VATS is presented.

108 citations

Journal ArticleDOI
TL;DR: Author-level bibliometric indicators are becoming a standard tool in research assessment but it is important to investigate what these indicators actually measure to assess their appropriateness in scholar ranking and benchmarking average individual levels of performance.
Abstract: Author-level bibliometric indicators are becoming a standard tool in research assessment. It is important to investigate what these indicators actually measure to assess their appropriateness in scholar ranking and benchmarking average individual levels of performance. 17 author-level indicators were calculated for 512 researchers in Astronomy, Environmental Science, Philosophy and Public Health. Indicator scores and scholar rankings calculated in Web of Science (WoS) and Google Scholar (GS) were analyzed. The indexing policies of WoS and GS were found to have a direct effect on the amount of available bibliometric data, thus indicator scores and rankings in WoS and GS were different, correlations between 0.24 and 0.99. High correlation could be caused by scholars in bottom rank positions with a low number of publications and citations in both databases. The hg indicator produced scholar rankings with the highest level of agreement between WoS and GS and rankings with the least amount of variance. Expected average performance benchmarks were influenced by how the mean indicator value was calculated. Empirical validation of the aggregate mean h-index values compared to previous studies resulted in a very poor fit of predicted average scores. Rankings based on author-level indicators are influenced by (1) the coverage of papers and citations in the database, (2) how the indicators are calculated and, (3) the assessed discipline and seniority. Indicator rankings display the visibility of the scholar in the database not their impact in the academic community compared to their peers. Extreme caution is advised when choosing indicators and benchmarks in scholar rankings.

68 citations
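As a concrete example of one of the combined indicators mentioned above, the hg indicator is commonly defined as the geometric mean of the h-index and the g-index; the sketch below uses that common definition with hypothetical citation counts (the exact variant used in the study may differ):

```python
from math import sqrt

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    total, g = 0, 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

def hg_index(citations):
    """Geometric mean of h and g, balancing the two indicators."""
    return sqrt(h_index(citations) * g_index(citations))

# Hypothetical citation counts for one author's papers
cites = [10, 8, 5, 4, 3]
print(h_index(cites), g_index(cites), round(hg_index(cites), 2))  # -> 4 5 4.47
```

Because hg averages two rank-based indicators, small database-driven changes in either h or g move the combined score only modestly, which is consistent with the finding that hg rankings varied least between WoS and GS.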


Journal ArticleDOI
TL;DR: In this article, a 7-stage cluster methodology was used to identify appropriate indicators for evaluation of individual researchers at a disciplinary and seniority level, and four indicators of individual researcher performance were computed using the data.

15 citations


Cited by
01 Jan 1995
TL;DR: A bibliometric analysis of the international journal Scientometrics (1979-1991), covering article content, journal sections, authors, and editorial board members and their countries, revealing the research foci, centers of activity, and development trends of scientometrics.
Abstract: This paper presents a bibliometric analysis of the research articles published in the international scientometrics journal Scientometrics from 1979 to 1991, covering article content, journal sections, authors and their countries, and editorial board members and their countries. It reveals the key research topics, centers of activity, and development trends in scientometrics, and illustrates the role played by leading scholars in developing this emerging discipline.

1,636 citations

Journal ArticleDOI
TL;DR: A longitudinal comparison of eight data points between 2013 and 2015 shows a consistent and reasonably stable quarterly growth for both publications and citations across the three databases, suggesting that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons.
Abstract: This article aims to provide a systematic and comprehensive comparison of the coverage of the three major bibliometric databases: Google Scholar, Scopus and the Web of Science. Based on a sample of 146 senior academics in five broad disciplinary areas, we therefore provide both a longitudinal and a cross-disciplinary comparison of the three databases. Our longitudinal comparison of eight data points between 2013 and 2015 shows a consistent and reasonably stable quarterly growth for both publications and citations across the three databases. This suggests that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons. Our cross-disciplinary comparison of the three databases includes four key research metrics (publications, citations, the h-index, and hI,annual, an annualised individual h-index) and five major disciplines (Humanities, Social Sciences, Engineering, Sciences and Life Sciences). We show that both the data source and the specific metrics used change the conclusions that can be drawn from cross-disciplinary comparisons.

930 citations

Journal ArticleDOI
Ludo Waltman
TL;DR: In this paper, an in-depth review of the literature on citation impact indicators is provided, focusing on the selection of publications and citations to be included in the calculation of citation impact indicators.

774 citations

Journal ArticleDOI
TL;DR: Alberto Martin-Martin was funded by a four-year doctoral fellowship (FPU2013/05863) granted by the Ministerio de Educacion, Cultura, y Deportes (Spain).

763 citations

Journal ArticleDOI
TL;DR: Alberto Martin-Martin is funded by a four-year doctoral fellowship from the Ministerio de Educacion, Cultura, y Deportes (Spain); an international mobility grant from the Universidad de Granada and CEI BioTic Granada funded a research stay at the University of Wolverhampton.
Abstract: Despite citation counts from Google Scholar (GS), Web of Science (WoS), and Scopus being widely consulted by researchers and sometimes used in research evaluations, there is no recent or systematic evidence about the differences between them. In response, this paper investigates 2,448,055 citations to 2,299 English-language highly-cited documents from 252 GS subject categories published in 2006, comparing GS, the WoS Core Collection, and Scopus. GS consistently found the largest percentage of citations across all areas (93%-96%), far ahead of Scopus (35%-77%) and WoS (27%-73%). GS found nearly all the WoS (95%) and Scopus (92%) citations. Most citations found only by GS were from non-journal sources (48%-65%), including theses, books, conference papers, and unpublished materials. Many were non-English (19%-38%), and they tended to be much less cited than citing sources that were also in Scopus or WoS. Despite the many unique GS citing sources, Spearman correlations between citation counts in GS and WoS or Scopus are high (0.78-0.99). They are lower in the Humanities, and lower between GS and WoS than between GS and Scopus. The results suggest that in all areas GS citation data is essentially a superset of WoS and Scopus, with substantial extra coverage.

669 citations
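The Spearman correlations reported above compare how two databases rank the same documents by citation count. A self-contained sketch of that computation, with invented per-document citation counts (in practice a library routine such as scipy.stats.spearmanr would be used):

```python
def average_ranks(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values starting at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented citation counts for the same five documents in two databases
gs_counts = [120, 45, 30, 8, 2]   # e.g. Google Scholar
wos_counts = [90, 35, 40, 5, 1]   # e.g. Web of Science
print(round(spearman(gs_counts, wos_counts), 2))  # -> 0.9
```

Because the correlation is computed on ranks rather than raw counts, it can stay high (as in the 0.78-0.99 range reported) even when one database finds far more citations per document than the other.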