Open Access · Journal Article · DOI

Assessing the Impact and Quality of Research Data Using Altmetrics and Other Indicators

Stacy Konkiel
Vol. 2, Iss. 1
TLDR
This article discusses the research into research data metrics, these metrics' strengths and limitations with regard to formal evaluation practices, and the possible meanings of such indicators, and suggests heuristics for policymakers and evaluators interested in using them.
Abstract
Research data in all its diversity—instrument readouts, observations, images, texts, video and audio files, and so on—is the basis for most advancement in the sciences. Yet the assessment of most research programmes happens at the publication level, and data has yet to be treated like a first-class research object. How can and should the research community use indicators to understand the quality and many potential impacts of research data? In this article, we discuss the research into research data metrics, these metrics' strengths and limitations with regard to formal evaluation practices, and the possible meanings of such indicators. We acknowledge the dearth of guidance for using altmetrics and other indicators when assessing the impact and quality of research data, and suggest heuristics for policymakers and evaluators interested in doing so, in the absence of formal governmental or disciplinary policies.

Policy highlights

- Research data is an important building block of scientific production, but efforts to develop a framework for assessing data's impacts have had limited success to date.
- Indicators like citations, altmetrics, usage statistics, and reuse metrics highlight the influence of research data upon other researchers and the public, to varying degrees.
- In the absence of a shared definition of "quality", varying metrics may be used to measure a dataset's accuracy, currency, completeness, and consistency.
- Policymakers interested in setting standards for assessing research data using indicators should take into account indicator availability and disciplinary variations in the data when creating guidelines for explaining and interpreting research data's impact.
- Quality metrics are context dependent: they may vary based upon discipline, data structure, and repository. For this reason, there is no agreed-upon set of indicators that can be used to measure quality.
- Citations are well suited to showcase research impact and are the most widely understood indicator. However, efforts to standardize and promote data citation practices have seen limited success, leading to varying rates of citation data availability across disciplines.
- Altmetrics can help illustrate public interest in research, but availability of altmetrics for research data is very limited.
- Usage statistics are typically understood to showcase interest in research data, but infrastructure to standardize these measures has only recently been introduced, and not all repositories report their usage metrics to centralized data brokers like DataCite.
- Reuse metrics vary widely in terms of what kinds of reuse they measure (e.g. educational, scholarly, etc.). This category of indicator has the fewest heuristics for collection and use associated with it; explain and interpret reuse with qualitative data wherever possible.
- All research data impact indicators should be interpreted in line with the Leiden Manifesto's principles, including accounting for disciplinary variation and data availability.
- Assessing research data impact and quality using numeric indicators is not yet widely practiced, though there is general support for the practice amongst researchers.
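The policy highlights above name DataCite as a centralized broker to which repositories can report dataset citation and usage metrics. As a concrete illustration of how such indicators can be retrieved, here is a minimal Python sketch against DataCite's public REST API; the endpoint and the citationCount/viewCount/downloadCount field names are assumptions based on DataCite's publicly documented JSON:API response, not anything specified in this article, and the example DOI is a placeholder.

    # Minimal sketch: fetch citation and usage indicators for a dataset DOI
    # from the DataCite REST API. Field names are assumptions based on
    # DataCite's public JSON:API and may change over time.
    import json
    import urllib.request

    def dataset_indicators(doi: str) -> dict:
        """Return basic impact indicators for a DOI registered with DataCite."""
        url = f"https://api.datacite.org/dois/{doi}"
        with urllib.request.urlopen(url) as resp:
            attrs = json.load(resp)["data"]["attributes"]
        return {
            "citations": attrs.get("citationCount"),
            "views": attrs.get("viewCount"),
            "downloads": attrs.get("downloadCount"),
        }

    print(dataset_indicators("10.5061/dryad.example"))  # placeholder DOI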



Citations
Journal Article · DOI

Are data repositories fettered? A survey of current practices, challenges, and future technologies

TL;DR: This study identifies current challenges and needs for improving data repository functionalities and user experiences, and makes recommendations for future repository systems.
Journal Article · DOI

The data-index: An author-level metric that values impactful data and incentivizes data sharing

TL;DR: The data-index, as described in this paper, is a new author-level metric that values both dataset output (number of datasets) and impact (number of data-index citations), thereby promoting the generation and sharing of data. A sketch of the calculation follows this entry.
Journal Article · DOI
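Since the data-index is an author-level metric built from dataset counts and citations, a small sketch may help make it concrete. The calculation below assumes an h-index-style definition (the largest d such that d of an author's datasets have at least d citations each); the paper's exact definition may differ in detail.

    # Sketch of an h-index-style data-index over dataset citation counts.
    # The definition here is an assumption modelled on the h-index.
    def data_index(dataset_citations: list[int]) -> int:
        """Largest d such that d datasets have at least d citations each."""
        d = 0
        for i, c in enumerate(sorted(dataset_citations, reverse=True), start=1):
            if c >= i:
                d = i
            else:
                break
        return d

    # Five datasets cited [12, 7, 4, 2, 0] times yield a data-index of 3.
    assert data_index([12, 7, 4, 2, 0]) == 3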

Evaluación de la investigación científica: mejorando las políticas científicas en Latinoamérica [Evaluating scientific research: improving science policy in Latin America]

TL;DR: This literature review, based on articles about research evaluation policies and international agendas implemented in recent years (mainly in the United Kingdom, the United States, Australia, China, and Latin America), indicates that there is no single method of evaluation and that no indicator is absolute.
Journal Article · DOI

Istraživački podatci hrvatskih autora na platformi Web of Science [Research data by Croatian authors on the Web of Science platform]

TL;DR: Research data, as discussed by the authors, are data collected and recorded in the course of research, for example during experiments, observation, modelling, and interviews.
References
Journal Article · DOI

Web robot detection: A probabilistic reasoning approach

TL;DR: A Bayesian network is constructed that automatically classifies access-log sessions as crawler- or human-induced by combining various pieces of evidence shown to characterize crawler and human behavior.
Journal Article · DOI
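To make the classification idea concrete, here is a toy naive Bayes stand-in for the paper's full Bayesian network: it scores a session as crawler- or human-induced from a few boolean evidence variables. The features and probabilities are illustrative assumptions, not values from the paper.

    # Toy naive Bayes classifier over hand-picked session features.
    FEATURES = ["requested_robots_txt", "no_referrer", "fetched_images"]

    # P(feature is True | class), chosen by hand for illustration only.
    P_TRUE = {
        "crawler": {"requested_robots_txt": 0.8, "no_referrer": 0.9, "fetched_images": 0.1},
        "human":   {"requested_robots_txt": 0.01, "no_referrer": 0.3, "fetched_images": 0.9},
    }
    PRIOR = {"crawler": 0.2, "human": 0.8}

    def classify(session: dict) -> str:
        """Return the class with the higher posterior given the evidence."""
        scores = {}
        for cls in PRIOR:
            p = PRIOR[cls]
            for f in FEATURES:
                p_true = P_TRUE[cls][f]
                p *= p_true if session[f] else (1.0 - p_true)
            scores[cls] = p
        return max(scores, key=scores.get)

    # A session that hit robots.txt, sent no referrer, and skipped images:
    print(classify({"requested_robots_txt": True, "no_referrer": True,
                    "fetched_images": False}))  # -> crawler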

Data fraud in clinical trials

TL;DR: In this article, the authors review the available evidence on the incidence of data fraud in clinical trials, describe several prominent cases, present information on motivation and contributing factors, and discuss cost-effective ways of detecting data fraud early as part of routine central statistical monitoring of data quality.
Journal Article · DOI
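One cost-effective central-monitoring check, offered here only as an illustration and not as the authors' method, is to flag trial centres whose measurements show implausibly low variance, a pattern sometimes associated with fabricated data:

    # Illustrative central-monitoring check: flag centres whose standard
    # deviation falls below a fraction of the pooled standard deviation.
    from statistics import pstdev

    def flag_low_variance_centres(data_by_centre: dict[str, list[float]],
                                  ratio: float = 0.25) -> list[str]:
        pooled_sd = pstdev([x for xs in data_by_centre.values() for x in xs])
        return [c for c, xs in data_by_centre.items()
                if pstdev(xs) < ratio * pooled_sd]

    print(flag_low_variance_centres({
        "centre_a": [5.1, 7.3, 4.8, 6.9, 5.5],
        "centre_b": [6.0, 6.0, 6.1, 6.0, 6.0],  # suspiciously uniform
    }))  # -> ['centre_b']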

Theory and practice of data citation

TL;DR: The current panorama of data citation is many-faceted, and an overall view that brings together the diverse aspects of this topic is still missing, as discussed by the authors. This paper aims to describe the lay of the land for data citation, from both the theoretical (the why and what) and the practical (the how) angle.
Journal Article · DOI

Measuring the value of research data: a citation analysis of oceanographic data sets.

TL;DR: It is suggested that all three data sets archived at the National Oceanographic Data Center are highly cited, with estimated citation counts in most cases higher than those of 99% of the journal articles published in oceanography during the same years.
Journal Article · DOI

Image data sharing for biomedical research--meeting HIPAA requirements for De-identification.

TL;DR: This paper describes the development of an open-source software suite that implements DICOM Supplement 142 as part of the National Biomedical Imaging Archive (NBIA), along with the lessons learned as NBIA has acquired more than 20 image collections encompassing over 30 million images.
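For a sense of what de-identification looks like in practice, here is a minimal sketch using the pydicom library, not the NBIA suite itself. DICOM Supplement 142 specifies full de-identification profiles covering far more elements than this; the tag list below is a small assumed subset, and the file names are placeholders.

    # Minimal DICOM de-identification sketch with pydicom.
    import pydicom

    IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                        "PatientAddress", "ReferringPhysicianName"]

    def deidentify(in_path: str, out_path: str) -> None:
        ds = pydicom.dcmread(in_path)
        for tag in IDENTIFYING_TAGS:
            if tag in ds:
                ds.data_element(tag).value = ""  # blank rather than delete
        ds.remove_private_tags()                 # private tags often carry identifiers
        ds.save_as(out_path)

    deidentify("scan.dcm", "scan_deid.dcm")      # placeholder file names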
Trending Questions (3)
What key factors contribute to assessing the quality of research, and how do different studies measure research quality?

Key factors for assessing research quality include citations, altmetrics, usage statistics, and reuse metrics. Quality is measured by accuracy, currency, completeness, and consistency, varying across disciplines.

What are the most commonly used research and development indicators?

Citations are the most widely used indicator to showcase research impact, while altmetrics, usage statistics, and reuse metrics are also utilized, albeit with limitations in availability and standardization.

What are the benefits and limitations of using ResearchGate and Altmetrics for impact measurement?

The paper does not specifically address ResearchGate. It discusses the strengths and limitations of altmetrics in general, but provides no information on ResearchGate's benefits or limitations for impact measurement.