Journal ArticleDOI

Assessing the Impact and Quality of Research Data Using Altmetrics and Other Indicators

29 Sep 2020 - Vol. 2, Iss. 1
TL;DR: This article discusses the research into research data metrics, these metrics’ strengths and limitations with regard to formal evaluation practices, and the possible meanings of such indicators, and suggests heuristics for policymakers and evaluators interested in assessing research data.
Abstract: Research data in all its diversity—instrument readouts, observations, images, texts, video and audio files, and so on—is the basis for most advancement in the sciences. Yet the assessment of most research programmes happens at the publication level, and data has yet to be treated like a first-class research object. How can and should the research community use indicators to understand the quality and many potential impacts of research data? In this article, we discuss the research into research data metrics, these metrics’ strengths and limitations with regard to formal evaluation practices, and the possible meanings of such indicators. We acknowledge the dearth of guidance for using altmetrics and other indicators when assessing the impact and quality of research data, and suggest heuristics for policymakers and evaluators interested in doing so, in the absence of formal governmental or disciplinary policies.

Policy highlights:
- Research data is an important building block of scientific production, but efforts to develop a framework for assessing data’s impacts have had limited success to date.
- Indicators like citations, altmetrics, usage statistics, and reuse metrics highlight the influence of research data upon other researchers and the public, to varying degrees.
- In the absence of a shared definition of “quality”, varying metrics may be used to measure a dataset’s accuracy, currency, completeness, and consistency.
- Policymakers interested in setting standards for assessing research data using indicators should take into account indicator availability and disciplinary variations in the data when creating guidelines for explaining and interpreting research data’s impact.
- Quality metrics are context dependent: they may vary based upon discipline, data structure, and repository. For this reason, there is no agreed-upon set of indicators that can be used to measure quality.
- Citations are well suited to showcase research impact and are the most widely understood indicator. However, efforts to standardize and promote data citation practices have seen limited success, leading to varying rates of citation data availability across disciplines.
- Altmetrics can help illustrate public interest in research, but the availability of altmetrics for research data is very limited.
- Usage statistics are typically understood to showcase interest in research data, but infrastructure to standardize these measures has only recently been introduced, and not all repositories report their usage metrics to centralized data brokers like DataCite.
- Reuse metrics vary widely in terms of what kinds of reuse they measure (e.g. educational, scholarly, etc.). This category of indicator has the fewest heuristics for collection and use associated with it; consider explaining and interpreting reuse with qualitative data wherever possible.
- All research data impact indicators should be interpreted in line with the Leiden Manifesto’s principles, including accounting for disciplinary variation and data availability.
- Assessing research data impact and quality using numeric indicators is not yet widely practiced, though there is general support for the practice amongst researchers.
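As an illustration of how such indicators can be gathered in practice, the sketch below queries the DataCite REST API for a dataset DOI's citation and usage counts. It is a minimal example, not taken from the article: the endpoint is real, but the specific attribute names and the example DOI are assumptions that should be verified against DataCite's current documentation.

```python
# A minimal sketch of pulling impact indicators for a dataset DOI from the
# DataCite REST API (https://api.datacite.org). The attribute names
# (citationCount, viewCount, downloadCount) are assumptions based on
# DataCite's JSON:API responses; check the current API docs before relying
# on them.
import requests

def dataset_indicators(doi: str) -> dict:
    """Return citation, view, and download counts for a dataset DOI."""
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=30)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    return {
        "citations": attrs.get("citationCount"),
        "views": attrs.get("viewCount"),        # COUNTER-compliant usage, if reported
        "downloads": attrs.get("downloadCount"),
    }

if __name__ == "__main__":
    # Hypothetical dataset DOI, used purely for illustration.
    print(dataset_indicators("10.5061/dryad.example"))
```

Note that, as the abstract cautions, not all repositories report usage to DataCite, so missing counts do not necessarily mean zero interest in the dataset.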


Citations
More filters
Journal ArticleDOI
TL;DR: This study identifies the current challenges and needs for improving data repository functionalities and user experiences, and offers recommendations for future repository systems.

6 citations

Journal ArticleDOI
TL;DR: The data-index, as mentioned in this paper, is a new author-level metric that values both dataset output (number of datasets) and impact (number of data-index citations), and so promotes generating and sharing data.
Abstract: Author-level metrics are a widely used measure of scientific success. The h-index and its variants measure publication output (number of publications) and research impact (number of citations). They are often used to influence decisions, such as allocating funding or jobs. Here, we argue that the emphasis on publication output and impact hinders scientific progress in the fields of ecology and evolution because it disincentivizes two fundamental practices: generating impactful (and therefore often long-term) datasets and sharing data. We describe a new author-level metric, the data-index, which values both dataset output (number of datasets) and impact (number of data-index citations), so promotes generating and sharing data as a result. We discuss how it could be implemented and provide user guidelines. The data-index is designed to complement other metrics of scientific success, as scientific contributions are diverse and our value system should reflect that both for the benefit of scientific progress and to create a value system that is more equitable, diverse, and inclusive. Future work should focus on promoting other scientific contributions, such as communicating science, informing policy, mentoring other scientists, and providing open-access code and tools.
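The abstract defines the data-index conceptually. One concrete reading, assumed here by analogy with the h-index it is designed to complement rather than taken from the paper itself, is: an author has a data-index of d if d of their datasets have each received at least d citations. A minimal sketch under that assumption:

```python
def data_index(dataset_citations: list[int]) -> int:
    """Largest d such that the author has d datasets with >= d citations each.

    This mirrors the h-index computation; whether the data-index is defined
    exactly this way should be checked against the original paper.
    """
    counts = sorted(dataset_citations, reverse=True)
    d = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            d = i
        else:
            break
    return d

# Example: five datasets with these citation counts give a data-index of 3.
assert data_index([10, 4, 3, 1, 0]) == 3
```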

5 citations

Journal ArticleDOI
TL;DR: A literature review based on articles on research evaluation policies and international agendas implemented in recent years (mainly in the United Kingdom, the United States, Australia, China, and Latin America), as presented in this paper, indicates that there is no single evaluation method and that no indicator is absolute.
Abstract: This article seeks to identify criteria and indicators for scientific evaluation that can improve the way funding agencies, academic institutions, and other groups assess the quality and impact of research. To that end, it presents a literature review based on articles on research evaluation policies and international agendas implemented in recent years (mainly in the United Kingdom, the United States, Australia, China, and Latin America). The results indicate that there is no single method of scientific evaluation, since no indicator is absolute. Each piece of research involves different actors who must be considered, and research must be valued in its context. A mixed evaluation system is recommended, one that incorporates quantitative and qualitative criteria but recognizes the limits and scope of both, as well as of each discipline.

4 citations

Journal ArticleDOI
01 Jan 2021
TL;DR: Research data (Croatian: istraživački podatci), as discussed by the authors, consists of the quantitative information or qualitative statements that researchers collect and record during experiments, observations, modelling, interviews, and the like.
Abstract: A large number of scientific studies today produce an enormous quantity of diverse data, obtained with new methods and equipment, stored in different formats, and managed in different ways. Research data may contain quantitative information or qualitative statements that researchers collect and record during experiments, observations, modelling, interviews, and the like, or it may be information derived from existing evidence. In the natural sciences, most data is collected through observation and experiments; in the social sciences, through researchers' own observations or from other public sources (for example, data on economic activity); and in the humanities, data is most often drawn from cultural heritage, archival materials, artefacts, and the like. Research data can be aggregated and stored in so-called raw or processed form in files, and can take the form of models, algorithms, protocols, or programmed computational support for data analysis.

3 citations

References
More filters
Journal ArticleDOI
TL;DR: This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion.
Abstract: The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.
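As a toy illustration of the fusion step the abstract describes (not code from the survey), the sketch below groups records that represent the same real-world object and resolves conflicting attribute values. The resolution rules chosen here, prefer non-null and take the most recent value, are illustrative stand-ins for the many strategies the survey classifies.

```python
# A toy data fusion step: records describing the same real-world object are
# grouped by key, then conflicting attribute values are resolved with simple
# rules (prefer non-null, take the most recently updated value).
from collections import defaultdict

records = [
    {"id": "p1", "name": "A. Smith",    "city": None,     "updated": 2019},
    {"id": "p1", "name": "Alice Smith", "city": "Bonn",   "updated": 2021},
    {"id": "p2", "name": "B. Jones",    "city": "Leiden", "updated": 2020},
]

def fuse(group: list[dict]) -> dict:
    """Fuse duplicates: for each attribute, keep the non-null value
    from the most recently updated record."""
    fused = {}
    for rec in sorted(group, key=lambda r: r["updated"]):
        for key, value in rec.items():
            if value is not None:
                fused[key] = value  # newer records overwrite older ones
    return fused

groups = defaultdict(list)
for rec in records:
    groups[rec["id"]].append(rec)

for object_id, group in groups.items():
    print(object_id, fuse(group))
# p1 -> {'id': 'p1', 'name': 'Alice Smith', 'city': 'Bonn', 'updated': 2021}
```

In a relational setting the same effect is achieved with grouping and conflict-resolving aggregation operators, which is the route the survey's SQL-based techniques take.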

1,797 citations

Journal ArticleDOI
TL;DR: This article describes principles that can help organizations develop data quality metrics that are usable in practice, rather than measures built on an ad hoc basis.
Abstract: How good is a company's data quality? Answering this question requires usable data quality metrics. Currently, most data quality measures are developed on an ad hoc basis to solve specific problems [6, 8], and fundamental principles necessary for developing usable metrics in practice are lacking. In this article, we describe principles that can help organizations develop usable data quality metrics.
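To make the notion of a usable metric concrete, here is a minimal sketch, an illustration rather than anything from the article, computing two commonly cited quality dimensions, completeness and currency, over a small tabular dataset. The field names and the freshness threshold are assumptions.

```python
# A minimal sketch of two common data quality metrics: completeness (share of
# non-missing values) and currency (share of records updated within a window).
# The field names and the 365-day threshold are illustrative assumptions.
from datetime import date

rows = [
    {"name": "Station 1", "reading": 42.0, "last_updated": date(2021, 5, 1)},
    {"name": "Station 2", "reading": None, "last_updated": date(2019, 3, 9)},
    {"name": "Station 3", "reading": 17.5, "last_updated": date(2021, 4, 20)},
]

def completeness(rows: list[dict]) -> float:
    """Fraction of cells that are non-null across all records."""
    cells = [v for row in rows for v in row.values()]
    return sum(v is not None for v in cells) / len(cells)

def currency(rows: list[dict], as_of: date, max_age_days: int = 365) -> float:
    """Fraction of records updated within max_age_days of the as_of date."""
    fresh = sum((as_of - row["last_updated"]).days <= max_age_days for row in rows)
    return fresh / len(rows)

print(f"completeness: {completeness(rows):.2f}")                # 0.89
print(f"currency:     {currency(rows, date(2021, 6, 1)):.2f}")  # 0.67
```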

1,566 citations

Journal ArticleDOI
TL;DR: This article anchors data quality dimensions in ontological foundations, noting that a leading computer industry information service firm “expects most business process reengineering initiatives to fail through lack of attention to data quality.”
Abstract: A leading computer industry information service firm indicated that it “expects most business process reengineering initiatives to fail through lack of attention to data quality.” An industry executive report noted that more than 60% of surveyed firms (500 medium-size corporations with annual sales of more than $20 million) had problems with data quality. The Wall Street Journal also reported that, “Thanks to computers, huge databases brimming with information are at our fingertips, just waiting to be tapped. They can be mined to find sales …”

1,468 citations

Journal ArticleDOI
23 Apr 2015-Nature
TL;DR: Ten principles for evaluating research are urged by Diana Hicks, Paul Wouters, and colleagues.
Abstract: Use these ten principles to guide research evaluation, urge Diana Hicks, Paul Wouters, and colleagues.

1,437 citations

Journal ArticleDOI
TL;DR: Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology.
Abstract: The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.

1,048 citations

Trending Questions (3)
What key factors contribute to assessing the quality of research, and how do different researchers measure it?

Key factors for assessing research quality include citations, altmetrics, usage statistics, and reuse metrics. Quality is measured by accuracy, currency, completeness, and consistency, varying across disciplines.

What are the most commonly used research and development indicators?

Citations are the most widely used indicator to showcase research impact, while altmetrics, usage statistics, and reuse metrics are also utilized, albeit with limitations in availability and standardization.

What are the benefits and limitations of using ResearchGate and Altmetrics for impact measurement?

The paper discusses the strengths and limitations of altmetrics in general, but does not specifically address ResearchGate or its benefits and limitations for impact measurement.