
A comparison of research data management platforms: architecture, flexible metadata and interoperability

TL;DR: A synthetic overview of current platforms that can be used for data management purposes and shows that there is still plenty of room for improvement, mainly regarding the specificity of data description in different domains, as well as the potential for integration of the data management platforms with existing research management tools.
Abstract: Research data management is rapidly becoming a regular concern for researchers, and institutions need to provide them with platforms to support data organization and preparation for publication. Some institutions have adopted institutional repositories as the basis for data deposit, whereas others are experimenting with richer environments for data description, in spite of the diversity of existing workflows. This paper is a synthetic overview of current platforms that can be used for data management purposes. Adopting a pragmatic view on data management, the paper focuses on solutions that can be adopted in the long tail of science, where investments in tools and manpower are modest. First, a broad set of data management platforms is presented—some designed for institutional repositories and digital libraries—to select a short list of the more promising ones for data management. These platforms are compared considering their architecture, support for metadata, existing programming interfaces, as well as their search mechanisms and community acceptance. In this process, the stakeholders’ requirements are also taken into account. The results show that there is still plenty of room for improvement, mainly regarding the specificity of data description in different domains, as well as the potential for integration of the data management platforms with existing research management tools. Nevertheless, depending on the context, some platforms can meet all or part of the stakeholders’ requirements.

Summary (2 min read)

1 Introduction

  • The number of published scholarly papers is steadily increasing, and there is a growing awareness of the importance, diversity and complexity of data generated in research contexts [25].
  • Implementation costs, architecture, interoperability, content dissemination capabilities, implemented search features and community acceptance are also taken into consideration.
  • This evaluation considers aspects relevant to the authors’ ongoing work, focused on finding solutions to research data management, and takes into consideration their past experience in this field [33].
  • Moreover, the authors look at staging platforms, which are especially tailored to capture metadata records as they are produced, offering researchers an integrated environment for their management along with the data.
  • As datasets become organised and described, their value and their potential for reuse will prompt further preservation actions.

3 Scope of the analysis

  • The stakeholders in the data management workflow can greatly influence whether research data is reused.
  • The selection of platforms in the analysis acknowledges their role, as well as the importance of the adoption of community standards to help with data description and management in the long run.
  • On the other hand, such solutions are usually harder to install and maintain by institutions in the so-called long tail of science—institutions that create large numbers of small datasets, though do not possess the necessary financial resources and preservation expertise to support a complete preservation workflow [18].
  • The Fedora framework3 is used by some institutions, and is also under active development, with the recent release of Fedora 4.
  • The former includes aspects such as how they are deployed into a production environment, the locations where they keep their data, whether their source code is available, and other aspects that are related to the compliance with preservation best practices.

4 Platform comparison

  • Based on the selection of the evaluation scope, this section addresses the comparison of the platforms according to key features that can help in the selection of a platform for data management.
  • Adopting a dynamic approach to data management, tasks can be made easier for the researchers, and motivate them to use the data management platform as part of their daily research activities, while they are working on the data.
  • This platform is flexible, available under an open-source license, and compatible with several metadata representations, while still providing a complete API.
  • While the evaluated platforms have different description requirements upon deposit, most of them lack support for domain-specific metadata schemas.
  • This search feature makes it easier for researchers to find the datasets that are from relevant domains and belong to specific collections or similar dataset categories (the concept varies between platforms as they have different organizational structures).

5 Data staging platforms

  • Most of the analyzed solutions target data repositories, i.e. the end of the research workflow.
  • These requirements have been identified by several research and data management institutions, who have implemented integrated solutions for researchers to manage data not only when it is created, but also throughout the entire research workflow.
  • It provides researchers with 20GB of storage for free, and is integrated with other modules for dataset sharing and staging, including some computational processing on the stored data.
  • Dendro is a single solution targeted at improving the overall availability and quality of research data.
  • Curators can expand the platform’s data model by loading ontologies that specify domain-specific or generic metadata descriptors that can then be used by researchers in their projects.

6 Conclusion

  • The evaluation showed that it can be hard to select a platform without first performing a careful study of the requirements of all stakeholders.
  • Its features and extensive API also make it possible to use this repository to manage research data, using its key-value dictionary to store any domain-level descriptors.
  • A very important factor to consider is also the control over where the data is stored.
  • The authors consider that these solutions should be compared to other collaborative solutions such as Dendro, a research data management solution currently under development.
  • This should, of course, be done while taking into consideration available metadata standards that can contribute to overall better conditions for long-term preservation [36].


A comparison of research data management platforms
Architecture, flexible metadata and interoperability

Ricardo Carvalho Amorim · João Aguiar Castro · João Rocha da Silva · Cristina Ribeiro
INESC TEC—Faculdade de Engenharia da Universidade do Porto
E-mail: ricardo.amorim3@gmail.com, joaoaguiarcastro@gmail.com, joaorosilva@gmail.com, mcr@fe.up.pt

Received: date / Accepted: date

This paper is an extended version of a previously published comparative study. Please refer to the WCIST 2015 conference proceedings (doi: 10.1007/978-3-319-16486-1).

Abstract: Research data management is rapidly becoming a regular concern for researchers, and institutions need to provide them with platforms to support data organization and preparation for publication. Some institutions have adopted institutional repositories as the basis for data deposit, whereas others are experimenting with richer environments for data description, in spite of the diversity of existing workflows. This paper is a synthetic overview of current platforms that can be used for data management purposes. Adopting a pragmatic view on data management, the paper focuses on solutions that can be adopted in the long tail of science, where investments in tools and manpower are modest. First, a broad set of data management platforms is presented—some designed for institutional repositories and digital libraries—to select a short list of the more promising ones for data management. These platforms are compared considering their architecture, support for metadata, existing programming interfaces, as well as their search mechanisms and community acceptance. In this process, the stakeholders' requirements are also taken into account. The results show that there is still plenty of room for improvement, mainly regarding the specificity of data description in different domains, as well as the potential for integration of the data management platforms with existing research management tools. Nevertheless, depending on the context, some platforms can meet all or part of the stakeholders' requirements.
[This is a post-peer-review, pre-copyedit version of an article published in Universal Access in the Information Society. The final authenticated version is available online at: https://doi.org/10.1007/s10209-016-0475-y]

1 Introduction

The number of published scholarly papers is steadily increasing, and there is a growing awareness of the importance, diversity and complexity of data generated in research contexts [25]. The management of these assets is currently a concern for both researchers and institutions who have to streamline scholarly communication, while keeping record of research contributions and ensuring the correct licensing of their contents [23,18]. At the same time, academic institutions have new mandates, requiring data management activities to be carried out during the research projects, as a part of research grant contracts [14,26]. These activities are invariably supported by software platforms, increasing the demand for such infrastructures.

This paper presents an overview of several prominent research data management platforms that can be put in place by an institution to support part of its research data management workflow. It starts by identifying a set of well-known repositories that are currently being used for either publications or data management, discussing their use in several research institutions. Then, focus moves to their fitness to handle research data, namely their domain-specific metadata requirements and preservation guidelines. Implementation costs, architecture, interoperability, content dissemination capabilities, implemented search features and community acceptance are also taken into consideration. When faced with the many alternatives currently available, it can be difficult for institutions to choose a suitable platform to meet their specific requirements. Several comparative studies between existing solutions were already carried out in order to evaluate different aspects of each implementation, confirming that this is an issue with increasing importance [16,3,6]. This evaluation considers aspects relevant to the authors' ongoing work, focused on finding solutions to research data management, and takes into consideration their past experience in this field [33]. This experience has provided insights on specific, local needs that can influence the adoption of a platform and therefore the success in its deployment.

It is clear that the effort in creating metadata for research datasets is very different from what is required for research publications. While publications can be accurately described by librarians, good quality metadata for a dataset requires the contribution of the researchers involved in its production. Their knowledge of the domain is required to adequately document the dataset production context so that others can reuse it. Involving the researchers in the deposit stage is a challenge, as the investment in metadata production for data publication and sharing is typically higher than that required for the addition of notes that are only intended for their peers in a research group [7].

Moreover, the authors look at staging platforms, which are especially tailored to capture metadata records as they are produced, offering researchers an integrated environment for their management along with the data. As this is an area with several proposals in active development, EUDAT, which includes tools for data staging, and Dendro, a platform proposed for engaging researchers in data description, taking into account the need for data and metadata organisation, will be contemplated.

Staging platforms are capable of exporting the enclosed datasets and metadata records to research data repositories. The platforms selected for the analysis in the sequel as candidates for use are considered as research data management repositories for datasets in the long tail of science, as they are designed with sharing and dissemination in mind. Together, staging platforms and research data repositories provide the tools to handle the stages of the research workflow. Long-term preservation imposes further requirements, and other tools may be necessary to satisfy them. However, as datasets become organised and described, their value and their potential for reuse will prompt further preservation actions.
2 From publications to data management

The growth in the number of research publications, combined with a strong drive towards open access policies [8,10], continues to foster the development of open-source platforms for managing bibliographic records. While data citation is not yet a widespread practice, the importance of citable datasets is growing. Until a culture of data citation is widely adopted, however, many research groups are opting to publish so-called "data papers", which are more easily citable than datasets. Data papers serve not only as a reference to datasets but also document their production context [9].

As data management becomes an increasingly important part of the research workflow [24], solutions designed for managing research data are being actively developed by both open-source communities and data management-related companies. As with institutional repositories, many of their design and development challenges have to do with description and long-term preservation of research data. There are, however, at least two fundamental differences between publications and datasets: the latter are often purely numeric, making it very hard to derive any type of metadata by simply looking at their contents; also, datasets require detailed, domain-specific descriptions to be correctly interpreted. Metadata requirements can also vary greatly from domain to domain, requiring repository data models to be flexible enough to adequately represent these records [35]. The effort invested in adequate dataset description is worthwhile, since it has been shown that research publications that provide access to their base data consistently yield higher citation rates than those that do not [27].

As these repositories deal with a reasonably small set of managed formats for deposit, several reference models, such as the OAIS (Open Archival Information System) [12], are currently in use to ensure preservation and to promote metadata interchange and dissemination. Besides capturing the available metadata during the ingestion process, data repositories often distribute this information to other instances, improving the publications' visibility through specialised research search engines or repository indexers. While the former focus on querying each repository for exposed contents, the latter help users find data repositories that match their needs—such as repositories from a specific domain or storing data from a specific community. Governmental institutions are also promoting the disclosure of open data to improve citizen commitment and government transparency, and this motivates the use of data management platforms in this context.
2.1 An overview on existing repositories

While depositing and accessing publications from different domains is already possible in most institutions, ensuring the same level of accessibility to data resources is still challenging, and different solutions are being experimented with to expose and share data in some communities. Addressing this issue, we synthesize a preliminary classification of these solutions according to their specific purpose: they are either targeting staging, early research activities, or managing deposited datasets and making them available to the community.

Table 1 identifies features of the selected platforms that may render them convenient for data management. To build the table, the authors resorted to the documentation of the platforms, and to basic experiments with demonstration instances, whenever available. In the first column, under "Registered repositories", is the number of running instances of each platform, according to the OpenDOAR platform as of mid-October 2015.

In the analysis, five evaluation criteria that can be relevant for an institution to make a coarse-grained assessment of the solutions are considered. Some existing tools were excluded from this first analysis, mainly because some of their characteristics place them outside the scope of this work. This is the case of platforms specifically targeting research publications (and that cannot be easily modified for managing data), and heavy-weight platforms targeted at long-term preservation. Also excluded were those that, from a technical point of view, do not comply with desirable requirements for this domain, such as adopting an open-source approach, or providing access to their features via comprehensive APIs.

By comparing the number of existing installations, it is natural to assume that a large number of instances for a platform is a good indication of the existence of support for its implementation. Repositories such as DSpace are widely used among institutions to manage publications. Therefore, institutions using DSpace to manage publications can use their support for the platform to expand or replicate the repository and meet additional requirements.

It is important to mention that some repositories do not implement interfaces with existing repository indexers, and this may cause the OpenDOAR statistics to show a value lower than the actual number of existing installations. Moreover, services provided by EUDAT, Figshare and Zenodo, for instance, consist of a single installation that receives all the deposited data, rather than a distributed array of manageable installations.

Government-supported platforms such as CKAN are currently being used as part of open government initiatives in several countries, allowing the disclosure of data related to sensitive issues such as budget execution, and their aim is to vouch for transparency and credibility towards tax payers [21,20]. Although not specifically tailored to meet research data management requirements, these data-focused repositories also count with an increasing number of instances supporting complex research data management workflows [38], even at universities (see http://ckan.org/2013/11/28/ckan4rdm-st-andrews/).

Access to the source code can also be a valuable criterion for selecting a platform, primarily to avoid vendor lock-in, which is usually associated with commercial software or other provided services. Vendor lock-in is undesirable from a preservation point of view, as it places the maintenance of the platform (and consequently the data stored inside) in the hands of a single vendor, that may not be able to provide support indefinitely. The availability of a platform's source code also allows additional modifications to be carried out in order to create customized workflows—examples include improved metadata capabilities and data browsing functionalities. Commercial solutions such as ContentDM may incur high costs for the subscription fees, which can make them cost-prohibitive for non-profit organizations or small research institutions. In some cases only a small portion of the source code for the entire solution is actually available to the public. This is the case with EUDAT, where only the B2Share module is currently open (its source code is hosted on GitHub at https://github.com/EUDAT-B2SHARE/b2share)—the remaining modules are unavailable to date.

From an integration point of view, the existence of an API can allow for further development and help with the repository maintenance, as the software ages. Solutions that do not, at least partially, comply with this requirement may hinder the integration with external platforms to improve the visibility of existing contents. The lack of an API creates a barrier to the development of tools to support a platform in specific environments, such as laboratories that frequently produce data to be directly deposited and disclosed. Finally, regarding long-term preservation, some platforms fail to provide unique identifiers for the resources upon deposit, making persistent references to data and data citation in publications hard.
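To make the API criterion concrete, the sketch below shows how a client could query a CKAN-style Action API for datasets. The instance URL and the sample response are illustrative assumptions, not taken from any platform evaluated here; only the `package_search` endpoint shape and response envelope follow CKAN's documented Action API.

```python
import json
from urllib.parse import urlencode

# Hypothetical CKAN instance -- replace with a real deployment's base URL.
BASE_URL = "https://demo.ckan.example/api/3/action"

def package_search_url(query, rows=10):
    """Build a CKAN Action API `package_search` request URL."""
    return f"{BASE_URL}/package_search?{urlencode({'q': query, 'rows': rows})}"

def dataset_titles(response_text):
    """Extract dataset titles from a `package_search` JSON response."""
    payload = json.loads(response_text)
    if not payload.get("success"):
        raise RuntimeError("CKAN API call reported failure")
    return [pkg["title"] for pkg in payload["result"]["results"]]

# A canned response in the shape CKAN returns (illustrative data only).
sample = json.dumps({
    "success": True,
    "result": {"count": 2, "results": [
        {"name": "river-sensors", "title": "River sensor readings"},
        {"name": "air-quality", "title": "Air quality measurements"},
    ]},
})

print(package_search_url("sensors"))
print(dataset_titles(sample))
```

A tool deposited in a laboratory workflow would issue the built URL over HTTP and feed the body to `dataset_titles`; the point is that such integrations are only possible when the platform exposes an API at all.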

Table 1: Limitations of the identified repository solutions. Sources: the OpenDOAR platform (registered repositories) and each platform's corresponding website. Some features are only available through additional plug-ins, or only partially supported.

Platform         | Registered repositories
CKAN             | 139
ContentDM        | 53
Dataverse        | 2
Digital Commons  | 141
DSpace           | 1305
ePrints          | 407
EUDAT            | (single installation)
Fedora           | 41
Figshare         | (single installation)
Greenstone       | 51
Invenio          | 20
Omeka            | 4
SciELO           | 18
WEKO             | 40 (no data)
Zenodo           | (single installation)

[The original table also marks each platform against five limitation columns—closed source, no API, no unique identifiers, complex installation or setup, no OAI-PMH compliance—but those marks are not legible in this version.]
Support for flexible research workflows makes some repository solutions attractive to smaller institutions looking for solutions to implement their data management workflows. Both DSpace and ePrints, for instance, are quite common as institutional repositories to manage publications, as they offer broad compatibility with the harvesting protocol OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting) [22] and with preservation guidelines according to the OAIS model. OAIS requires the existence of different packages with specific purposes, namely SIP (Submission Information Package), AIP (Archival Information Package) and DIP (Dissemination Information Package). The OAIS reference model defines SIP as a representation of packaged items to be deposited in the repository. AIP, on the other hand, represents the packaged digital objects within the OAIS-compliant system, and DIP holds one or several digital artifacts and their representation information, in such a format that can be interpreted by potential users.
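As a rough illustration of the SIP concept (OAIS defines package roles, not file formats, so the layout and file names below are assumptions of this sketch), a submission package can be thought of as data files bundled with a descriptive metadata record:

```python
import io
import zipfile

def build_sip(files, dc_metadata):
    """Bundle data files plus a minimal Dublin Core record into an
    in-memory zip -- an illustrative Submission Information Package."""
    # Serialize the metadata as a simple oai_dc-style XML record.
    fields = "\n".join(
        f"  <dc:{k}>{v}</dc:{k}>" for k, v in dc_metadata.items()
    )
    dc_xml = (
        '<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"\n'
        '           xmlns:dc="http://purl.org/dc/elements/1.1/">\n'
        f"{fields}\n</oai_dc:dc>\n"
    )
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("metadata/dc.xml", dc_xml)   # descriptive information
        for name, content in files.items():
            zf.writestr(f"data/{name}", content)  # the content information
    return buf.getvalue()

sip = build_sip(
    {"readings.csv": "t,value\n0,1.2\n"},
    {"title": "River sensor readings", "creator": "Jane Doe"},
)
print(zipfile.ZipFile(io.BytesIO(sip)).namelist())
```

In an OAIS-compliant repository the ingested SIP would then be transformed into an AIP for storage and a DIP for dissemination; this sketch only mirrors the deposit side of that pipeline.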
2.2 Stakeholders in research data management

Several stakeholders are involved in dataset description throughout the data management workflow, playing an important part in their management and dissemination [24,7]. These stakeholders—researchers, research institutions, curators, harvesters, and developers—play a governing role in defining the main requirements of a data repository for the management of research outputs. As key metadata providers, researchers are responsible for the description of research data. They are not necessarily knowledgeable in data management practices, but can provide domain-specific, more or less formal descriptions to complement generic metadata. This captures the essential data production context, making it possible for other researchers to reuse the data [7]. As data creators, researchers can play a central role in data deposit by selecting appropriate file formats for their datasets, preparing their structure and packaging them appropriately [15]. Institutions are also motivated to have their data recognized and preserved according to the requirements of funding institutions [17,26]. In this regard, institutions value metadata in compliance with standards, which makes data ready for inclusion in networked environments, therefore increasing their visibility. To make sure that this context is correctly passed, along with the data, to the preservation stage, curators are mainly interested in maintaining data quality and integrity over time. Usually, curators are information experts, so it is expected that their close collaboration with researchers can result in both detailed and compliant metadata records.

Considering data dissemination and reuse, harvesters can be either individuals looking for specific data or services which index the content of several repositories. These services can make particularly good use of established protocols, such as the OAI-PMH, to retrieve metadata from different sources and create an interface to expose the indexed resources. Finally, contributing to the improvement and expansion of these repositories over time, developers are concerned with the underlying technologies, and also with having extensive APIs to promote integration with other tools.
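The harvesting exchange described above can be sketched in a few lines: a harvester issues a ListRecords request over HTTP and extracts descriptive fields from the returned XML. The endpoint URL and the sample record are illustrative assumptions; the verb, parameter names and namespaces follow the OAI-PMH specification.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(endpoint, metadata_prefix="oai_dc"):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    return f"{endpoint}?{urlencode(params)}"

def harvest_titles(response_xml):
    """Pull dc:title values out of a ListRecords response."""
    root = ET.fromstring(response_xml)
    return [t.text for t in root.iter(f"{DC}title")]

# A minimal ListRecords response (illustrative record only).
sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>River sensor readings</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Hypothetical repository endpoint -- replace with a real OAI base URL.
print(list_records_url("https://repo.example/oai"))
print(harvest_titles(sample))
```

A real harvester would additionally follow resumption tokens to page through large repositories, but the request/parse cycle shown here is the core of the protocol.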
3 Scope of the analysis

The stakeholders in the data management workflow can greatly influence whether research data is reused. The selection of platforms in the analysis acknowledges their role, as well as the importance of the adoption of community standards to help with data description and management in the long run.

For this comparison, data management platforms with instances running at both research and government institutions have been considered, namely DSpace, CKAN, Zenodo, Figshare, ePrints, Fedora and EUDAT. If the long-term preservation of research assets is an important requirement of the stakeholders in question, other alternatives such as RODA [30] and Archivematica may also be considered strong candidates, since they implement comprehensive preservation guidelines not only for the digital objects themselves but also for their whole life cycle and associated processes. On one hand, these platforms have a strong concern with long-term preservation, strictly following existing standards such as OAIS, PREMIS or METS, which cover the different stages of a long-term preservation workflow. On the other hand, such solutions are usually harder to install and maintain by institutions in the so-called long tail of science—institutions that create large numbers of small datasets, though do not possess the necessary financial resources and preservation expertise to support a complete preservation workflow [18].

The Fedora framework (http://www.fedora-commons.org/) is used by some institutions, and is also under active development, with the recent release of Fedora 4. The fact that it is designed as a framework to be fully customized and instantiated, instead of being a "turnkey" solution, places Fedora on a different level that cannot be directly compared with other solutions. Two open-source examples of Fedora implementations are Hydra (http://projecthydra.org/) and Islandora (http://islandora.ca/). Both are open-source, capable of handling research workflows, and use the best-practices approach already implemented in the core Fedora framework. Although these are not present in the comparison table, this section will also consider their strengths, when compared to the other platforms.

An overview of the previously identified stakeholders led to the selection of two important dimensions for the assessment of the platform features: their architecture and their metadata and dissemination capabilities. The former includes aspects such as how they are deployed into a production environment, the locations where they keep their data, whether their source code is available, and other aspects that are related to the compliance with preservation best practices. The latter focuses on how resource-related metadata is handled and the level of compliance of these records with established standards and exchange protocols. Other important aspects are their adoption within the research communities and the availability of support for extensions. Table 2 shows an overview of the results of our evaluation.
4 Platform comparison

Based on the selection of the evaluation scope, this section addresses the comparison of the platforms according to key features that can help in the selection of a platform for data management. Table 2 groups these features in two categories: (i) Architecture, for structural-related characteristics; and (ii) Metadata and dissemination, for those related to flexible description and interoperability. This analysis is guided by the use cases in the research data management environment.

4.1 Architecture

Regarding the architecture of the platforms, several aspects are considered. From the point of view of a research institution, a quick and simple deployment of the selected platform is an important aspect. There are two main scenarios: the institution can either outsource an external service or install and customize its own repository, supporting the infrastructure maintenance costs. Contracting a service provided by a dedicated company such as Figshare or Zenodo delegates platform maintenance for a fee. The service-based approach may not be viable in some scenarios, as some researchers or institutions may be reluctant to deposit their data in a platform outside their control [11]. DSpace, ePrints, CKAN or any Fedora-based solution can be installed and run completely under the control of the research institution and therefore offer better control over the stored data. As open-source solutions, they also have

Citations
More filters
01 Jan 2013
TL;DR: Four rationales for sharing data are examined, drawing examples from the sciences, social sciences, and humanities: to reproduce or to verify research, to make results of publicly funded research available to the public, to enable others to ask new questions of extant data, and to advance the state of research and innovation.
Abstract: We must all accept that science is data and that data are science, and thus provide for, and justify the need for the support of, much-improved data curation. (Hanson, Sugden, & Alberts) Researchers are producing an unprecedented deluge of data by using new methods and instrumentation. Others may wish to mine these data for new discoveries and innovations. However, research data are not readily available as sharing is common in only a few fields such as astronomy and genomics. Data sharing practices in other fields vary widely. Moreover, research data take many forms, are handled in many ways, using many approaches, and often are difficult to interpret once removed from their initial context. Data sharing is thus a conundrum. Four rationales for sharing data are examined, drawing examples from the sciences, social sciences, and humanities: (1) to reproduce or to verify research, (2) to make results of publicly funded research available to the public, (3) to enable others to ask new questions of extant data, and (4) to advance the state of research and innovation. These rationales differ by the arguments for sharing, by beneficiaries, and by the motivations and incentives of the many stakeholders involved. The challenges are to understand which data might be shared, by whom, with whom, under what conditions, why, and to what effects. Answers will inform data policy and practice. © 2012 Wiley Periodicals, Inc.

634 citations

Journal ArticleDOI
01 Jan 2018
TL;DR: The current panorama of data citation is many-faceted, and an overall view that brings together diverse aspects of this topic is still missing, as discussed by the authors. This paper aims to describe the lay of the land for data citation, from both the theoretical (the why and what) and the practical (the how) angles.
Abstract: Citations are the cornerstone of knowledge propagation and the primary means of assessing the quality of research, as well as directing investments in science. Science is increasingly becoming "data-intensive," where large volumes of data are collected and analyzed to discover complex patterns through simulations and experiments, and most scientific reference works have been replaced by online curated data sets. Yet, given a data set, there is no quantitative, consistent, and established way of knowing how it has been used over time, who contributed to its curation, what results have been yielded, or what value it has. The development of a theory and practice of data citation is fundamental for considering data as first-class research objects with the same relevance and centrality of traditional scientific products. Many works in recent years have discussed data citation from different viewpoints: illustrating why data citation is needed, defining the principles and outlining recommendations for data citation systems, and providing computational methods for addressing specific issues of data citation. The current panorama is many-faceted and an overall view that brings together diverse aspects of this topic is still missing. Therefore, this paper aims to describe the lay of the land for data citation, from both the theoretical (the why and what) and the practical (the how) angles.

83 citations

Journal ArticleDOI
TL;DR: RO-Crate as mentioned in this paper is an open, community-driven, and lightweight approach to packaging research artefacts along with their metadata in a machine readable manner, aiming to establish best practices to formally describe metadata in an accessible and practical way for their use in a wide variety of situations.
Abstract: An increasing number of researchers support reproducibility by including pointers to and descriptions of datasets, software and methods in their publications. However, scientific articles may be ambiguous, incomplete and difficult to process by automated systems. In this paper we introduce RO-Crate, an open, community-driven, and lightweight approach to packaging research artefacts along with their metadata in a machine readable manner. RO-Crate is based on Schema.org annotations in JSON-LD, aiming to establish best practices to formally describe metadata in an accessible and practical way for their use in a wide variety of situations. An RO-Crate is a structured archive of all the items that contributed to a research outcome, including their identifiers, provenance, relations and annotations. As a general purpose packaging approach for data and their metadata, RO-Crate is used across multiple areas, including bioinformatics, digital humanities and regulatory sciences. By applying "just enough" Linked Data standards, RO-Crate simplifies the process of making research outputs FAIR while also enhancing research reproducibility. An RO-Crate for this article is available at https://w3id.org/ro/doi/10.5281/zenodo.5146227
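The packaging approach described in this abstract can be made concrete with a minimal sketch of the `ro-crate-metadata.json` file that sits at the root of every crate: a JSON-LD document whose `@graph` pairs a metadata descriptor with the root dataset. The dataset name and file entry below are placeholders for illustration.

```json
{
  "@context": "https://w3id.org/ro/crate/1.1/context",
  "@graph": [
    {
      "@id": "ro-crate-metadata.json",
      "@type": "CreativeWork",
      "conformsTo": { "@id": "https://w3id.org/ro/crate/1.1" },
      "about": { "@id": "./" }
    },
    {
      "@id": "./",
      "@type": "Dataset",
      "name": "Example research outputs",
      "hasPart": [ { "@id": "results.csv" } ]
    },
    {
      "@id": "results.csv",
      "@type": "File",
      "name": "Tabulated results"
    }
  ]
}
```

Because the annotations are plain Schema.org terms in JSON-LD, the same file is readable both by generic Linked Data tooling and by humans inspecting the archive.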

28 citations

Journal ArticleDOI
01 Jul 2020-PLOS ONE
TL;DR: It was found that researchers’ assumptions about effort required during the data preparation process were diminished by awareness of e-science technologies, which also increased their tendency to perceive personal benefits via data exchange.
Abstract: Background E-science technologies have significantly increased the availability of data. Research grant providers such as the European Union increasingly require open access publishing of research results and data. However, despite its significance to research, the adoption rate of open data technology remains low across all disciplines, especially in Europe where research has primarily focused on technical solutions (such as Zenodo or the Open Science Framework) or considered only parts of the issue. Methods and findings In this study, we emphasized the non-technical factors perceived value and uncertainty factors in the context of academia, which impact researchers' acceptance of open data-the idea that researchers should not only publish their findings in the form of articles or reports, but also share the corresponding raw data sets. We present the results of a broad quantitative analysis including N = 995 researchers from 13 large to medium-sized universities in Germany. In order to test 11 hypotheses regarding researchers' intentions to share their data, as well as detect any hierarchical or disciplinary differences, we employed a structured equation model (SEM) following the partial least squares (PLS) modeling approach. Conclusions Grounded in the value-based theory, this article proclaims that most individuals in academia embrace open data when the perceived advantages outweigh the disadvantages. Furthermore, uncertainty factors impact the perceived value (consisting of the perceived advantages and disadvantages) of sharing research data. We found that researchers' assumptions about effort required during the data preparation process were diminished by awareness of e-science technologies (such as Zenodo or the Open Science Framework), which also increased their tendency to perceive personal benefits via data exchange. Uncertainty factors seem to influence the intention to share data. Effects differ between disciplines and hierarchical levels.

25 citations


Cites background from "A comparison of research data manag..."

  • ...Although several studies have promoted the benefits of open data, the latest research demonstrates a low willingness to share data across those platforms [8,9]....

    [...]

  • ...Considering the infrastructures and needs of European universities and research institutions, research has primarily examined researchers’ technical requirements and expectations towards technology [8]....

    [...]

Journal ArticleDOI
TL;DR: In this article, the authors present a research data infrastructure for materials science, extending and combining the features of an electronic lab notebook and a repository, which can be used throughout the entire research process.
Abstract: The concepts and current developments of a research data infrastructure for materials science are presented, extending and combining the features of an electronic lab notebook and a repository. The objective of this infrastructure is to incorporate the possibility of structured data storage and data exchange with documented and reproducible data analysis and visualization, which finally leads to the publication of the data. This way, researchers can be supported throughout the entire research process. The software is being developed as a web-based and desktop-based system, offering both a graphical user interface and a programmatic interface. The focus of the development is on the integration of technologies and systems based on both established as well as new concepts. Due to the heterogeneous nature of materials science data, the current features are kept mostly generic, and the structuring of the data is largely left to the users. As a result, an extension of the research data infrastructure to other disciplines is possible in the future. The source code of the project is publicly available under a permissive Apache 2.0 license.

23 citations

References
Journal ArticleDOI
TL;DR: A tiered metadata system of knowledge, information and processing where each in turn addresses a) discovery, indexing and citation, b) context and access to additional information and c) content access and manipulation is proposed.
Abstract: In order to exploit the vast body of currently inaccessible chemical information held in Electronic Laboratory Notebooks (ELNs) it is necessary not only to make it available but also to develop protocols for discovery, access and ultimately automatic processing. An aim of the Dial-a-Molecule Grand Challenge Network is to be able to draw on the body of accumulated chemical knowledge in order to predict or optimize the outcome of reactions. Accordingly the Network drew up a working group comprising informaticians, software developers and stakeholders from industry and academia to develop protocols and mechanisms to access and process ELN records. The work presented here constitutes the first stage of this process by proposing a tiered metadata system of knowledge, information and processing where each in turn addresses a) discovery, indexing and citation b) context and access to additional information and c) content access and manipulation. A compact set of metadata terms, called the elnItemManifest, has been derived and caters for the knowledge layer of this model. The elnItemManifest has been encoded as an XML schema and some use cases are presented to demonstrate the potential of this approach.

31 citations


"A comparison of research data manag..." refers background in this paper

  • ...The growth in the number of research publications, combined with a strong drive towards open-access policies [8, 10], continues to foster the development of open-source platforms for managing bibliographic records....

    [...]

Journal ArticleDOI
TL;DR: SciRepos interface with the ICT services of research infrastructures to intercept and publish research products while providing researchers with social networking tools for discovery, notification, sharing, discussion, and assessment of research products.
Abstract: Information and communication technology (ICT) advances in research infrastructures are continuously changing the way research and scientific communication are performed. Scientists, funders, and organizations are moving the paradigm of "research publishing" well beyond traditional articles. The aim is to pursue an holistic approach where publishing includes any product (e.g. publications, datasets, experiments, software, web sites, blogs) resulting from a research activity and relevant to the interpretation, evaluation, and reuse of the activity or part of it. The implementation of this vision is today mainly inspired by literature scientific communication workflows, which separate the "where" research is conducted from the "where" research is published and shared. In this paper we claim that this model cannot fit well with scientific communication practice envisaged in Science 2.0 settings. We present the idea of Science 2.0 Repositories (SciRepos), which meet publishing requirements arising in Science 2.0 by blurring the distinction between research life-cycle and research publishing. SciRepos interface with the ICT services of research infrastructures to intercept and publish research products while providing researchers with social networking tools for discovery, notification, sharing, discussion, and assessment of research products.

28 citations

Book ChapterDOI
25 May 2014
TL;DR: The first prototype of the Dendro platform is presented, designed to help researchers use concepts from domain-specific ontologies to collaboratively describe and share datasets within their groups.
Abstract: Research datasets in the so-called “long-tail of science” are easily lost after their primary use. Support for preservation, if available, is hard to fit in the research agenda. Our previous work has provided evidence that dataset creators are motivated to spend time on data description, especially if this also facilitates data exchange within a group or a project. This activity should take place early in the data generation process, when it can be regarded as an actual part of data creation. We present the first prototype of the Dendro platform, designed to help researchers use concepts from domain-specific ontologies to collaboratively describe and share datasets within their groups. Unlike existing solutions, ontologies are used at the core of the data storage and querying layer, enabling users to establish meaningful domain-specific links between data, for any domain. The platform is currently being tested with research groups from the University of Porto.

24 citations

01 Jan 2009

24 citations


"A comparison of research data manag..." refers background in this paper

  • ...Institutions are also motivated to have their data recognized and preserved according to the requirements of funding institutions [16, 25]....

    [...]

Book
01 Nov 2014

24 citations


"A comparison of research data manag..." refers background in this paper

  • ...Several comparative studies between existing solutions were already carried out in order to evaluate different aspects of each implementation, confirming that this is an issue with increasing importance [3, 6, 15]....

    [...]

Frequently Asked Questions (1)
Q1. What are the contributions in this paper?

This paper is a synthetic overview of current platforms that can be used for data management purposes. Adopting a pragmatic view on data management, the paper focuses on solutions that can be adopted in the long tail of science, where investments in tools and manpower are modest. First, a broad set of data management platforms is presented—some designed for institutional repositories and digital libraries—to select a short list of the more promising ones for data management. This paper is an extended version of a previously published comparative study. The results show that there is still plenty of room for improvement, mainly regarding the specificity of data description in different domains, as well as the potential for integration of the data management platforms with existing research management tools.