
Journal ArticleDOI

A comparison of research data management platforms: architecture, flexible metadata and interoperability

01 Nov 2017-Universal Access in The Information Society (Springer Berlin Heidelberg)-Vol. 16, Iss: 4, pp 851-862

TL;DR: A synthetic overview of current platforms that can be used for data management purposes. The comparison shows that there is still plenty of room for improvement, mainly regarding the specificity of data description in different domains, as well as the potential for integrating data management platforms with existing research management tools.

Abstract: Research data management is rapidly becoming a regular concern for researchers, and institutions need to provide them with platforms to support data organization and preparation for publication. Some institutions have adopted institutional repositories as the basis for data deposit, whereas others are experimenting with richer environments for data description, in spite of the diversity of existing workflows. This paper is a synthetic overview of current platforms that can be used for data management purposes. Adopting a pragmatic view on data management, the paper focuses on solutions that can be adopted in the long tail of science, where investments in tools and manpower are modest. First, a broad set of data management platforms is presented—some designed for institutional repositories and digital libraries—to select a short list of the more promising ones for data management. These platforms are compared considering their architecture, support for metadata, existing programming interfaces, as well as their search mechanisms and community acceptance. In this process, the stakeholders’ requirements are also taken into account. The results show that there is still plenty of room for improvement, mainly regarding the specificity of data description in different domains, as well as the potential for integration of the data management platforms with existing research management tools. Nevertheless, depending on the context, some platforms can meet all or part of the stakeholders’ requirements.

Topics: Data management (62%), Digital firm (62%), Metadata (57%), Workflow (52%), Interoperability (52%)

Summary (2 min read)

1 Introduction

  • The number of published scholarly papers is steadily increasing, and there is a growing awareness of the importance, diversity and complexity of data generated in research contexts [25].
  • Implementation costs, architecture, interoperability, content dissemination capabilities, implemented search features and community acceptance are also taken into consideration.
  • This evaluation considers aspects relevant to the authors’ ongoing work, focused on finding solutions to research data management, and takes into consideration their past experience in this field [33].
  • Moreover, the authors look at staging platforms, which are especially tailored to capture metadata records as they are produced, offering researchers an integrated environment for their management along with the data.
  • As datasets become organised and described, their value and their potential for reuse will prompt further preservation actions.

3 Scope of the analysis

  • The stakeholders in the data management workflow can greatly influence whether research data is reused.
  • The selection of platforms in the analysis acknowledges their role, as well as the importance of the adoption of community standards to help with data description and management in the long run.
  • On the other hand, such solutions are usually harder to install and maintain by institutions in the so-called long tail of science—institutions that create large numbers of small datasets, though do not possess the necessary financial resources and preservation expertise to support a complete preservation workflow [18].
  • The Fedora framework is used by some institutions, and is also under active development, with the recent release of Fedora 4.
  • The former includes aspects such as how they are deployed into a production environment, the locations where they keep their data, whether their source code is available, and other aspects that are related to the compliance with preservation best practices.

4 Platform comparison

  • Based on the selection of the evaluation scope, this section addresses the comparison of the platforms according to key features that can help in the selection of a platform for data management.
  • Adopting a dynamic approach to data management can make tasks easier for researchers and motivate them to use the data management platform as part of their daily research activities, while they are working on the data.
  • This platform is flexible, available under an open-source license, and compatible with several metadata representations, while still providing a complete API.
  • While the evaluated platforms have different description requirements upon deposit, most of them lack support for domain-specific metadata schemas.
  • This search feature makes it easier for researchers to find the datasets that are from relevant domains and belong to specific collections or similar dataset categories (the concept varies between platforms as they have different organizational structures).

5 Data staging platforms

  • Most of the analyzed solutions target data repositories, i.e. the end of the research workflow.
  • These requirements have been identified by several research and data management institutions, which have implemented integrated solutions for researchers to manage data not only when it is created, but also throughout the entire research workflow.
  • It provides researchers with 20 GB of storage for free, and is integrated with other modules for dataset sharing and staging, including some computational processing on the stored data.
  • Dendro is a single solution targeted at improving the overall availability and quality of research data.
  • Curators can expand the platform’s data model by loading ontologies that specify domain-specific or generic metadata descriptors that can then be used by researchers in their projects.
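A minimal sketch of this ontology-loading mechanism follows. The descriptor names, the flat dictionary structure, and the validation step are all hypothetical illustrations, not Dendro's actual data model:

```python
# Hypothetical sketch: curators register descriptor definitions (as an
# ontology would provide them), and researcher-supplied metadata is then
# checked against the registered descriptors. Not Dendro's real internals.
registered = {}

def load_ontology(descriptors):
    # Each descriptor maps a name to a human-readable label.
    registered.update(descriptors)

def validate(metadata):
    # Accept only descriptors the curators have registered; return the rest.
    return [k for k in metadata if k not in registered]

load_ontology({"dc:title": "Title", "dc:creator": "Creator"})   # generic
load_ontology({"bio:sampleCount": "Number of samples"})         # domain-specific

errors = validate({"dc:title": "Growth assay", "bio:sampleCount": 12})
ok = (errors == [])
bad = validate({"foo:unheard": 1})
```

Loading a new ontology extends what researchers can describe without changing the platform's core schema, which is the point of the mechanism described above.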

6 Conclusion

  • The evaluation showed that it can be hard to select a platform without first performing a careful study of the requirements of all stakeholders.
  • Its features and extensive API also make it possible to use this repository to manage research data, using its key-value dictionary to store any domain-level descriptors.
  • A very important factor to consider is also the control over where the data is stored.
  • The authors consider that these solutions should be compared to other collaborative solutions such as Dendro, a research data management solution currently under development.
  • This should, of course, be done while taking into consideration available metadata standards that can contribute to overall better conditions for long-term preservation [36].
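The key-value dictionary mentioned above can be sketched as follows. The structure is loosely modeled on CKAN's "extras" list of key/value pairs; the dataset and descriptor names are fabricated for illustration:

```python
# A dataset entry whose fixed fields are complemented by an open list of
# key/value "extras", loosely modeled on CKAN's extras mechanism.
dataset = {
    "name": "coastal-erosion-2015",
    "title": "Coastal erosion measurements",
    "extras": [],
}

def add_extra(ds, key, value):
    # Domain-level descriptors go into the key-value dictionary, so the
    # core schema never needs to change.
    ds["extras"].append({"key": key, "value": value})

add_extra(dataset, "granularity", "daily")
add_extra(dataset, "sensor", "LIDAR")
extras = {e["key"]: e["value"] for e in dataset["extras"]}
```

The trade-off is that such descriptors carry no schema of their own, which is why the conclusion stresses aligning them with available metadata standards.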


A comparison of research data management platforms
Architecture, flexible metadata and interoperability
Ricardo Carvalho Amorim, João Aguiar Castro, João Rocha da Silva,
Cristina Ribeiro
This paper is an extended version of a previously published comparative study. Please refer to the WCIST 2015 conference proceedings (doi: 10.1007/978-3-319-16486-1).

Ricardo Carvalho Amorim, INESC TEC, Faculdade de Engenharia da Universidade do Porto. E-mail: ricardo.amorim3@gmail.com
João Aguiar Castro, INESC TEC, Faculdade de Engenharia da Universidade do Porto. E-mail: joaoaguiarcastro@gmail.com
João Rocha da Silva, INESC TEC, Faculdade de Engenharia da Universidade do Porto. E-mail: joaorosilva@gmail.com
Cristina Ribeiro, INESC TEC, Faculdade de Engenharia da Universidade do Porto. E-mail: mcr@fe.up.pt
This is a post-peer-review, pre-copyedit version of an article published in Universal Access in the Information Society. The final authenticated version is available online at: https://doi.org/10.1007/s10209-016-0475-y

1 Introduction

The number of published scholarly papers is steadily increasing, and there is a growing awareness of the importance, diversity and complexity of data generated in research contexts [25]. The management of these assets is currently a concern for both researchers and institutions who have to streamline scholarly communication, while keeping record of research contributions and ensuring the correct licensing of their contents [23,18]. At the same time, academic institutions have new mandates, requiring data management activities to be carried out during the research projects, as a part of research grant contracts [14,26]. These activities are invariably supported by software platforms, increasing the demand for such infrastructures.

This paper presents an overview of several prominent research data management platforms that can be put in place by an institution to support part of its research data management workflow. It starts by identifying a set of well known repositories that are currently being used for either publications or data management, discussing their use in several research institutions. Then, focus moves to their fitness to handle research data, namely their domain-specific metadata requirements and preservation guidelines. Implementation costs, architecture, interoperability, content dissemination capabilities, implemented search features and community acceptance are also taken into consideration. When faced with the many alternatives currently available, it can be difficult for institutions to choose a suitable platform to meet their specific requirements. Several comparative studies between existing solutions were already carried out in order to evaluate different aspects of each implementation, confirming that this is an issue with increasing importance [16,3,6]. This evaluation considers aspects relevant to the authors’ ongoing work, focused on finding solutions to research data management, and takes into consideration their past experience in this field [33]. This experience has provided insights on specific, local needs that can influence the adoption of a platform and therefore the success in its deployment.

It is clear that the effort in creating metadata for research datasets is very different from what is required for research publications. While publications can be accurately described by librarians, good quality metadata for a dataset requires the contribution of the researchers involved in its production. Their knowledge of the domain is required to adequately document the dataset production context so that others can reuse it. Involving the researchers in the deposit stage is a challenge, as the investment in metadata production for data publication and sharing is typically higher than that required for the addition of notes that are only intended for their peers in a research group [7].

Moreover, the authors look at staging platforms, which are especially tailored to capture metadata records as they are produced, offering researchers an integrated environment for their management along with the data. As this is an area with several proposals in active development, EUDAT, which includes tools for data staging, and Dendro, a platform proposed for engaging researchers in data description, taking into account the need for data and metadata organisation, will be contemplated.

Staging platforms are capable of exporting the enclosed datasets and metadata records to research data repositories. The platforms selected for the analysis in the sequel as candidates for use are considered as research data management repositories for datasets in the long tail of science, as they are designed with sharing and dissemination in mind. Together, staging platforms and research data repositories provide the tools to handle the stages of the research workflow. Long-term preservation imposes further requirements, and other tools may be necessary to satisfy them. However, as datasets become organised and described, their value and their potential for reuse will prompt further preservation actions.
2 From publications to data management

The growth in the number of research publications, combined with a strong drive towards open access policies [8,10], continues to foster the development of open-source platforms for managing bibliographic records. While data citation is not yet a widespread practice, the importance of citable datasets is growing. Until a culture of data citation is widely adopted, however, many research groups are opting to publish so-called “data papers”, which are more easily citable than datasets. Data papers serve not only as a reference to datasets but also document their production context [9].

As data management becomes an increasingly important part of the research workflow [24], solutions designed for managing research data are being actively developed by both open-source communities and data management-related companies. As with institutional repositories, many of their design and development challenges have to do with description and long-term preservation of research data. There are, however, at least two fundamental differences between publications and datasets: the latter are often purely numeric, making it very hard to derive any type of metadata by simply looking at their contents; also, datasets require detailed, domain-specific descriptions to be correctly interpreted. Metadata requirements can also vary greatly from domain to domain, requiring repository data models to be flexible enough to adequately represent these records [35]. The effort invested in adequate dataset description is worthwhile, since it has been shown that research publications that provide access to their base data consistently yield higher citation rates than those that do not [27].

As these repositories deal with a reasonably small set of managed formats for deposit, several reference models, such as the OAIS (Open Archival Information System) [12], are currently in use to ensure preservation and to promote metadata interchange and dissemination. Besides capturing the available metadata during the ingestion process, data repositories often distribute this information to other instances, improving the publications’ visibility through specialised research search engines or repository indexers. While the former focus on querying each repository for exposed contents, the latter help users find data repositories that match their needs—such as repositories from a specific domain or storing data from a specific community. Governmental institutions are also promoting the disclosure of open data to improve citizen commitment and government transparency, and this motivates the use of data management platforms in this context.
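The flexibility requirement for repository data models can be illustrated with a minimal sketch. The record structure and field names below are hypothetical and not taken from any of the platforms under comparison: a fixed set of generic descriptors is paired with an open set of domain-specific ones.

```python
from dataclasses import dataclass, field

# Generic descriptors every dataset gets, plus an open-ended mapping for
# domain-specific ones: a flat, fixed schema cannot anticipate the latter.
@dataclass
class DatasetRecord:
    title: str
    creator: str
    date: str
    domain_metadata: dict = field(default_factory=dict)

    def describe(self, key, value):
        # Domain descriptors are added freely; validation against a
        # community vocabulary would happen here in a real platform.
        self.domain_metadata[key] = value

rec = DatasetRecord("Sea-surface temperatures", "J. Doe", "2015-10-01")
rec.describe("samplingDepth", "10 m")       # oceanography-specific
rec.describe("instrumentType", "CTD probe")
```

A librarian could fill in the three generic fields, but only the producing researcher can supply the domain descriptors, which mirrors the division of labour discussed above.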
2.1 An overview on existing repositories

While depositing and accessing publications from different domains is already possible in most institutions, ensuring the same level of accessibility to data resources is still challenging, and different solutions are being experimented with to expose and share data in some communities. Addressing this issue, we synthesize a preliminary classification of these solutions according to their specific purpose: they are either targeting staging, early research activities or managing deposited datasets and making them available to the community.

Table 1 identifies features of the selected platforms that may render them convenient for data management. To build the table, the authors resorted to the documentation of the platforms, and to basic experiments with demonstration instances, whenever available. In the first column, under “Registered repositories”, is the number of running instances of each platform, according to the OpenDOAR platform as of mid-October 2015.

In the analysis, five evaluation criteria that can be relevant for an institution to make a coarse-grained assessment of the solutions are considered. Some existing tools were excluded from this first analysis, mainly because some of their characteristics place them outside of the scope of this work. This is the case of platforms specifically targeting research publications (and that cannot be easily modified for managing data), and heavy-weight platforms targeted at long-term preservation. Also excluded were those that, from a technical point of view, do not comply with desirable requirements for this domain, such as adopting an open-source approach or providing access to their features via comprehensive APIs.

By comparing the number of existing installations, it is natural to assume that a large number of instances for a platform is a good indication of the existence of support for its implementation. Repositories such as DSpace are widely used among institutions to manage publications. Therefore, institutions using DSpace to manage publications can use their support for the platform to expand or replicate the repository and meet additional requirements.

It is important to mention that some repositories do not implement interfaces with existing repository indexers, and this may cause the OpenDOAR statistics to show a value lower than the actual number of existing installations. Moreover, services provided by EUDAT, Figshare and Zenodo, for instance, consist of a single installation that receives all the deposited data, rather than a distributed array of manageable installations.

Government-supported platforms such as CKAN are currently being used as part of the open government initiatives in several countries, allowing the disclosure of data related to sensitive issues such as budget execution, and their aim is to vouch for transparency and credibility towards tax payers [21,20]. Although not specifically tailored to meet research data management requirements, these data-focused repositories also count with an increasing number of instances supporting complex research data management workflows [38], even at universities [1].

Access to the source code can also be a valuable criterion for selecting a platform, primarily to avoid vendor lock-in, which is usually associated with commercial software or other provided services. Vendor lock-in is undesirable from a preservation point of view as it places the maintenance of the platform (and consequently the data stored inside) in the hands of a single vendor, that may not be able to provide support indefinitely. The availability of a platform’s source code also allows additional modifications to be carried out in order to create customized workflows—examples include improved metadata capabilities and data browsing functionalities. Commercial solutions such as ContentDM may incur high costs for the subscription fees, which can make them cost-prohibitive for non-profit organizations or small research institutions. In some cases only a small portion of the source code for the entire solution is actually available to the public. This is the case with EUDAT, where only the B2Share module is currently open [2]—the remaining modules are unavailable to date.

From an integration point of view, the existence of an API can allow for further development and help with the repository maintenance, as the software ages. Solutions that do not, at least partially, comply with this requirement may hinder the integration with external platforms to improve the visibility of existing contents. The lack of an API creates a barrier to the development of tools to support a platform in specific environments, such as laboratories that frequently produce data to be directly deposited and disclosed. Finally, regarding long-term preservation, some platforms fail to provide unique identifiers for the resources upon deposit, making persistent references to data and data citation in publications hard.

[1] http://ckan.org/2013/11/28/ckan4rdm-st-andrews/
[2] Source code repository for B2Share is hosted via GitHub at https://github.com/EUDAT-B2SHARE/b2share
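As an illustration of the kind of integration such APIs enable, the sketch below builds a search request against CKAN's Action API (the `/api/3/action/package_search` endpoint is CKAN's documented convention; the base URL is a placeholder) and parses the JSON envelope CKAN returns. The sample payload is fabricated for illustration rather than fetched live.

```python
import json
from urllib.parse import urlencode

# Build a CKAN Action API search request; CKAN exposes its features under
# /api/3/action/<action>. The host name here is a placeholder.
base = "https://ckan.example.org/api/3/action/package_search"
url = base + "?" + urlencode({"q": "climate", "rows": 2})

# A trimmed-down, hand-written sample of the JSON envelope CKAN returns;
# a live client would fetch it with urllib.request.urlopen(url).
sample = '''{"success": true,
             "result": {"count": 2,
                        "results": [{"name": "climate-obs-2014"},
                                    {"name": "sea-ice-extent"}]}}'''
payload = json.loads(sample)
names = []
if payload["success"]:
    names = [pkg["name"] for pkg in payload["result"]["results"]]
```

A laboratory script could use the same call shape to deposit or list datasets automatically, which is exactly the integration scenario the paragraph above describes.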

4 Ricardo C arvalho Amorim, João Aguiar Castro, João Rocha da Silva, Cristina Ribeiro
Table 1: Limitations of the identified repository solutions (sources: the OpenDOAR platform and the corresponding websites; some limitations apply only through additional plug-ins, or only partially). Criteria compared: closed source; no API; no unique identifiers; complex installation or setup; no OAI-PMH compliance. Registered repositories per platform (OpenDOAR, mid-October 2015): CKAN 139, ContentDM 53, Dataverse 2, Digital Commons 141, DSpace 1305, ePrints 407, Fedora 41, Greenstone 51, Invenio 20, Omeka 4, SciELO 18, WEKO 40 (no data for EUDAT, Figshare and Zenodo).
Support for flexible research workflows makes some repository solutions attractive to smaller institutions looking for solutions to implement their data management workflows. Both DSpace and ePrints, for instance, are quite common as institutional repositories to manage publications, as they offer broad compatibility with the harvesting protocol OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting) [22] and with preservation guidelines according to the OAIS model. OAIS requires the existence of different packages with specific purposes, namely SIP (Submission Information Package), AIP (Archival Information Package) and DIP (Dissemination Information Package). The OAIS reference model defines SIP as a representation of packaged items to be deposited in the repository. AIP, on the other hand, represents the packaged digital objects within the OAIS-compliant system, and DIP holds one or several digital artifacts and their representation information, in such a format that can be interpreted by potential users.
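The three package types can be sketched as plain data structures. This is a deliberately simplified illustration of the SIP-to-AIP-to-DIP flow, not a conformant OAIS implementation: real archival packages also carry fixity, provenance and representation information.

```python
from dataclasses import dataclass

# Minimal stand-ins for the three OAIS package types.
@dataclass
class SIP:                      # what the depositor submits
    files: list
    metadata: dict

@dataclass
class AIP:                      # what the archive stores internally
    files: list
    metadata: dict
    archival_id: str

def ingest(sip: SIP, next_id: int) -> AIP:
    # Ingest turns a submission into an archival package with a stable id.
    return AIP(files=list(sip.files), metadata=dict(sip.metadata),
               archival_id=f"aip-{next_id:06d}")

def disseminate(aip: AIP, wanted: list) -> dict:
    # A DIP exposes a chosen subset of an AIP in a consumer-friendly form.
    return {"id": aip.archival_id,
            "files": [f for f in aip.files if f in wanted],
            "metadata": aip.metadata}

sip = SIP(files=["data.csv", "readme.txt"], metadata={"title": "Field survey"})
aip = ingest(sip, 1)
dip = disseminate(aip, ["data.csv"])
```

The separation matters because the stored form (AIP) and the delivered form (DIP) can evolve independently of what depositors originally submitted.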
2.2 Stakeholders in research data management

Several stakeholders are involved in dataset description throughout the data management workflow, playing an important part in their management and dissemination [24,7]. These stakeholders—researchers, research institutions, curators, harvesters, and developers—play a governing role in defining the main requirements of a data repository for the management of research outputs. As key metadata providers, researchers are responsible for the description of research data. They are not necessarily knowledgeable in data management practices, but can provide domain-specific, more or less formal descriptions to complement generic metadata. This captures the essential data production context, making it possible for other researchers to reuse the data [7]. As data creators, researchers can play a central role in data deposit by selecting appropriate file formats for their datasets, preparing their structure and packaging them appropriately [15]. Institutions are also motivated to have their data recognized and preserved according to the requirements of funding institutions [17,26]. In this regard, institutions value metadata in compliance with standards, which makes data ready for inclusion in networked environments, therefore increasing their visibility. To make sure that this context is correctly passed, along with the data, to the preservation stage, curators are mainly interested in maintaining data quality and integrity over time. Usually, curators are information experts, so it is expected that their close collaboration with researchers can result in both detailed and compliant metadata records.

Considering data dissemination and reuse, harvesters can be either individuals looking for specific data or services which index the content of several repositories. These services can make particularly good use of established protocols, such as the OAI-PMH, to retrieve metadata from different sources and create an interface to expose the indexed resources. Finally, contributing to the improvement and expansion of these repositories over time, developers are concerned with the underlying technologies, and also in having extensive APIs to promote integration with other tools.
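A harvester interaction over OAI-PMH can be sketched as follows. The request URL follows the protocol (a plain HTTP GET with `verb=ListRecords` and `metadataPrefix=oai_dc`); the repository base URL is a placeholder, and the response is a trimmed, hand-written sample rather than output from a real repository.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# An OAI-PMH harvest is a plain HTTP GET; ListRecords with the oai_dc
# prefix asks for Dublin Core records.
base = "https://repo.example.org/oai"
request_url = base + "?" + urlencode(
    {"verb": "ListRecords", "metadataPrefix": "oai_dc"})

# Trimmed sample of a ListRecords response; a live harvester would fetch
# request_url and page through resumption tokens.
sample = """<?xml version='1.0'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <dc xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Survey dataset 2015</dc:title>
      </dc>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(sample)
titles = [t.text for t in root.findall(".//dc:title", ns)]
```

Because the request and response shapes are standardized, one harvester can aggregate metadata from any compliant repository, which is what makes OAI-PMH support such a recurring criterion in the comparison.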
3 Scope of the analysis

The stakeholders in the data management workflow can greatly influence whether research data is reused. The selection of platforms in the analysis acknowledges their role, as well as the importance of the adoption of community standards to help with data description and management in the long run.

For this comparison, data management platforms with instances running at both research and government institutions have been considered, namely DSpace, CKAN, Zenodo, Figshare, ePrints, Fedora and EUDAT. If the long-term preservation of research assets is an important requirement of the stakeholders in question, other alternatives such as RODA [30] and Archivematica may also be considered strong candidates, since they implement comprehensive preservation guidelines not only for the digital objects themselves but also for their whole life cycle and associated processes. On one hand, these platforms have a strong concern with long-term preservation by strictly following existing standards such as OAIS, PREMIS or METS, which cover the different stages of a long-term preservation workflow. On the other hand, such solutions are usually harder to install and maintain by institutions in the so-called long tail of science—institutions that create large numbers of small datasets, though do not possess the necessary financial resources and preservation expertise to support a complete preservation workflow [18].

The Fedora framework (http://www.fedora-commons.org/) is used by some institutions, and is also under active development, with the recent release of Fedora 4. The fact that it is designed as a framework to be fully customized and instantiated, instead of being a “turnkey” solution, places Fedora on a different level that cannot be directly compared with other solutions. Two open-source examples of Fedora implementations are Hydra (http://projecthydra.org/) and Islandora (http://islandora.ca/). Both are capable of handling research workflows, and use the best-practices approach already implemented in the core Fedora framework. Although these are not present in the comparison table, this section will also consider their strengths when compared to the other platforms.

An overview of the previously identified stakeholders led to the selection of two important dimensions for the assessment of the platform features: their architecture and their metadata and dissemination capabilities. The former includes aspects such as how they are deployed into a production environment, the locations where they keep their data, whether their source code is available, and other aspects that are related to the compliance with preservation best practices. The latter focuses on how resource-related metadata is handled and the level of compliance of these records with established standards and exchange protocols. Other important aspects are their adoption within the research communities and the availability of support for extensions. Table 2 shows an overview of the results of our evaluation.
4 Platform comparison

Based on the selection of the evaluation scope, this section addresses the comparison of the platforms according to key features that can help in the selection of a platform for data management. Table 2 groups these features in two categories: (i) Architecture, for structural-related characteristics; and (ii) Metadata and dissemination, for those related to flexible description and interoperability. This analysis is guided by the use cases in the research data management environment.

4.1 Architecture

Regarding the architecture of the platforms, several aspects are considered. From the point of view of a research institution, a quick and simple deployment of the selected platform is an important aspect. There are two main scenarios: the institution can either outsource an external service or install and customize its own repository, supporting the infrastructure maintenance costs. Contracting a service provided by a dedicated company such as Figshare or Zenodo delegates platform maintenance for a fee. The service-based approach may not be viable in some scenarios, as some researchers or institutions may be reluctant to deposit their data in a platform outside their control [11]. DSpace, ePrints, CKAN or any Fedora-based solution can be installed and run completely under the control of the research institution and therefore offer better control over the stored data. As open-source solutions, they also have
Citations



Journal ArticleDOI
01 Jan 2018
Abstract: Citations are the cornerstone of knowledge propagation and the primary means of assessing the quality of research, as well as directing investments in science. Science is increasingly becoming "data-intensive," where large volumes of data are collected and analyzed to discover complex patterns through simulations and experiments, and most scientific reference works have been replaced by online curated data sets. Yet, given a data set, there is no quantitative, consistent, and established way of knowing how it has been used over time, who contributed to its curation, what results have been yielded, or what value it has. The development of a theory and practice of data citation is fundamental for considering data as first-class research objects with the same relevance and centrality of traditional scientific products. Many works in recent years have discussed data citation from different viewpoints: illustrating why data citation is needed, defining the principles and outlining recommendations for data citation systems, and providing computational methods for addressing specific issues of data citation. The current panorama is many-faceted and an overall view that brings together diverse aspects of this topic is still missing. Therefore, this paper aims to describe the lay of the land for data citation, both from the theoretical (the why and what) and the practical (the how) angle.

68 citations


Proceedings ArticleDOI
22 Jul 2018
TL;DR: Some of the challenges encountered in designing and developing a system that can be easily adapted to different scientific areas are discussed, including support for large amounts of data, horizontal scaling of domain specific preprocessing algorithms, and ability to provide new data visualizations in the web browser.
Abstract: Clowder is an open source data management system to support data curation of long tail data and metadata across multiple research domains and diverse data types. Institutions and labs can install and customize their own instance of the framework on local hardware or on remote cloud computing resources to provide a shared service to distributed communities of researchers. Data can be ingested directly from instruments or manually uploaded by users and then shared with remote collaborators using a web front end. We discuss some of the challenges encountered in designing and developing a system that can be easily adapted to different scientific areas including digital preservation, geoscience, material science, medicine, social science, cultural heritage and the arts. Some of these challenges include support for large amounts of data, horizontal scaling of domain specific preprocessing algorithms, ability to provide new data visualizations in the web browser, a comprehensive Web service API for automatic data ingestion and curation, a suite of social annotation and metadata management features to support data annotation by communities of users and algorithms, and a web based front-end to interact with code running on heterogeneous clusters, including HPC resources.

17 citations


Cites background from "A comparison of research data manag..."

  • ...CKAN1 is an open source solution that each institution can deploy independently [6]; it is easily extensible through its API....

    [...]

  • ...In B2Share [7] it is possible to set up custom metadata entries when selecting the project into which the dataset will be uploaded [6]....

    [...]

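The excerpt above notes that CKAN is easily extensible through its API. As a minimal sketch, assuming CKAN's documented Action API (the `package_search` action) and an invented instance URL, a client-side query helper could look like the following; the sample response is canned rather than fetched over the network, so the snippet runs offline:

```python
import json
from urllib.parse import urlencode

# Hypothetical CKAN instance; the endpoint path follows CKAN's Action API.
BASE = "https://demo.ckan.example/api/3/action"

def package_search_url(query, rows=10):
    """Build a package_search request URL (no network call is made here)."""
    return f"{BASE}/package_search?{urlencode({'q': query, 'rows': rows})}"

def extract_titles(response_text):
    """Pull dataset titles out of a package_search JSON response."""
    payload = json.loads(response_text)
    if not payload.get("success"):
        raise RuntimeError("CKAN reported an error")
    return [pkg["title"] for pkg in payload["result"]["results"]]

# Canned response in the shape CKAN returns for package_search.
sample = json.dumps({
    "success": True,
    "result": {"count": 1, "results": [{"name": "river-levels",
                                        "title": "River level measurements"}]},
})

print(package_search_url("river levels"))
print(extract_titles(sample))  # → ['River level measurements']
```

In a real deployment the URL would be fetched with any HTTP client and the response body passed to `extract_titles`.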

DissertationDOI
05 Jul 2018
TL;DR: This work proposes a data-centric approach to semantically describe processes as data flows: the Datanode ontology, which comprises a hierarchy of the possible relations between data objects and shows how these components can be designed, how they can be effectively managed, and how to reason efficiently with them.
Abstract: Data-oriented systems and applications are at the centre of current developments of the World Wide Web (WWW). On the Web of Data (WoD), information sources can be accessed and processed for many purposes. Users need to be aware of any licences or terms of use, which are associated with the data sources they want to use. Conversely, publishers need support in assigning the appropriate policies alongside the data they distribute. In this work, we tackle the problem of policy propagation in data flows - an expression that refers to the way data is consumed, manipulated and produced within processes. We pose the question of what kind of components are required, and how they can be acquired, managed, and deployed, to support users on deciding what policies propagate to the output of a data-intensive system from the ones associated with its input. We observe three scenarios: applications of the Semantic Web, workflow reuse in Open Science, and the exploitation of urban data in City Data Hubs. Starting from the analysis of Semantic Web applications, we propose a data-centric approach to semantically describe processes as data flows: the Datanode ontology, which comprises a hierarchy of the possible relations between data objects. By means of Policy Propagation Rules, it is possible to link data flow steps and policies derivable from semantic descriptions of data licences. We show how these components can be designed, how they can be effectively managed, and how to reason efficiently with them. In a second phase, the developed components are verified using a Smart City Data Hub as a case study, where we developed an end-to-end solution for policy propagation. Finally, we evaluate our approach and report on a user study aimed at assessing both the quality and the value of the proposed solution.

12 citations


Cites background from "A comparison of research data manag..."

  • ...Recently, scientific data repositories started providing means to store and publish data related to research articles with the aim to enable persistence and reuse [Amorim et al. (2016); Burton et al. (2015); Candela et al. (2015); Eschenfelder and Johnson (2014)]. Among these are initiatives like Dryad1 - established by a group of journals adopting a common joint data archiving policy (JDAP) and Zenodo2, born in the context of the EU FP7 project OpenAIREplus [Manghi et al. (2012)]. The concept of scientific data is defined as “entities used as evidence of phenomena for the purpose of research or scholarship” [Borgman (2015)]. This definition applies to a large variety of data artefacts, spanning from data tables to relational databases, textual documents or even binary data like photographs, charts etc. The characterisation of the terms of use of these data artefacts (licensing) is fundamental to enable an appropriate and informed reuse by third parties. These terms vary depending on each case, and can include the protection of the interests of the dataset producer (e.g., the requirement for attribution) or the interests of subjects involved in the study, for example a requirement for data masking with the objective of protecting personal data [Eschenfelder and Johnson (2014)]....

    [...]

  • ...Indeed, we record very limited support for licenses and terms and conditions management in existing data cataloguing approaches [Amorim et al. (2016); Assaf et al. (2015)]....

    [...]



Journal ArticleDOI
TL;DR: An assessment framework to evaluate environmental open data in urban platforms under the data life cycle approach is proposed and the results of its application in six data portals are illustrated.
Abstract: Through a literature review, this paper proposes an assessment framework to evaluate environmental open data in urban platforms under the data life cycle approach. For this purpose, a set of quanti...

12 citations


References

Journal ArticleDOI
TL;DR: The thinking about digital preservation over the past five years has advanced to the point where the needs are widely recognized and well defined, the technical approaches at least superficially mapped out, and the need for action is now clear.
Abstract: In the fall of 2002, something extraordinary occurred in the continuing networked information revolution, shifting the dynamic among individually driven innovation, institutional progress, and the evolution of disciplinary scholarly practices. The development of institutional repositories emerged as a new strategy that allows universities to apply serious, systematic leverage to accelerate changes taking place in scholarship and scholarly communication, both moving beyond their historic relatively passive role of supporting established publishers in modernizing scholarly publishing through the licensing of digital content, and also scaling up beyond ad-hoc alliances, partnerships, and support arrangements with a few select faculty pioneers exploring more transformative new uses of the digital medium. Many technology trends and development efforts came together to make this strategy possible. Online storage costs have dropped significantly; repositories are now affordable. Standards like the open archives metadata harvesting protocol are now in place; some progress has also been made on the standards for the underlying metadata itself. The thinking about digital preservation over the past five years has advanced to the point where the needs are widely recognized and well defined, the technical approaches at least superficially mapped out, and the need for action is now clear. The development of free, publicly accessible journal article collections in disciplines such as high-energy physics has demonstrated ways in which the network can change scholarly communication by altering dissemination and access patterns; separately, the development of a series of extraordinary digital works had at least suggested the potential of creative authorship specifically for the digital medium to transform the presentation and transmission of scholarship. 
The leadership of the Massachusetts Institute of Technology (MIT) in the development and deployment of the DSpace institutional repository system, created in collaboration with the Hewlett Packard Corporation,

908 citations


"A comparison of research data manag..." refers background in this paper

  • ...keeping record of research contributions and ensuring the correct licensing of their contents [17, 22]....

    [...]


01 Jan 2013
TL;DR: Four rationales for sharing data are examined, drawing examples from the sciences, social sciences, and humanities: to reproduce or to verify research, to make results of publicly funded research available to the public, to enable others to ask new questions of extant data, and to advance the state of research and innovation.
Abstract: We must all accept that science is data and that data are science, and thus provide for, and justify the need for the support of, much-improved data curation. (Hanson, Sugden, & Alberts) Researchers are producing an unprecedented deluge of data by using new methods and instrumentation. Others may wish to mine these data for new discoveries and innovations. However, research data are not readily available as sharing is common in only a few fields such as astronomy and genomics. Data sharing practices in other fields vary widely. Moreover, research data take many forms, are handled in many ways, using many approaches, and often are difficult to interpret once removed from their initial context. Data sharing is thus a conundrum. Four rationales for sharing data are examined, drawing examples from the sciences, social sciences, and humanities: (1) to reproduce or to verify research, (2) to make results of publicly funded research available to the public, (3) to enable others to ask new questions of extant data, and (4) to advance the state of research and innovation. These rationales differ by the arguments for sharing, by beneficiaries, and by the motivations and incentives of the many stakeholders involved. The challenges are to understand which data might be shared, by whom, with whom, under what conditions, why, and to what effects. Answers will inform data policy and practice. © 2012 Wiley Periodicals, Inc.

575 citations


"A comparison of research data manag..." refers background in this paper

  • ...Involving the researchers in the deposit stage is a challenge, as the investment in metadata production for data publication and sharing is typically higher than that required for the addition of notes that are only intended for their peers in a research group [7]....

    [...]

  • ...tial data production context, making it possible for other researchers to reuse the data [7]....

    [...]

  • ...Several stakeholders are involved in dataset description throughout the data management workflow, playing an important part in their management and dissemination [7, 23]....

    [...]

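The excerpts above stress the investment researchers must make in metadata production for data publication. As an illustrative sketch of how little machinery a basic description requires, the following serializes a flat Dublin Core record (element names from the DC Elements 1.1 vocabulary; the dataset values are invented) using only the Python standard library:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

def dc_record(fields):
    """Serialize a flat dict of Dublin Core elements to an XML string."""
    root = ET.Element("metadata")
    for name, value in fields.items():
        el = ET.SubElement(root, f"{{{DC}}}{name}")
        el.text = value
    return ET.tostring(root, encoding="unicode")

# Invented example values for a hypothetical dataset.
record = dc_record({
    "title": "River level measurements",
    "creator": "Hydrology Lab",
    "date": "2017-11-01",
    "description": "Hourly gauge readings for the Douro basin.",
})
print(record)
```

Richer, domain-specific schemas require more than flat key-value pairs, which is precisely where the description effort discussed above grows.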



Proceedings ArticleDOI
01 Jan 2001
TL;DR: The recent history of the OAI is described - its origins in promoting E-Prints, the broadening of its focus, the details of its technical standard for metadata harvesting, the applications of this standard, and future plans.
Abstract: The Open Archives Initiative (OAI) develops and promotes interoperability solutions that aim to facilitate the efficient dissemination of content. The roots of the OAI lie in the E-Print community. Over the last year its focus has been extended to include all content providers. This paper describes the recent history of the OAI - its origins in promoting E-Prints, the broadening of its focus, the details of its technical standard for metadata harvesting, the applications of this standard, and future plans.

413 citations

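The OAI-PMH standard summarized above exposes metadata harvesting through a small set of HTTP request "verbs". As a hedged sketch, assuming a hypothetical repository endpoint, the following builds a `ListRecords` request URL and extracts `dc:title` values from a trimmed-down response shaped after the OAI-PMH schema; no network access is needed:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base="https://repo.example/oai"):
    """Build a ListRecords request; verb and metadataPrefix come from OAI-PMH."""
    return base + "?" + urlencode({"verb": "ListRecords",
                                   "metadataPrefix": "oai_dc"})

def record_titles(xml_text):
    """Extract dc:title values from a ListRecords response document."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.iter(DC + "title")]

# Trimmed-down response following the namespaces mandated by OAI-PMH.
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>River level measurements</dc:title>
    </oai_dc:dc>
  </metadata></record></ListRecords>
</OAI-PMH>"""

print(list_records_url())
print(record_titles(sample))  # → ['River level measurements']
```

A harvester would repeat the request with the `resumptionToken` the protocol defines for paging, which this sketch omits.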

Book
01 Jun 2012
TL;DR: This document is a technical Recommended Practice for use in developing a broader consensus on what is required for an archive to provide permanent, or indefinite Long Term, preservation of digital information.
Abstract: This document is a technical Recommended Practice for use in developing a broader consensus on what is required for an archive to provide permanent, or indefinite Long Term, preservation of digital information. This Recommended Practice establishes a common framework of terms and concepts which make up an Open Archival Information System (OAIS). It allows existing and future archives to be more meaningfully compared and contrasted. It provides a basis for further standardization within an archival context and it should promote greater vendor awareness of, and support of, archival requirements. CCSDS has changed the classification of Reference Models from Blue (Recommended Standard) to Magenta (Recommended Practice). Through the process of normal evolution, it is expected that expansion, deletion, or modification of this document may occur. This Recommended Practice is therefore subject to CCSDS document management and change control procedures, which are defined in the Procedures Manual for the Consultative Committee for Space Data Systems. Current issue updates document based on input from user community (note). Current versions of CCSDS documents are maintained at the CCSDS Web site: http://www.ccsds.org/

407 citations


Additional excerpts

  • ...As these repositories deal with a reasonably small set of managed formats for deposit, several reference models, such as the Open Archival Information System (OAIS) [12], are currently in use to ensure preservation and to promote metadata interchange and dissemination....

    [...]


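OAIS, referenced in the excerpt above, is a conceptual reference model rather than a packaging format; a common concrete convention for preparing deposit packages is BagIt (RFC 8493), which is used here purely as an illustration of the packaging idea. The sketch writes a deliberately minimal bag: a `data/` payload, a `bagit.txt` declaration, and an MD5 manifest:

```python
import hashlib
import tempfile
from pathlib import Path

def make_bag(bag_dir, payload):
    """Write a minimal BagIt bag: data/ payload, bagit.txt, MD5 manifest."""
    bag = Path(bag_dir)
    (bag / "data").mkdir(parents=True, exist_ok=True)
    manifest_lines = []
    for name, content in payload.items():
        (bag / "data" / name).write_bytes(content)
        digest = hashlib.md5(content).hexdigest()
        manifest_lines.append(f"{digest}  data/{name}")
    (bag / "bagit.txt").write_text(
        "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")
    (bag / "manifest-md5.txt").write_text("\n".join(manifest_lines) + "\n")
    return bag

with tempfile.TemporaryDirectory() as tmp:
    bag = make_bag(tmp, {"readings.csv": b"time,level\n0,1.2\n"})
    print(sorted(p.name for p in bag.iterdir()))
    # → ['bagit.txt', 'data', 'manifest-md5.txt']
```

Production deposits would add tag files such as `bag-info.txt` and stronger checksums, but the structure above is enough to verify fixity of a payload.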

Frequently Asked Questions (1)
Q1. What are the contributions in this paper?

This paper is a synthetic overview of current platforms that can be used for data management purposes. Adopting a pragmatic view on data management, the paper focuses on solutions that can be adopted in the long tail of science, where investments in tools and manpower are modest. First, a broad set of data management platforms is presented—some designed for institutional repositories and digital libraries—to select a short list of the more promising ones for data management. This paper is an extended version of a previously published comparative study. The results show that there is still plenty of room for improvement, mainly regarding the specificity of data description in different domains, as well as the potential for integration of the data management platforms with existing research management tools.