
Showing papers by "Markus Helfert" published in 2017


Journal ArticleDOI
11 Oct 2017-PLOS ONE
TL;DR: Evidence is provided for positive effects of digital self-tracking on the cardiovascular performance of patients undergoing cardiac rehabilitation; the use of smart wearables can prolong the success of the rehabilitation outside of the organized rehabilitation setting.
Abstract: Research has shown that physical activity is essential in the prevention and treatment of chronic diseases like cardiovascular disease (CVD). Smart wearables (e.g., smartwatches) are increasingly used to foster and monitor human behaviour, including physical activity. However, despite this increased usage, little evidence is available on the effects of smart wearables in behaviour change. The little research which is available typically focuses on the behaviour of healthy individuals rather than patients. In this study, we investigate the effects of using smart wearables by patients undergoing cardiac rehabilitation. A field experiment involving 29 patients was designed and participants were either assigned to the study group (N = 13 patients who finished the study and used a self-tracking device) or the control group (N = 16 patients who finished the study and did not use a device). For both groups, data about physiological performance during a cardiac stress test was collected at the beginning (baseline), in the middle (in week 6, at the end of the rehabilitation in the organized rehabilitation setting), and at the end of the study (after 12 weeks, at the end of the rehabilitation, including the organized rehabilitation plus another 6 weeks of self-organized rehabilitation). Comparing the physiological performance of both groups, the data showed significant differences. The participants in the study group not only maintained the same performance level as during the midterm examination in week 6, they improved performance even further during the six weeks that followed. The results presented in this paper provide evidence for positive effects of digital self-tracking by patients undergoing cardiac rehabilitation on performance of the cardiovascular system. In this way, our study provides novel insights into the effects of the use of smart wearables by CVD patients. Our findings have implications for the design of self-management approaches in a patient rehabilitation setting. In essence, the use of smart wearables can prolong the success of the rehabilitation outside of the organized rehabilitation setting.
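
Although the abstract does not spell out the statistical test used, the core analysis step is a between-group comparison of performance changes. The following is a minimal sketch of such a comparison; the performance-change values and the choice of Welch's t-test are purely illustrative assumptions, not the study's actual data or method.

```python
# Illustrative only: hypothetical changes in stress-test performance between
# week 6 and week 12 for both groups. Neither the values nor the test choice
# come from the paper.
from scipy import stats

study_group_change   = [5.0, 7.5, 3.0, 6.0, 4.5, 8.0, 5.5, 6.5, 4.0, 7.0, 5.0, 6.0, 3.5]   # N = 13
control_group_change = [-2.0, 0.5, -1.5, -3.0, 0.0, -2.5, -1.0, -0.5, -2.0, -1.5,
                        -3.5, 0.5, -1.0, -2.0, -0.5, -1.5]                                   # N = 16

# Welch's t-test (unequal variances) on the change scores of the two groups
t_stat, p_value = stats.ttest_ind(study_group_change, control_group_change, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```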

22 citations


Book ChapterDOI
14 Jun 2017
TL;DR: In this article, the authors identify the essential requirements of enterprise architecture in smart cities, which are then used to review and compare current smart city frameworks.
Abstract: Smart city implementations face significant challenges. One challenge is to align smart city strategies with the impact on quality of life. Stakeholders' concerns are multiple and diverse, and there is a high interdependency and heterogeneity of technologies and solutions. To tackle this challenge, researchers have suggested viewing cities as enterprises and applying an Enterprise Architecture (EA) approach. This approach specifies core requirements on business, information, and technology domains, which are essential to model architecture components and to establish relations between these domains. Existing smart city frameworks describe different components and domains. However, the main domain requirements and the relations between them are still missing. This paper identifies essential requirements of enterprise architecture in smart cities. These requirements will be used to review and compare current smart city frameworks.

21 citations


Proceedings Article
27 Apr 2017
TL;DR: This paper introduces a taxonomy of the elements that need to be taken into account during the design of smart services; the taxonomy is evaluated using a real case study in a European smart city council.
Abstract: Smart cities use ICT to improve citizens' quality of life. Therefore, to address citizens' needs and meet the smart city's quality factors, defining appropriate goals and objectives is paramount. However, a considerable number of services do not have a goal that responds to the smart city's demands. Defining stakeholders' needs, setting consequent objectives and specifying other technical requirements happen during the design phase of the services. Therefore, there is a need to provide a view of the required smart considerations. This paper introduces a taxonomy of the elements that need to be taken into account during the design of smart services. The proposed taxonomy is evaluated using a real case study in a European smart city council. The outcome of this research contributes to defining an architecture for designing more effective services in terms of enabling responses to citizens' concerns and meeting the smart city quality requirements.

15 citations


Book ChapterDOI
01 Jan 2017
TL;DR: This chapter describes the barriers to integrating buildings information with other live data, identifies specific domains that can benefit from such integration and the resulting improvements to various e-services, and suggests solutions.
Abstract: Information generated by smart buildings is a valuable asset that can be utilised by various groups of stakeholders in smart cities. These stakeholders can benefit from such information in order to provide additional valuable services. The added value is achievable if there is access to buildings information integrated with the live data being generated and collected from smart devices and sensors residing within the Internet of Things (IoT) environment. Notwithstanding the prominence of this combination, there are some barriers relating to the integration of buildings information with the live data. With the aim of examining such barriers, this chapter primarily focuses on information exchanges between various domains in smart cities. It also provides a vision on specific domains that can benefit from integration of buildings information with other live data. This can impact and improve the quality of various e-services. This chapter describes the barriers and suggests solutions to realise these visions. At the end of this chapter, a summary of the barriers is provided and discussed followed by proposals for future research topics to provide solutions to the inherent barriers.

8 citations


Book ChapterDOI
28 Aug 2017
TL;DR: It is proposed that a data quality prediction model can be used as a countermeasure to reduce the Data Quality Bullwhip Effect; results indicate that data quality success is a critical practice, and that predicting data quality improvements can be used to decrease the variability of the data quality index in the long run.
Abstract: Over the last years, many data quality initiatives and suggestions have reported how to improve and sustain data quality. However, almost all data quality projects and suggestions focus on assessment and one-time quality improvement; in particular, they rarely address how to sustain continuous data quality improvement. Inspired by work on variability in supply chains, also known as the Bullwhip effect, this paper suggests how to sustain data quality improvements and investigates the effects of delays in reporting data quality indicators. Furthermore, we propose that a data quality prediction model can be used as one countermeasure to reduce the Data Quality Bullwhip Effect. Based on a real-world case study, this paper makes an attempt to show how to reduce this effect. Our results indicate that data quality success is a critical practice, and that predicting data quality improvements can be used to decrease the variability of the data quality index in the long run.
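
The abstract does not give the paper's model, but the underlying idea of delayed quality reports inducing bullwhip-like oscillation, with prediction acting as a countermeasure, can be illustrated with a toy feedback loop. The gains, the delay, and the "perfect prediction" stand-in below are all assumptions for illustration, not the paper's model.

```python
# Toy simulation: a data quality (DQ) index decays unless corrective effort is
# applied. Reacting to a delayed DQ report causes bullwhip-like oscillation;
# reacting to a (here idealised) prediction of the current DQ level damps it.
import statistics

def simulate(use_prediction: bool, periods: int = 60, delay: int = 3,
             gain: float = 5.0, decay: float = 0.05, target: float = 0.9) -> list[float]:
    dq, history = 0.6, []                          # DQ index starts below target
    for t in range(periods):
        history.append(dq)
        if use_prediction:
            observed = history[-1]                 # stand-in for a good prediction of the current DQ level
        else:
            observed = history[max(0, t - delay)]  # the delayed DQ report the team reacts to
        effort = gain * (target - observed)        # corrective effort scaled by the perceived gap
        dq = min(1.0, max(0.0, dq - decay + 0.1 * effort))  # natural decay plus improvement
    return history

for label, flag in [("delayed report only", False), ("with prediction    ", True)]:
    series = simulate(flag)
    print(f"{label}: stdev of DQ index = {statistics.stdev(series[10:]):.3f}")
```

In this toy setup the delayed-feedback loop keeps oscillating around the target, while the predicted loop settles smoothly, which is the qualitative point the paper makes about variability.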

7 citations


Posted Content
TL;DR: This framework provides a systematic view on the quality of information and IS: by considering the user and IS developer perspectives, different quality factors are identified for various abstraction levels, and the framework helps to retrieve the root cause of IS defects.
Abstract: Quality is a multidimensional concept that has different meanings in different contexts and perspectives. In the domain of information systems, quality is often understood as the result of an IS development process and as the quality of an IS product. Many models and frameworks have been proposed for evaluating IS quality. However, as yet there is no commonly accepted framework or standard of IS quality. Typically, researchers propose a set of characteristics, so-called quality factors, contributing to the quality of IS. Different stakeholder perspectives result in multiple definitions of IS quality factors. For instance, some approaches base the selection of quality factors on the IS delivery process, while other approaches do not clearly explain the rationale of their selection. Moreover, relations or impacts among selected quality factors are often not taken into account. Quality aspects of information are frequently considered in isolation from IS quality. The impact of IS quality on information quality seems to be neglected in most approaches. Our research aims to incorporate these levels, and we propose an IS quality framework based on IS architecture. Considering the user and IS developer perspectives, different quality factors are identified for various abstraction levels. Besides, the representation of impacts among different quality factors helps to retrieve the root cause of IS defects. Thus, our framework provides a systematic view on the quality of information and IS.
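
The claim that documented impacts among quality factors help retrieve the root cause of IS defects can be pictured as a backwards walk over an impact graph. The factor names and edges in this sketch are invented for illustration and are not the framework's actual content.

```python
# Edge A -> B means "quality factor A impacts quality factor B".
impacts = {
    "database schema consistency": ["data completeness"],
    "interface response time":     ["timeliness of information"],
    "data completeness":           ["accuracy of reports"],
    "timeliness of information":   ["accuracy of reports"],
}

def root_causes(symptom: str) -> set[str]:
    """Walk the impact graph backwards from an observed defect to candidate causes."""
    reverse = {}
    for cause, effects in impacts.items():
        for effect in effects:
            reverse.setdefault(effect, []).append(cause)
    found, stack = set(), [symptom]
    while stack:
        factor = stack.pop()
        for cause in reverse.get(factor, []):
            if cause not in found:
                found.add(cause)
                stack.append(cause)
    return found

print(root_causes("accuracy of reports"))
# e.g. {'data completeness', 'database schema consistency', ...}
```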

6 citations


Journal ArticleDOI
TL;DR: A concept, the virtual innovation space, is introduced to take advantage of the opportunities the Internet offers for increasing the efficiency of entrepreneurship learning for IT students.
Abstract: This paper describes a framework that assists the inclusion of entrepreneurship into computing study programs. It has been developed within a European Tempus Project and is built following a process-oriented view of innovation and entrepreneurship. It outlines key activities and capabilities of entrepreneurship. An approach is presented that combines existing online tools with traditional methodologies for creating courses on entrepreneurship. We introduce a concept, the virtual innovation space, to take advantage of the opportunities the Internet offers for increasing the efficiency of entrepreneurship learning for IT students. A summary of the course content on entrepreneurship that is developed and used by the project partners for their local courses is presented. We present and discuss feedback received from the project partners and describe some student projects and experiences. The work in this paper can be useful for other universities and similar projects to compare their efforts and to draw justification or ideas for their initiatives.

6 citations


23 Nov 2017
TL;DR: A potential approach is proposed to make the integration of building information with live data possible, in the form of a three-phase process whose usability is demonstrated by a few use-cases.
Abstract: Building information is invaluable to the facility management industry for providing and delivering timely and professional analysis and consulting support for more effective management services (e.g. energy management). Nowadays, buildings are equipped with an immense number of Internet of Things (IoT) and smart devices. These devices produce a vast amount of live data about the building. Typically, the information captured from different sources is stored in heterogeneous repositories. Various information resources, together with the live data, can be used by the facility management industry to speed up maintenance processes and improve the efficiency of services. To provide more added-value services, the industry can also benefit from the integration of building information with the live data. There is no doubt that integrating the information can provide value; nonetheless, there are some barriers and considerations when combining building information with live data. This prevents many industries, such as the facility management industry, from fully benefiting from this integration. In this paper, we introduce the existing barriers to integrating information and live data from the academia and industry perspectives. Subsequently, a potential approach is proposed to make the integration of building information with live data possible. The approach takes the form of a three-phase process, and its usability is demonstrated by a few use-cases. The output of the process will be beneficial for a diverse range of users, e.g. the facility management industry.
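
The three phases of the proposed process are not listed in this abstract; the sketch below only illustrates the general idea of enriching live sensor readings with static building records via a shared identifier. All field names are assumed for illustration.

```python
# Illustrative only: joins static building/room records with live sensor readings
# by a shared room identifier. Not the paper's three-phase process.
static_building_info = {
    "room-101": {"area_m2": 42.0, "use": "office", "hvac_zone": "Z1"},
    "room-102": {"area_m2": 18.5, "use": "meeting", "hvac_zone": "Z1"},
}

live_readings = [  # e.g. collected from IoT sensors via a gateway
    {"room": "room-101", "temperature_c": 23.4, "occupied": True},
    {"room": "room-102", "temperature_c": 19.1, "occupied": False},
]

def integrate(static_info: dict, readings: list[dict]) -> list[dict]:
    merged = []
    for r in readings:
        building_record = static_info.get(r["room"], {})
        merged.append({**building_record, **r})   # live data enriched with building context
    return merged

for row in integrate(static_building_info, live_readings):
    print(row)
```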

5 citations


Proceedings Article
01 Jan 2017
TL;DR: A pilot study and initial analysis of scenarios in the Irish IT industry sector where infringement of the data protection laws occurred is presented, to identify directions and trends with respect to the type of breach/disclosure and who was involved in the process.
Abstract: With a highly creative and talented workforce, an open economy and a competitive corporate tax environment, Ireland has successfully attracted top global information technology firms. However, with the introduction of the new policies and regulations posed by the General Data Protection Regulation (GDPR), companies are facing new challenges in terms of tougher penalties, stricter internal policies, and privacy training programs. In this emerging research, we present a pilot study and initial analysis of scenarios in the Irish IT industry sector where infringement of the data protection laws occurred. The aim of the analysis is to identify directions and trends with respect to the type of breach/disclosure, where the breach occurred, who was involved in the process, etc. We anticipate that the outcomes of this study will assist in defining clearer requirements for IT companies in Ireland in relation to the GDPR.

5 citations


Proceedings Article
01 Jan 2017
TL;DR: A model of factors perceived by an open data services business as the most relevant in explaining adoption of open government data for commercial service innovation in cities is proposed.
Abstract: City councils produce large amounts of data. As this data becomes available, and as information and communication technology capabilities are in place to manage and exploit this data, open government data is seen as becoming more and more valuable as a catalyst for service innovation and economic growth. Notwithstanding this, evidence of open data adoption is currently largely scattered and anecdotal. This is reflected in the lack of literature focusing on users of open data for commercial purposes. This research aims to address this gap and contributes to the IS open data services debate by proposing a model of factors perceived by an open data services business as the most relevant in explaining adoption of open government data for commercial service innovation in cities. Adopting an inductive reasoning approach through qualitative methods was critical to capture the complexity of the open data services ecosystem perceived by those reusing this data.

5 citations


Proceedings ArticleDOI
01 Dec 2017
TL;DR: The development of a general interface architecture to bridge the recognised gaps is proposed; the interface would follow knowledge management principles that define how the information created by different smart systems moves through the defined architecture to become accessible by other smart systems.
Abstract: There is plenty of information created by various smart city services all around the world. Two main issues for smart cities are the integration of information and the empowerment of citizens' lives. These problems mostly stem from the legacy view of cities: in legacy cities, various systems define their own standards and protocols, and there is no common language of communication between systems. To overcome this limitation and enable the exchange of the huge amounts of valuable information being created, this paper proposes the development of a general interface architecture to bridge the recognised gaps. This interface would follow knowledge management principles that define how the information created by different smart systems moves through the defined architecture to become accessible by other smart systems. Afterwards, we explain how the proposed architecture could facilitate information exchange between different smart city systems. The outcome of this research is a business scenario for the proposed architecture, modelled with Business Process Model and Notation (BPMN). This scenario provides the ability for existing smart systems to interconnect with other systems in smart cities.
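
One way to picture the "common language" idea behind such an interface architecture is a shared exchange message that per-system adapters map into. The message fields and adapter functions below are illustrative assumptions, not the architecture proposed in the paper.

```python
# A minimal sketch of a shared exchange format between heterogeneous smart city systems.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ExchangeMessage:           # the assumed "common language" between systems
    source_system: str
    topic: str
    payload: dict
    timestamp: str

def from_traffic_system(raw: dict) -> ExchangeMessage:
    # adapter: native traffic-system record -> shared message
    return ExchangeMessage("traffic", "congestion",
                           {"road": raw["rd"], "level": raw["lvl"]},
                           datetime.now(timezone.utc).isoformat())

def from_energy_system(raw: dict) -> ExchangeMessage:
    # adapter: native energy-system record -> shared message
    return ExchangeMessage("energy", "load",
                           {"district": raw["district_id"], "kw": raw["kW"]},
                           datetime.now(timezone.utc).isoformat())

bus = [from_traffic_system({"rd": "N7", "lvl": 3}),
       from_energy_system({"district_id": "D4", "kW": 1250})]
for msg in bus:
    print(msg)
```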

Proceedings ArticleDOI
31 Oct 2017
TL;DR: This paper defines six criteria for user-level usage data, analyses the existing usage data extraction techniques and proposes a usage data extraction framework adhering to the defined criteria.
Abstract: Features or functionalities provided by cloud-based applications are accessed by users through various interfaces such as web browser, mobile app, and command line interface. Yet for monitoring cloud-based applications, software developers and researchers have focused on web browsers. Software updates are provided for such applications based on the data acquired from the cloud monitoring components but usage data of the cloud application features are difficult to extract in a cloud environment as the usage data is spread across the interfaces on the front-end and the back-end. In this paper, we focus on the usage of the cloud application features from the user perspective and how to extract these data in a cloud environment. We define six criteria for the user-level usage data, analyse the existing usage data extraction techniques and propose a usage data extraction framework adhering to the defined criteria.
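
A rough sketch of the kind of user-level usage record such a framework might collect across interfaces is shown below; the event fields and interface names are assumptions, and the paper's six criteria are not reproduced here.

```python
# Illustrative sketch: collecting feature-usage events from different interfaces
# (web, mobile, CLI) into one user-level usage record.
import json, time

usage_log = []

def record_usage(user_id: str, feature: str, interface: str) -> None:
    usage_log.append({
        "user": user_id,
        "feature": feature,          # the application feature that was used
        "interface": interface,      # where the feature was invoked from
        "ts": time.time(),
    })

record_usage("u-17", "export-report", "web")
record_usage("u-17", "export-report", "cli")
record_usage("u-42", "share-dashboard", "mobile")

# aggregate per feature across interfaces, as a back-end monitoring component might
per_feature = {}
for e in usage_log:
    per_feature.setdefault(e["feature"], set()).add(e["interface"])
print(json.dumps({k: sorted(v) for k, v in per_feature.items()}, indent=2))
```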

Journal ArticleDOI
TL;DR: This paper proposes an approach for the automatic generation of domain-specific rules using a variability feature model and an ontology definition of domain model concepts, drawing on Software Product Line Engineering and Model Driven Architecture.

29 Apr 2017
TL;DR: Wang et al. as discussed by the authors proposed a set of practical guidelines for researchers and practitioners to conduct data quality management in data integration, and they found that data completeness, timeliness and consistency are critical for data quality management in data integration, and that data consistency should be further defined at the pragmatic level.
Abstract: Nowadays, many business intelligence or master data management initiatives are based on regular data integration; since data integration extracts and combines a variety of data sources, it is considered a prerequisite for data analytics and management. More recently, TPC-DI has been proposed as an industry benchmark for data integration. It is designed to benchmark data integration and serve as a standard to evaluate ETL performance. There are a variety of data quality problems, such as multi-meaning attributes and inconsistent data schemas in source data, which not only cause problems for the data integration process but also affect further data mining or data analytics. This paper summarises typical data quality problems in data integration and adapts the traditional data quality dimensions to classify those problems. We found that data completeness, timeliness and consistency are critical for data quality management in data integration, and that data consistency should be further defined at the pragmatic level. In order to prevent typical data quality problems and proactively manage data quality in ETL, we propose a set of practical guidelines for researchers and practitioners to conduct data quality management in data integration.
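
A small sketch of the kind of checks these guidelines point towards, covering completeness, timeliness and consistency on incoming records, is given below; the field names, thresholds and allowed values are illustrative assumptions rather than the paper's guidelines.

```python
# Illustrative pre-load checks for an ETL step: completeness, timeliness, consistency.
from datetime import datetime, timedelta

records = [
    {"customer_id": "C1", "country": "IE", "updated": "2017-03-01", "status": "active"},
    {"customer_id": "C2", "country": None, "updated": "2015-06-10", "status": "actif"},
]

def check_record(r: dict, now: datetime) -> list[str]:
    problems = []
    if any(v in (None, "") for v in r.values()):                           # completeness
        problems.append("missing value")
    if now - datetime.fromisoformat(r["updated"]) > timedelta(days=365):   # timeliness
        problems.append("stale record")
    if r["status"] not in {"active", "inactive"}:                          # consistency (allowed values)
        problems.append(f"inconsistent status '{r['status']}'")
    return problems

now = datetime(2017, 4, 29)
for r in records:
    print(r["customer_id"], check_record(r, now) or "ok")
```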

Proceedings ArticleDOI
01 Jan 2017
TL;DR: A re-engineering approach called architectural refactoring is proposed for restructuring on-premise application components to adapt to the cloud environment, with the aim of achieving a significant increase in non-functional quality attributes such as performance, scalability and maintainability of the cloud architectures.
Abstract: Cloud migration has attracted a lot of attention in both industry and academia due to the on-demand, highly available and dynamically scalable nature of the cloud. Organizations choose to move their on-premise applications to the virtualized environment of the cloud, where services are accessed remotely over the internet. These applications need to be re-engineered to fully exploit the cloud infrastructure, for example to gain performance and scalability improvements over the on-premise infrastructure. This paper proposes a re-engineering approach called architectural refactoring for restructuring on-premise application components to adapt to the cloud environment, with the aim of achieving a significant increase in non-functional quality attributes such as performance, scalability and maintainability of the cloud architectures. In the proposed approach, when an application needs to migrate to the cloud, it is divided into smaller components, converted into services and deployed to the cloud. The paper discusses existing issues faced by software developers and engineers during cloud migration, introduces architectural refactoring as a solution and explains the generic refactoring process at an architectural level.
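
The refactoring idea of carving an on-premise component out behind a service boundary before cloud deployment can be sketched as follows; the class names and the remote-call remark are assumptions for illustration, not the paper's refactoring catalogue.

```python
# Sketch: the caller depends on a service interface, so the in-process component
# can later be swapped for a remote (cloud-hosted) implementation without changes.
from abc import ABC, abstractmethod

class InvoiceService(ABC):                      # the extracted service boundary
    @abstractmethod
    def total(self, order_id: str) -> float: ...

class LocalInvoiceComponent(InvoiceService):    # original on-premise component
    def total(self, order_id: str) -> float:
        return 99.0                             # in-process computation

class RemoteInvoiceService(InvoiceService):     # cloud deployment behind some RPC/HTTP endpoint
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def total(self, order_id: str) -> float:
        # a real implementation would call the remote endpoint here
        raise NotImplementedError(f"call {self.endpoint}/invoices/{order_id}")

def checkout(invoices: InvoiceService, order_id: str) -> float:
    return invoices.total(order_id)             # caller is unchanged by the migration

print(checkout(LocalInvoiceComponent(), "o-1"))
```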

Book ChapterDOI
26 Apr 2017
TL;DR: In this article, the authors proposed a set of practical guidelines for researchers and practitioners to conduct data quality management when using the TPC-DI benchmark, in order to prevent data quality problems and proactively manage data quality.
Abstract: Many data driven organisations need to integrate data from multiple, distributed and heterogeneous resources for advanced data analysis. A data integration system is an essential component to collect data into a data warehouse or other data analytics systems. There are various alternative data integration systems, created in-house or provided by vendors. Hence, it is necessary for an organisation to compare and benchmark them when choosing a suitable one to meet its requirements. Recently, TPC-DI was proposed as the first industrial benchmark for evaluating data integration systems. When using this benchmark, we find some typical data quality problems in the TPC-DI data source, such as multi-meaning attributes and inconsistent data schemas, which could delay or even cause the data integration process to fail. This paper explains the processes of this benchmark and summarises typical data quality problems identified in the TPC-DI data source. Furthermore, in order to prevent data quality problems and proactively manage data quality, we propose a set of practical guidelines for researchers and practitioners to conduct data quality management when using the TPC-DI benchmark.
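
The two problem types named here, inconsistent schemas and multi-meaning attributes, can be illustrated with generic checks on made-up rows; the sketch below does not use the actual TPC-DI data.

```python
# Illustrative only: generic detection of two data quality problem types.
source_a = [{"id": "1001", "balance": "250.75"}]      # balance as text
source_b = [{"id": 1002, "balance": 310.20}]          # balance as number

def schema_of(rows: list[dict]) -> dict:
    # infer a crude schema (attribute name -> Python type) from the first row
    return {k: type(v).__name__ for k, v in rows[0].items()}

schema_a, schema_b = schema_of(source_a), schema_of(source_b)
print("schema mismatch:", {k for k in schema_a if schema_a[k] != schema_b.get(k)})

# one "value" field whose meaning depends on a record type -- split it before loading
multi_meaning = [{"rec_type": "PRICE", "value": "42.10"},
                 {"rec_type": "RATING", "value": "AAA"}]
by_meaning = {}
for row in multi_meaning:
    by_meaning.setdefault(row["rec_type"], []).append(row["value"])
print("split by meaning:", by_meaning)
```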


Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper presents a framework for rule generation through model translation, in which a feature model, a high-level representation of the domain model, is translated into a low-level rule language; the framework builds on software reuse, using customisation and configuration with domain-specific rule strategies to support model-to-text translations.
Abstract: Domain-specific model-driven development requires effective and flexible techniques for implementing domain-specific rule generators. In this paper, we present a framework for rule generation through model translation, in which a feature model, a high-level representation of the domain model, is translated into a low-level rule language. The framework builds on the paradigm of software reuse, using customisation and configuration with domain-specific rule strategies to support model-to-text translations. The framework is domain-specific, so that non-technical domain users can customise and configure the business process models. These compositions support two dimensions of translation modularity by using software product line engineering. Domain engineering is achieved by designing the domain and process model as a requirement space, also called the template model, connected with the feature model through a weaving model. The feature model is a high-level input model used to customise the template model into an implementation. Application engineering is achieved by supporting the rule definition and configuring the generated rules. We discuss the development approach of the framework in a domain-specific environment and present a case study in the Digital Content Technology (DCT) domain.
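
A minimal sketch of feature-model-driven, model-to-text rule generation is shown below; the feature names, templates and generated rule syntax are assumptions for illustration and do not reflect the DCT case study.

```python
# Sketch: a feature selection (what a non-technical domain user configures)
# determines which rule templates are instantiated into rule text.
rule_templates = {
    "age_check":    "IF customer.age < {min_age} THEN reject",
    "region_check": "IF order.region NOT IN {regions} THEN route_to_manual_review",
}

feature_configuration = {
    "age_check":    {"enabled": True, "min_age": 18},
    "region_check": {"enabled": True, "regions": ["EU", "UK"]},
}

def generate_rules(templates: dict, config: dict) -> list[str]:
    rules = []
    for feature, settings in config.items():
        if settings.get("enabled"):
            params = {k: v for k, v in settings.items() if k != "enabled"}
            rules.append(templates[feature].format(**params))   # model-to-text step
    return rules

for rule in generate_rules(rule_templates, feature_configuration):
    print(rule)
```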

09 Jun 2017
TL;DR: This paper proposes a framework for Metropolitan Area Enterprise Architecture, which can not only coordinate different interests and objectives from the stakeholders by layered architectural design, but also provide an integrated guideline for future ICT development for Smart Cities.
Abstract: Rapidly increasing capabilities of digital technologies and decreasing deployment costs of digital systems have enabled pervasive computing technologies in Smart Cities. However, designing an integrated Enterprise Architecture across public services in the metropolitan area for a Smart City still remains challenging. Since there are various views and strategic aspects from different Stakeholders in a Smart City, how to apply the design processes from Enterprise Architecture like TOGAF ADM to Smart Cities is still unknown. In this paper, we thus propose a framework for Metropolitan Area Enterprise Architecture, which can not only coordinate different interests and objectives from the stakeholders by layered architectural design, but also provide an integrated guideline for future ICT development for Smart Cities.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: It is found that data completeness, timeliness and consistency are critical for data quality management in data integration, and that data consistency should be further defined at the pragmatic level.

12 Jan 2017
TL;DR: A novel information quality method is presented that is context related, that is, it takes the user, task and environment into account; results confirm that context indeed affects the perception of information quality.
Abstract: Information quality is an ever-increasing problem. Despite advancements in technology and investment in information quality, the problem continues to grow. The deployment of an information system in a complex environment, and the associated information quality problems, have yet to be fully examined by researchers. Our research endeavours to address this shortfall by specifying a method for context-related information quality. A method engineering approach is employed to specify such a method for context-related information quality dimension selection. Furthermore, the research presented in this paper examined different information system context factors and their effect upon information quality dimensions, both objectively and subjectively. Our contribution is a novel information quality method that is context related; that is, it takes the user, task and environment into account. Results of an experiment, as well as feedback from practitioners, confirm the applicability of our method and that context indeed affects the perception of information quality.
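
The idea of selecting information quality dimensions from context factors (user, task, environment) can be sketched as a simple mapping; the factors, dimensions and selection rules below are assumptions, not the method specified in the paper.

```python
# Illustrative context-to-dimension selection rules.
def select_dimensions(context: dict) -> set[str]:
    dimensions = {"accuracy"}                       # assumed to matter in every context
    if context.get("task") == "real-time monitoring":
        dimensions |= {"timeliness", "accessibility"}
    if context.get("user") == "casual":
        dimensions |= {"understandability", "conciseness"}
    if context.get("environment") == "mobile":
        dimensions |= {"accessibility", "representational consistency"}
    return dimensions

print(select_dimensions({"user": "casual",
                         "task": "real-time monitoring",
                         "environment": "mobile"}))
```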

Journal ArticleDOI
TL;DR: The IASDO model as mentioned in this paper is a meta-model for information manufacturing systems (IMS) that can systematically represent the dynamic changes involved in manufacturing (or creating) an IP.
Abstract: The manufacture of an information product (IP) is akin to the manufacture of a physical product. Current approaches to model such information manufacturing systems (IMS) lack the ability to systematically represent the dynamic changes involved in manufacturing (or creating) an IP. They are also limited in consistently including aspects of process and information management at an organizational level. This paper aims to address these limitations and presents a modelling approach, the IASDO model. Our work also presents a framework to evaluate the quality of meta-models for IMS modelling, which enables us to compare the IASDO model with current approaches.

Proceedings ArticleDOI
25 Jun 2017
TL;DR: This research follows the design science approach to propose a process model to address facility management concerns in terms of the ability to access the combination of building information with live data captured from various sources.
Abstract: Smart buildings are embedded with large amounts of latent data from different sources, e.g. IoT devices, sensors, and the like. Integration of this latent data with building information can highly impact the efficiency of services provided by various industries such as facility management companies, utility companies, smart commerce, and so forth. To enable the integration of building information, diverse technologies such as Building Information Modelling (BIM) have been developed and have changed the traditional approaches. Notwithstanding a plethora of research in this area, potential users of this information, such as facility management companies, are still unable to fully benefit from the building information. This is due to the fact that various information and data are heterogeneously scattered across various sources. To overcome this challenge, this research follows the design science approach to propose a process model that addresses facility management concerns in terms of the ability to access the combination of building information with live data captured from various sources. The presented process model is introduced thoroughly by explaining the required steps to collect and integrate this information with the live data. The artifact evaluation of the process model was undertaken via a focus group session with construction professionals, IoT experts, and data analysts. This paper also elaborates on two industrial use-cases to demonstrate how access to the building information affects other industries. The outcome of this research provides open access to the integrated building information and live data for a diverse range of users.

01 Jan 2017
TL;DR: This research outlines the benefits brought by IoT systems adopted in companies from different viewpoints, proposes a model of architecture for such systems, and examines their process of operation by analyzing the technologies used to develop them and the security problems that can be encountered.
Abstract: The Internet of Things (IoT), or Internet of Everything, defines a network of objects that incorporate electronic circuits allowing communication over existing infrastructure (the Internet), wireless or cable, for many purposes, including monitoring or remote control. The existence of a web interface through which a person may access the equipment of a house is useful to increase efficiency and save resources. Private homes as well as companies, medical services, factories, state services and communities will benefit from using connected IoT environments. These environments are characterized by information sharing between equipment, devices and humans, and by the exchange of information between these. This research outlines the benefits brought by IoT systems adopted in companies from different viewpoints. We employ a case study in order to point to real experiences and situations. A further objective of this paper is to propose a model of architecture for such systems and to examine their process of operation by analyzing the technologies used to develop them and the security problems that can be encountered.

Proceedings ArticleDOI
TL;DR: In this paper, the authors address crucial issues in collaborative services such as collaboration levels, sharing data and processes due to business interdependencies between service stakeholders, and propose a model for collaborative service modelling, which is able to cover identified issues.
Abstract: Despite the dominance of the service sector in the last decades, there is still a need for a strong foundation for service design and innovation. Little attention has been paid to service modelling, particularly in the collaboration context. Collaboration is considered one of the solutions for surviving or sustaining a business in a highly competitive environment. Collaborative services require various service providers working together according to agreements between them, along with service consumers, in order to co-produce services. In this paper, we address crucial issues in collaborative services such as collaboration levels and the sharing of data and processes arising from business inter-dependencies between service stakeholders. Afterwards, we propose a model for Collaborative Service Modelling that is able to cover the identified issues. We also apply the proposed model to an example of healthcare services in order to illustrate the relevance of our modelling approach to the matter at hand.
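
The modelling concerns listed here (providers, consumers, agreements, shared data and collaboration levels) can be pictured as a small data structure; the types and the healthcare example below are illustrative assumptions, not the proposed modelling notation.

```python
# A minimal data-structure sketch of collaborative-service concerns.
from dataclasses import dataclass, field

@dataclass
class Agreement:
    between: tuple[str, str]
    shared_data: list[str]            # data the two providers agree to exchange
    collaboration_level: str          # e.g. "information sharing" or "joint process"

@dataclass
class CollaborativeService:
    name: str
    consumers: list[str]
    providers: list[str]
    agreements: list[Agreement] = field(default_factory=list)

home_care = CollaborativeService(
    name="home monitoring",
    consumers=["patient"],
    providers=["hospital", "home-care agency"],
    agreements=[Agreement(("hospital", "home-care agency"),
                          shared_data=["care plan", "vital signs"],
                          collaboration_level="joint process")],
)
print(home_care)
```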

Proceedings ArticleDOI
01 Jan 2017
TL;DR: A meta-level design science process, based on an extended version of design science research methodology, that can be used to create requirements engineering frameworks to inform smart city service requirements engineering processes is presented.

01 Nov 2017
TL;DR: In this paper, a meta-level design science process based on an extended version of design science research methodology is proposed to create requirements engineering frameworks to inform smart city service requirements engineering processes.
Abstract: Currently there is an issue in the design process of smart city services, where citizens as the main stakeholders are not involved enough in requirements engineering. In this paper, we present a meta-level design science process, based on an extended version of design science research methodology, that can be used to create requirements engineering frameworks to inform smart city service requirements engineering processes. The introduced meta-level process is beneficial as it can be used to ensure that design guideline research processes are rigorous, just as design science process ensures scientific rigor in design research. Additionally, we present a previous case study and frame it using the new meta-level design science process.

Proceedings ArticleDOI
TL;DR: This paper presents a rule-based approach for data quality analysis, in which a comprehensive method for discovering dynamic integrity rules is discussed.
Abstract: Rule-based approaches to data quality solutions often use business rules or integrity rules for data monitoring purposes. Integrity rules are constraints on data derived from business rules into a formal form in order to allow computerization. One of the challenges of these approaches is rule discovery, which is usually performed manually by business experts or system analysts based on experience. In this paper, we present our rule-based approach for data quality analysis, in which we discuss a comprehensive method for discovering dynamic integrity rules.
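
The notion of discovering a dynamic integrity rule from data, rather than writing it by hand, can be sketched as inferring a value-range constraint from historical records; the rule form and margin below are assumptions, not the method described in the paper.

```python
# Illustrative sketch: infer a simple value-range integrity rule from historical
# data, then apply it to new values for data quality monitoring.
historical_ages = [23, 31, 45, 29, 52, 38, 41, 27, 60, 35]

def discover_range_rule(values: list[int], margin: float = 0.2):
    low, high = min(values), max(values)
    span = high - low
    lo, hi = low - margin * span, high + margin * span
    return lambda v: lo <= v <= hi          # the generated integrity rule

age_rule = discover_range_rule(historical_ages)

for candidate in [34, 17, 140]:
    print(candidate, "ok" if age_rule(candidate) else "violates discovered rule")
```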

Book ChapterDOI
31 Oct 2017
TL;DR: The role of descriptive knowledge in creating prescriptive knowledge with design science research is clarified and an approach that utilizes kernel theories produced by the grounded theory research methodology in the creation of meta-level design science artefacts is presented.
Abstract: In this paper, we clarify the role of descriptive knowledge in creating prescriptive knowledge with design science research. We demonstrate the connection by presenting an approach that utilizes kernel theories produced by the grounded theory research methodology in the creation of meta-level design science artefacts. These meta-level artefacts can be used to inform the design processes of situational artefacts, such as instantiations of software and services. We demonstrate and evaluate the approach further by using it to frame an ongoing research project that creates a meta-artefact to address issues in smart city service design.

Proceedings ArticleDOI
24 Sep 2017
TL;DR: It is concluded that the fields of record keeping and long term preservation have some clear information quality issues that could benefit from a concerted approach by integrating information quality research into records management.
Abstract: The digitalization from paper-based to electronic records management results in challenges to preserve material in an authentic form. This paper explores the challenge of ensuring the authenticity and usability of electronic records from a long term preservation perspective. The discussion is viewed from an information quality perspective, which provides a suitable lens on the topic. We identified a number of challenges that government records face, stressing the issue at hand of how to maintain authenticity and usability over the long term. Challenges result from issues around authenticity, usability, the user, volume and heterogeneity, and in particular the time dimension. We conclude that the fields of record keeping and long term preservation have some clear information quality issues that could benefit from a concerted approach integrating information quality research into records management. To date this seems to be missing.