
Showing papers in "Journal of Reliable Intelligent Environments" (2015)


Journal ArticleDOI
TL;DR: This paper designs an architecture for the IoT based on the Information-Centric Networking (ICN) paradigm, leveraging a particular ICN architecture, the Publish-Subscribe Internetworking architecture, and presents three security solutions that enable access control, secure delegation of information storage, and trust based on information identifiers.
Abstract: Recent developments in sensors, devices, identification technologies, and wireless networking have fueled the vision of the Internet of Things (IoT). Small devices with processing, sensing, and connectivity capabilities can be connected to the Internet and produce vast amounts of meaningful information. At the same time, identification technologies such as RFID enable the association of information with "things". The information produced by things, or associated with things, will be both huge and sensitive. For this reason, new architectures for disseminating and processing this information in a reliable and efficient way should be explored. In this paper, we present an architecture for the IoT based on the Information-Centric Networking (ICN) paradigm. ICN architectures are built around information and information identifiers, and they provide mechanisms for advertising, finding, and retrieving information. We leverage a particular ICN architecture, the Publish-Subscribe Internetworking architecture, to design an IoT architecture, and we present three security solutions that enable access control, secure delegation of information storage, and trust based on information identifiers.
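As an illustration of the publish-subscribe pattern such an architecture builds on, here is a minimal Python sketch of a rendezvous broker keyed by information identifiers; the Broker class and the identifier names are hypothetical placeholders, not the paper's implementation.

# Minimal sketch of publish-subscribe keyed by information identifiers.
from collections import defaultdict

class Broker:
    """A rendezvous point mapping information identifiers to subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # identifier -> callbacks
        self.store = {}                        # identifier -> latest item

    def subscribe(self, identifier, callback):
        self.subscribers[identifier].append(callback)
        if identifier in self.store:           # late joiners get cached data
            callback(identifier, self.store[identifier])

    def publish(self, identifier, item):
        self.store[identifier] = item
        for callback in self.subscribers[identifier]:
            callback(identifier, item)

broker = Broker()
broker.subscribe("/home/kitchen/temp", lambda i, v: print(i, "=", v))
broker.publish("/home/kitchen/temp", 21.5)     # a sensor acts as publisher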

38 citations


Journal ArticleDOI
TL;DR: A reference framework for discussing the resilience of computational systems is introduced; resilience is interpreted here as the emerging result of a dynamic process representing the interplay between the behaviors exercised by a system and those of the environment it is set to operate in.
Abstract: The present article introduces a reference framework for discussing resilience of computational systems. Rather than a property that may or may not be exhibited by a system, resilience is interpreted here as the emerging result of a dynamic process. Said process represents the dynamic interplay between the behaviors exercised by a system and those of the environment it is set to operate in. As a result of this interpretation, coherent definitions of several aspects of resilience can be derived and proposed, including elasticity, change tolerance, and antifragility. Definitions are also provided for measures of the risk of unresilience as well as for the optimal match of a given resilient design with respect to the current environmental conditions. Finally, a resilience strategy based on our model is exemplified through a simple scenario.
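To make the system-environment interplay concrete, the following small Python sketch contrasts a fixed-tolerance system with an "antifragile" variant that widens its tolerance after near-misses; the disturbance model, thresholds, and widening rule are assumptions made for illustration, not the paper's model.

# Illustrative only: resilience as the outcome of system-environment interplay.
import random

def run(system_tolerance, antifragile, steps=1000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(steps):
        disturbance = rng.gauss(0.0, 1.0)          # environment behavior
        if abs(disturbance) > system_tolerance:    # outside tolerated range
            failures += 1
        elif antifragile and abs(disturbance) > 0.8 * system_tolerance:
            system_tolerance *= 1.05               # grow from near-misses
    return failures

print("static system :", run(2.0, antifragile=False))
print("antifragile   :", run(2.0, antifragile=True))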

28 citations


Journal ArticleDOI
TL;DR: The paper reviews the literature concerning methodologies and tools that directly involve users and have been specifically applied or adopted for intelligent environments, and proposes a set of guidelines that system designers should follow to ensure user confidence in their intelligent environments.
Abstract: Intelligent environments aim at supporting and assisting users in their daily activities. Their reliability, i.e., the capability of correctly accomplishing the intended tasks and of limiting or avoiding damage in case of malfunctions, is essential as for any user-facing technology. One aspect of reliability, often neglected, is guaranteeing the consistency between system operation and user expectations, so that users may build confidence over the correct behavior of the system and its reaction to their actions. The paper will review the literature concerning methodologies and tools that directly involve users and have been specifically applied or adopted for intelligent environments, throughout the entire design flow—from requirements gathering to interface design. The paper will then propose, building on top of the previous analysis, a set of guidelines that system designers should follow to ensure user confidence in their intelligent environments.

19 citations


Journal ArticleDOI
TL;DR: This work advocates the necessity of, and introduces, a novel approach to antifragile cyber security within the SDN paradigm, and proposes a unified model for integrating the two approaches of "Security with SDN" and "Security for SDN" to achieve the overall objective of protecting information from cyber threats in this globally connected internetwork.
Abstract: With each passing day, information and communication technologies are evolving, with more and more information shared across the globe over the internet superhighway. The threats to information connected to the cyber world are becoming more targeted, voluminous, and sophisticated, requiring new antifragile and resilient network security mechanisms. Whether information is being processed in an application, in transit within the network, or residing in storage, it is equally susceptible to attack at every level of abstraction and cannot be handled in isolation, as has been the case with conventional security mechanisms. The advent of Software-Defined Networks (SDN) has given a new outlook to information protection, where the network can aid in the design of a system that is secure and dependable in the face of cyber threats. The nature of SDN, mainly its programmability and the centrality of network information and control, has led us to think of security from an antifragile perspective: our networks can thrive and grow stronger when exposed to the volatility of overwhelming cyber threats. However, the SDN infrastructure itself is susceptible to severe threats that may undermine its usability as a security provider. Both perspectives, "Security with SDN" and "Security for SDN", have invited research and innovation, yet the two approaches remain disintegrated, failing to support each other. The contribution of this paper is threefold: first, it reviews the current state of the art for both perspectives of SDN security; second, it advocates the necessity of, and introduces, a novel approach to antifragile cyber security within the SDN paradigm; and finally, it proposes a unified model for integrating the approaches of "Security with SDN" and "Security for SDN" to achieve the overall objective of protecting our information from cyber threats in this globally connected internetwork.
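The antifragile feedback loop argued for here can be pictured with a minimal Python sketch, in which a stub controller blocks an observed attack and then lowers its detection threshold so similar probes are caught earlier; the Controller class and its adaptation rule are hypothetical, not the paper's unified model.

# Hedged sketch of an antifragile "security with SDN" feedback loop.
class Controller:
    def __init__(self):
        self.drop_rules = set()       # blocked (src, dst_port) flows
        self.threshold = 0.7          # detection threshold, adapts downward

    def handle_alert(self, src, dst_port, threat_score):
        if threat_score < self.threshold:
            return
        self.drop_rules.add((src, dst_port))   # react: block the flow
        # Antifragile step (an assumption, for illustration): lower the bar
        # so future probes of the same kind are caught earlier.
        self.threshold = max(0.3, self.threshold - 0.1)

c = Controller()
c.handle_alert("10.0.0.9", 445, threat_score=0.9)
print(c.drop_rules, c.threshold)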

13 citations


Journal ArticleDOI
TL;DR: This extended editorial takes an exploratory journey into the exciting new area of "reliable intelligent environments" (RIEs) and presents a selection of approaches that have been put forward to design, verify, and operate IEs so that users can rely on them.
Abstract: As envisioned by Weiser, computing is in the process of being everywhere and becoming invisible. As Milner noticed, the question now is whether we shall understand this ubiquitous computer we are building. This is especially true as designers increasingly use complex techniques for every component of the system and build systems made of increasingly heterogeneous parts. With this extended editorial, we embark on an exploratory journey into the exciting new area of "reliable intelligent environments" (RIEs). Taking the perspective of an RIE engineer, we present a selection of approaches that have been put forward to design, verify, and operate IEs so that users can rely on Intelligent Environment systems. We outline crucial challenges: the situatedness that exposes IEs to challenges similar to those known from robotics and control systems; the embedding of human users, with the safety, privacy, and usability requirements this entails; and the amounts of data produced by sensors and actuators, which require advanced reasoning and learning mechanisms to handle them reliably in real time. We also sketch the opportunities reliable IEs provide for developing new markets and products.

10 citations


Journal ArticleDOI
TL;DR: An architecture for advanced remote data processing in a secure, smart, and versatile client–server environment, capable of integrating pre-existing local software, with several demonstrated benefits: increased system throughput and easy upgradability, maintainability, and scalability.
Abstract: In recent years, the need for data collection and analysis has been growing in many scientific disciplines. This is consequently driving an increase in research on automated data management and data mining to create reliable methods for data analysis. To address the need for smart environments and large computational resources, some previous works proposed moving to remote processing, with the aim of sharing supercomputer resources, algorithms, and costs. Following this trend, in this work we propose an architecture for advanced remote data processing in a secure, smart, and versatile client–server environment that is capable of integrating pre-existing local software. To assess the feasibility of our proposal, we developed a case study in the context of an image-based medical diagnostic environment. Our tests demonstrated that the proposed architecture has several benefits: increased system throughput and easy upgradability, maintainability, and scalability. Moreover, for the scenario we considered, the system showed a very low transmission overhead, which settles at about 2.5% for widespread 10/100 Mbps links. Security has been achieved using client–server certificates and up-to-date standards.
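As a rough illustration of the certificate-based channel such an architecture relies on, the following Python sketch sets up a mutual-TLS client connection using the standard ssl module; the host name, port, certificate file names, and request payload are placeholders, and the paper's actual stack may differ.

# Minimal mutual-TLS client sketch (all names are placeholders).
import socket
import ssl

context = ssl.create_default_context(cafile="ca.pem")   # trust anchor
context.load_cert_chain(certfile="client.pem", keyfile="client.key")

with socket.create_connection(("processing.example.org", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="processing.example.org") as tls:
        tls.sendall(b"JOB: analyse image-042\n")         # placeholder request
        print(tls.recv(4096))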

9 citations


Journal ArticleDOI
TL;DR: This paper shows how the proposed approach makes it possible to design intelligent environments that closely follow a system's horizontal and vertical organization and artificially augment its features by serving as crosscutting optimizers and enablers of antifragile behaviors.
Abstract: Classic approaches to general systems theory often adopt an individual perspective and a limited number of systemic classes. As a result, those classes include a wide number and variety of systems that end up being equivalent to each other. This paper presents a different approach: first, systems belonging to the same class are further differentiated according to five major general characteristics. This introduces a "horizontal dimension" to system classification. A second component of our approach considers systems as nested compositional hierarchies of other sub-systems. The resulting "vertical dimension" further specializes the systemic classes and makes it easier to assess similarities and differences regarding properties such as resilience, performance, and quality of experience. Our approach is exemplified by considering a telemonitoring system designed in the framework of the Flemish project "Little Sister". We show how our approach makes it possible to design intelligent environments able to closely follow a system's horizontal and vertical organization and to artificially augment its features by serving as crosscutting optimizers and as enablers of antifragile behaviors.
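One way to picture the two dimensions is as a tree whose nodes carry a systemic class and a characteristics profile (horizontal) plus a list of sub-systems (vertical); the Python sketch below uses invented names and scores, not the paper's notation or its five characteristics.

# Illustrative nested compositional hierarchy with per-node classification.
from dataclasses import dataclass, field

@dataclass
class System:
    name: str
    systemic_class: str                           # e.g., "adaptive", "reactive"
    profile: dict = field(default_factory=dict)   # characteristic -> score
    parts: list = field(default_factory=list)     # sub-systems (vertical)

    def walk(self, depth=0):
        yield depth, self
        for part in self.parts:
            yield from part.walk(depth + 1)

home = System("telemonitoring", "adaptive", {"resilience": 0.8},
              parts=[System("camera-net", "reactive", {"resilience": 0.5}),
                     System("fall-detector", "adaptive", {"resilience": 0.7})])

for depth, node in home.walk():
    print("  " * depth, node.name, node.systemic_class, node.profile)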

6 citations


Journal ArticleDOI
TL;DR: The main challenges for engineering adaptive systems are presented, with emphasis on the design and development of distributed and adaptive algorithms that allow system entities to select the best-suited strategy/action in order to drive the system to the best-suited behavior according to the current state of the system and environment changes.
Abstract: This paper describes existing work related to the development of adaptive systems and approaches in ubiquitous and pervasive environments, and sheds more light on how features from natural and biological systems could be exploited for engineering adaptive systems. Ubiquitous and pervasive systems are composed of different heterogeneous parts or entities that interact and perform actions favoring the emergence of a globally desired behavior. Furthermore, in this type of system, entities might join or leave without disturbing the collective, and the system should self-organize and continue pursuing its goals. Therefore, entities must self-evolve and self-improve by learning from their interactions with the environment. In this paper, the main challenges for engineering these systems are presented, with emphasis on the design and development of distributed and adaptive algorithms that allow system entities to select the best-suited strategy/action in order to drive the system to the best-suited behavior according to the current state of the system and environment changes. We also highlight specific aspects being investigated, via illustrative examples, in order to show the usefulness of natural and biological system principles for developing adaptive approaches.
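As one concrete instance of entities selecting a strategy from environment feedback, the sketch below uses epsilon-greedy selection, a standard technique offered purely as illustration rather than the algorithm proposed in the paper; the strategy names and reward values are placeholders.

# Epsilon-greedy strategy selection from accumulated environment feedback.
import random

def mean(xs):
    return sum(xs) / len(xs) if xs else float("-inf")

def epsilon_greedy(rewards_by_strategy, epsilon=0.1, rng=random):
    """Mostly exploit the best-known strategy, sometimes explore."""
    if rng.random() < epsilon:
        return rng.choice(list(rewards_by_strategy))
    return max(rewards_by_strategy, key=lambda s: mean(rewards_by_strategy[s]))

history = {"flood": [0.2, 0.3], "gossip": [0.6, 0.7], "tree": [0.4]}
choice = epsilon_greedy(history)
history[choice].append(0.55)   # feed back the observed reward and adapt
print(choice, history)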

5 citations


Journal ArticleDOI
TL;DR: An introductory overview of statistical reliability theory is presented, and its applicability to the context of Intelligent Environments—particularly those involving safety-critical or other sensitive issues—is discussed, along with how such reliability modelling can be used to influence the design, implementation, and application of an Intelligent Environment.
Abstract: Intelligent Environments often require the integration of multi-modal sensing and actuating technologies with high-performance real-time computation, including artificial intelligence systems for analysis, learning patterns, and reasoning. Such systems may be complex and involve multiple components. However, in order to make them affordable, Intelligent Environments sometimes require many of their individual components to be low-cost. Nevertheless, in many applications, including safety-critical systems and systems monitoring the health and well-being of vulnerable individuals, it is essential that these Intelligent Environment systems are reliable, and the issue of affordability must not compromise this. If such environments are to find real application and deployment in these types of domain, it is necessary to be able to obtain accurate predictions of how probable any potential failure of the system is in any given timeframe, and of statistical parameters regarding the expected time to the first, or between successive, failures. Such quantities must be kept within what are deemed acceptable tolerances if the Intelligent Environment is to be suitable for applications in these critical areas without requiring excessively high levels of human monitoring and/or intervention. In this paper, an introductory overview of statistical reliability theory is presented. Its applicability to the context of Intelligent Environments, particularly those involving safety-critical or other sensitive issues, is discussed, along with how such reliability modelling can be used to influence the design, implementation, and application of an Intelligent Environment.
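For a flavor of the quantities involved, the following worked example assumes the textbook constant-failure-rate (exponential) model, under which R(t) = exp(-lambda*t) and MTBF = 1/lambda; the failure rate and component count are invented for illustration, and real components may need Weibull or other models.

# Worked example under a constant-failure-rate (exponential) assumption.
import math

failure_rate = 1e-4          # lambda: failures per hour (assumed)
t = 24 * 365                 # one year of continuous operation, in hours

reliability = math.exp(-failure_rate * t)   # R(t): P(no failure by time t)
mtbf = 1 / failure_rate                     # mean time between failures

# A series system of n independent sensors fails if any one fails:
n = 10
series_reliability = reliability ** n

print(f"R(1 year) = {reliability:.3f}, MTBF = {mtbf:.0f} h")
print(f"10-sensor series system: R = {series_reliability:.3f}")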

5 citations


Journal ArticleDOI
TL;DR: The paper discusses the use of Bayesian networks for allocating cloud resources to maximise service dependability, via a model-driven approach that guides the software engineer in defining a cloud infrastructure through a semi-automated process exploiting both high-level languages such as UML and Bayesian networks.
Abstract: Bayesian networks have demonstrated their capability in several applications, spanning from reasoning under uncertainty in artificial intelligence to dependability modelling and analysis. This paper focuses on the use of this language for allocating cloud resources to maximise service dependability. This objective is accomplished by defining a model-driven approach able to guide the software engineer in specifying a cloud infrastructure (applications, services, virtual and concrete resources) through a semi-automated process. This process exploits both high-level languages, such as UML, and Bayesian networks. Thanks to their features (backward analysis, ease of use, low analysis time), Bayesian networks are used in this process as a driver for the optimization, learning, and estimation phases. The paper discusses all the issues that the application of Bayesian networks in the proposed process raises.
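A hand-rolled miniature of the kind of inference involved, not the paper's model or tooling: a two-node network relating an allocation choice to service availability, with all probabilities assumed, marginalising the hidden resource-health variable to rank candidate allocations.

# Tiny Bayesian-network-style inference for an allocation decision.
# P(resource is healthy) per candidate allocation (assumed values):
p_healthy = {"cheap-vm": 0.90, "dedicated-host": 0.99}
# P(service up | resource healthy?), shared conditional table (assumed):
p_up = {True: 0.98, False: 0.20}

def service_dependability(allocation):
    """Marginalise the hidden variable: P(up) = sum_h P(up|h) * P(h)."""
    ph = p_healthy[allocation]
    return p_up[True] * ph + p_up[False] * (1 - ph)

best = max(p_healthy, key=service_dependability)
print({a: round(service_dependability(a), 3) for a in p_healthy}, "->", best)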

5 citations


Journal ArticleDOI
TL;DR: This paper presents a novel criterion, called assured reliability and resilience level (ARRL), that defines QoS in a normative way, largely by taking into account how the system deals with faults.
Abstract: Systems engineering has emerged because of the growing complexity of systems and the growing need for systems to provide a reliable service. The latter has to be defined in the wider context of trustworthiness, covering aspects such as safety, security, human–machine interface design, and even privacy. What the user expects is an acceptable quality of service (QoS), a property that is difficult to measure as it is a qualitative one. In this paper, we present a novel criterion, called assured reliability and resilience level (ARRL), that defines QoS in a normative way, largely by taking into account how the system deals with faults. ARRL defines seven levels, of which the highest can be described as the level at which the system becomes antifragile.
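A sketch of how ARRL could be used as a normative scale follows; the abstract confirms only that there are seven levels and that the top one corresponds to antifragility, so the enum below leaves intermediate semantics unspecified and the comparison rule is an assumption for illustration.

# Sketch of ARRL as an ordered, normative assurance scale.
from enum import IntEnum

class ARRL(IntEnum):
    LEVEL_0 = 0   # lowest assurance
    LEVEL_1 = 1
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
    LEVEL_5 = 5
    LEVEL_6 = 6   # highest: the system becomes antifragile (per the abstract)

def meets_requirement(component: ARRL, required: ARRL) -> bool:
    """Assumed normative use: a component is acceptable only at or above
    the ARRL level the application requires."""
    return component >= required

print(meets_requirement(ARRL.LEVEL_4, ARRL.LEVEL_5))  # False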

Journal ArticleDOI
TL;DR: A decision-oriented dynamic solution, AADM, an adaptive aggregation-based decision model driven by link quality and its live statistics, which outperforms existing static approaches in terms of packet loss, throughput, and delay.
Abstract: Wireless mesh networks (WMNs) have found an interesting alternative application in public safety and disaster recovery, as they provide key features such as fault tolerance, broadband support, and interoperability. However, their sparsely distributed wireless nodes need to frequently share control packets with each other for successful data transfer. These packets give rise to a considerable amount of control overhead, especially for multimedia traffic, which is not bearable in network disaster situations such as 9/11 and Hurricane Katrina. Hence, to avoid the huge cost of control overhead, aggregation is considered one of the handy solutions, building a new object from one or more existing objects of network traffic. Generally, aggregation comes in three types, executed on the 'packets', 'frames', and 'links' of a network. Deciding when, and which type of, aggregation is suitable in a given scenario is a complex balancing act, because the decision is bounded by various live statistics of the communication link (e.g., buffer size, link quality, bandwidth, maximum transmission unit, and delay) and can thus considerably affect network efficiency. If such statistics do not support aggregation, or a certain type of it, network performance may suffer adverse effects. This paper proposes a decision-oriented dynamic solution, namely AADM, an adaptive aggregation-based decision model. Based on the link quality and its live statistics, AADM dynamically takes judicious decisions about aggregation and its specific type to achieve the desired outcome. Besides improving network scalability, quality of service, power optimization, and network efficiency, it also reduces control traffic in WMNs. OMNeT++ simulations are used to verify AADM. Simulation results show that AADM outperforms existing static approaches in terms of packet loss, throughput, and delay.
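In the spirit of AADM's decision step, the Python sketch below maps live link statistics to an aggregation type; the thresholds and rules are invented for illustration, since the actual decision logic is specified in the paper.

# Hypothetical aggregation-type decision from live link statistics.
def choose_aggregation(link_quality, buffer_fill, mtu_headroom, delay_ms):
    if link_quality < 0.3 or delay_ms > 200:
        return None                  # poor link: aggregation adds risk
    if mtu_headroom > 512 and buffer_fill > 0.5:
        return "packet"              # room to pack several small packets
    if buffer_fill > 0.8:
        return "frame"               # drain the queue in bigger bursts
    return "link"                    # stable link: aggregate at link level

print(choose_aggregation(link_quality=0.9, buffer_fill=0.6,
                         mtu_headroom=700, delay_ms=40))   # -> 'packet'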

Journal ArticleDOI
TL;DR: A Hypertext Transfer Protocol (HTTP)-based rate-adaptive transcoding and streaming technique that maintains multimedia streaming quality, especially when the network grows congested and fragile during Disaster Management and Recovery operations.
Abstract: Delivery of multimedia content over the network has been given vital importance since its inception. Different models for content delivery have been proposed with respect to multiple factors, such as bandwidth, content quality, importance, and delivery method. In this paper, we propose a Hypertext Transfer Protocol (HTTP)-based rate-adaptive transcoding and streaming technique that maintains multimedia streaming quality, especially when the network grows congested and fragile during Disaster Management and Recovery operations. The proposed solution provides users with high-quality audio and video content even when network resources are limited and fragile. We developed a real-time test bed that can deliver high-quality streaming video by transcoding the original video in real time to a scalable codec, which allows streaming adaptation according to network dynamics. We also performed validation tests to ensure multimedia delivery in cloud computing and to verify the functionality of our hosted website for live communication between transmitter and receiver. From our experiments, we observed that rate-adaptive transcoding and streaming can efficiently deliver information services, such as live multimedia, in emergencies and disasters under very low-bandwidth environments.
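The core rate-adaptation idea can be sketched as picking the highest rung of a bitrate ladder that fits the measured throughput with a safety margin; the ladder values and the 0.8 margin below are assumptions, not the paper's parameters.

# Minimal bitrate selection sketch for rate-adaptive streaming.
LADDER_KBPS = [64, 150, 400, 800, 1500, 3000]   # transcoded representations

def pick_bitrate(measured_kbps, margin=0.8):
    budget = measured_kbps * margin              # keep headroom for jitter
    candidates = [b for b in LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else LADDER_KBPS[0]

print(pick_bitrate(500))    # congested link -> 400 kbps
print(pick_bitrate(90))     # disaster-grade link -> 64 kbps floor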