Software Engineering for
Self-Adaptive Systems:
A Research Road Map
(Draft Version)
Betty H.C. Cheng, Rogério de Lemos, Holger Giese, Paola Inverardi, Jeff Magee
(Dagstuhl Seminar Organizer Authors)
Jesper Andersson, Basil Becker, Nelly Bencomo, Yuriy Brun, Bojan Cukic, Giovanna
Di Marzo Serugendo, Schahram Dustdar, Anthony Finkelstein, Cristina Gacek, Kurt
Geihs, Vincenzo Grassi, Gabor Karsai, Holger Kienle, Jeff Kramer, Marin Litoiu, Sam
Malek, Raffaela Mirandola, Hausi Müller, Sooyong Park, Mary Shaw, Matthias Tichy,
Massimo Tivoli, Danny Weyns, Jon Whittle
(Dagstuhl Seminar Participant Authors)
Contact Emails: r.delemos@kent.ac.uk, holger.giese@hpi.uni-potsdam.de

This road map paper is a result of the Dagstuhl Seminar 08031 on “Software Engineering for Self-Adaptive Systems,” which took place in January 2008.
ABSTRACT
Software’s ability to adapt at run-time to changing user
needs, system intrusions or faults, changing operational en-
vironment, and resource variability has been proposed as
a means to cope with the complexity of today’s software-
intensive systems. Such self-adaptive systems can config-
ure and reconfigure themselves, augment their functionality,
continually optimize themselves, protect themselves, and re-
cover themselves, while keeping most of their complexity
hidden from the user and administrator. In this paper, we
present a research road map for software engineering of self-
adaptive systems focusing on four views, which we identify
as essential: requirements, modelling, engineering, and as-
surances.
Keywords
Software engineering, requirements engineering, modelling,
evolution, assurances, self-adaptability, self-organization, self-
management
1. INTRODUCTION
The simultaneous explosion of information, the integration
of technology, and the continuous evolution from software-
intensive systems to ultra-large-scale (ULS) systems require
new and innovative approaches for building, running and
managing software systems [18]. A consequence of this con-
tinuous evolution is that software systems must become more
versatile, flexible, resilient, dependable, robust, energy-efficient,
recoverable, customizable, configurable, or self-optimizing
by adapting to changing operational contexts and environ-
ments. The complexity of current software-based systems
has led the software engineering community to look for in-
spiration in diverse related fields (e.g., robotics, artificial in-
telligence) as well as other areas (e.g., biology) to find new
ways of designing and managing systems and services. In
this endeavour, the capability of the system to adjust its behaviour in response to its perception of the environment and of the system itself, in the form of self-adaptation, has become one of the most promising directions.
The topic of self-adaptive systems has been studied within different research areas of software engineering, including requirements engineering, software architectures, middleware, component-based development, and programming languages. However, most of these initiatives have been isolated, and until recently there was no formal forum for dis-
cussing the topic's diverse facets. Other research communities that
have also investigated this topic from their own perspec-
tive are even more diverse: fault-tolerant computing, dis-
tributed systems, biologically inspired computing, distrib-
uted artificial intelligence, integrated management, robotics,
knowledge-based systems, machine learning, control theory,
etc. In addition, research in several application areas and
technologies has grown in importance, for example, adapt-
able user interfaces, autonomic computing, dependable com-
puting, embedded systems, mobile ad hoc networks, mobile
and autonomous robots, multi-agent systems, peer-to-peer
applications, sensor networks, service-oriented architectures,
and ubiquitous computing.
It is important to emphasise that in all the above initiatives software is the common element that enables self-adaptability, owing to its flexible nature. However, properly realizing self-adaptation remains a significant intellectual challenge, and only recently have the first attempts at building self-adaptive systems emerged within specific application domains. Moreover, little effort has been made to establish suitable software engineering approaches for the provision of self-
adaptation. In the long run, we need to establish the foundations that enable the systematic development of future
generations of self-adaptive systems. Therefore it is worth-
while to identify the commonalities and differences of the
results achieved so far in the different fields and look for
ways to integrate them.
The development of self-adaptive systems can be viewed
from two perspectives, either top-down when considering an
individual system, or bottom-up when considering cooper-
ative systems. Top-down self-adaptive systems assess their
own behaviour and change it when the assessment indicates
a need to adapt due to evolving functional or non-functional
requirements. Such systems typically operate with an ex-
plicit internal representation of themselves and their global
goals. In contrast, bottom-up self-adaptive systems (self-
organizing systems) are composed of a large number of com-
ponents that interact locally according to simple rules. The global behaviour of the system emerges from these local interactions, and it is difficult to deduce properties of the global system by studying only the local properties of its parts. Such systems do not necessarily use internal representations of global properties or goals; they are often inspired by biological or sociological phenomena. In the context of biologically inspired systems, the term self-organization, rather than self-adaptation, is usually used. Similar to our initial characterization, we distinguish “strong self-organizing systems,” where there is no explicit central control, either internal or external (bottom-up), from “weak self-organizing systems,” where, from an internal point of view, re-organization takes place, possibly under an internal (central) control or planning (top-down). Strong self-organizing systems are thus purely decentralized: access to global information is limited or impossible, and interactions occur locally (among neighbours) and are based on local information [13].
Individual and cooperative self-adaptation are thus two extreme poles. In practice, the line between them is blurred, and compromises will often lead to engineering approaches that combine elements of both. For example, ultra-large-scale systems need both top-down and bottom-up self-adaptive characteristics (e.g., the Web is basically decentralized as a global system, but local sub-webs are highly centralized). However, from the perspective of software development, the major challenge is how to accommodate, within a systematic engineering approach, traditional top-down approaches together with bottom-up approaches.
The goal of this road map paper is to summarize the current state of the art, point out its limitations, and identify critical challenges for the software engineering of
self-adaptive systems. Specifically, we intend to focus on
development methods, techniques, and tools that seem to be
required to support the systematic development of complex
software systems with dynamic self-adaptive behaviour. In
contrast to merely speculative and conjectural visions and
ad hoc approaches for systems supporting self-adaptability,
the objective of this paper is to establish a road map for
research and identify the main research challenges for the
systematic software engineering of self-adaptive systems.
To present and motivate these challenges, the paper is
structured using the four views which have been identified
as essential. Each of these views is presented in terms of the state of the art and the challenges ahead. We first review the state of the art and needs concerning require-
ments (Section 2). Then, the relevant modelling dimensions
are discussed in Section 3 before we discuss the engineering
of self-adaptive systems in Section 4. The considerations
are completed by looking into the current achievements and
needs for assurance in the context of self-adaptive systems
in Section 5. Finally, the findings are summarized in Section
6 in terms of lessons learned and future challenges.
2. REQUIREMENTS
A self-adaptive system is able to modify its behaviour ac-
cording to changes in its environment. As such, a self-
adaptive system must continuously monitor changes in its
context and react accordingly. But what aspects of the envi-
ronment should the self-adaptive system monitor? Clearly,
the system cannot monitor everything. And exactly what
should the system do if it detects a less than optimal pat-
tern in the environment? Presumably, the system must still maintain a set of high-level goals regardless of the environmental conditions, but non-critical goals could well be relaxed, allowing the system a degree of flexibility during or after adaptation.
These questions (and others) form the core considerations
for building self-adaptive systems. Requirements engineer-
ing is concerned with what a system ought to do and within
which constraints it must do it. Requirements engineer-
ing for self-adaptive systems, therefore, must address what
adaptations are possible and what constrains how those adap-
tations are carried out. In particular, questions to be addressed include: What aspects of the environment are rel-
evant for adaptation? Which requirements are allowed to
vary or evolve at runtime and which must always be main-
tained? In short, requirements engineering for self-adaptive
systems must deal with uncertainty because the expecta-
tions on the environment frequently vary over time.
2.1 State of the Art
Requirements engineering for self-adaptive systems ap-
pears to be a wide-open research area with only a limited
number of approaches yet considered. Cheng and Atlee [7]
report on some previous work on specifying and verifying
adaptive software, and on run-time monitoring of require-
ments conformance [19, 42]. They also explain how prelim-
inary work on personalized and customized software can be
applied to adaptive systems (e.g., [47, 31]). In addition,
some research approaches have successfully used goal mod-
els as a foundation for specifying the autonomic behaviour
[29] and requirements of adaptive systems [22].
One of the main challenges that self-adaptation poses is
that when designing a self-adaptive system, we cannot as-
sume that all adaptations are known in advance; that is, we cannot anticipate requirements for the entire set of possible environmental conditions and their respective adaptation specifications. For example, if a system is to respond to cyber-attacks, one cannot possibly know all attacks in
advance since malicious actors develop new attack types all
the time.
As a result, requirements for self-adaptive systems may
involve degrees of uncertainty or may necessarily be specified
as “incomplete”. The requirements specification therefore
should cope with:

- the incomplete information about the environment and the resulting incomplete information about the respective behaviour that the system should expose;
- the evolution of the requirements at runtime.
2.2 Research Challenges
This subsection highlights a number of short-term and
long-term research challenges for requirements engineering
for self-adaptive systems. We start with shorter-term chal-
lenges and progress to more visionary ideas. As far as the
authors are aware, there is little or no research currently
underway to address these challenges.
A new requirements language. Current languages for
requirements engineering are not well suited to dealing with
uncertainty, which, as mentioned above, is a key consider-
ation for self-adaptive systems. We therefore propose that
richer requirements languages are needed. Few of the ex-
isting approaches for requirements engineering provide this
capability. In goal-modelling notations such as KAOS [11]
and i* [51], there is no explicit support for uncertainty or
adaptivity. Scenario-based notations generally do not pro-
vide any support either, although live sequence charts (LSCs)
[24] have a notion of mandatory versus potential behaviour
which could possibly be used in specifying adaptive systems.
Of course, the most common notation for specifying requirements in industry is still natural language prose. Traditionally, requirements documents make statements such as “the system shall do this.” For self-adaptive systems, the
prescriptive notion of “shall” needs to be relaxed and could,
for example, be replaced with “the system may do this or it
may do that” or “if the system cannot do this, then it should
eventually do that.” This idea leads to a new requirements
vocabulary for self-adaptive systems that gives stakeholders
the flexibility to account for uncertainty in their require-
ments documents. For example:
Traditional RE:
- “the system shall do this ...”.

Adaptive RE:
- “the system might do this ...”;
- “but it may do this ...” ... “as long as it does this ...”;
- “the system ought to do this ...”, but “if it cannot, it shall eventually do this ...”.
Such a vocabulary would change the level of discourse
in requirements from prescriptive to flexible. There would
need to be a clear definition of terms, of course, as well as
a composition calculus for defining how the terms relate to
each other and compose. Multimodal logics and perhaps
new adaptation-oriented logics [53] need to be developed to
specify the semantics for what it means to have the “possi-
bility” of conditions [17, 40]. There is also a relationship
with variability management mechanisms in software prod-
uct lines [48], which also tackle built-in flexibilities. How-
ever, at the requirements level, one ideally would capture
uncertainty at a more abstract level than simply enumerat-
ing alternatives.
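To make this idea concrete, the following sketch (ours, not from the seminar; the operators, class names, and example requirement are all hypothetical) shows how such a vocabulary might be encoded so that each requirement carries an explicit modality, with an "ought" requirement falling back to a weaker "shall" requirement:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable, Optional

    class Modality(Enum):
        SHALL = "shall"   # invariant: must always hold
        MAY = "may"       # permitted behaviour; never violated by inaction
        OUGHT = "ought"   # preferred behaviour with an acceptable fallback

    @dataclass
    class Requirement:
        text: str
        modality: Modality
        holds: Callable[[dict], bool]             # evaluated on a system snapshot
        fallback: Optional["Requirement"] = None  # used when an OUGHT fails

    def satisfied(req: Requirement, snapshot: dict) -> bool:
        """Interpret one requirement under its modality for one snapshot."""
        if req.modality is Modality.SHALL:
            return req.holds(snapshot)
        if req.modality is Modality.MAY:
            return True
        return req.holds(snapshot) or (
            req.fallback is not None and satisfied(req.fallback, snapshot))

    # "The system ought to respond within 1 s; if it cannot, it shall
    # eventually respond within 5 s."
    fast = Requirement(
        "respond within 1 s", Modality.OUGHT,
        holds=lambda s: s["response_s"] <= 1.0,
        fallback=Requirement("respond within 5 s", Modality.SHALL,
                             holds=lambda s: s["response_s"] <= 5.0))
    print(satisfied(fast, {"response_s": 3.2}))  # True: the fallback absorbs the miss

A real notation would, of course, still need the composition calculus and temporal semantics discussed above; the sketch only illustrates making the modality a first-class attribute of a requirement.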
Mapping to architecture. Given a new requirements lan-
guage that explicitly handles uncertainty, it will be necessary
to provide systematic methods for refining models in this
language down to specific architectures that support run-
time adaptation. There are a variety of technical options for
implementing reconfigurability at the architecture level, in-
cluding component-based, aspect-oriented and product-line
based approaches, as well as combinations of these. Poten-
tially, there could be a large gap in expressiveness between
a requirements language that incorporates uncertainty and
these existing architecture structuring methods. One can
imagine, therefore, a semi-automated process for mapping
to architecture where heuristics and/or patterns are used to
suggest architectural units corresponding to certain vocab-
ulary terms in the requirements.
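As a toy illustration of such a heuristic mapping (our sketch; the rule table is invented and deliberately simplistic), a tool could associate each vocabulary term with candidate architectural mechanisms and suggest them to the designer:

    # Hypothetical heuristic table: requirements vocabulary term ->
    # candidate architectural mechanisms providing the needed flexibility.
    HEURISTICS = {
        "shall": ["statically wired component"],       # no run-time variability
        "may":   ["optional plug-in", "product-line variant"],
        "ought": ["strategy component with fallback", "aspect woven at run time"],
    }

    def suggest_architecture(term: str) -> list[str]:
        # Unknown terms fall through to a human architect.
        return HEURISTICS.get(term, ["manual architectural review"])

    for term in ("shall", "ought", "unknown"):
        print(term, "->", suggest_architecture(term))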
Managing uncertainty. In general, once we start intro-
ducing uncertainty into our software engineering processes,
we must have a way of managing this uncertainty and the
inevitable complexity associated with handling so many un-
knowns. Certain requirements will not change (i.e., invari-
ants), whereas others will permit a degree of flexibility. For
example, a system cannot start out as a transport robot
and self-adapt into a robot chef! Allowing uncertainty lev-
els when developing self-adaptive systems requires a trade-
off between flexibility and assurance such that the critical
high-level goals of the application are always met [52, 39,
28].
Requirements reflection. As said above, self-adaptation
deals with requirements that vary at runtime. Therefore it is important that requirements lend themselves to being dynamically observed, i.e., during execution. Reflection [34, 27, 10] enables a system to observe its own structure and behaviour. A relevant research effort is the ReqMon tools [38], which provide a requirements monitoring framework focusing on temporal properties to be maintained. Leverag-
ing and extending beyond these complementary approaches,
Finkelstein [20] coins the term “requirements reflection” for the idea of systems being aware of their own requirements
at runtime. This would require an appropriate model of the
requirements to be available online. Such an idea brings with
it a host of interesting research questions, such as: Could
a system dynamically observe its requirements? In other
words, can we make requirements runtime objects? Future
work is needed to examine how technologies may provide the
infrastructure to do this.
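As one possible shape of such infrastructure (a minimal sketch of ours; it does not reproduce ReqMon's actual API, and all names are invented), requirements can be kept as live objects that the running system queries:

    import time

    class RuntimeRequirement:
        """A requirement kept alive at run time so the system can observe it."""
        def __init__(self, rid, text, check):
            self.rid, self.text, self.check = rid, text, check
            self.history = []  # (timestamp, satisfied) pairs for later analysis

        def observe(self, snapshot):
            ok = self.check(snapshot)
            self.history.append((time.time(), ok))
            return ok

    class RequirementsModel:
        """The online requirements model a reflective system can query."""
        def __init__(self, requirements):
            self.reqs = {r.rid: r for r in requirements}

        def violated(self, snapshot):
            return [r for r in self.reqs.values() if not r.observe(snapshot)]

    # At run time the system can ask: "which of my requirements fail right now?"
    model = RequirementsModel([
        RuntimeRequirement("R1", "latency below 200 ms",
                           lambda s: s["latency_ms"] < 200),
        RuntimeRequirement("R2", "at least one replica alive",
                           lambda s: s["replicas"] >= 1),
    ])
    for r in model.violated({"latency_ms": 250, "replicas": 2}):
        print("violated:", r.rid, "-", r.text)  # reports R1 only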
Online goal refinement. As in the case of design de-
cisions that are eventually realized at runtime, new and
more flexible requirement specifications like the one sug-
gested above would imply that the system should perform
the RE processes at runtime, e.g., goal refinement [28].
Traceability from requirements to implementation.
A constant challenge in all the topics shown above is “dy-
namic” traceability. For example, new operators of a new
RE specification language should be easily traceable down
to architecture, design, and beyond. Furthermore, if the RE
process is performed at runtime we need to assure that the
final implementation or behaviour of the system matches
the requirements. Doing so is different from traditional requirements traceability.
2.3 Final Remarks
In this section, we have presented several important re-
search challenges that the requirements engineering com-
munity will face as the demand for self-adaptive systems
continues to grow. These challenges span RE activities dur-
ing the development phases and runtime. In order to gain assurance about adaptive behaviour, it is important to mon-
itor adherence and traceability to the requirements during
runtime. Furthermore, it is also necessary to acknowledge
and support the evolution of requirements at runtime. Given
the increasing complexity of applications requiring runtime
adaptation, the software artifacts that developers manipulate and analyze must be more abstract than source code. How can graphical models, formal specifications, policies, etc., rather than source code (the traditional artifact manipulated once a system has been deployed), be used as the basis for the evolutionary process of adaptive systems?
How can we maintain traceability among relevant artifacts,
including the code? How can we maintain assurance con-
straints during and after adaptation? How much should a
system be allowed to adapt and still maintain traceability
to the original system? Clearly, the ability to dynamically
adapt systems at runtime is an exciting and powerful ca-
pability. The RE community, among other software engineering disciplines, needs to be proactive in tackling these
complex challenges in order to ensure that useful and safe
adaptive capabilities are provided to the adaptive systems
developers.
3. MODELLING
Endowing a system with a self-adaptive property can take
many different shapes. How self-adaptation is to be conceived depends on various aspects, such as user needs, environment characteristics, and other system properties.
Understanding the problem and selecting a suitable solution
requires precise models for representing important aspects of
the self-adaptive system, its users, and its environment.
In this section, we provide a classification of modelling
dimensions for self-adaptive systems. Each dimension de-
scribes a particular aspect of the system that is relevant for
self-adaptation. Note that it is not our ambition to be ex-
haustive in all possible dimensions, but rather to give an ini-
tial impetus towards defining a framework for modelling self-
adaptive systems. Some of these dimensions could equally
be applied to the environment and the users of the system
(in addition to other specific dimensions), but here we have
focused on the system itself.
For the identification of the system modelling dimensions,
two perspectives were considered: the abstraction levels as-
sociated with the system, and the activities associated with
the adaptation. The first perspective refers to the require-
ments (e.g., goals), the design (e.g., architecture), and the
code of the software system, and the second refers to the key
activities of the feedback control loop, i.e., collect, analyse,
decide, and act.
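To fix intuitions, one iteration of this feedback loop can be sketched as follows (our illustration; the functions, thresholds, and goal names are invented, loosely anticipating the UV scenario of Section 3.1):

    def adaptation_cycle(collect, analyse, decide, act, goals):
        """One iteration of the generic feedback loop: collect, analyse, decide, act."""
        data = collect()                # collect: sense system and environment
        symptoms = analyse(data)        # analyse: detect deviations from goals
        plan = decide(symptoms, goals)  # decide: choose an adaptation, if any
        if plan is not None:
            act(plan)                   # act: effect the change on the running system

    adaptation_cycle(
        collect=lambda: {"obstacle_m": 12.0, "speed_mps": 8.0},
        analyse=lambda d: ("collision-risk"
                           if d["obstacle_m"] / d["speed_mps"] < 2.0 else None),
        decide=lambda s, g: ("swerve"
                             if s == "collision-risk" and "avoid-collision" in g
                             else None),
        act=lambda plan: print("override control:", plan),
        goals={"avoid-collision"},
    )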
In the following, we present the dimensions in terms of
three groups. First, we introduce the modelling dimensions
that can be associated with the adaptation activities of the
feedback control loop, giving special emphasis to decision
making. The other two groups are related to non-functional
properties, i.e., timing and dependability, that are particu-
larly relevant to some classes of self-adaptive systems. The
proposed modelling framework is presented in the context
of an illustrative case from the class of embedded systems,
however, these dimensions were equally useful in describing
the self-adaptation properties of an IT change management
system.
3.1 Illustrative Case
As an illustrative scenario, we consider the problem of ob-
stacle/vehicle collisions in the domain of unmanned vehicles
(UVs). A concrete application could be the DARPA Grand
Challenge contest [44]. Each UV is provided with an au-
tonomous control software system (ACS) to drive the vehicle
from start to destination along the road network. The ACS
takes into account the regular traffic environment, including
the traffic infrastructure and other vehicles. The scenario
we envision is the one in which there is a UV driving on the
road through a region where people and animals can cross
the road unexpectedly. To anticipate possible collisions, the
ACS is extended with a self-adaptable control system (SCS).
The SCS monitors the environment and controls the vehicle
when a human being or an animal is detected in front of the
vehicle. When an obstacle is detected, the SCS manoeuvres the UV around the obstacle, negotiating other obstacles and vehicles. Thus, the SCS extends the ACS with self-
adaptation to avoid collisions with obstacles on the road.
3.2 Overview of Modelling Dimensions
We give an overview of the important modelling dimensions
per group. Each dimension is illustrated with an example
from the illustrative case.
Adaptation
The first group describes the modelling dimensions related
to adaptation.
Type of adaptability. The type of adaptability refers
to the particular kind of adaptation applied. The domain
of type of adaptability ranges from parametric to composi-
tional. Self-adaptivity can be realized by simple local para-
metric changes of a system component, for example, or it
can involve major architectural level structural changes. In
the illustrative case, to avoid collisions with obstacles, the
SCS has to adjust the movements of the UV, and this might
imply adjusting parameters in the steering gear.
Degree of automation. The automation dimension refers
to the degree of human intervention required for self-adaptation.
The domain of degree of automation ranges from autonomous
to human-based. Adaptive systems may be fully automatic, requiring no human intervention, or they may require human decision making, or at least confirmation or approval.
In the illustrative example, the UV has to avoid collisions
with animals without any human intervention.
Form of organization. The form of organization refers to
the type of organization used to realize self-adaptation. The
domain of form of organization ranges from weak (or central-
ized) to strong (or decentralized). In a strong organization, the behaviour of components reflects their local environment; there is no global model of the system. Driven by changing
requirements, the components change their structure or be-
haviour to self-adapt the system. This self-organizing form
of self-adaptation can be collaborative, market-based, and so
on. In a weak organization, adaptation is achieved through a
global system model, which incorporates a feedback control
loop, for example. A self-adaptive subsystem monitors the
base system possibly maintaining an explicit representation
of the system, and based on a set of high-level goals, the
structure or behaviour of the system is adapted. Section 4
elaborates on the different forms of organization to realize
self-adaptation. The SCS of the UV in the illustrative ex-
ample seems to fit naturally with a weak organization.
Techniques for adaptability. Techniques for adaptabil-
ity refer to the way self-adaptation is accomplished. The do-
main of techniques for adaptability ranges from data-oriented
to process-oriented [46]. In a data-oriented approach, the
system is characterised as acted upon, by providing the cri-
teria for identifying objects, often by modelling the objects
themselves. In a process-oriented approach, the system is
characterised as sensed, by providing the means for produc-
ing or generating objects having the desired characteristics.
In the illustrative case, the SCS will monitor the environ-
ment for obstacles that suddenly appear in front of the vehi-
cle and subsequently guide the vehicle around the obstacle
to avoid a collision. To realize this form of self-adaptability,
the SCS senses the environment of the UV, and depending
on the controller, which is part of the system model, it pro-
duces the appropriate system output.
Place of change. The place of change refers to the loca-
tion where self-adaptation takes place. The domain of place
of change includes the values application, middleware, or in-
frastructure. Self-adaptation can be realized by monitoring
and adapting the application logic, the supporting middle-
ware, or the infrastructure that defines the system. In the
illustrative case, self-adaptation is realized by the SCS that
is part of the application logic.
Abstraction of adaptability. This modelling dimension
refers to the abstraction level at which self-adaptation is ap-
plied. The domain of abstraction of adaptability refers to
requirements, design, and implementation, and their respec-
tive products, for example, goals, architectures and code.
An example of adaptation at the design level is the dynamic
reconfiguration of the system architecture. Another exam-
ple of adaptation at the design level can be the selection of
an alternative algorithm. An example of adaptation at the
level of code is dynamic weaving of additional code. To avoid
collisions, the SCS may pass particular control information to the ACS, an adaptation that seems to fit best at the abstraction level of design.
Impact of adaptability. This modelling dimension refers
to the impact that adaptation might have upon the system.
The domain of impact of adaptability ranges from specific
to generic. Adaptability is specific if it affects a particular
component or part of the system. On the other hand, if the
adaptability affects the whole system, its impact is generic.
In the illustrative case, if the steering gear fails, the self-adaptation would be generic, since collision avoidance affects the overall system's behaviour.
Trigger of adaptability. This modelling dimension refers to whether the agent of change is internal or external to the system. A failure in a system component is considered an internal trigger for reconfiguring the system structure or changing the services it provides, while the appearance of an obstacle is an external trigger, since the system has to change its behaviour in order to avoid a collision.
In addition to the above modelling dimensions that can be
applied to the system as a whole, there are some dimensions
related specifically to the key activities of the feedback con-
trol loop. In the following, we present two such modelling dimensions, both related to decision making.
Degree of decision making. The degree of decision mak-
ing expresses to what extent self-adaptation is defined in
advance. The domain ranges from static (or pre-defined) to
dynamic (or run-time). For static decision making, the sce-
narios of self-adaptation are exhaustively defined before the
system is deployed. For dynamic decision making, the deci-
sion of self-adaptation will be made during execution based
on a set of high-level goals. In the illustrative example, the
SCS monitors the environment and decides at run-time when
it has to take control over the ACS to avoid collisions.
Techniques for decision making. This modelling dimen-
sion refers to the procedures and methods used to determine
when to apply self-adaptation. Values of the domain of tech-
niques for decision making are utility functions, case-based
reasoning, etc. The SCS will likely use a reasoning-like ap-
proach to determine when the vehicle is in collision range
with an obstacle.
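A utility-function-based decision step, for instance, could rank candidate adaptations by a weighted score over predicted outcomes; in this invented example (the options, numbers, and weights are ours, not from the paper), safety dominates the trade-off:

    # Candidate manoeuvres with predicted outcomes (all values illustrative).
    candidates = {
        "brake":  {"collision_risk": 0.10, "route_delay_s": 20.0},
        "swerve": {"collision_risk": 0.05, "route_delay_s": 5.0},
        "ignore": {"collision_risk": 0.90, "route_delay_s": 0.0},
    }

    def utility(outcome):
        # Safety dominates; delay is a mild penalty. Weights are illustrative.
        return -100.0 * outcome["collision_risk"] - 0.1 * outcome["route_delay_s"]

    best = max(candidates, key=lambda c: utility(candidates[c]))
    print(best)  # "swerve": the best balance of risk and delay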
Timing
The second group describes modelling dimensions related to
timing issues.
Responsiveness. The responsiveness of self-adaptation refers to the kind of response the adaptation process must deliver. The domain ranges from guaranteed to best-effort. For critical scenarios, self-adaptation is required to be guaranteed; however, in less critical situations, best-effort will suffice. In the illustrative example, the SCS must guarantee that the UV reacts effectively to avoid collisions, possibly with a human being.
Performance. The performance dimension refers to the de-
gree of predictability of self-adaptation. The domain ranges
from predictable to degradable. In time-critical cases, the
self-adaptable system often needs to act in a highly pre-
dictable manner. In other cases, a graceful degradation of
the system is acceptable. In the illustrative case, when an
obstacle appears, the SCS will manoeuvre the UV in such a
way that a collision should be avoided. In order to accom-
plish this task predictably, other system tasks might have
their performance affected.
Triggering. The triggering dimension of self-adaptation
refers to the initiation of the adaptation process. The do-
main of triggering ranges from event to time. The cause for
self-adaptation is event triggered when the process is ini-
tiated whenever there is a significant change in the state,
i.e., an event. The cause for self-adaptation is time trig-
gered when the process is initiated at predetermined points
in time. Obstacles in the illustrative case appear unexpectedly, and as such the triggering of self-adaptation is event-based.
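The two trigger styles differ only in what initiates the adaptation cycle, as this small sketch shows (ours; the event names and periods are invented):

    import time

    def event_triggered(events, adapt):
        """Initiate adaptation whenever a significant state change (event) occurs."""
        for event in events:
            if event == "obstacle-detected":
                adapt(event)

    def time_triggered(period_s, ticks, adapt):
        """Initiate adaptation at predetermined points in time."""
        for _ in range(ticks):
            time.sleep(period_s)
            adapt("periodic-check")

    event_triggered(["clear", "obstacle-detected", "clear"], print)
    time_triggered(0.01, 2, print)  # two 10 ms periods, for illustration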
Dependability
The third and final group we consider describes modelling
dimensions related to dependability, that is, the ability of
a system to deliver a service that can justifiably be trusted
[1].
Reliability, availability, confidentiality. Reliability, avail-
ability, and confidentiality are attributes of dependability.
The domain of each of these properties ranges from high to
low. In the illustrative case, the reliability of the SCS avoid-
ing a collision is expected to be high.
Safety. The safety dimension refers to the absence of catastrophic consequences on the user and the environment that could be caused by the self-adaptation. The domain of safety ranges
References

Ljung, L.: System Identification.

Dorf, R.C., Bishop, R.H.: Modern Control Systems.

Dardenne, A., van Lamsweerde, A., Fickas, S.: Goal-directed requirements acquisition.

Yu, E.: Towards modelling and reasoning support for early-phase requirements engineering.