
Software Engineering for Self-Adaptive Systems: A Research Roadmap

TL;DR: The goal of this roadmap paper is to summarize the state-of-the-art and to identify critical challenges for the systematic software engineering of self-adaptive systems.
Abstract: The goal of this roadmap paper is to summarize the state-of-the-art and to identify critical challenges for the systematic software engineering of self-adaptive systems. The paper is partitioned into four parts, one for each of the identified essential views of self-adaptation: modelling dimensions, requirements, engineering, and assurances. For each view, we present the state-of-the-art and the challenges that our community must address. This roadmap paper is a result of the Dagstuhl Seminar 08031 on "Software Engineering for Self-Adaptive Systems," which took place in January 2008.

Summary

1. INTRODUCTION

  • The simultaneous explosion of information, the integration of technology, and the continuous evolution from software-intensive systems to ultra-large-scale (ULS) systems requires new and innovative approaches for building, running and managing software systems [18].
  • A consequence of this continuous evolution is that software systems must become more versatile, flexible, resilient, dependable, robust, energy-efficient, recoverable, customizable, configurable, or self-optimizing by adapting to changing operational contexts and environments.
  • The proper realization of the self-adaptation functionality still remains a significant intellectual challenge, and only recently have the first attempts in building self-adaptive systems emerged within specific application domains.
  • The global behaviour of the system emerges from these local interactions.
  • Each of these views is roughly presented in terms of the state of the art and the challenges ahead.

2. REQUIREMENTS

  • As such, a self-adaptive system must continuously monitor changes in its context and react accordingly.
  • But noncritical goals could well be relaxed, thus allowing the system a degree of flexibility during or after adaptation.
  • Requirements engineering for self-adaptive systems, therefore, must address what adaptations are possible and what constrains how those adaptations are carried out.

2.1 State of the Art

  • Requirements engineering for self-adaptive systems appears to be a wide open research area with only a limited number of approaches yet considered.
  • The authors therefore propose that richer requirements languages are needed.
  • For self-adaptive systems, the prescriptive notion of "shall" needs to be relaxed and could, for example, be replaced with "the system may do this or it may do that" or "if the system cannot do this, then it should eventually do that."
  • This idea leads to a new requirements vocabulary for self-adaptive systems that gives stakeholders the flexibility to account for uncertainty in their requirements documents.

Traceability from requirements to implementation.

  • A constant challenge in all the topics shown above is "dynamic" traceability.
  • New operators of a new RE specification language should be easily traceable down to architecture, design, and beyond.
  • Doing so is different from the traditional requirements traceability.

2.3 Final Remarks

  • The authors have presented several important research challenges that the requirements engineering community will face as the demand for self-adaptive systems continues to grow.
  • These challenges span RE activities during the development phases and runtime.
  • In order to gain assurance about adaptive behaviour, it is important to monitor adherence and traceability to the requirements during runtime.
  • Given the increasing complexity of applications requiring runtime adaptation, the software artifacts that developers manipulate and analyze must be more abstract than source code.

3. MODELLING

  • Endowing a system with a self-adaptive property can take many different shapes.
  • Understanding the problem and selecting a suitable solution requires precise models for representing important aspects of the self-adaptive system, its users, and its environment.
  • For the identification of the system modelling dimensions, two perspectives were considered: the abstraction levels associated with the system, and the activities associated with the adaptation.
  • The first perspective refers to the requirements (e.g., goals), the design (e.g., architecture), and the code of the software system, and the second refers to the key activities of the feedback control loop, i.e., collect, analyse, decide, and act.
  • The other two groups are related to non-functional properties, i.e., timing and dependability, that are particularly relevant to some classes of self-adaptive systems.

3.1 Illustrative Case

  • A concrete application could be the DARPA Grand Challenge contest [44].
  • The ACS takes into account the regular traffic environment, including the traffic infrastructure and other vehicles.
  • The scenario the authors envision is the one in which there is a UV driving on the road through a region where people and animals can cross the road unexpectedly.
  • To anticipate possible collisions, the ACS is extended with a self-adaptable control system (SCS).
  • In case an obstacle is detected, the SCS manoeuvres the UV around the obstacle negotiating other obstacles and vehicles.

3.2 Overview of Modelling Dimensions

  • The authors give an overview of the important modelling dimensions per group.
  • Each dimension is illustrated with an example from the illustrative case.

Adaptation

  • The first group describes the modelling dimensions related to adaptation.
  • The domain of type of adaptability ranges from parametric to compositional.
  • It is imperative for the software engineering community to develop better models that incorporate the AI techniques in solving the practical problems of automatic adaptive systems.
  • Moreover, when applied to large-scale software systems, almost all current techniques suffer from scalability problems.
  • Principled approaches for efficient gathering of information at run-time are needed.

Degree of automation

  • The automation dimension refers to the degree of human intervention required for self-adaptation.
  • In a process-oriented approach, the system is characterised as sensed, by providing the means for producing or generating objects having the desired characteristics.
  • In the illustrative case, self-adaptation is realized by the SCS that is part of the application logic.
  • This modelling dimension refers to the impact that adaptation might have upon the system.
  • In the illustrative example, the SCS monitors the environment and decides at run-time when it has to take control over the ACS to avoid collisions.

Timing

  • The second group describes modelling dimensions related to timing issues.
  • In the illustrative example, the SCS must guarantee that the UV reacts effectively to avoid collisions, possibly with a human being.
  • The domain ranges from predictable to degradable.
  • In time-critical cases, the self-adaptable system often needs to act in a highly predictable manner.
  • Monitoring a system, especially when there are several different QoS properties of interest, has an overhead.

Dependability

  • The third and final group the authors consider describes modelling dimensions related to dependability, that is, the ability of a system to deliver a service that can justifiably be trusted [1].
  • The domain of each of these properties ranges from high to low.
  • The domain of maintainability ranges from autonomous to human-based.
  • Since the illustrative example is related to an embedded real-time system, the data integrity will be short-term.

3.3 Challenges Ahead

  • In spite of the many years of software engineering research, construction of self-adaptive software systems has remained a very challenging task.
  • The discussion is structured in line with the three presented groups of modelling dimensions.

4. ENGINEERING

  • Building self-adaptive software systems cost-effectively and in a predictable manner is a major engineering challenge even though adaptive systems have a long history with huge successes in many different branches of engineering [49, 16].
  • Mining the rich experiences in these fields, borrowing theories from control engineering, and then applying the findings to software-intensive adaptive systems is a most worthwhile and promising avenue of research.
  • Lehman's work on software evolution [30] has shown that "[t]he software process constitutes a multilevel, multiloop feedback system and must be treated as such if major progress in its planning, control, and improvement is to be achieved."
  • Therefore, any attempt to automate parts of these processes such as self-adaptive systems necessarily also has to consider feedback loops.
  • Therefore, the authors advocate focusing on the feedback loop, a concept that is elevated to a first-class entity in control engineering, when engineering self-adaptive software systems.

4.1 State of the Art & Feedback Loops

  • Self-adaptation in software-intensive systems comes in many different guises.
  • The reasoning typically involves feedback processes with four key activities: collect, analyze, decide, and act, as depicted in Figure 1 [14].
  • Keeping web services up and running for a long time requires collecting information that reflects the current state of the system, analyzing that information to diagnose performance problems or to detect failures, deciding how to resolve the problem (e.g., via dynamic load-balancing or healing), and acting to effect the decision made.
  • The authors have observed that feedback loops are often hidden, abstracted, or internalized when presenting the architecture of self-adaptive systems [36].
  • Therefore, besides making the control loops explicit, the control loops' properties have to be made explicit as well.

Generic Control Loop Model

  • The generic model of a control loop presented in Figure 1 provides a good overview of the main activities around the feedback loop, but ignores many properties of the control and data flow around the loop.
  • Next, the system analyzes the collected data.
  • Several questions about the control and data flow around the loop are applicable here.
  • These questions, and many others, regarding the control loop should be explicitly identified, recorded, and resolved during the development of the self-adaptive system.
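To make the four activities concrete, here is a minimal sketch (in Python; not from the paper) that wires collect, analyse, decide, and act into a single loop. The sensor name, the latency goal, and the scale-out actuator are invented placeholders standing in for the web-services example above.

    # Minimal sketch of the generic collect-analyse-decide-act loop.
    # Sensor/actuator names and the latency goal are hypothetical.
    class FeedbackLoop:
        def __init__(self, sensors, actuators, goal_latency_ms=200):
            self.sensors = sensors            # name -> callable returning a reading
            self.actuators = actuators        # name -> callable effecting a change
            self.goal_latency_ms = goal_latency_ms

        def collect(self):
            # Gather data reflecting the current state of the system.
            return {name: read() for name, read in self.sensors.items()}

        def analyse(self, data):
            # Diagnose a single symptom: response time above the goal.
            return {"overloaded": data["latency_ms"] > self.goal_latency_ms}

        def decide(self, symptoms):
            # Choose a resolution, e.g. add a replica when overloaded.
            return "scale_out" if symptoms["overloaded"] else None

        def act(self, decision):
            # Effect the decision through the matching actuator.
            if decision is not None:
                self.actuators[decision]()

        def run_once(self):
            self.act(self.decide(self.analyse(self.collect())))

    loop = FeedbackLoop(sensors={"latency_ms": lambda: 250},
                        actuators={"scale_out": lambda: print("adding replica")})
    loop.run_once()  # latency 250 ms exceeds the 200 ms goal, so it scales out

Even this toy version makes the questions above concrete: what data is collected, how often, and which component is allowed to act on the decision.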

Control Theory

  • The control loop is a central element of control theory, which provides well-established mathematical models, tools, and techniques to analyze system performance, stability, sensitivity, or correctness [6, 15].
  • Researchers have applied results of control theory and engineering when building selfadaptive systems.
  • Good engineering practice calls for reducing multiple control loops to a single one, or making control loops independent of each other [37] .
  • If this is not possible, the design must make the interactions of control loops explicit and expose how these interactions are handled.
  • Mining the experiences in these fields and applying them to software-intensive adaptive systems is a most worthwhile next step.

Specific Control Loop Models

  • Another key observation that the authors made is that different application areas introduce different nomenclature and architectural diagrams for their realization of the generic feedback loop depicted in Figure 1.
  • Control engineering leverages the Model Reference Adaptive Control (MRAC) solution to describe many kinds of feedback-based systems (e.g., flight control) [16] .
  • Further, the systems are often decentralized in such a way that the agents do not have a sense of the global goal but rather it is the interaction of their local behavior that yields the global goal as an emergent property.
  • An example of a self-organizing biologically inspired software system is a distributed computational system built using the tile architectural style from [3].
  • In an attempt to unify the self-adaptive (top-down) and self-organising (bottom-up) views, [12] propose a software architecture based on the use of metadata and policies where adaptation properties and feedback loop reasoning are considered explicitly both at design-time and run-time.
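As a concrete reading of the MRAC scheme mentioned above, the sketch below implements the classic gain-adaptation example with the MIT rule, discretized in Python. The plant, reference model, and numeric gains are invented example values, not taken from [16]; only the overall structure (model reference, adaptive law, controller, process) follows the MRAC pattern.

    # MRAC sketch: adapt a feedforward gain theta with the MIT rule so the
    # plant output y tracks the reference model output ym. Plant and model
    # share dynamics; only the plant gain b is unknown to the designer.
    dt, gamma = 0.01, 0.5          # step size and (made-up) adaptation gain
    a, b = 2.0, 4.0                # plant:           y'  = -a*y  + b*u
    am, bm = 2.0, 2.0              # reference model: ym' = -am*ym + bm*r
    y = ym = theta = 0.0

    def step(r):
        """One control step for reference input r."""
        global y, ym, theta
        u = theta * r                    # control law: adjustable feedforward gain
        y += dt * (-a * y + b * u)       # integrate the plant
        ym += dt * (-am * ym + bm * r)   # integrate the reference model
        e = y - ym                       # model-following error
        theta += dt * (-gamma * e * ym)  # MIT rule: d(theta)/dt = -gamma*e*ym
        return e

    for _ in range(5000):                # track a unit step; theta tends to bm/b
        step(1.0)

Because the model reference, adaptive law, controller, and process are separate pieces, each could be replaced independently, which is the separation of concerns that makes MRAC attractive as a template for self-adaptive software.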

4.2 Challenges Ahead

  • The authors have argued that the control loop should be a first-class entity when thinking about the engineering of self-adaptive systems.
  • The authors believe that understanding and reasoning about the control loop is key for advancing the construction of self-adaptive systems from an ad-hoc, trial-and-error endeavor towards a more disciplined approach.
  • For such systems the control loop seems implicitly present.
  • Examples of such properties are system state that is used to reason about the system's behavior, and policies and business goals that govern and constrain how the system will and can adapt.
  • Furthermore, users might want feedback from the system about the information collected by sensors and how this information is used to adapt the system.

5. ASSURANCES

  • Developers need to provide evidence that the set of stated functional and non-functional properties are satisfied during the system's operation.
  • Traditional verification and validation methods, static or dynamic, rely on stable descriptions of software models and properties.
  • Current verification and validation methods do not align well with changing goals and requirements as well as variable software functionality.
  • Novel verification and validation (V&V) methods are required to provide assurance in self-adaptive systems.
  • Thereafter, the authors present a set of research challenges for V&V methods implied by the presented framework.

5.1 Framework

  • Over a period of operation, the system operates through a series of operational modes.
  • The sequences of behavioral adjustments within the known modes are known in advance.
  • Goals and requirements of a self-adaptive system may also change during run-time.
  • Properties can also be related to each other.

5.2 Challenges

  • Self-adapting systems have to contend with dynamic changes in modes and contexts as well as the dynamic changes in user requirements.
  • When formal property proofs do not seem feasible, run-time assurance techniques may rely on demonstrable properties of adaptation, like convergence and stability.
  • Software vendors may have a difficult time arguing that they applied the expected care when developing a critical application if the software is self-adaptive.
  • Software may enter unforeseeable states that have never been tested or reasoned about.
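The convergence and stability properties mentioned above can be read as run-time monitors over the adaptation error. The sketch below is a hypothetical illustration in Python, with an invented error bound and window size; it is not a V&V technique defined in the paper.

    # Run-time monitor for two demonstrable properties of adaptation:
    # stability (error stays bounded) and convergence (error shrinks).
    from collections import deque

    class AdaptationMonitor:
        def __init__(self, bound=1.0, window=50):
            self.bound = bound                  # largest tolerated |error|
            self.errors = deque(maxlen=window)  # sliding window of |error|

        def observe(self, error):
            self.errors.append(abs(error))

        def stable(self):
            # Stability: no observed error exceeds the allowed bound.
            return all(e <= self.bound for e in self.errors)

        def converging(self):
            # Convergence: the recent half of the window improves on the older half.
            if len(self.errors) < self.errors.maxlen:
                return True                     # not enough evidence yet
            half = self.errors.maxlen // 2
            hist = list(self.errors)
            return sum(hist[half:]) < sum(hist[:half])

A violation reported by such a monitor would then have to trigger a fallback, for example handing control to a human operator or to a conservative baseline behaviour.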

6. LESSONS AND CHALLENGES

  • The authors present the overall conclusions of the road map paper in the context of lessons learned and the major challenges identified.
  • First and foremost, this exercise had no intention of being exhaustive.
  • In the following, for each of the four views, the authors present some of the identified challenges.
  • The major challenge here is the definition of a new requirements language that would be able to capture uncertainty at a more abstract level.
  • Considering that requirements might vary at run-time, systems should be made aware of their own requirements, hence the need for "requirements reflection" and online goal refinement.

Modelling.

  • The more precise the models are, the more effective they should be in supporting run-time analysis and decision making.
  • At the same time, models should be sufficiently simple, otherwise synthesis might become infeasible.
  • The definition of utility functions for supporting decision making is a challenging task, and practical techniques are needed to specify and generate these utility functions.
  • Once loops become more explicit, it becomes much easier to reify properties, so they can be queried and modified at run-time.
  • For facilitating the reasoning between system properties and its control loops, reference architectures should be defined that highlight key aspects of these loops, such as, number, structural arrangements, interactions and stability conditions.

Assurances.

  • The major challenge here is to supplement traditional methods applied at the requirements and design stages of development with run-time assurances.
  • There are uncertainties associated with this process, hence probabilistic approaches are a promising research direction.
  • This might be achieved with adaptation-specific model-driven environments.
  • The authors conclude that all four theses refer to new challenges that the software engineering of self-adaptive systems has to face, challenges which result from the dynamics of adaptation.
  • These dynamics require that well-proven principles and techniques valid for standard software engineering be questioned and new solutions considered.


Software Engineering for
Self-Adaptive Systems:
A Research Road Map
(Draft Version)
Betty H.C. Cheng, Rogério de Lemos, Holger Giese, Paola Inverardi, Jeff Magee
(Dagstuhl Seminar Organizer Authors)
Jesper Andersson, Basil Becker, Nelly Bencomo, Yuriy Brun, Bojan Cukic, Giovanna
Di Marzo Serugendo, Schahram Dustdar, Anthony Finkelstein, Cristina Gacek, Kurt
Geihs, Vincenzo Grassi, Gabor Karsai, Holger Kienle, Jeff Kramer, Marin Litoiu, Sam
Malek, Raffaela Mirandola, Hausi Müller, Sooyong Park, Mary Shaw, Matthias Tichy,
Massimo Tivoli, Danny Weyns, Jon Whittle
(Dagstuhl Seminar Participant Authors)
Contact Emails: r.delemos@kent.ac.uk, holger.giese@hpi.uni-potsdam.de
ABSTRACT
Software’s ability to adapt at run-time to changing user
needs, system intrusions or faults, changing operational en-
vironment, and resource variability has been proposed as
a means to cope with the complexity of today’s software-
intensive systems. Such self-adaptive systems can config-
ure and reconfigure themselves, augment their functionality,
continually optimize themselves, protect themselves, and re-
cover themselves, while keeping most of their complexity
hidden from the user and administrator. In this paper, we
present a research road map for the software engineering of self-
adaptive systems focusing on four views, which we identify
as essential: requirements, modelling, engineering, and as-
surances.
Keywords
Software engineering, requirements engineering, modelling,
evolution, assurances, self-adaptability, self-organization, self-
management
1. INTRODUCTION
The simultaneous explosion of information, the integration
of technology, and the continuous evolution from software-
intensive systems to ultra-large-scale (ULS) systems requires
new and innovative approaches for building, running and
managing software systems [18]. A consequence of this con-
tinuous evolution is that software systems must become more
versatile, flexible, resilient, dependable, robust, energy-efficient,
recoverable, customizable, configurable, or self-optimizing
by adapting to changing operational contexts and environ-
ments. The complexity of current software-based systems
has led the software engineering community to look for in-
spiration in diverse related fields (e.g., robotics, artificial in-
telligence) as well as other areas (e.g., biology) to find new
ways of designing and managing systems and services. In
this endeavour, the capability of the system to adjust its
behaviour in response to its perception of the environment
and the system itself in form of self-adaptation has become
one of the most promising directions.
The topic of self-adaptive systems has been studied within
the different research areas of software engineering, includ-
ing, requirements engineering, software architectures, mid-
dleware, component-based development, and programming
languages; however, most of these initiatives have been iso-
lated and, until recently, without a formal forum for dis-
cussing its diverse facets. Other research communities that
have also investigated this topic from their own perspec-
tive are even more diverse: fault-tolerant computing, dis-
tributed systems, biologically inspired computing, distrib-
uted artificial intelligence, integrated management, robotics,
knowledge-based systems, machine learning, control theory,
etc. In addition, research in several application areas and
technologies has grown in importance, for example, adapt-
able user interfaces, autonomic computing, dependable com-
puting, embedded systems, mobile ad hoc networks, mobile
and autonomous robots, multi-agent systems, peer-to-peer
applications, sensor networks, service-oriented architectures,
and ubiquitous computing.
It is important to emphasise that in all the above ini-
tiatives the common element that enables the provision of
self-adaptability is software because of its flexible nature.
However, the proper realization of the self-adaptation func-
tionality still remains a significant intellectual challenge, and
only recently have the first attempts in building self-adaptive
systems emerged within specific application domains. More-
over, little endeavour has been made to establish suitable
software engineering approaches for the provision of self-
adaptation. In the long run, we need to establish the foun-
dations that enable the systematic development of future
generations of self-adaptive systems. Therefore it is worth-
while to identify the commonalities and differences of the
results achieved so far in the different fields and look for
ways to integrate them.
The development of self-adaptive systems can be viewed
from two perspectives, either top-down when considering an
individual system, or bottom-up when considering cooper-
ative systems. Top-down self-adaptive systems assess their
own behaviour and change it when the assessment indicates
a need to adapt due to evolving functional or non-functional
requirements. Such systems typically operate with an ex-
plicit internal representation of themselves and their global
goals. In contrast, bottom-up self-adaptive systems (self-
organizing systems) are composed of a large number of com-
ponents that interact locally according to simple rules. The
global behaviour of the system emerges from these local in-
teractions,¹ and it is difficult to deduce
properties of the global system by studying only the lo-
cal properties of its parts. Such systems do not necessar-
ily use internal representations of global properties or goals;
they are often inspired by biological or sociological phenom-
ena. The two cases of self-adaptive behaviour in the form
of individual and cooperative self-adaptation are two ex-
treme poles. In practice, the line between both is rather
blurred, and compromises will often lead to an engineering
approach incorporating representatives from these two ex-
treme poles. For example, ultra large-scale systems need
both top-down self-adaptive and bottom-up self-adaptive
characteristics (e.g., the Web is basically decentralized as
a global system but local sub-webs are highly centralized).
However, from the perspective of software development the
major challenge is how to accommodate in a systematic en-
gineering approach traditional top-down approaches with
bottom-up approaches.
The goal of this road map paper is to summarize and
point out the current state-of-the-art, its limitations, and
identify critical challenges for the software engineering of
self-adaptive systems. Specifically, we intend to focus on
development methods, techniques, and tools that seem to be
required to support the systematic development of complex
software systems with dynamic self-adaptive behaviour. In
contrast to merely speculative and conjectural visions and
ad hoc approaches for systems supporting self-adaptability,
the objective of this paper is to establish a road map for
research and identify the main research challenges for the
systematic software engineering of self-adaptive systems.
To present and motivate these challenges, the paper is
structured using the four views which have been identified
as essential. Each of these views is roughly presented in
terms of the state of the art and the challenges ahead. We
¹ In the context of biologically inspired systems, usually self-
organization rather than self-adaptation is used and, similar
to our initial characterization, we distinguish “strong self-
organizing systems,” which are those systems where there is
no explicit central control either internal or external (bot-
tom up); from “weak self-organizing systems,” which are
those systems where, from an internal point of view, there
is re-organization maybe under an internal (central) con-
trol or planning (top-down). Strong self-organizing systems
are thus purely decentralized, access to global information
is limited or impossible, interactions occur locally (among
neighbours) and based on local information [13].
first review the state of the art and needs concerning require-
ments (Section 2). Then, the relevant modelling dimensions
are discussed in Section 3 before we discuss the engineering
of self-adaptive systems in Section 4. The considerations
are completed by looking into the current achievements and
needs for assurance in the context of self-adaptive systems
in Section 5. Finally, the findings are summarized in Section
6 in terms of lessons learned and future challenges.
2. REQUIREMENTS
A self-adaptive system is able to modify its behaviour ac-
cording to changes in its environment. As such, a self-
adaptive system must continuously monitor changes in its
context and react accordingly. But what aspects of the envi-
ronment should the self-adaptive system monitor? Clearly,
the system cannot monitor everything. And exactly what
should the system do if it detects a less than optimal pat-
tern in the environment? Presumably, the system still needs
to maintain a set of high-level goals that should be main-
tained regardless of the environmental conditions. But non-
critical goals could well be relaxed, thus allowing the system
a degree of flexibility during or after adaptation.
These questions (and others) form the core considerations
for building self-adaptive systems. Requirements engineer-
ing is concerned with what a system ought to do and within
which constraints it must do it. Requirements engineer-
ing for self-adaptive systems, therefore, must address what
adaptations are possible and what constrains how those adap-
tations are carried out. In particular, questions to be ad-
dressed include: what aspects of the environment are rel-
evant for adaptation? Which requirements are allowed to
vary or evolve at runtime and which must always be main-
tained? In short, requirements engineering for self-adaptive
systems must deal with uncertainty because the expecta-
tions on the environment frequently vary over time.
2.1 State of the Art
Requirements engineering for self-adaptive systems ap-
pears to be a wide open research area with only a limited
number of approaches yet considered. Cheng and Atlee [7]
report on some previous work on specifying and verifying
adaptive software, and on run-time monitoring of require-
ments conformance [19, 42]. They also explain how prelim-
inary work on personalized and customized software can be
applied to adaptive systems (e.g., [47, 31]). In addition,
some research approaches have successfully used goal mod-
els as a foundation for specifying the autonomic behaviour
[29] and requirements of adaptive systems [22].
One of the main challenges that self-adaptation poses is
that when designing a self-adaptive system, we cannot as-
sume that all adaptations are known in advance; that is,
we cannot anticipate requirements for the entire set of pos-
sible environmental conditions and their respective adapta-
tion specifications. For example, if a system is to respond
to cyber-attacks, one cannot possibly know all attacks in
advance since malicious actors develop new attack types all
the time.
As a result, requirements for self-adaptive systems may
involve degrees of uncertainty or may necessarily be specified
as “incomplete”. The requirements specification therefore
should cope with:
the incomplete information about the environment and
the resulting incomplete information about the respec-
tive behaviour that the system should expose
the evolution of the requirements at runtime
2.2 Research Challenges
This subsection highlights a number of short-term and
long-term research challenges for requirements engineering
for self-adaptive systems. We start with shorter-term chal-
lenges and progress to more visionary ideas. As far as the
authors are aware, there is little or no research currently
underway to address these challenges.
A new requirements language. Current languages for
requirements engineering are not well suited to dealing with
uncertainty, which, as mentioned above, is a key consider-
ation for self-adaptive systems. We therefore propose that
richer requirements languages are needed. Few of the ex-
isting approaches for requirements engineering provide this
capability. In goal-modelling notations such as KAOS [11]
and i* [51], there is no explicit support for uncertainty or
adaptivity. Scenario-based notations generally do not pro-
vide any support either although live sequence charts (LSCs)
[24] have a notion of mandatory versus potential behaviour
which could possibly be used in specifying adaptive systems.
Of course, the most common notation for specifying require-
ments in industry is still natural language prose. Tra-
ditionally, requirements documents make statements such
as “the system shall do this.” For self-adaptive systems, the
prescriptive notion of “shall” needs to be relaxed and could,
for example, be replaced with “the system may do this or it
may do that” or “if the system cannot do this, then it should
eventually do that.” This idea leads to a new requirements
vocabulary for self-adaptive systems that gives stakeholders
the flexibility to account for uncertainty in their require-
ments documents. For example:
Traditional RE:
“the system shall do this ...”
Adaptive RE:
“the system might do this ...”
“but it may do this ...” ... “as long as it does this ...”
“the system ought to do this ...”, but “if it cannot, it
shall eventually do this ...”
Such a vocabulary would change the level of discourse
in requirements from prescriptive to flexible. There would
need to be a clear definition of terms, of course, as well as
a composition calculus for defining how the terms relate to
each other and compose. Multimodal logics and perhaps
new adaptation-oriented logics [53] need to be developed to
specify the semantics for what it means to have the “possi-
bility” of conditions [17, 40]. There is also a relationship
with variability management mechanisms in software prod-
uct lines [48], which also tackle built-in flexibilities. How-
ever, at the requirements level, one ideally would capture
uncertainty at a more abstract level than simply enumerat-
ing alternatives.
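To illustrate how such a vocabulary might be given an executable reading, the following sketch encodes three of the phrases above as checks over a finite execution trace. This is a deliberately naive semantics under strong assumptions (finite traces, simple state predicates), invented here for illustration; the multimodal or adaptation-oriented logics called for above would be needed for a proper semantics.

    # Toy semantics for a relaxed requirements vocabulary over a finite
    # trace (a list of state dictionaries). Function names mirror the prose.
    def shall(trace, p):
        """Traditional RE: p must hold in every state."""
        return all(p(s) for s in trace)

    def may_or(trace, p, q):
        """Adaptive RE: in every state, behaviour p or behaviour q."""
        return all(p(s) or q(s) for s in trace)

    def ought_else_eventually(trace, p, q):
        """If p cannot be maintained, q must eventually hold afterwards."""
        for i, s in enumerate(trace):
            if not p(s) and not any(q(t) for t in trace[i:]):
                return False
        return True

    # Invented example: the vehicle ought to keep cruise speed; if it
    # cannot (it slowed for an obstacle), it shall eventually resume it.
    trace = [{"speed": 30}, {"speed": 10}, {"speed": 30}]
    assert ought_else_eventually(trace,
                                 lambda s: s["speed"] >= 30,
                                 lambda s: s["speed"] >= 30)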
Mapping to architecture. Given a new requirements lan-
guage that explicitly handles uncertainty, it will be necessary
to provide systematic methods for refining models in this
language down to specific architectures that support run-
time adaptation. There are a variety of technical options for
implementing reconfigurability at the architecture level, in-
cluding component-based, aspect-oriented and product-line
based approaches, as well as combinations of these. Poten-
tially, there could be a large gap in expressiveness between
a requirements language that incorporates uncertainty and
these existing architecture structuring methods. One can
imagine, therefore, a semi-automated process for mapping
to architecture where heuristics and/or patterns are used to
suggest architectural units corresponding to certain vocab-
ulary terms in the requirements.
Managing uncertainty. In general, once we start intro-
ducing uncertainty into our software engineering processes,
we must have a way of managing this uncertainty and the
inevitable complexity associated with handling so many un-
knowns. Certain requirements will not change (i.e., invari-
ants), whereas others will permit a degree of flexibility. For
example, a system cannot start out as a transport robot
and self-adapt into a robot chef! Allowing uncertainty lev-
els when developing self-adaptive systems requires a trade-
off between flexibility and assurance such that the critical
high-level goals of the application are always met [52, 39,
28].
Requirements reflection. As said above, self-adaptation
deals with requirements that vary at runtime. Therefore it
is important that requirements lend themselves to be dy-
namically observed, i.e., during execution. Reflection [34],
[27], [10] enables a system to observe its own structure and
behaviour. A relevant research work is the ReqMon tools
[38], which provide a requirements monitoring framework,
focusing on temporal properties to be maintained. Leverag-
ing and extending beyond these complementary approaches,
Finkelstein [20] coins the term “requirements reflection” that
would enable systems to be aware of their own requirements
at runtime. This would require an appropriate model of the
requirements to be available online. Such an idea brings with
it a host of interesting research questions, such as: Could
a system dynamically observe its requirements? In other
words, can we make requirements runtime objects? Future
work is needed to examine how technologies may provide the
infrastructure to do this.
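As a thought experiment on making requirements runtime objects, the sketch below keeps each requirement online with an observable satisfaction status. The class, its fields, and the example requirements are invented for illustration; this is not the ReqMon API.

    # Requirements reflection sketch: requirements as runtime objects whose
    # satisfaction can be observed during execution.
    class RuntimeRequirement:
        def __init__(self, rid, text, check, critical=True):
            self.rid, self.text = rid, text
            self.check = check            # predicate over the current state
            self.critical = critical      # critical goals may never be relaxed
            self.satisfied = None

        def observe(self, state):
            self.satisfied = self.check(state)
            return self.satisfied

    requirements = [
        RuntimeRequirement("R1", "avoid collisions",
                           lambda s: s["obstacle_distance_m"] > 2.0),
        RuntimeRequirement("R2", "maintain cruise speed",
                           lambda s: s["speed_kmh"] >= 30, critical=False),
    ]

    state = {"obstacle_distance_m": 5.0, "speed_kmh": 12}
    violated = [r.rid for r in requirements if not r.observe(state)]
    print(violated)  # ['R2']: a non-critical goal that adaptation may relax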
Online goal refinement. As in the case of design de-
cisions that are eventually realized at runtime, new and
more flexible requirement specifications like the one sug-
gested above would imply that the system should perform
the RE processes at runtime, e.g. goal-refinement [28].
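A minimal sketch of what goal refinement at runtime could look like: an abstract goal with alternative (OR) refinements, one of which is selected against the current context. The goal names and the selection rule are invented for illustration.

    # Online goal refinement sketch: pick an applicable OR-refinement of an
    # abstract goal based on the context observed at run-time.
    class Goal:
        def __init__(self, name, alternatives=(), applicable=lambda ctx: True):
            self.name = name
            self.alternatives = list(alternatives)  # OR-refined subgoals
            self.applicable = applicable

        def refine(self, ctx):
            for alt in self.alternatives:
                if alt.applicable(ctx):
                    return alt
            return self  # no refinement applies; keep the abstract goal

    avoid = Goal("avoid collision", alternatives=[
        Goal("brake", applicable=lambda ctx: ctx["obstacle_distance_m"] < 3),
        Goal("steer around", applicable=lambda ctx: ctx["lane_free"]),
    ])
    print(avoid.refine({"obstacle_distance_m": 10, "lane_free": True}).name)
    # -> steer around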
Traceability from requirements to implementation.
A constant challenge in all the topics shown above is “dy-
namic” traceability. For example, new operators of a new
RE specification language should be easily traceable down
to architecture, design, and beyond. Furthermore, if the RE
process is performed at runtime we need to assure that the
final implementation or behaviour of the system matches
the requirements. Doing so is different from the traditional
requirements traceability.
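In the simplest case, dynamic traceability might be approximated by keeping trace links as runtime data, so that a change to a requirements-level operator can be mapped to the architecture and design elements that need re-validation. The identifiers below are hypothetical.

    # Sketch of run-time queryable trace links from requirements-level
    # statements down to architectural elements. All names are invented.
    trace_links = {
        "R1/shall(avoid-collision)": ["SCS.ObstacleMonitor", "SCS.Planner"],
        "R2/may(maintain-speed)":    ["ACS.CruiseController"],
    }

    def impacted_elements(requirement_id):
        """Elements to re-check when this requirement changes at run-time."""
        return trace_links.get(requirement_id, [])

    print(impacted_elements("R2/may(maintain-speed)"))  # ['ACS.CruiseController']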
2.3 Final Remarks
In this section, we have presented several important re-
search challenges that the requirements engineering com-
munity will face as the demand for self-adaptive systems
continues to grow. These challenges span RE activities dur-
ing the development phases and runtime. In order to gain
assurance about adaptive behaviour, it is important to mon-
itor adherence and traceability to the requirements during
runtime. Furthermore, it is also necessary to acknowledge
and support the evolution of requirements at runtime. Given
the increasing complexity of applications requiring runtime
adaptation, the software artifacts that developers
manipulate and analyze must be more abstract than source
code. How can graphical models, formal specifications, poli-
cies, etc. be used as the basis for the evolutionary process
of adaptive systems versus source code, the traditional arti-
fact that is manipulated once a system has been deployed?
How can we maintain traceability among relevant artifacts,
including the code? How can we maintain assurance con-
straints during and after adaptation? How much should a
system be allowed to adapt and still maintain traceability
to the original system? Clearly, the ability to dynamically
adapt systems at runtime is an exciting and powerful ca-
pability. The RE community, among other software engi-
neering disciplines, needs to be proactive in tackling these
complex challenges in order to ensure that useful and safe
adaptive capabilities are provided to the adaptive systems
developers.
3. MODELLING
Endowing a system with a self-adaptive property can take
many different shapes. The way self-adaptation has to be
conceived depends on various aspects, such as, user needs,
environment characteristics, and other system properties.
Understanding the problem and selecting a suitable solution
requires precise models for representing important aspects of
the self-adaptive system, its users, and its environment.
In this section, we provide a classification of modelling
dimensions for self-adaptive systems. Each dimension de-
scribes a particular aspect of the system that is relevant for
self-adaptation. Note that it is not our ambition to be ex-
haustive in all possible dimensions, but rather to give an ini-
tial impetus towards defining a framework for modelling self-
adaptive systems. Some of these dimensions could equally
be applied to the environment and the users of the system
(in addition to other specific dimensions), but here we have
focused on the system itself.
For the identification of the system modelling dimensions,
two perspectives were considered: the abstraction levels as-
sociated with the system, and the activities associated with
the adaptation. The first perspective refers to the require-
ments (e.g., goals), the design (e.g., architecture), and the
code of the software system, and the second refers to the key
activities of the feedback control loop, i.e., collect, analyse,
decide, and act.
In the following, we present the dimensions in terms of
three groups. First, we introduce the modelling dimensions
that can be associated with the adaptation activities of the
feedback control loop, giving special emphasis to decision
making. The other two groups are related to non-functional
properties, i.e., timing and dependability, that are particu-
larly relevant to some classes of self-adaptive systems. The
proposed modelling framework is presented in the context
of an illustrative case from the class of embedded systems,
however, these dimensions were equally useful in describing
the self-adaptation properties of an IT change management
system.
3.1 Illustrative Case
As an illustrative scenario, we consider the problem of ob-
stacle/vehicle collisions in the domain of unmanned vehicles
(UVs). A concrete application could be the DARPA Grand
Challenge contest [44]. Each UV is provided with an au-
tonomous control software system (ACS) to drive the vehicle
from start to destination along the road network. The ACS
takes into account the regular traffic environment, including
the traffic infrastructure and other vehicles. The scenario
we envision is the one in which there is a UV driving on the
road through a region where people and animals can cross
the road unexpectedly. To anticipate possible collisions, the
ACS is extended with a self-adaptable control system (SCS).
The SCS monitors the environment and controls the vehicle
when a human being or an animal is detected in front of the
vehicle. In case an obstacle is detected, the SCS manoeu-
vres the UV around the obstacle negotiating other obstacles
and vehicles. Thus, the SCS extends the ACS with self-
adaptation to avoid collisions with obstacles on the road.
3.2 Overview of Modelling Dimensions
We give an overview of the important modelling dimensions
per group. Each dimension is illustrated with an example
from the illustrative case.
Adaptation
The first group describes the modelling dimensions related
to adaptation.
Type of adaptability. The type of adaptability refers
to the particular kind of adaptation applied. The domain
of type of adaptability ranges from parametric to composi-
tional. Self-adaptivity can be realized by simple local para-
metric changes of a system component, for example, or it
can involve major architectural level structural changes. In
the illustrative case, to avoid collisions with obstacles, the
SCS has to adjust the movements of the UV, and this might
imply adjusting parameters in the steering gear.
Degree of automation. The automation dimension refers
to the degree of human intervention required for self-adaptation.
The domain of degree of automation ranges from autonomous
to human-based. Adaptive systems may be fully automatic
requiring no human intervention, or the system may require
human decision making, or at least confirmation or approval.
In the illustrative example, the UV has to avoid collisions
with animals without any human intervention.
Form of organization. The form of organization refers to
the type of organization used to realize self-adaptation. The
domain of form of organization ranges from weak (or central-
ized) to strong (or decentralized). In a strong organization,
the behaviour of components reflects their local environment,
and there is no global model of the system. Driven by changing
requirements, the components change their structure or be-
haviour to self-adapt the system. This self-organizing form
of self-adaptation can be collaborative, market-based, and so
on. In a weak organization, adaptation is achieved through a
global system model, which incorporates a feedback control
loop, for example. A self-adaptive subsystem monitors the
base system possibly maintaining an explicit representation
of the system, and based on a set of high-level goals, the
structure or behaviour of the system is adapted. Section 4
elaborates on the different forms of organization to realize
self-adaptation. The SCS of the UV in the illustrative ex-
ample seems to fit naturally with a weak organization.
Techniques for adaptability. Techniques for adaptabil-
ity refer to the way self-adaptation is accomplished. The do-
main of techniques for adaptability ranges from data-oriented
to process-oriented [46]. In a data-oriented approach, the
system is characterised as acted upon, by providing the cri-
teria for identifying objects, often by modelling the objects
themselves. In a process-oriented approach, the system is
characterised as sensed, by providing the means for produc-
ing or generating objects having the desired characteristics.
In the illustrative case, the SCS will monitor the environ-
ment for obstacles that suddenly appear in front of the vehi-
cle and subsequently guide the vehicle around the obstacle
to avoid a collision. To realize this form of self-adaptability,
the SCS senses the environment of the UV, and depending
on the controller, which is part of the system model, it pro-
duces the appropriate system output.
Place of change. The place of change refers to the loca-
tion where self-adaptation takes place. The domain of place
of change includes the values application, middleware, or in-
frastructure. Self-adaptation can be realized by monitoring
and adapting the application logic, the supporting middle-
ware, or the infrastructure that defines the system. In the
illustrative case, self-adaptation is realized by the SCS that
is part of the application logic.
Abstraction of adaptability. This modelling dimension
refers to the abstraction level at which self-adaptation is ap-
plied. The domain of abstraction of adaptability refers to
requirements, design, and implementation, and their respec-
tive products, for example, goals, architectures and code.
An example of adaptation at the design level is the dynamic
reconfiguration of the system architecture. Another exam-
ple of adaptation at the design level can be the selection of
an alternative algorithm. An example of adaptation at the
level of code is dynamic weaving of additional code. To avoid
collisions, the SCS may pass particular control information
to the ACS which seems to fit best at the abstraction level
of design.
Impact of adaptability. This modelling dimension refers
to the impact that adaptation might have upon the system.
The domain of impact of adaptability ranges from specific
to generic. Adaptability is specific if it affects a particular
component or part of the system. On the other hand, if the
adaptability affects the whole system, its impact is generic.
In the illustrative case, if the steering gear fails, the self-
adaptation would be generic since collision avoidance affects
the overall system’s behaviour.
Trigger of adaptability. This modelling dimension refers to
whether the agent of change is either internal or external to
the system. A failure in a system component is considered
as an internal trigger for reconfiguring the system structure
or changing the services it provides, while the existence of an
obstacle is an external trigger since the system has to change
its behaviour in order to avoid a collision.
In addition to the above modelling dimensions that can be
applied to the system as a whole, there are some dimensions
related specifically to the key activities of the feedback con-
trol loop. In the following, we present two of these modelling
dimensions that are related to decision making.
Degree of decision making. The degree of decision mak-
ing expresses to what extent self-adaptation is defined in
advance. The domain ranges from static (or pre-defined) to
dynamic (or run-time). For static decision making, the sce-
narios of self-adaptation are exhaustively defined before the
system is deployed. For dynamic decision making, the deci-
sion of self-adaptation will be made during execution based
on a set of high-level goals. In the illustrative example, the
SCS monitors the environment and decides at run-time when
it has to take control over the ACS to avoid collisions.
Techniques for decision making. This modelling dimen-
sion refers to the procedures and methods used to determine
when to apply self-adaptation. Values of the domain of tech-
niques for decision making are utility functions, case-based
reasoning, etc. The SCS will likely use a reasoning-like ap-
proach to determine when the vehicle is in collision range
with an obstacle.
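For the utility-function value of this dimension, the decision could in principle be computed as below: each candidate adaptation is scored by a weighted utility over predicted outcomes, and the highest-scoring candidate is selected. The actions, weights, and predicted outcomes are invented example values, not part of the paper.

    # Utility-driven decision making sketch for the SCS.
    def utility(outcome, w_safety=0.8, w_progress=0.2):
        # Weighted trade-off between safety and progress, each in [0, 1].
        return w_safety * outcome["safety"] + w_progress * outcome["progress"]

    candidates = {  # predicted outcome of each candidate adaptation
        "brake":        {"safety": 0.99, "progress": 0.10},
        "steer_around": {"safety": 0.90, "progress": 0.80},
        "continue":     {"safety": 0.20, "progress": 1.00},
    }

    best = max(candidates, key=lambda a: utility(candidates[a]))
    print(best)  # 'steer_around' under these weights

As the challenges later note, the hard part is not the maximization itself but specifying and generating defensible utility functions and outcome predictions in the first place.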
Timing
The second group describes modelling dimensions related to
timing issues.
Responsiveness. The responsiveness of self-adaptation re-
lates to whether the self-adaptation is guaranteed to respond. The
domain ranges from guaranteed to best-effort. For critical
scenarios, self-adaptation is required to be guaranteed, how-
ever, in less-critical situations, best-effort will suffice. In the
illustrative example, the SCS must guarantee that the UV
reacts effectively to avoid collisions, possibly with a human
being.
Performance. The performance dimension refers to the de-
gree of predictability of self-adaptation. The domain ranges
from predictable to degradable. In time-critical cases, the
self-adaptable system often needs to act in a highly pre-
dictable manner. In other cases, a graceful degradation of
the system is acceptable. In the illustrative case, when an
obstacle appears, the SCS will manoeuvre the UV in such a
way that a collision should be avoided. In order to accom-
plish this task predictably, other system tasks might have
their performance affected.
Triggering. The triggering dimension of self-adaptation
refers to the initiation of the adaptation process. The do-
main of triggering ranges from event to time. The cause for
self-adaptation is event triggered when the process is ini-
tiated whenever there is a significant change in the state,
i.e., an event. The cause for self-adaptation is time trig-
gered when the process is initiated at predetermined points
in time. Obstacles in the illustrative case appear unexpect-
edly and as such triggering of self-adaptation is event-based.
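Taken together, the dimensions introduced so far can be read as a typed profile of a self-adaptive system. The sketch below records a few of them for the SCS of the illustrative case; the dataclass-and-enum encoding is our own illustration, not a notation proposed by the paper.

    # A handful of modelling dimensions captured as a typed descriptor.
    from dataclasses import dataclass
    from enum import Enum

    TypeOfAdaptability = Enum("TypeOfAdaptability", "PARAMETRIC COMPOSITIONAL")
    Automation = Enum("Automation", "AUTONOMOUS HUMAN_BASED")
    Organization = Enum("Organization", "WEAK STRONG")
    DecisionMaking = Enum("DecisionMaking", "STATIC DYNAMIC")
    Triggering = Enum("Triggering", "EVENT TIME")

    @dataclass
    class AdaptationProfile:
        type_of_adaptability: TypeOfAdaptability
        degree_of_automation: Automation
        form_of_organization: Organization
        degree_of_decision_making: DecisionMaking
        triggering: Triggering

    # The SCS as described in the text: parametric, autonomous, weakly
    # organized, deciding dynamically, and triggered by events.
    scs = AdaptationProfile(TypeOfAdaptability.PARAMETRIC,
                            Automation.AUTONOMOUS,
                            Organization.WEAK,
                            DecisionMaking.DYNAMIC,
                            Triggering.EVENT)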
Dependability
The third and final group we consider describes modelling
dimensions related to dependability, that is, the ability of
a system to deliver a service that can justifiably be trusted
[1].
Reliability, availability, confidentiality. Reliability, avail-
ability, and confidentiality are attributes of dependability.
The domain of each of these properties ranges from high to
low. In the illustrative case, the reliability of the SCS avoid-
ing a collision is expected to be high.
Safety. The safety dimension refers to the absence of catastrophic
consequences on the user and the environment, which can be
caused by the self-adaptation. The domain of safety ranges


Frequently Asked Questions (10)
Q1. What have the authors contributed in "Software engineering for self-adaptive systems: a research road map (draft version)∗" ?

In this paper, the authors present a research road map for the software engineering of self-adaptive systems focusing on four views, which they identify as essential: requirements, modelling, engineering, and assurances.

Some of the fields, like control theory, have been mentioned in this paper, but other fields from which software engineering might get some inspiration for the development of self-adaptive systems are decision theory, non-classic computation, and computer networks.

Technologies like model-driven development, aspect-oriented programming, and software product lines might offer new opportunities in the development of self-adaptive systems, and change the processes by which these systems are developed.

Another typical scheme from control engineering is organizing multiple control loops in the form of a hierarchy where, due to the different time periods employed, unexpected interference between the levels can be excluded.

Because of the separation of concerns (i.e., model reference, adaptive algorithm, controller and process), this solution is a solid starting point for the design of self-adaptive software-intensive systems. 

There are a variety of technical options for implementing reconfigurability at the architecture level, including component-based, aspect-oriented and product-line based approaches, as well as combinations of these.

Garlan and Schmerl also advocate making self-adaptation external, as opposed to internal or hard-wired, to separate the concerns of system functionality from the concerns of self-adaptation [8].

Due to this high dynamism, V&V methods traditionally applied at requirements and design stages of development must be supplemented with run-time assurance techniques. 

Other research communities that have also investigated this topic from their own perspective are even more diverse: fault-tolerant computing, distributed systems, biologically inspired computing, distributed artificial intelligence, integrated management, robotics, knowledge-based systems, machine learning, control theory, etc. 

It can also be argued that current state-of-the-art engineering practices are not sufficiently mature to warrant self-adaptive functionality.