
Towards a stochastic model for integrated security and dependability evaluation

TL;DR: A new approach to integrated security and dependability evaluation, based on stochastic modelling techniques, which opens up for the use of traditional Markov analysis to make new types of probabilistic predictions for a system, such as its expected time to security failure.
Abstract: We present a new approach to integrated security and dependability evaluation, which is based on stochastic modelling techniques. Our proposal aims to provide operational measures of the trustworthiness of a system, regardless of whether the underlying failure cause is intentional or not. By viewing system states as elements in a stochastic game, we can compute the probabilities of expected attacker behavior, and thereby be able to model attacks as transitions between system states. The proposed game model is based on a reward- and cost concept. A section of the paper is devoted to the demonstration of how the expected attacker behavior is affected by the parameters of the game. Our model opens up for the use of traditional Markov analysis to make new types of probabilistic predictions for a system, such as its expected time to security failure.

Summary (3 min read)

1 Introduction

  • Security is a concept addressing the attributes confidentiality, integrity and availability [6].
  • Dependability, on the other hand, is the ability of a computer system to deliver service that can justifiably be trusted.
  • The security community can benefit from the mature dependability modelling techniques, which can provide the operational measures that are so desirable today.
  • In Section 3, the authors show that the states can be viewed as elements in a stochastic game, and explain how game theory can be used to compute the expected attacker behavior.
  • In Section 6 the authors compare their work with previous related work.

2 The Stochastic Model

  • This high level description can be used to perform qualitative assessment of system properties, such as the security levels obtained by Common Criteria evaluation [7].
  • Moreover, such methods only evaluate static behavior of the system and do not consider dependencies of events or time aspects of failures.
  • To create a model suitable for quantitative analysis and assessment of operational security, one needs to use a fine-granular system description, which is capable of incorporating the dynamic behavior of a system.
  • During its operational lifetime, a system will alternate between the different states.
  • This may be due to normal usage as well as misuse, administrative measures and maintenance, as well as software- and hardware failures and repairs.

2.1 The Failure Process

  • It has been shown in [2, 9, 16] that the ”fault-error-failure” pathology, which is commonly used for modelling the failure process in a dependability context, can be applied in the security domain as well.
  • Based on the results from this research the authors demonstrate how a stochastic process can be used to model security failures in a similar way as the dependability community usually treats accidental and unintentional failures.
  • An error is always internal and will not be visible from outside the system.
  • For each failure state which conflicts with the system’s intended functionality, the authors can therefore assign a corresponding property that is violated, e.g. confidentiality-failed or availability-failed.
  • Even though the time, or effort, to perform an intrusion may be randomly distributed, the decision to perform the action is not.

2.2 Modelling Intrusion as Transitions

  • According to [16], there are two underlying causes of any intrusion: at least one vulnerability in the system, and a malicious action that tries to exploit it.
  • The time, or effort, to exploit a vulnerability using action a is modelled as negatively exponentially distributed with rate λij(a), where i and j are two different states in the stochastic model; in reality, other types of distributions may be more suitable.
  • By introducing the decision probability πi(a), the result from a successful attack, i.e. a malicious external humanmade fault, can be modelled as one or more intentional state changes of the underlying stochastic process, which represents the dynamic behavior of the system.

2.3 Obtaining Steady State Probabilities

  • In mathematical terms, the stochastic process describing the dynamic system behavior is a continuous time Markov chain with discrete state space.
  • Similarly, by making the failure states absorbing, i.e. removing all outgoing transitions, one can compute the mean time to first failure (MTFF) for a system.

2.4 Model Parameterization

  • The procedure of obtaining accidental failure- and repair rates has been practiced for many years in traditional dependability analysis, and will therefore not be discussed in this paper.
  • The most straightforward solution is to let security experts assess the rates based on subjective expert opinion, empirical data or a combination of both.
  • An example of empirical data is historical attack data collected from honeypots.
  • In [17, 18], the authors propose game theory as a means for computing the expected attacker strategy.
  • The procedure is summarized in the next section.

3 Computing Expected Attacker Behavior

  • From the stochastic model, pick all states where the system is vulnerable to malicious faults.
  • For all transitions out of the game element states which represent intrusions, identify the corresponding attack actions.
  • For each game element, the authors assign two values to each attack action; one that represents the reward gained by the attacker if the action remains undetected, and another to represent the negative reward, i.e. cost, experienced if the action is detected and reacted to.

4 Tuning Parameters of the Game

  • The game model presented in the previous section is based on a reward- and cost concept.
  • Furthermore, if the action succeeds, additional rewards may be gained.
  • The reward values will therefore represent the attackers’ motivation when deciding on attack actions.
  • The cost of a detected action will be an important demotivation factor when modelling, for example, insiders - legitimate users who override their current privileges.
  • Similarly, commercial adversaries would lose reputation and market share if it is exposed that illegal means are used.

4.1 Attacker Profiling

  • Rogers summarizes earlier research on hacker categorization and provides a new taxonomy based on a two dimensional classification model.
  • Skill and motivation are identified as the primary classification criteria, which fit well into their mathematical framework, where an attacker’s skill is represented by the expected time to success, λ−1(a), and the motivation by the reward and cost concept.
  • Rogers’ model suggests eight primary categories, whereof seven represent outsiders: ”novices”, ”cyber-punks”, ”petty thieves”, ”virus writers”, ”old guard hackers”, ”professional criminals” and ”information warriors”.
  • The authors’ model does not depend on any attacker classification.
  • Instead, in their approach it is possible to tune the reward- and cost values of the game elements and thereby be able to model the motivation of any kind of attacker.

4.2 Varying the Cost Parameters

  • Hence, an attacker can choose either to perform the attack (a), or to resign (φ).
  • Hence, an increasing cost of a detected action will decrease the attackers’ motivation.
  • It is interesting to note that even though measures are taken to increase the cost of detected actions (legal proceedings, for instance), a rapidly decreasing b will only have a marginal effect on the behavior of an attacker who is strongly reluctant to resign.

5 Case Study: The DNS Service

  • To further illustrate the approach, the authors model and analyze the security and dependability of a DNS service.
  • The most important attributes of this service are integrity and availability.
  • All three states 1-3 are considered to be good states.
  • The state transition model in Figure 2 in Section 3 represents the security and dependability of the service of a single DNS server under the given assumptions.
  • The transitions labeled with the μS and μH rates represent the accidental software and hardware failures, the ϕ rates represent an imaginary system administrator’s possible actions and the λ rates represent the success rates of the possible attack actions.

7 Concluding Remarks

  • This paper presents a stochastic model for integrated security and dependability assessment.
  • By using stochastic game theory the authors can compute the expected attacker behavior for different types of attackers.
  • In the final step, the corresponding stochastic process is used to compute operational measures of the system.
  • As pointed out in Section 3, the Nash equilibrium of the game will be an indication of the best strategy for attackers who do not know the probabilities that their actions will be detected.
  • This may not always be the case in real life.


Towards a Stochastic Model for Integrated
Security and Dependability Evaluation
Karin Sallhammar, Bjarne E. Helvik and Svein J. Knapskog
Centre for Quantifiable Quality of Service
Norwegian University of Science and Technology
O.S. Bragstads plass 2E, N-7491 Trondheim, Norway
{sallhamm, bjarne, knapskog}@q2s.ntnu.no
Abstract
We present a new approach to integrated security and
dependability evaluation, which is based on stochastic mod-
elling techniques. Our proposal aims to provide operational
measures of the trustworthiness of a system, regardless of whether the
underlying failure cause is intentional or not. By viewing
system states as elements in a stochastic game, we can com-
pute the probabilities of expected attacker behavior, and
thereby be able to model attacks as transitions between sys-
tem states. The proposed game model is based on a reward-
and cost concept. A section of the paper is devoted to the
demonstration of how the expected attacker behavior is af-
fected by the parameters of the game. Our model opens up
for the use of traditional Markov analysis to make new types of
probabilistic predictions for a system, such as its expected
time to security failure.
1 Introduction
Security is a concept addressing the attributes confiden-
tiality, integrity and availability [6]. Today it is widely ac-
cepted that, due to the unavoidable presence of vulnerabili-
ties, design faults and administrative errors, an ICT system
will never be totally secure. Connecting a system to a net-
work will necessarily introduce a risk of inappropriate ac-
cess resulting in disclosure, corruption and/or loss of infor-
mation. Therefore, the security of a system should ideally
be interpreted in a probabilistic manner. More specifically,
there is an urgent need for modelling methods that provide
operational measures of the security.
Dependability, on the other hand, is the ability of a com-
”Centre for Quantifiable Quality of Service in Communication Sys-
tems, Centre of Excellence” appointed by the Research Council of
Norway, funded by the Research Council, NTNU and UNINETT.
http://www.q2s.ntnu.no/
puter system to deliver service that can justifiably be trusted.
It is a generic concept, which includes the attributes relia-
bility, availability, safety, integrity and maintainability [2].
In a dependability context one distinguishes between acci-
dental faults, which are modelled as random processes, and
intentional faults, i.e. attacks, which in most cases are not
considered at all. A major drawback of this approach is
that attacks may in many cases be the dominating failure
source for today’s networked systems. The classical way
of dependability evaluation can therefore be very deceptive:
highly dependable systems may in reality fail much more
frequently than expected, due to exploitation by attackers.
A unified modelling framework for security and depend-
ability evaluation would be advantageous from both points
of view. The security community can benefit from the ma-
ture dependability modelling techniques, which can provide
the operational measures that are so desirable today. On the
other hand, by adding hostile actions to the set of possible
fault sources, the dependability community will be able to
make more realistic models than the ones that are currently
in use.
Modelling and analysis of a system for predictive pur-
poses can be performed by static or dynamic methods. This
paper focuses on the dynamic method of using stochastic
models (Markov chains), which is commonly used to ob-
tain availability (the fraction of time the system is opera-
tional during an observation period) or reliability (the prob-
ability that the system remains operational over an obser-
vation period) predictions by the dependability community.
The paper is organized as follows. Section 2 presents the
stochastic model and explains how intrusions can be mod-
elled as transitions between states in the model. In Section
3, we show that the states can be viewed as elements in a
stochastic game, and explain how game theory can be used
to compute the expected attacker behavior. Then, in Section 4,
we demonstrate how the expected attacker behavior is affected
by the parameters of the game. To illustrate the approach,
Section 5 includes a small case study. In Section 6 we compare
our work with previous related work. Section 7 includes some
concluding remarks and points to future work.

[Proceedings of the First International Conference on Availability,
Reliability and Security (ARES’06), 0-7695-2567-9/06 $20.00 © 2006 IEEE]
2 The Stochastic Model
At the highest level of a system description is the speci-
fication of the system’s functionality. The security policy is
normally a part of this specification. This high level descrip-
tion can be used to perform qualitative assessment of system
properties, such as the security levels obtained by Common
Criteria evaluation [7]. Even though a qualitative evaluation
can be used to rank a particular security design, its main
focus is on the safeguards introduced during the develop-
ment and design of the system. Moreover, such methods
only evaluate static behavior of the system and do not con-
sider dependencies of events or time aspects of failures. As
a consequence, the achieved security level cannot be used
to predict the system’s actual behavior, i.e. its ability to
withstand attacks when running in a certain threat environ-
ment. To create a model suitable for quantitative analysis
and assessment of operational security, one needs to use a
fine-granular system description, which is capable of incor-
porating the dynamic behavior of a system. This is the main
strength of state transition models where, at a low level, the
system is modelled as a finite state machine (most systems
consist of a set of interacting components and the system
state is therefore the set of its component states). During
its operational lifetime, a system will alternate between the
different states. This may be due to normal usage as well as
misuse, administrative measures and maintenance, as well
as software- and hardware failures and repairs. In a state
transition model, one usually discriminates between good
states and failed states. Normally, a system will be subject
to multiple failure cases, so that the model will have multi-
ple failure modes.
2.1 The Failure Process
It has been shown in [2, 9, 16] that the ”fault-error-
failure” pathology, which is commonly used for modelling
the failure process in a dependability context, can be ap-
plied in the security domain as well. Based on the results
from this research we demonstrate how a stochastic process
can be used to model security failures in a similar way as
the dependability community usually treats accidental and
unintentional failures.
By definition, the fault-error-failure process is a se-
quence of events. A fault is an atomic phenomenon, that
can be either internal or external, which causes an error in
the system. An error is a deviation from the correct oper-
ation of the system. An error is always internal and will
not be visible from outside the system. Even though a sys-
tem is erroneous it still manages to deliver its intended ser-
vices. An error may lead to a failure of the system. In a
dependability context, a failure is an event that causes the
delivered service to deviate from the correct service, as de-
scribed in the system’s functional specification. Similarly,
a security failure causes a system service to deviate from
its security requirements, as specified in the security policy.
For each failure state which conflicts with the system’s in-
tended functionality, we can therefore assign a correspond-
ing property that is violated, e.g. confidentiality-failed or
availability-failed. Both security- and dependability fail-
ures can be caused by a number of accidental fault sources,
such as erroneous user input, administrative misconfigura-
tion, software bugs, hardware deterioration, etc. The fail-
ures originating from most of these faults can be modelled
as randomly distributed in time, as is common practice in
dependability modelling and analysis. However, the ones
hardest to predict are the external malicious human-made
faults, which are introduced with the objective of altering
the functioning of the system during use [2]. In a security
context, the result of these faults is generally referred to as
an intrusion. Because they are intentional in nature, intru-
sions cannot be modelled as truly random processes. Even
though the time, or effort, to perform an intrusion may be
randomly distributed, the decision to perform the action is
not. As pointed out in [13], security analysis must assume
that an attacker’s choice of action will depend on the sys-
tem state, may change over time, and will result in security
failures that are highly correlated.
2.2 Modelling Intrusion as Transitions
To be able to model the effect of an intrusion as a transi-
tion between a good system state and a failed system state,
we need to take a closer look at the intrusion process itself.
According to [16], there are two underlying causes of any
intrusion:

• At least one vulnerability, i.e. weakness, in the system.
The vulnerability is possible to exploit; however, it will
require a certain amount of time from an attacker.

• A malicious action that tries to exploit the vulnerability.
Since the action is intentional, a decision is implicitly
made by the attacker. Not all attackers will choose the
same course of action; hence there will be a probability
that an attacker decides to perform a particular action.
An intrusion will therefore result from an action which has
been successful in exploiting a vulnerability. In this paper

we model the expected time to exploit a vulnerability when
using action a as negatively exponentially distributed (this
is primarily to simplify analytical assessment of the model;
in reality, other types of distributions may be more suitable)
with rate λ_ij(a), where i and j are two different states in
the stochastic model. To formalize the idea of an attacker’s
decision, we define π_i(a) as the probability that an attacker
will choose action a when the system is in state i. In a low
level system abstraction model, the successful intrusion will
cause a transition of the system state, from the good state i
to the failed state j. Hence, the failure rate between state
i and j may be computed as q_ij = π_i(a) · λ_ij(a). This is
illustrated in Figure 1, where the good state i = 1 is depicted
as a circle and the failed state j = 2 as a square.
[Figure 1. A two-state Markov model with assigned failure rate:
the good state 1 (OK) transitions to the failed state 2 (Security
failed) with rate π_1(a)λ_12(a).]
By introducing the decision probability π_i(a), the result
from a successful attack, i.e. a malicious external human-made
fault, can be modelled as one or more intentional state
changes of the underlying stochastic process, which represents
the dynamic behavior of the system.
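As a small numerical illustration of the construction above, the intrusion transition of Figure 1 combines the attacker's decision probability with the exploit success rate; the values below are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (all values invented, not from the paper): the
# intrusion transition of Figure 1 gets the combined rate
# q_12 = pi_1(a) * lambda_12(a).
pi_1a = 0.4       # probability that an attacker chooses action a in state 1
lam_12a = 0.1     # exploit success rate for action a (events per hour)

q_12 = pi_1a * lam_12a   # effective failure rate from state 1 to state 2

# If the failed state 2 is made absorbing, the sojourn in state 1 is
# exponentially distributed with rate q_12, so the mean time to security
# failure is its inverse:
mean_time_to_failure = 1.0 / q_12   # approx. 25 hours with these rates
```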
2.3 Obtaining Steady State Probabilities
In mathematical terms, the stochastic process describing
the dynamic system behavior is a continuous time Markov
chain with discrete state space. Let

X(t) = {X_1(t), X_2(t), ..., X_N(t)},   (1)

where X_i(t) denotes the probability that the system is in
state i at time t. Formally, the interactions between the
states i = 1, ..., N are described in the N × N state-transition
rate matrix Q, whose elements are

q_ij = lim_{dt→0} Pr(transition from i to j in (t, t + dt)) / dt,   i ≠ j,
q_ii = − Σ_{j≠i} q_ij.   (2)

The element q_ij ∈ Q (i ≠ j) represents the transition rate
between state i and j in the model and is, if the transition is
caused by an intrusion, constructed from a decision probability
and a success rate, as explained in Section 2.2. If the
initial state of the system, i.e. X(0), is known, the steady
state probabilities X_i = lim_{t→∞} X_i(t), i = 1, ..., N, can be
obtained by solving N − 1 of the N equations

XQ = 0,   (3)

together with the Nth equation

Σ_{l=1}^{N} X_l = 1.   (4)

The steady state probabilities provide us with the possibility
of obtaining operational measures of the system, such as the
mean time between failures (MTBF) or the mean time spent
in the good states (MUT). Similarly, by making the failure
states absorbing, i.e. removing all outgoing transitions, one
can compute the mean time to first failure (MTFF) for a
system. See e.g. [4] for details.
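The steady-state computation of equations (3) and (4) is mechanical once Q is known. A minimal sketch in Python follows; the three-state generator matrix is invented for illustration (it is not the DNS model of Section 5).

```python
import numpy as np

# Hypothetical 3-state generator matrix Q (states: OK, vulnerable, failed).
# Off-diagonal entries are transition rates; each diagonal entry is minus
# the row sum, matching equation (2). All rates are illustrative only.
Q = np.array([
    [-0.5,  0.4,  0.1],   # out of OK
    [ 0.3, -0.8,  0.5],   # out of vulnerable
    [ 1.0,  0.0, -1.0],   # out of failed (repair back to OK)
])

def steady_state(Q):
    """Solve XQ = 0 subject to sum(X) = 1, i.e. equations (3) and (4)."""
    n = Q.shape[0]
    # N-1 balance equations plus the normalization constraint:
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

X = steady_state(Q)
print(X)  # steady-state probabilities; the entries sum to 1
```

The steady-state availability of the good states is then the sum of the corresponding entries of X; making the failure states absorbing instead yields MTFF, as described above.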
2.4 Model Parameterization
In order to obtain measures, the stochastic model has to
be parameterized. The procedure of obtaining accidental
failure- and repair rates has been practiced for many years in
traditional dependability analysis, and will therefore not be
discussed in this paper. However, choosing the λ_ij(a)^(−1)’s,
i.e. the expected times to succeed with attacks given that
they are pursued, remains a challenge. The most straight-
forward solution is to let security experts assess the rates
based on subjective expert opinion, empirical data or a com-
bination of both. An example of empirical data is historical
attack data collected from honeypots. The data can also be
based on intrusion experiments performed by, for example,
students in a controlled environment. Empirical data from
such an experiment conducted at Chalmers University of
Technology in Sweden [8] indicates that the time between
successful intrusions during the standard attack phase is ex-
ponentially distributed. Even though the process of assess-
ing the exploit rates is crucial, and an important research
topic in itself, it is not the primary focus of this paper.
Obtaining realistic π_i(a)’s, i.e. the probabilities that an
attacker chooses particular attack actions in certain system
states, may be more difficult. In [17, 18], we propose game
theory as a means for computing the expected attacker strat-
egy. The procedure is summarized in the next section.
3 Computing Expected Attacker Behavior
In this section we demonstrate how a game theoretic
model can be used to compute the expected attacker behav-
ior, in terms of a set of strategies π = {π_i}. The procedure
is as follows:
Step 1: Identify the game elements. From the stochastic
model, pick all states where the system is vulnerable to ma-
licious faults. Each of these states can be viewed as a game

element Γ_i in the two-player, zero-sum, stochastic game Γ.
For example, in Figure 2 the shaded states 2, 3 and 4 represent
states where the system is vulnerable to attacks and
which have the game elements Γ_2, Γ_3 and Γ_4, respectively.
[Figure 2. State transition model of the DNS server (cf. Section 5)
with game elements identified. States 1 (OK, good), 2 (OK, vuln.)
and 3 (OK, login) are good states; the failure states include
Integrity failed (SW), Availability failed (SW) and Availability
failed (HW). Intrusions appear as the transitions π_2(a_1)λ_23,
π_3(a_2)λ_34, π_3(a_3)λ_35 and π_4(a_3)λ_45; μ_S and μ_H are the
accidental software and hardware failure rates, and the ϕ rates
(ϕ_12, ϕ_21, ϕ_31, ϕ_41, ϕ_51, ϕ_61) are the administrator’s
restoration actions.]
Step 2: Construct the action set. The next step is to con-
struct the action set A, which consists of all possible attack
actions. For all transitions out of the game element states
which represent intrusions, identify the corresponding at-
tack actions. Note that there will always be an inaction φ,
which represents that an attacker takes no action at all. The
action set is the complete set of all these actions, φ included.
All actions will not necessarily be available in all states; we
use A_i to refer to the set of actions available in state i. In
Fig. 2 the complete action set is A = {a_1, a_2, a_3}; however,
A_2 = {a_1}, A_3 = {a_2, a_3} and A_4 = {a_3}.
Step 3: Assign reward and cost values. To model the
attackers’ motivation we make use of a reward- and cost
concept. For each game element, we assign two values to
each attack action; one that represents the reward gained by
the attacker if the action remains undetected, and another
to represent the negative reward, i.e. cost, experienced if
the action is detected and reacted to. These values are denoted
r_i(a|undetected) and r_i(a|detected), respectively. Re-
ward and cost are generic concepts, which can be used to
quantify the payoff of the actions both in terms of abstract
values, such as social status and satisfaction versus disre-
spect and disappointment, as well as real values, e.g. fi-
nancial gain and loss. For instance, in [12] the reward of a
successful attack action is the expected amount of recovery
effort required from the system administrator and in [11]
the reward is the degree of bandwidth occupied by a DDoS
attack. In contrast to [11, 12], we use the cost as an alter-
native outcome of the game to represent the fact that risk
averse attackers may sometimes refrain from certain attack
actions due to the possible consequences of detection. This
topic will be further discussed in Section 4.
Step 4: Compute transition probabilities between the
game states. Given that action a is chosen, there is a prob-
ability that the intrusion will succeed and the game will con-
tinue. The transition probability between game elements
can therefore be computed by conditioning on the chosen
action. For the example in Figure 2: if the system is in
state 2 and an attacker decides to perform action a_1, then
π_2(a_1) = 1. Hence, the transition probability between
game elements 2 and 3 for this particular ”play of the game”
is computed as

p_23(a_1) = λ_23 / (λ_23 + ϕ_21 + μ_S + μ_H).   (5)
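Equation (5) is a race between the competing exponential transitions out of state 2. With invented rates (not taken from the paper), the computation looks like:

```python
# Hypothetical rates for the exits from state 2 (all values invented):
lam_23 = 0.2    # success rate of attack action a_1 (transition 2 -> 3)
phi_21 = 0.1    # administrator's restoration rate (transition 2 -> 1)
mu_S   = 0.05   # accidental software failure rate
mu_H   = 0.05   # accidental hardware failure rate

# Probability that the intrusion "wins the race" against the other exits,
# i.e. the game-element transition probability of equation (5):
p_23 = lam_23 / (lam_23 + phi_21 + mu_S + mu_H)   # approx. 0.5 here
```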
Step 5: Solve the game model. The last step is to solve
the game model. Recall that A_i is the set of actions available
in state i. Each game element Γ_i is therefore represented by
a |A_i| × 2 matrix, whose rows (one per action a ∈ A_i) have
the form

Γ_i = [ γ_1(a)  γ_2(a) ],

where

γ_1(a) = r_i(a|undetected) + Σ_{Γ_j ∈ Γ} p_ij(a) Γ_j,
γ_2(a) = r_i(a|detected).   (6)

Solving the model means to compute the best strategies for
the players who participate in the game. Our model relies
on the basic assumption of game theory, which states that a
rational player will always try to maximize her own reward.
For each system state i, which is modelled as a game element
Γ_i, we can therefore expect an attacker to behave in
accordance with the probability distribution π_i = {π_i(a)}
that maximizes E(π_i, Γ_i), where

E(π_i, Γ_i) = Σ_{a ∈ A_i} π_i(a) [ (1 − θ_i(a)) γ_1(a) + θ_i(a) γ_2(a) ].   (7)

θ_i(a) is the probability that action a will be detected in
state i. The probability distribution π_i that maximizes (7)
is called the optimal strategy of Γ_i. An attacker who does
not know θ_i should think of the system as a counterplayer
in the game who tries to minimize the attacker’s reward.
Hence, the optimal strategy of Γ_i is obtained by solving

max_{π_i} min_{θ_i} E(π_i, Γ_i),   (8)

which is denoted the Nash Equilibrium of Γ_i. To find the
optimal strategies for all game elements in the stochastic
game, one can use a set of inductive formulas. For further
details on the underlying assumptions and solution of the
game model, the reader is referred to [17, 18], or [15, pp.
96–101].
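For concreteness, equation (7) can be evaluated directly for a given strategy. The sketch below uses invented payoffs and detection probabilities; the function name and all numbers are ours, not the paper's.

```python
def expected_reward(pi, theta, gamma1, gamma2):
    """Attacker's expected reward E(pi_i, Gamma_i) of equation (7):
    each action a is weighted by its probability pi(a), and its payoff is
    gamma1(a) if undetected (prob. 1 - theta(a)) or gamma2(a) if detected."""
    return sum(
        p * ((1.0 - t) * g1 + t * g2)
        for p, t, g1, g2 in zip(pi, theta, gamma1, gamma2)
    )

# Two actions {a, phi}: attacking pays 1 if undetected, -2 if detected
# (30% detection probability); resigning pays 0 either way.
E = expected_reward(pi=[0.5, 0.5], theta=[0.3, 0.0],
                    gamma1=[1.0, 0.0], gamma2=[-2.0, 0.0])
# E = 0.5 * (0.7 * 1 + 0.3 * (-2)), approx. 0.05
```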
4 Tuning Parameters of the Game
The game model presented in the previous section is
based on a reward- and cost concept. Whenever an attacker
performs an attack action, he immediately receives a re-
ward. Furthermore, if the action succeeds, additional re-
wards may be gained. The reward values will therefore rep-
resent the attackers’ motivation when deciding on attack ac-
tions. We use negative rewards, i.e. costs, to make room for
the possibility that some attackers may be more risk averse
than others. The cost of a detected action will be an im-
portant demotivation factor when modelling, for example,
insiders - legitimate users who override their current privi-
leges. Similarly, commercial adversaries would lose reputa-
tion and market share if it is exposed that illegal means are
used. In this section we demonstrate how the cost param-
eters in our game model will affect the expected attacker
behavior.
4.1 Attacker Profiling
To distinguish between different types of attackers, it
is common practice to make use of attacker profiles. A
number of fine-granular classifications of attackers exist in
the literature. In [1] Rogers summarizes earlier research
on hacker categorization and provides a new taxonomy
based on a two dimensional classification model. Skill
and motivation are identified as the primary classification
criteria, which fit well into our mathematical framework
where an attacker’s skill is represented by the expected
time to success, λ^(−1)(a), and the motivation by the reward-
and cost concept. Rogers’ model suggests eight primary
categories, whereof seven represent outsiders: ”novices”,
”cyber-punks”, ”petty thieves”, ”virus writers”, ”old guard
hackers”, ”professional criminals” and ”information war-
riors”. The eighth category is ”internals”. Our model does
not depend on any attacker classification. Instead, in our
approach it is possible to tune the reward- and cost values
of the game elements and thereby be able to model the mo-
tivation of any kind of attacker.
4.2 Varying the Cost Parameters
To illustrate the effect of the cost parameters, we use a
generic 2 × 2 game element

Γ_i = [ γ_1(a)  γ_2(a) ]   =   [ 1  b ]
      [ γ_1(φ)  γ_2(φ) ]       [ c  0 ].   (9)
The generic game element in (9) represents a system state
i where the system is vulnerable to one single attack action
a. Hence, an attacker can choose either to perform the at-
tack (a), or to resign (φ). By varying b and c we can now
demonstrate how the relation γ_2(a)/γ_1(a) (i.e. the cost of
a detected attack versus the reward of an undetected attack)
and γ_1(φ)/γ_1(a) (i.e. the cost associated with resigning
versus the reward of an undetected attack) will affect the
attackers’ expected behavior, in terms of the attack probability
π_i(a). To compute π_i = {π_i(a), π_i(φ)} we use (8),
i.e. the Nash Equilibrium of Γ_i.
Example 1: reducing b. If b = 2 and c = 3 in (9),
then the expected probability of attacking will be π
i
(a)=
0.50. However, if the cost of a detected action is increased
to b = 10, then π
i
(a)=0.21. Hence, an increasing cost
of a detected action will decrease the attackers’ motivation.
Example 2: increasing c. Again, if b = 2 and c = 3 in (9), then πi(a) = 0.50. However, if c = 10, then πi(a) = 0.77. As the cost of resigning increases, the attackers’ motivation will increase.
Figure 3. The expected attacker behavior πi(a) w.r.t. b and c.
Figure 3 depicts a more complete graph of risk averse attackers’ expected behavior. In the graph we let 9 ≥ b, c ≥ −1. One can see that the expected probability of attacking is highest, πi(a) = 1.0, when b = −1. This is intuitive since an attacker who receives the same payoff whether she is detected or not will always choose to attack. On the other hand, the expected probability of attacking is lowest, πi(a) = 0.0, when c < 0 and b > 0. This can be interpreted as follows: if the reward of an attack is small enough, so that it is not significantly greater than the payoff of resigning (−c), an attacker may not even bother to try. (Remark: this is an ideal situation which is unlikely to occur in real life). In general, as
Proceedings of the First International Conference on Availability, Reliability and Security (ARES’06)
0-7695-2567-9/06 $20.00 © 2006
IEEE
Citations
Journal ArticleDOI
TL;DR: This paper reviews the existing game-theory based solutions for network security problems, classifying their application scenarios under two categories, attack-defense analysis and security measurement and discusses the limitations of those game theoretic approaches and proposes future research directions.
Abstract: As networks become ubiquitous in people's lives, users depend on networks a lot for sufficient communication and convenient information access. However, networks suffer from security issues. Network security becomes a challenging topic since numerous new network attacks have appeared increasingly sophisticated and caused vast loss to network resources. Game theoretic approaches have been introduced as a useful tool to handle those tricky network attacks. In this paper, we review the existing game-theory based solutions for network security problems, classifying their application scenarios under two categories, attack-defense analysis and security measurement. Moreover, we present a brief view of the game models in those solutions and summarize them into two categories, cooperative game models and non-cooperative game models with the latter category consisting of subcategories. In addition to the introduction to the state of the art, we discuss the limitations of those game theoretic approaches and propose future research directions.

243 citations


Cites background or methods from "Towards a stochastic model for inte..."

  • ...[30], [31], [36], [37] also fall into this subclass....


  • ...In fact, the prediction of the strategies in many approaches to security and dependability measurement is used as input for a measurement module [11], [29], [30], [31] in order to compute the metrics of security and dependability....


  • ...In [30], the following three concepts are introduced: a real time method to measure the security metrics, the mean time to next failure (MTNF), and the probability that the time until the next failure is greater than a given time for an attacker target....


  • ...Dynamic game:
      • Stochastic games [8], [20], [21], [22], [26], [29], [30], [31], [36], [37], [44]
      • Problem:
          • to determine the best strategies for the administrator to diffuse the risks among the assets in a network against the attacker [44]
          • to obtain the best optimal defense strategy [8], [20], [21], [36]
          • to evaluate the security and dependability level [22], [26], [29], [30], [31], [37]
      • The state transition of a system is a Markov process [21], [29], [44]
      • Use Q-learning to obtain the converging optimal strategies when the transition matrix is not known [44]
    Non-cooperative games:
      • Use Shapley’s method [35] to calculate the Nash Equilibrium of the game [29]
      • Use a method called NPL 1 in [34] to obtain the Nash Equilibrium of the game [21]
      • Repeated security investment game between network users, two or more players [47]...


Proceedings ArticleDOI
20 Nov 2009
TL;DR: This paper presents a new classification of dependability and security models, and presents several individual model types such as availability, confidentiality, integrity, performance, reliability, survivability, safety and maintainability.
Abstract: There is a need to quantify system properties methodically. Dependability and security models have evolved nearly independently. Therefore, it is crucial to develop a classification of dependability and security models which can meet the requirement of professionals in both fault-tolerant computing and security community. In this paper, we present a new classification of dependability and security models. First we present the classification of threats and mitigations in systems and networks. And then we present several individual model types such as availability, confidentiality, integrity, performance, reliability, survivability, safety and maintainability. Finally we show that each model type can be combined and represented by one of the model representation techniques: combinatorial (such as reliability block diagrams (RBD), reliability graphs, fault trees, attack trees), state-space (continuous time Markov chains, stochastic Petri nets, fluid stochastic Petri nets, etc) and hierarchical (e.g., fault trees in the upper level and Markov chains in the lower level). We show case studies for each individual model types as well as composite model types.

107 citations

Journal ArticleDOI
TL;DR: This work develops a systematic approach to perform a cost-benefit analysis on the problem of optimal security hardening under such conditions, and model the attacker-defender interaction as an “arms race”, and explores how security controls can be placed in a network to induce a maximum return on investment.
Abstract: Researchers have previously looked into the problem of determining whether a given set of security hardening measures can effectively make a networked system secure. However, system administrators are often faced with a more challenging problem since they have to work within a fixed budget which may be less than the minimum cost of system hardening. An attacker, on the other hand, explores alternative attack scenarios to inflict the maximum damage possible when the security controls are in place, very often rendering the optimality of the controls invalid. In this work, we develop a systematic approach to perform a cost-benefit analysis on the problem of optimal security hardening under such conditions. Using evolutionary paradigms such as multi-objective optimization and competitive co-evolution, we model the attacker-defender interaction as an “arms race”, and explore how security controls can be placed in a network to induce a maximum return on investment.

92 citations


Cites background from "Towards a stochastic model for inte..."

  • ...propose the use of stochastic game theory to compute probabilities to attacker actions [16,17]....


Journal ArticleDOI
TL;DR: A deep survey of the state-of-the-art model-based quantitative NSMs proposed, along with an in-depth discussion on relevant characteristics of the surveyed proposals and open research issues of the topic.
Abstract: Network security metrics (NSMs) based on models allow to quantitatively evaluate the overall resilience of networked systems against attacks. For that reason, such metrics are of great importance to the security-related decision-making process of organizations. Considering that over the past two decades several model-based quantitative NSMs have been proposed, this paper presents a deep survey of the state-of-the-art of these proposals. First, to distinguish the security metrics described in this survey from other types of security metrics, an overview of security metrics, in general, and their classifications is presented. Then, a detailed review of the main existing model-based quantitative NSMs is provided, along with their advantages and disadvantages. Finally, this survey is concluded with an in-depth discussion on relevant characteristics of the surveyed proposals and open research issues of the topic.

74 citations


Cites methods from "Towards a stochastic model for inte..."

  • ...[53], [54] propose a model to estimate the Mean Time to First Failure (MTFF) metric....


Journal ArticleDOI
TL;DR: A new approach to integrated security and dependability evaluation, which is based on stochastic modeling techniques, and opens up for use of traditional Markov analysis to make new types of probabilistic predictions for a system, such as its expected time to security failure.
Abstract: This paper presents a new approach to integrated security and dependability evaluation, which is based on stochastic modeling techniques. Our proposal aims to provide operational measures of the trustworthiness of a system, regardless if the underlying failure cause is intentional or not. By viewing system states as elements in a stochastic game, we can compute the probabilities of expected attacker behavior, and thereby be able to model attacks as transitions between system states. The proposed game model is based on a reward- and cost concept. A section of the paper is devoted to the demonstration of how the expected attacker behavior is affected by the parameters of the game. Our model opens up for use of traditional Markov analysis to make new types of probabilistic predictions for a system, such as its expected time to security failure.

63 citations

References
Journal ArticleDOI
TL;DR: The aim is to explicate a set of general concepts, of relevance across a wide range of situations and, therefore, helping communication and cooperation among a number of scientific and technical communities, including ones that are concentrating on particular types of system, of system failures, or of causes of systems failures.
Abstract: This paper gives the main definitions relating to dependability, a generic concept including a special case of such attributes as reliability, availability, safety, integrity, maintainability, etc. Security brings in concerns for confidentiality, in addition to availability and integrity. Basic definitions are given first. They are then commented upon, and supplemented by additional definitions, which address the threats to dependability and security (faults, errors, failures), their attributes, and the means for their achievement (fault prevention, fault tolerance, fault removal, fault forecasting). The aim is to explicate a set of general concepts, of relevance across a wide range of situations and, therefore, helping communication and cooperation among a number of scientific and technical communities, including ones that are concentrating on particular types of system, of system failures, or of causes of system failures.

4,695 citations


"Towards a stochastic model for inte..." refers background in this paper


  • ...In a security context, the result of these faults is generally referred to as an intrusion....


  • ...Dependability, on the other hand, is the ability of a computer system to deliver service that can justifiably be trusted.... (Footnote: ∗ ”Centre for Quantifiable Quality of Service in Communication Systems, Centre of Excellence” appointed by the Research Council of Norway, funded by the Research Council, NTNU and UNINETT. http://www.q2s.ntnu.no/)


  • ...It has been shown in [2, 9, 16] that the ”fault-errorfailure” pathology, which is commonly used for modelling the failure process in a dependability context, can be applied in the security domain as well....



Journal ArticleDOI
TL;DR: It is found that many techniques from dependability evaluation can be applied in the security domain, but that significant challenges remain, largely due to fundamental differences between the accidental nature of the faults commonly assumed in dependability evaluation, and the intentional, human nature of cyber attacks.
Abstract: The development of techniques for quantitative, model-based evaluation of computer system dependability has a long and rich history. A wide array of model-based evaluation techniques is now available, ranging from combinatorial methods, which are useful for quick, rough-cut analyses, to state-based methods, such as Markov reward models, and detailed, discrete-event simulation. The use of quantitative techniques for security evaluation is much less common, and has typically taken the form of formal analysis of small parts of an overall design, or experimental red team-based approaches. Alone, neither of these approaches is fully satisfactory, and we argue that there is much to be gained through the development of a sound model-based methodology for quantifying the security one can expect from a particular design. In this work, we survey existing model-based techniques for evaluating system dependability, and summarize how they are now being extended to evaluate system security. We find that many techniques from dependability evaluation can be applied in the security domain, but that significant challenges remain, largely due to fundamental differences between the accidental nature of the faults commonly assumed in dependability evaluation, and the intentional, human nature of cyber attacks.

537 citations


"Towards a stochastic model for inte..." refers background in this paper

  • ...As pointed out in [13], security analysis must assume that an attacker’s choice of action will depend on the system state, may change over time, and will result in security failures that are highly correlated....


Journal ArticleDOI
TL;DR: Quantitative measures that estimate the effort an attacker might expend to exploit these vulnerabilities to defeat the system security objectives are proposed and a set of tools has been developed to compute such measures and used in an experiment to monitor a large real system for nearly two years.
Abstract: This paper presents the results of an experiment in security evaluation. The system is modeled as a privilege graph that exhibits its security vulnerabilities. Quantitative measures that estimate the effort an attacker might expend to exploit these vulnerabilities to defeat the system security objectives are proposed. A set of tools has been developed to compute such measures and has been used in an experiment to monitor a large real system for nearly two years. The experimental results are presented and the validity of the measures is discussed. Finally, the practical usefulness of such tools for operational security monitoring is shown and a comparison with other existing approaches is given.

409 citations

Journal ArticleDOI
TL;DR: In this paper, the interactions between an attacker and an administrator were modeled as a two-player stochastic game and a nonlinear program was used to compute Nash equilibria or best-response strategies for the players (attacker and administrator).
Abstract: This paper presents a game-theoretic method for analyzing the security of computer networks. We view the interactions between an attacker and the administrator as a two-player stochastic game and construct a model for the game. Using a nonlinear program, we compute Nash equilibria or best-response strategies for the players (attacker and administrator). We then explain why the strategies are realistic and how administrators can use these results to enhance the security of their network.

388 citations

Frequently Asked Questions (13)
Q1. What are the contributions in "Towards a stochastic model for integrated security and dependability evaluation" ?

The authors present a new approach to integrated security and dependability evaluation, which is based on stochastic modelling techniques. A section of the paper is devoted to the demonstration of how the expected attacker behavior is affected by the parameters of the game. By viewing system states as elements in a stochastic game, the authors can compute the probabilities of expected attacker behavior, and thereby be able to model attacks as transitions between system states. 

In the future the authors plan to investigate whether time-dependent success rates can be used to compute more realistic strategies (they must assume that attackers learn over time!). Furthermore, verifying the model’s ability to predict real-life attacks will require further research, including validation of the model against empirical data.

Reward and cost are generic concepts, which can be used to quantify the payoff of the actions both in terms of abstract values, such as social status and satisfaction versus disrespect and disappointment, as well as real values, e.g. financial gain and loss. 

This paper focuses on the dynamic method of using stochastic models (Markov chains), which is commonly used to obtain availability (the fraction of time the system is operational during an observation period) or reliability (the probability that the system remains operational over an observation period) predictions by the dependability community. 
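As a toy illustration of the availability measure just defined (not taken from the paper), a two-state Markov model with assumed failure and repair rates gives the stationary availability directly:

```python
# Two-state Markov availability model (illustrative, assumed rates per hour):
# the system fails at rate lam and is repaired at rate mu.
lam = 1 / 1000.0   # one failure per 1000 h (assumption)
mu = 1 / 8.0       # 8 h mean repair time (assumption)

# The balance equation lam * X_up = mu * X_down, with X_up + X_down = 1,
# yields the stationary availability, i.e. the long-run fraction of time
# the system is operational.
availability = mu / (lam + mu)
print(round(availability, 4))   # → 0.9921
```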

Using the rate values λ23 = 1/3, λ34 = λ35 = λ45 = 3, ϕ12 = 1/480, ϕ21 = 1/120, ϕ31 = ϕ41 = 1, ϕ51 = 3, ϕ61 = 1/24, μH = 1/3600 and μS = 1/120 per hour, together with a fictitious set of reward- and cost values, the game elements become

Γ2 = ( 1 + 0.952Γ3   −4 )
     ( −5             0 ),

Γ3 = ( 1 + 0.748Γ4   −3 )
     ( 1             −2 )
     ( −5             0 ),

Γ4 = (  1   −2 )
     ( −5    0 ).

Solving the stochastic game in accordance with (8) provides the strategy vectors π2 = (0.568, 0.432), π3 = (0, 0.625, 0.375) and π4 = (0.625, 0.375); hence, the state transition rate matrix for the DNS server is as displayed in Table 1. Using (3) and (4) in Section 2.3, the authors compute the stationary probabilities X = {X1, . . . , X6} = {0.98, 0.01, 6.50 · 10^−4, 0, 3.16 · 10^−3, 6.62 · 10^−3}.
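The quoted strategy vectors can be cross-checked with the standard 2 × 2 zero-sum formulas. This is a sketch, not the authors' code; it relies on the observation that the equilibrium of Γ3 puts zero weight on its first row (π3 = (0, 0.625, 0.375)), so the value of Γ3 reduces to the same 2 × 2 sub-game as Γ4:

```python
def mix(a11, a12, a21, a22):
    """Row player's probability on the first row of a 2x2 zero-sum game
    without a saddle point (standard formula)."""
    return (a22 - a21) / (a11 - a12 - a21 + a22)

def value(a11, a12, a21, a22):
    """Value of the same 2x2 zero-sum game."""
    return (a11 * a22 - a12 * a21) / (a11 - a12 - a21 + a22)

v4 = value(1, -2, -5, 0)             # value of game element Γ4 = -1.25
p4 = mix(1, -2, -5, 0)               # attack probability in Γ4
v3 = v4                              # Γ3 reduces to the same sub-game
p2 = mix(1 + 0.952 * v3, -4, -5, 0)  # attack probability in Γ2

print(round(p4, 3), round(p2, 3))    # 0.625 0.568, matching π4 and π2
```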

This is the main strength of state transition models where, at a low level, the system is modelled as a finite state machine (most systems consist of a set of interacting components and the system state is therefore the set of its component states). 

In mathematical terms, the stochastic process describing the dynamic system behavior is a continuous time Markov chain with discrete state space. 

By making the failure states absorbing, i.e. removing all outgoing transitions, one can compute the mean time to first failure (MTFF) for a system.
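A minimal sketch of that computation, on a hypothetical three-state chain (not the paper's DNS model): make the failure state absorbing and solve −Q_T t = 1 over the transient states for the mean times to absorption.

```python
# Hypothetical chain: good (1) -> degraded (2) at rate l12; from degraded
# the system is either repaired back (r21) or fails permanently (l2F).
# The failure state F is absorbing, so MTFF = mean time to absorption.
l12, l2F, r21 = 0.01, 0.05, 0.5   # illustrative rates per hour

# -Q_T over the transient states {1, 2}:
#   [  l12       -l12     ]
#   [ -r21    r21 + l2F   ]
a, b, c, d = l12, -l12, -r21, r21 + l2F
det = a * d - b * c          # = l12 * l2F
t1 = (d - b) / det           # MTFF starting from the good state (Cramer)
t2 = (a - c) / det           # MTFF starting from the degraded state
print(round(t1), round(t2))  # → 1120 1020 (hours)
```

As expected, starting in the degraded state shortens the mean time to first failure.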

Skill and motivation are identified as the primary classification criteria, which fit well into their mathematical framework where an attacker’s skill is represented by the expected time to success, λ⁻¹(a), and the motivation by the reward- and cost concept.

It is interesting to note that even though measures are taken to increase the cost of detected actions (legal proceedings, for instance), a rapidly increasing b will only have a marginal effect on the behavior of an attacker who has a strong reluctance to resign.

The authors distinguish between two different types of accidental failures: hardware availability failures which require a manual repair, and software availability failures, which only require a system reconfiguration and/or reboot. 

This may transfer the system into a third state (3), and thereby make it possible to insert false entries in the server cache (software integrity failure) or to shut the server down (software availability failure). 

1. Hence, the transition probability between game elements 2 and 3 for this particular ”play of the game” is computed as

p23(a1) = λ23 / (λ23 + ϕ21 + μS + μH)    (5)

Step 5: Solve the game model.
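Plugging in the rates quoted in the answers above, this probability can be evaluated directly; notably, it reproduces the 0.952 factor coupling game elements Γ2 and Γ3 (a sketch using only numbers that appear on this page):

```python
# Rates per hour, as quoted for the DNS-server example
lam23 = 1 / 3      # attack success rate out of game element 2
phi21 = 1 / 120    # rate from state 2 back to state 1 (restoration)
mu_S = 1 / 120     # software availability failure rate
mu_H = 1 / 3600    # hardware availability failure rate

# Equation (5): the attack transition must "win the race" against the
# competing restoration and accidental-failure transitions.
p23 = lam23 / (lam23 + phi21 + mu_S + mu_H)
print(round(p23, 3))   # → 0.952, the coefficient appearing in Γ2
```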