Security and Privacy Requirements Analysis within a Social Setting
Lin Liu¹, Eric Yu², John Mylopoulos¹
¹Department of Computer Science, University of Toronto, Toronto, Canada, M5S 1A4
{liu, jm}@cs.toronto.edu
²Faculty of Information Studies, University of Toronto, Toronto, Canada, M5S 3G6
yu@fis.utoronto.ca
Abstract
Security issues for software systems ultimately
concern relationships among social actors -
stakeholders, system users, potential attackers - and the
software acting on their behalf. This paper proposes a
methodological framework for dealing with security and
privacy requirements based on i*, an agent-oriented
requirements modeling language. The framework
supports a set of analysis techniques. In particular,
attacker analysis helps identify potential system abusers
and their malicious intents. Dependency vulnerability
analysis helps detect vulnerabilities in terms of
organizational relationships among stakeholders.
Countermeasure analysis supports the dynamic decision-
making process of defensive system players in
addressing vulnerabilities and threats. Finally, access
control analysis bridges the gap between security
requirement models and security implementation models.
The framework is illustrated with an example involving
security and privacy concerns in the design of agent-
based health information systems. In addition, we
discuss model evaluation techniques, including
qualitative goal model analysis and property verification
techniques based on model checking.
1. Introduction
Security, and to a lesser extent privacy, have been active
research areas in computing for a long time. Methods
and techniques have been developed to protect data,
programs, and more recently networks, from attacks or
other infringements through mechanisms such as access
controls and firewalls. However, most techniques were
developed for earlier generations of computing
environments that were largely within a single, closed
jurisdictional control -- such as a single enterprise with a
well-defined boundary. The open Internet environment,
together with new business and organizational practices,
has increased the complexity of security and privacy
considerations dramatically. In such a setting, a system
could potentially be interacting and sharing information
with a large number of other systems, often on ad hoc
and dynamically negotiated configurations. Traditional
models and techniques for characterizing and analyzing
security and privacy are ill-equipped to deal with the
much higher social complexity that is implicit in this
new internet-based setting.
In this paper, we propose a methodological
framework for analyzing security and privacy
requirements based on the concept of strategic social
actors. The framework offers a set of security
requirements analysis facilities to help users,
administrators and designers better understand the
various threats and vulnerabilities they face, the
countermeasures they can take, and how these can be
combined to achieve the desired security and privacy
results within the broader picture of system design and
the business environment. Moreover, the analysis
process is integrated into the usual requirements process,
so that security and privacy are taken into account from
the very start.
This paper builds on our earlier work on designing
trust and on role-based pattern analysis of security
requirements. In [13, 23], we use a role-based mechanism
to study patterns of relationships, such as trust relations
and attacker-defender relations, at various levels of
abstraction. These
patterns can be selectively applied and combined for
analyzing specific system configurations later on. This
idea has been integrated and extended in the attacker
analysis discussed below.
Based on our previous work in agent-oriented
software engineering [22] and non-functional
requirements [4], we recognize that, as with other non-
functional requirements, security and privacy goals must
be identified and dealt with starting from the earliest
stages of a software engineering process [24,23].
Security and privacy issues originate from human
concerns and intents, and thus should be modeled
through social concepts [24,13]. Social concepts are
extended to cover relationships among software systems
and components. Agent-based models enable richer
descriptions and analysis techniques about internet-based
environments, especially ones involving intelligent
agents. Based on these models, knowledge-based
decision support tools can help identify alternatives,
detect conflicts and synergies, understand related
implications and consequences, and through a systematic

process, eventually arrive at appropriate combinations of
proven policies, procedures, devices, and mechanisms to
achieve the desired levels of security and privacy.
The proposed security requirements analysis is
illustrated with the example of designing software agents
supporting patient-doctor interactions. Design of security
and privacy in health care information systems is a
challenging task due to the influences of complex factors
in multiple dimensions. For instance, in the social
dimension, there are both patient-physician and user-
system trust relationships. There are also regulations and
constraints along medical and financial dimensions.
Moreover, adding unfamiliar new technologies such as
unified electronic medical records and software agents is
bound to make the design task even more challenging,
since problems that arise from these new dimensions
need to be taken into account.
Designing for security and privacy amounts to
answering questions such as: “Who is likely to attack the
system? By what means might a specific attacker attack
the system? Whose privacy is at risk? How can the system
be defended against these threats? What are the side
effects of adding particular countermeasures?” Yet, there is no
systematic analysis technique through which one can go
from answers to these questions to particular security
and privacy solutions. Our proposal is intended to
provide mechanisms that explicitly relate social concerns
with the technologies and policies addressing these
concerns.
Section 2 introduces the basic requirement analysis
process supported by i*. We base our example on the
Guardian Angel (GA) project [20], a patient and
physician supporting system using software agents.
Section 3 discusses the extended modeling process and a
set of security- and privacy-related analysis techniques.
Section 4 describes two particular model evaluation
techniques – goal-based evaluation and model property
checking. Section 5 and section 6 discuss related work
and summarize the results of the paper.
2. Domain Requirements Analysis with i*
The solid lines and boxes on the left-hand side of Figure 1
indicate a series of basic domain requirements analysis
steps.
Actor identification answers the question “Who is
involved in the system?” In i* [22], an actor is used to
refer generically to any unit to which intentional
dependencies can be ascribed. Figure 2 shows some
actors in the GA domain. Actors may be further
differentiated into roles, agents, and positions. A role is
an abstract actor embodying expectations and
responsibilities, e.g., Owner, Primary User, and Administrator
of Patient Information, Guardian of Patient and Provider of
Health Care Service. An agent is a concrete actor, human
or machine, with specific capabilities and functionalities,
e.g., Abby Kaye, Dr. Anthony, Ms. Young, GA-PDA and GA-
Hospital Module. An agent can play one or more roles. A
set of roles packaged together to be assigned to an agent
is called a position. In Figure 2, Patient is modeled as a
position which bridges the multiple abstract roles it
covers, and the real world agents occupying it. As a
simplification, other examples in this paper omit the use
of the position concept. Initially, human actors
representing stakeholders in the domain are identified
together with existing machine actors (step in Figure
1). As the analysis proceeds (step in Figure 1), more
actors are identified, including new system agents such as
GA System, GA-PDA, GA-HomePC, and GA Hospital Module,
when certain design choices have been made, and new
functional entities are added.
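The actor notions above (roles, agents, and positions) can be summarized in a small data model. The sketch below is purely illustrative — the class and field names are our own, not i* or GA-project vocabulary — but it shows how an agent acquires roles both directly and through the positions it occupies.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """An abstract actor embodying expectations and responsibilities."""
    name: str

@dataclass
class Position:
    """A set of roles packaged together to be assigned to an agent."""
    name: str
    covers: list  # the roles this position covers

@dataclass
class Agent:
    """A concrete actor (human or machine) with specific capabilities."""
    name: str
    plays: list = field(default_factory=list)     # roles played directly
    occupies: list = field(default_factory=list)  # positions occupied

    def all_roles(self):
        """Roles played directly plus roles covered by occupied positions."""
        roles = list(self.plays)
        for pos in self.occupies:
            roles.extend(pos.covers)
        return roles

# Example from the GA domain: Patient is a position covering abstract
# roles, occupied by the concrete agent Abby Kaye.
owner = Role("Owner of Patient Information")
guardian = Role("Guardian of Patient")
patient = Position("Patient", covers=[owner, guardian])
abby = Agent("Abby Kaye", occupies=[patient])
print([r.name for r in abby.all_roles()])
```

The position acts as the bridge the text describes: the agent never names the abstract roles directly, yet inherits all of them.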
Goal/task identification answers the question “What
does the actor want to achieve?” (step in Figure 1). As
shown in Figure 3, answers to this question can be
represented as goals capturing the high-level objectives of
Figure 1. Requirements Elicitation Process with i* (steps include Actor Identification, Goal/Task Identification, Dependency Identification, Vulnerability Analysis, Attacker Identification, Malicious Intent Identification, Attacking Measure Identification, and Countermeasure Identification)
Figure 2. Actors (roles, agents and position) in the GA system
Figure 3. Goal/task elicitation in the space of alternatives for a physician opening a new practice (SR)
Figure 4. Dependency relationships in the GA system (SD)

agents. A goal may be “hard”, referring to a function, e.g.,
Dr. Anthony wants Quality Health Care Be Delivered, or
“soft”, referring to a quality requirement, e.g., Timely
Accessibility of Medical Record. Tasks are used to represent
the specific procedures to be performed by agents, e.g.
Manage Clinician-based Record. A resource is a physical or
informational entity, about which the main concern is
whether it is available. A belief is used to represent a
domain characteristic, a design assumption or an
environmental condition.
A goal can be accomplished in different ways. For
example, the goal Medical Record Be Managed can be
achieved by performing the task Manage Clinician-Based
Record or Manage Unified Electronic Record. The tasks are
connected to the goal through means-ends links. A goal
is satisfied if any of its tasks is satisfied. A task may be
detailed into subgoals, subtasks, resources and softgoals
through decomposition links. All subcomponents of a
task must be satisfied in order to accomplish the task.
Such goal models can represent the different alternatives
for achieving a goal, elaborate the necessary components
for carrying out a task, and evaluate the positive or
negative contributions from tasks to softgoals. High-level
abstract softgoals are reduced into lower-level, more
specific softgoals, or operationalized in terms of tasks,
through contribution links. The refinement of goals,
tasks, and softgoals (step in Figure 1) is considered to
have reached an adequate level once all the necessary
design decisions can be made based on the existing
information in the model. The i* model in Figure 3 is
created by running through the corresponding steps in
Figure 1 iteratively.
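The satisfaction rules just stated — a goal holds if any of its means-ends alternatives holds, a task holds only if all of its decomposition subcomponents hold — amount to OR/AND propagation over the goal graph. The following is a minimal sketch of that propagation; the graph encoding and node names are our own illustrative choices, not i* syntax.

```python
# OR/AND label propagation over a goal model: goals are OR nodes
# (satisfied if any alternative task is), tasks are AND nodes
# (satisfied only if all subcomponents are).

def satisfied(node, children, leaves):
    """node: element name; children: dict name -> (kind, [child names]);
    leaves: the set of leaf elements assumed satisfied."""
    if node not in children:        # a leaf element
        return node in leaves
    kind, subs = children[node]
    results = [satisfied(s, children, leaves) for s in subs]
    return any(results) if kind == "goal" else all(results)

# Hypothetical fragment of the model in Figure 3: one goal with two
# alternative tasks, one of which decomposes into two subcomponents.
model = {
    "Medical Record Be Managed": ("goal", ["Manage Clinician-Based Record",
                                           "Manage Unified Electronic Record"]),
    "Manage Unified Electronic Record": ("task", ["Collect Record",
                                                  "Store Record Centrally"]),
}

# The goal holds because one alternative task holds.
print(satisfied("Medical Record Be Managed", model,
                leaves={"Manage Clinician-Based Record"}))  # True
```

This is the qualitative core of the goal evaluation revisited in Section 4; full i* evaluation also propagates partial and negative labels along contribution links.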
Dependency relationship identification answers the
question “How do the actors relate to each other?” In i*,
we focus on intentional relationships (e.g., one actor
depends on another for a goal to be achieved) rather than
on information exchanges or flows (e.g., what message an
actor sends to another). A strategic dependency (SD)
model is a network of intentional dependency links,
as shown in Figure 4. When the internal rationales of
agents are made explicit (as in Figure 3), we call that a
strategic rationale (SR) model.
By analyzing the dependency network in an SD model,
we can reason about opportunities and vulnerabilities.
The SD model in Figure 4 shows that Abby Kaye
depends on GA-PDA to provide medical instruction (Be
Provided [Medical Instruction]). This dependency is
accompanied by expectations on Timeliness, Accessibility,
and Comprehensiveness of the Medical Instruction. The
model is generated by running steps , and in
Figure 1 recursively. As explained above, by hiding the
internal rationales of actors in an SR model, an SD model
can be obtained. Thus, the goal, task, resource, softgoal
dependencies presented in an SD model are not added
arbitrarily, it always indicates a necessity of delegation
relationship across the actor boundary.
Dependency types are used to differentiate the kinds
of freedom allowed in a relationship. Be Provided [Medical
Instruction], being modeled as a goal dependency, indicates
that GA-PDA has full freedom to decide how to provide
instruction to Abby Kaye. Scheduling, Alerting and Notifying,
being a task dependency, means that GA-PDA must follow
a prescribed course of action. A resource dependency
(e.g., Patient Data) means that the depended party
(dependee) has to make it available to the depender.
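A dependency link and the degree of freedom its type leaves to the dependee can be captured in a simple record. This sketch is an illustration with names of our own choosing, not i* notation; the freedom descriptions paraphrase the text above.

```python
from dataclasses import dataclass

# The i* dependency types differ in the freedom left to the dependee:
# a goal dependency leaves the "how" open, a task dependency prescribes
# a course of action, a resource dependency asks only for availability,
# and a softgoal dependency asks for a quality judged by the depender.
FREEDOM = {
    "goal": "dependee chooses how to achieve it",
    "task": "dependee must follow a prescribed course of action",
    "resource": "dependee must make the entity available",
    "softgoal": "dependee satisfices a quality, judged by the depender",
}

@dataclass
class Dependency:
    depender: str
    dependee: str
    dependum: str
    kind: str  # "goal" | "task" | "resource" | "softgoal"

    def freedom(self):
        return FREEDOM[self.kind]

# Two dependencies from the SD model in Figure 4:
d1 = Dependency("Abby Kaye", "GA-PDA",
                "Be Provided [Medical Instruction]", "goal")
d2 = Dependency("Abby Kaye", "GA-PDA",
                "Scheduling, Alerting and Notifying", "task")
print(d1.freedom())
```

Making the type explicit in the record is what later lets the vulnerability analysis ask, per link, how much a depender has given up to the dependee.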
In this paper, i* models are shown graphically.
Semantics and constraints of i* are embedded in the i*
meta-framework described in Telos[15]. With the
support of Telos, consistency checks between models,
scalability management of large projects, and various
other knowledge-based reasoning techniques can be
applied to i* models.
The kind of analysis shown above answers questions
such as “Who is involved in the system? What do they
want? How can their expectations be fulfilled? And what
are the inter-dependencies between them?” These
answers initially provide a sketch of the social setting of
the future system, and eventually result in a fairly
elaborate behavioural model where certain design choices
have already been made. However, another set of very
important questions has yet to be answered, i.e., “What if
things go wrong? What if the GA system does not behave
as expected? How bad can things get? What prevention
tactics can be considered?” These are some of the
questions we want to answer in the security requirements
analysis process.
3. Security Requirements Analysis with i*
The dashed lines and boxes on the right hand side of
Figure 1 indicate a series of security specific analysis
steps. These steps are integrated into the basic domain
requirements engineering process, such that threats from
potential attackers are anticipated and countermeasures
for system protection are sought and equipped wherever
necessary. Each of the security related analysis steps (step
to ) will be discussed in detail in the following
subsections.
3.1 Attacker Analysis
Attacker analysis aims to identify potential system
abusers and their malicious intents. The basic premise
here is that all the actors are assumed “guilty until proven
innocent”. In other words, given the result of the basic i*
requirements modeling process, we now consider that any
one of the actors (roles, positions or agents) identified so
far could be a potential attacker to the system or to other
actors.

For example, we want to ask, “In what ways can a
physician attack the system? How will he benefit from
inappropriate information disclosure?”
In this analysis, each actor is considered in turn as an
attacker. This attacker inherits the intentions, capabilities
and social relationships of the corresponding legitimate
actor (i.e., the internal goal hierarchy and external
dependency relationships in the model). This may serve
as the starting point of a forward security analysis
(step in Figure 1). A backward analysis, starting from
identifying possible malicious intents and business assets
of value, is also feasible.
Proceeding to the next step of the process, for each
attacker identified, we combine the capabilities and
interests of the attacker with those of the legitimate actor
(Figure 5). The analysis reveals the potential
commandeering of legitimate resources and capabilities
for illicit use. For example, Dr. Kohane, in playing the role
of Family Doctor, has access to certain patient data. When
acting as an attacker (Attacker Dr. Kohane As Family
Doctor), he will be able to Make Illegal Profit through Put
Patient Data Into Secondary Use.
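Casting a legitimate actor as an attacker, as just described, is essentially a model transformation: the attacker view keeps the actor's capabilities and dependencies and adds hypothesized malicious goals. The sketch below illustrates this with a plain-dictionary encoding; the field names and element strings are our own illustrative choices.

```python
def as_attacker(actor, malicious_goals):
    """Derive an attacker view of a legitimate actor: same capabilities
    and dependency relationships, plus hypothesized malicious intents."""
    return {
        "name": f"Attacker {actor['name']} As {actor['role']}",
        "capabilities": list(actor["capabilities"]),   # inherited as-is
        "dependencies": list(actor["dependencies"]),
        "goals": list(actor["goals"]) + list(malicious_goals),
    }

# Hypothetical encoding of the Dr. Kohane example:
family_doctor = {
    "name": "Dr. Kohane",
    "role": "Family Doctor",
    "capabilities": ["Access Patient Data"],
    "dependencies": ["Patient Data (from GA Hospital Module)"],
    "goals": ["Quality Health Care Be Delivered"],
}

attacker = as_attacker(family_doctor, ["Make Illegal Profit"])
# The attacker commandeers a legitimate capability for an illicit goal:
# Access Patient Data now serves Make Illegal Profit.
print(attacker["name"])
```

Because the legitimate actor's record is copied rather than mutated, the same base model can be reused to hypothesize a different attacker for each role in turn.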
Applying the above reasoning to the i* model in
Figure 2, we may identify that potential attackers to the
system are Patient Attacker, Patient Guardian Attacker, Care
Provider Attacker, Business Associate (e.g., Insurance
Company, Drug Company) Attacker and GA Software Agent
Attacker. Here, we use the term attacker to refer to the
source of any threat. Human attackers may attack
deliberately, e.g., by committing insurance fraud, hiding
malpractice evidence, and putting patient identifiable
information into secondary use. An attack can also be
accidental, e.g., accidental disclosure of embarrassing
private information. Software agents can be threats as
instruments of malicious human agents (e.g., they can be
compromised through “hacking” or “sniffing”) or
simply through malfunctions, e.g., misunderstanding
user instructions, executing instructions improperly, or
performing tasks not intended by the user. In any case,
software agents are considered as attackers to the system
just the same as human attackers.
The attacker identification approach introduced
above assumes that all attackers are insider attackers:
we set a system boundary, then exhaustively search
for possible attackers. In light of this, random attackers
such as Internet hackers/crackers, or attackers breaking
into a building, can also be dealt with within this
framework by being represented as sharing the same
territory as their victim. By conducting analysis on the
infrastructure of the Internet, we may identify attackers
by treating Internet resources as resources in the i*
model. By conducting building security analysis, break-in
attackers, or attackers sharing the same workspace, can
be identified. In [24], we adopted the opposite
assumption: there is a trusted perimeter for each agent;
all potential threat sources within this trusted perimeter
are ignored, and only threats from outside the perimeter
are defended against.
Figure 5. Attacker Analysis
3.2 Dependency Vulnerability Analysis
Dependency vulnerability analysis aims at identifying
the vulnerable points in the dependency network (step
in Figure 1). The basic idea is that dependency
relationships bring vulnerabilities to the system and the
depending actor (the depender). Potential attackers may
exploit these vulnerabilities to actually attack the system,
so that their malicious intents can be served. i*
dependency modeling allows a more specific
vulnerability analysis because the potential failure of
each dependency can be traced to a depender and to its
dependers. The questions we want to answer here are
“ which dependency relationships are vulnerable to
attack?”, “ What are the chain effects if one dependency
link is compromised?”
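Once the SD model is viewed as a graph, the chain-effect question can be answered mechanically: if a dependee is compromised, every actor reachable by following dependency links backwards (from dependee to depender) is affected. The sketch below runs this traversal over a hypothetical fragment of the GA dependency network; the link list is illustrative, not taken from Figure 4 verbatim.

```python
from collections import deque

def affected_dependers(deps, failed_dependee):
    """deps: list of (depender, dependee) links. Returns every actor
    whose dependencies are transitively undermined when
    failed_dependee is compromised."""
    affected, frontier = set(), deque([failed_dependee])
    while frontier:
        node = frontier.popleft()
        for depender, dependee in deps:
            if dependee == node and depender not in affected:
                affected.add(depender)
                frontier.append(depender)
    return affected

# Hypothetical fragment: Abby Kaye depends on GA-PDA, which in turn
# depends on the GA Hospital Module; Dr. Anthony also depends on the
# GA Hospital Module for patient data.
deps = [("Abby Kaye", "GA-PDA"),
        ("GA-PDA", "GA Hospital Module"),
        ("Dr. Anthony", "GA Hospital Module")]
print(sorted(affected_dependers(deps, "GA Hospital Module")))
# → ['Abby Kaye', 'Dr. Anthony', 'GA-PDA']
```

Compromising a single heavily-depended-on actor thus surfaces its whole chain of dependers, which is exactly the vulnerability the analysis is meant to expose before countermeasures are chosen.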
