University of Zurich
Zurich Open Repository and Archive
Winterthurerstr. 190
CH-8057 Zurich
http://www.zora.uzh.ch
Year: 2008
A risk-based, value-oriented approach to quality requirements
Glinz, M
Glinz, M (2008). A risk-based, value-oriented approach to quality requirements. IEEE Software, 25(2):34-41.
Postprint available at:
http://www.zora.uzh.ch
Posted at the Zurich Open Repository and Archive, University of Zurich.
http://www.zora.uzh.ch
Originally published at:
IEEE Software 2008, 25(2):34-41.
A risk-based, value-oriented approach to quality requirements
Abstract
Quality requirements, i.e., those requirements that pertain to a system's quality attributes, are traditionally regarded as useful only when they are represented quantitatively so that they can be measured. This article presents a value-oriented approach to specifying quality requirements that deviates from the classic approach. It uses a broad range of potential representations that are selected on the basis of risk assessment. Requirements engineers select a representation for each quality requirement such that they achieve an optimal balance between mitigating the risk of developing a system that doesn't satisfy the stakeholders' desires and needs on the one hand and the cost of specifying the requirement in the selected representation on the other. This article is part of a special issue on quality requirements.
focus: quality requirements
IEEE Software | Published by the IEEE Computer Society | 0740-7459/08/$25.00 © 2008 IEEE
A Risk-Based, Value-Oriented Approach to Quality Requirements
Martin Glinz, University of Zurich
This value-oriented approach to specifying quality requirements uses a range of potential representations chosen on the basis of assessing risk instead of quantifying everything.
When quality requirements are elicited from stakeholders, they're often stated qualitatively, such as "the response time must be fast" or "we need a highly available system." (See the "Defining Quality Requirements" sidebar for a definition of quality requirements.) However, qualitatively represented requirements are ambiguous and thus difficult to verify. As a consequence, we may encounter three kinds of problems:
1. The system developers build a system that delivers less than the stakeholders expect. This results in stakeholder dissatisfaction and might, in extreme cases, render a system useless.
2. The system developers build a system that delivers more than the stakeholders need. This results in systems that are more expensive than necessary.
3. The developers and the customer disagree whether the delivered system meets a given quality requirement, and there is no clear criterion to decide who is right.
For example, if the stakeholders mean 7 days × 24 hours of operation when they say "We need a highly available system" but the developers interpret this requirement as "at least 23 hours per working day," we have the first kind of problem. Conversely, if the stakeholders would be happy with availability from 6 a.m. to 8 p.m. on all work days while the developers build a 7×24 system, with all the additional effort to develop and operate a continuously running system, we have the second kind of problem. Problems of type 1 typically also imply a problem of type 3.
The traditional way of solving these problems is to quantify all quality requirements. But quantification isn't the best solution in all cases. Instead, a quality requirement should be represented such that it delivers optimum value. You can determine such an optimal representation using a risk-based strategy.
Quantification
Quantification means defining metrics that make a requirement measurable (see the "Measuring Quality Requirements" sidebar). For example, we could quantify the requirement "The response time must be fast" as "The response time shall be less than 0.5 seconds in 98 percent of all user input actions." Work on quantification was pioneered by Barry Boehm [1] and Tom Gilb [2], among others. Today, this topic is broadly covered by standards [3,4] and textbooks [5].
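A quantified requirement of this form is mechanically checkable. The following sketch tests whether a set of measured response times satisfies the example requirement above; the measurement data is hypothetical, and the 0.5-second and 98 percent thresholds are simply the values from the example:

```python
def meets_response_time_requirement(times_s, threshold_s=0.5, required_fraction=0.98):
    """Check: at least `required_fraction` of all measured user input
    actions must complete in less than `threshold_s` seconds."""
    if not times_s:
        raise ValueError("no measurements")
    fast = sum(1 for t in times_s if t < threshold_s)
    return fast / len(times_s) >= required_fraction

# Hypothetical measurements: 49 fast actions and 1 slow one (49/50 = 98%).
times = [0.2] * 49 + [1.3]
print(meets_response_time_requirement(times))  # True
```

Note that the qualitative form "the response time must be fast" admits no such check; the quantified form does.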
Some quality requirements are directly measurable; that is, a single well-defined metric adequately measures them. For example, performance requirements are directly measurable. The only difficulty in this case is to get the necessary quantitative input from the stakeholders, that is, motivating them to specify concrete threshold values such as "< 0.5 seconds" instead of "fast."
On the other hand, for some quality requirements, such a metric doesn't exist or its application is too expensive. Usability is a typical example of the first kind. We don't have a single metric for quantifying a requirement such as "The system shall be user friendly." Portability is an example of the second kind. We can measure it directly with the metric M_port(s) = 1 − E_port(s) / E_new(s), where E_port(s) is the average effort for porting the system s to a new platform and E_new(s) is the average effort for developing s from scratch for a given platform. So the requirement "The system shall be highly portable" could be quantified as M_port(s) ≥ 0.8. However, calculating this metric for a given system s would mean that E_port(s) and E_new(s) must be measured, which in turn would imply both porting s to the new platform and redeveloping s from scratch for the new platform (while keeping constant all other factors that influence the effort). Clearly, the cost of doing this is prohibitively high. Also, estimating E_port(s) and E_new(s)
Defining Quality Requirements
The term quality requirement denotes those requirements that pertain to a system's attributes, such as performance attributes or specific qualities. For example, the following are quality requirements: "The system shall be user friendly," "The time interval between two consecutive scans of the temperature sensor shall be below two seconds," "The probability of successful, unauthorized intrusion into the database shall be smaller than 10^−6."
The term should not be confused with the notion of requirements that are of high quality: those that are adequate, unambiguous, consistent, verifiable, and so on.
There are different ways of positioning quality requirements in requirements classification frameworks. This article uses the classification shown in figure A, where quality requirements are denoted as attributes. [1] In this classification, system (or product) requirements are classified according to their concern. Requirements pertaining to a functional concern become functional requirements. A performance requirement pertains to a performance concern. A specific quality requirement pertains to a quality concern other than the quality of meeting the functional requirements. Finally, a constraint is a requirement that constrains the solution space beyond what's necessary for meeting the given functional, performance, and specific quality requirements. Quality requirements, or attributes, are performance requirements or specific quality requirements.
The ISO/IEC 25030 standard classifies software product requirements according to the ISO quality model terminology (see figure B). [2] In this standard, software quality requirements are a subcategory of inherent property requirements. The latter, together with assigned property requirements, form the category of software product requirements.
References
1. M. Glinz, "On Non-Functional Requirements," Proc. 15th IEEE Int'l Requirements Eng. Conf. (RE'07), IEEE CS Press, 2007, pp. 21−26.
2. ISO/IEC 25030: Software Engineering—Software Product Quality Requirements and Evaluation (SQuaRE)—Quality Requirements, Int'l Organization for Standardization, 2007.
Figure A. The requirements classification used in this article. [1] (Diagram: a requirement is a project, process, or system requirement. System requirements comprise functional requirements (functionality and behavior: functions, data, stimuli, reactions, behavior), attributes, and constraints (physical, legal, cultural, environmental, design and implementation, interface, ...). Attributes comprise performance requirements (time and space bounds: timing, speed, volume, throughput) and specific quality requirements ("-ilities": reliability, usability, security, availability, portability, maintainability, ...).)
Figure B. Classification of software product requirements according to ISO/IEC 25030. [2] (Diagram: software product requirements comprise inherent property requirements and assigned property requirements. Inherent property requirements comprise functional requirements and software quality requirements; the latter comprise quality in use requirements, external quality requirements, and internal quality requirements. Assigned property requirements are managerial requirements including, for example, requirements for price, delivery date, product future, and product supplier.)
directly (without using subcharacteristics) is impossible except when an organization has ample experience both in porting and developing systems of similar size and complexity, and that's rare.
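To make the portability metric concrete, here is a minimal sketch. The effort figures are hypothetical; the 0.8 threshold is the one from the example requirement above:

```python
def portability(effort_port, effort_new):
    """M_port(s) = 1 - E_port(s) / E_new(s): values closer to 1 mean
    porting is cheap relative to redeveloping from scratch."""
    if effort_new <= 0:
        raise ValueError("redevelopment effort must be positive")
    return 1 - effort_port / effort_new

# Hypothetical efforts in person-days: porting costs 30, redeveloping 200.
m = portability(30, 200)
print(m)           # 0.85
print(m >= 0.8)    # True: the requirement M_port(s) >= 0.8 would be met
```

The arithmetic is trivial; as the text explains, the obstacle is obtaining E_port(s) and E_new(s) at all.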
To measure these two kinds of quality requirements, we must find subcharacteristics of the given requirement that are directly measurable and whose values are strongly correlated with the values of the given quality requirement.
For example, the ISO/IEC 9126 standard [1-3] (see the sidebar "Standards for Quality Requirements") defines usability through the subcharacteristics understandability, learnability, operability, attractiveness, and usability compliance. Each of these in turn has directly measurable subcharacteristics. For example, a subcharacteristic of understandability is completeness of description, which we can measure as C(s) = F_u(s) / F_tot(s), where F_u(s) is the number of functions understood and F_tot(s) is the total number of functions in a system s.
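The completeness-of-description metric above is a simple ratio on a ratio scale, so it can be computed directly. A small sketch, with a hypothetical function count:

```python
def completeness_of_description(functions_understood, functions_total):
    """C(s) = F_u(s) / F_tot(s): the fraction of a system's functions
    that users understood from the description (a value in [0, 1])."""
    if functions_total <= 0:
        raise ValueError("system must have at least one function")
    if not 0 <= functions_understood <= functions_total:
        raise ValueError("understood count out of range")
    return functions_understood / functions_total

# Hypothetical: users understood 45 of a system's 60 functions.
print(completeness_of_description(45, 60))  # 0.75
```

Because the result is on a ratio scale, averaging such values over many users or computing percentages is legitimate (see the "Measuring Quality Requirements" sidebar on scale types).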
Practically speaking, quantifying a quality requirement that is not directly measurable implies first defining an appropriate set of measurable subcharacteristics and then determining the required values for every subcharacteristic.
Advantages and drawbacks
The advantages of quantifying quality requirements are obvious: we get unambiguous, verifiable requirements and thus reduce the risk of delivering systems that don't satisfy stakeholders' desires and needs. So it's tempting to state "You shall quantify all quality requirements" as the first commandment of quality requirements engineering.
However, these advantages come with a price tag. For example, if we quantify the requirement "The system shall be user friendly" by using the usability characteristics given in ISO/IEC 9126, we'd have to elicit required values for 28 directly measurable usability subcharacteristics and, when verifying the requirement, compute the actual values of 28 metrics. Worse, this might not even suffice. For example, in a Web-based order tracking system designed for use by untrained, casual users, an important usability subcharacteristic is the ratio of failed or aborted tracking attempts versus the total number of tracking operations. However, ISO/IEC 9126 doesn't include this characteristic as a subcharacteristic.
So, the disadvantage of quantifying quality requirements is equally obvious: it can be time-consuming and expensive. Dogmatically applying the rule "You shall quantify all quality requirements" might thus result in huge, unjustified requirement costs.
One might try to limit the cost of quantifying a quality requirement by quantifying only some selected subcharacteristics. However, focusing on achieving selected subcharacteristics and deliberately neglecting all others can easily result in a system that completely fails on the neglected ones, so that the total quality level of the system with respect to that requirement is lower than it could have been without any quantification.
A means, not an end
At this point, we should remember that require-
Measuring Quality Requirements
Measurement is the principle of making perceived attributes of an entity more objective by mapping attribute values to a scale such that the attribute's properties are mapped to corresponding properties of the scale. For example, if we have an attribute value a1 that is lower than another attribute value a2, the corresponding scale values s1 and s2 should be such that s1 < s2. A procedure for measuring an attribute together with a suitable scale is called a metric. Every scale has a type, which determines what we can do with the scale values. For example, on an ordinal scale, comparison is the only operation we can apply to scale values. In contrast, if we want to compute percentages and statistics such as mean and standard deviation, we need a ratio scale. More detail is beyond this article's scope. You'll find a comprehensive introduction to measurement and metrics in a textbook by Norman Fenton and Shari Lawrence Pfleeger. [1] The new ISO/IEC 25020 standard provides a reference model for software quality measurement. [2]
Defining a metric for a given attribute enables us to quantitatively assess attribute values, for example, comparing values or computing statistics. Intuitively, measuring a quality attribute requires at least a scale, a measurement procedure, a lowest acceptable value, and a planned value. [3,4]
For example, when we use this style, we can quantify the requirement "Need less time to service an incoming request than today" (see the example of Jane's volunteer drivers' service in the text) as follows:
■ Attribute: Average time that a dispatcher needs to service a request
■ Scale: Seconds (type: ratio scale)
■ Procedure: Measure the time required from picking up a request to receiving the schedule confirmation from the system; take the average over 20 service requests. Web requests that can be scheduled automatically by the system count as zero.
■ Planned value: 50 percent less than reference value
■ Lowest acceptable value: 30 percent less than reference value
■ Reference value: Average service request time as of today
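A quantified requirement in this style can be evaluated mechanically. The sketch below assumes "50 percent less" means the measured average must not exceed half the reference value (and analogously for 30 percent); the sample figures are hypothetical:

```python
def average_service_time(times_s):
    """Average over measured service requests; automatically scheduled
    Web requests count as zero, per the measurement procedure above."""
    if not times_s:
        raise ValueError("no measurements")
    return sum(times_s) / len(times_s)

def assess(measured_avg, reference_avg):
    """Planned value: 50% less than reference.
    Lowest acceptable value: 30% less than reference."""
    if measured_avg <= 0.5 * reference_avg:
        return "planned value met"
    if measured_avg <= 0.7 * reference_avg:
        return "acceptable"
    return "not acceptable"

# Hypothetical: reference average is 120 s; the new system averages 55 s.
print(assess(55, 120))  # planned value met (55 <= 60)
```

Stating both a planned and a lowest acceptable value, as Gilb's style does, gives developers a target while making the pass/fail boundary explicit.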
References
1. N.E. Fenton and S.L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, 2nd ed., PWS Publishing, 1998.
2. ISO/IEC 25020: Software Engineering—Software Product Quality Requirements and Evaluation (SQuaRE)—Measurement Reference Model and Guide, Int'l Organization for Standardization, 2007.
3. T. Gilb, Principles of Software Engineering Management, Addison-Wesley, 1988.
4. T. Gilb, Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage, Butterworth-Heinemann, 2005.