
Behaviour-based anomaly detection of cyber-physical attacks on a robotic vehicle
Anatolij Bezemskij, George Loukas, Richard J. Anthony, Diane Gan
Department of Computing & Information Systems
University of Greenwich
London, United Kingdom
Email: {a.bezemskij, g.loukas, r.j.anthony, d.gan}@gre.ac.uk
Abstract—Security is one of the key challenges in cyber-physical systems because, by their nature, any cyber attack against them can have physical repercussions. This is a critical issue for autonomous vehicles: if compromised in terms of their communications or computation, they can cause considerable physical damage due to their mobility. Our aim here is to facilitate the automatic detection of cyber attacks on a robotic vehicle. For this purpose, we have developed a detection mechanism which monitors real-time data from a large number of sources onboard the vehicle, including its sensors, networks and processing. Following a learning phase, where the vehicle is trained in a non-attack state on what values are considered normal, it is then subjected to a series of different cyber-physical and physical-cyber attacks. We approach the problem as binary classification: can the robot self-detect when it is under attack? Our experimental results show that the approach is promising for most attacks that the vehicle is subjected to. We further improve its performance by using weights that accentuate the anomalies that are less common, thus improving the overall performance of the detection mechanism for unknown attacks.
1. Introduction
Vehicular cyber security has traditionally focused on passive attacks, and especially on protecting the confidentiality of communications between vehicles or between vehicles and smart infrastructures. However, over the last few years, autonomous vehicles have become a routine target for experimental cyber attacks, as demonstrated as early as 2009 by the University of Washington [1], [2] and in numerous Black Hat conferences since then. As a result, there is a need for protection systems appropriate for active attacks against an autonomous vehicle's integrity or availability, and the corresponding impact on its actuation. Assuming that some attacks do get through regardless of the preventive measures, one needs to equip a vehicle with a mechanism to detect when this happens and potentially alert an operator or trigger some automated countermeasure. The focus of this work is on the real-time detection of the existence of an attack against a robot. We address both cyber-physical attacks, which are security breaches in cyber space that have an adverse effect in physical space, and physical-cyber attacks, which are the reverse [3]. For this, we have developed an autonomous robotic vehicle with a variety of sensor and communication technologies typically found in the industry.
To ensure that any solutions developed are highly practical, we have set the following requirements:

- Detection should be real-time, so as to be able to support rapid and effective countermeasures.
- Detection should be carried out by the vehicle itself, so as to be applicable to autonomous vehicles with limited or no communication with their human operators.
- Detection should not rely on the availability of knowledge of previous attacks, so as to be applicable to unknown attacks too.

We do not rely on attacks on cyber-physical systems being frequent enough to allow for the gathering of a realistic body of knowledge on their impact. To meet these requirements, detection should be behaviour-based rather than knowledge-based. We have therefore produced an onboard mechanism that monitors data related to cyber (communication and computation) and physical (actuation and sensing) features of the robot in real time. During the training phase, the robot learns the normal range for the values of each feature monitored. In actual operation, it tracks the cyber and physical features that are in an abnormal state (beyond their learnt range) and accordingly reasons on whether the vehicle is in an attack state or not. The overall emphasis of the mechanism, towards more tolerance for false positives or more tolerance for false negatives, is configured by a sensitivity index, which determines the width of the normal range considered by the robot. We further improve on the detection accuracy achieved with this approach by also utilising individual weights for each feature, which are fine-tuned in a dedicated configuration phase. A minimal sketch of the range-learning idea follows.
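As an illustration of this idea (our own sketch in Python with hypothetical names, not the onboard implementation, which runs on embedded nodes), a per-feature normal range can be learnt from non-attack training data and widened or narrowed by a sensitivity index:

```python
# Sketch of per-feature range learning with a sensitivity index.
# All names here are illustrative assumptions, not the authors' code.
from dataclasses import dataclass

@dataclass
class FeatureRange:
    low: float
    high: float

def learn_range(samples: list[float], sensitivity: float = 1.0) -> FeatureRange:
    """Learn the normal range of one feature from non-attack training data.

    sensitivity < 1.0 narrows the range (more alerts, more false positives);
    sensitivity > 1.0 widens it (fewer alerts, more false negatives).
    """
    lo, hi = min(samples), max(samples)
    margin = (hi - lo) * (sensitivity - 1.0) / 2.0
    return FeatureRange(lo - margin, hi + margin)

def is_anomalous(value: float, learnt: FeatureRange) -> bool:
    """A feature is in an abnormal state when it leaves its learnt range."""
    return value < learnt.low or value > learnt.high
```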
1.1. Related Work
While very mature for conventional computer systems, the field of intrusion detection is relatively new in the area of cyber-physical systems, such as vehicles and mobile robots. A relatively common approach is to use a human expert to first specify the safe and unsafe states of the vehicle and determine a large number of rules that cover all potential states, in what is known as behaviour-specification intrusion detection [4], [5]. Rules can also be determined through a more automated learning phase without the involvement of a human expert: the vehicle is subjected to a series of different attacks, their impact is observed, and a machine learning system is trained to recognise these. Examples of such supervised learning approaches for the detection of attacks against robotic vehicles can be found in [6], [7], [8], where the rules are formed by a decision tree which takes into account both cyber and physical features. Real-time capture of an attack's physical impact, such as vibration of the chassis due to repetitively entering and exiting safe mode during a denial of service attack, has been shown to improve detection accuracy and latency.
When a vehicle does not operate in isolation but belongs to a team of vehicles, which can make similar observations about their environment and each other, intrusion detection can be based on the identification of misbehaviour of one of the members of the team. There, reputation-based approaches [9] and voting algorithms [10] can prove very useful. For instance, if one vehicle veers off the pre-defined route or reports very different sensor data, this can be considered an indication that it may have been compromised.

Most of the research presented above makes assumptions that are largely unrealistic in the operational environment of a cyber-physical system such as a robotic vehicle (whether autonomous or not). Assuming that a new attack will look like one that has been seen before is reasonable for conventional computer networks, where millions of variations of the same attacks can be seen in the same year. For cyber-physical systems, this is less so, because attacks are less common and have a very different impact depending on the type of system targeted. As a result, knowledge-based approaches, where the vehicle is trained to recognise specific attacks, perform poorly when they encounter new types of attack. At the same time, assuming that a robotic vehicle will belong to a team, where group observation can help spot signs of cyber compromise, can be unrealistic in many operational environments.
Researchers have experimented with methods for detecting anomalies, but usually only for a particular aspect of a vehicle's operation. An example for aircraft is the detection of false automatic dependent surveillance-broadcast (ADS-B) messages, used by aircraft to broadcast their position to other aircraft and to air traffic control. Strohmeier et al. [11] achieve detection by monitoring statistics regarding the received signal strength (RSS), as it is assumed that false signals would be coming from the ground and thus would have a different RSS than signals coming from aircraft. A similar logic can be followed to protect autonomous vehicles that rely on GPS signals, as, coming from satellites, legitimate GPS signals are naturally much weaker than spoofed signals that would come from a terrestrial source [12].

A first attempt to provide completely sensor-agnostic and onboard intrusion detection that is applicable to unknown threats and takes into account both cyber and physical sources has been made in [13]. Here, we extend this work considerably by providing a method to quantify the degree to which a vehicle is likely to be under attack without relying on a learning phase, and further improve it with a mechanism that assigns weights to the different data sources. We validate this approach with real-world experiments involving a variety of normal and attack conditions.
2. Robotic Testbed System Design
Our testbed is a highly modular robotic vehicle developed from the ground up for the purposes of this research (Figure 3). It contains a large variety of sensors, actuators and communication channels widely used in the industry. The latter include CAN, RS-485, WiFi and ZigBee.

Figure 1. High-level communication diagram
The various sub-systems are integrated such that system components produce signals and feedback used by other system components to change overall system behaviour. Several components mentioned in Table 1 produce instrumentation data which is used as cyber or physical domain indicators. The combination of such indicators can produce additional meta-data that can be used to identify a particular behaviour of the system. All indicators used are generalised and treated as generic data sources: for example, the compass bearing signal output represents the orientation of the vehicle in degrees, but the autonomous system treats it as a stream of numerical values without context or units. Processing is distributed across the various embedded processors on the testbed platform (Table 1). Overall, the system contains six processing nodes, five of which are AVR-CAN development boards clocked at 16 MHz, and one STK300 Kanda board powered by an Atmel ATmega1281 chip clocked at 8 MHz. The data-source abstraction is sketched below.
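The following sketch illustrates this generalisation (a hypothetical Python structure for clarity; the testbed firmware itself runs on the embedded AVR nodes):

```python
# Sketch of the generalised data source abstraction: every indicator,
# whether a compass bearing in degrees or a CAN traffic counter, is
# reduced to a named stream of unitless numbers. Illustrative only.
class DataSource:
    def __init__(self, name: str):
        self.name = name              # label only, e.g. "compass_bearing"
        self.values: list[float] = []

    def push(self, value: float) -> None:
        # No units or semantics are retained; downstream reasoning treats
        # every source identically as a numerical time series.
        self.values.append(float(value))

compass = DataSource("compass_bearing")
compass.push(183.5)  # degrees on the wire, just a number from here on
```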
The vehicle is able to undertake a variety of autonomic tasks, such as navigation based on the logical mission layer that represents a sequence of steps given to the testbed. Sensors allow the vehicle to navigate autonomously in an environment, using the compass bearing to keep track of the direction, ultrasonic rangers for collision detection and avoidance, and pitch and roll sensors to make direction corrections and inform the system of environmental volatility. There is also a sensor that measures the temperature of the heat sink connected to the on-board voltage regulators which supply power for the camera and robotic arm. In this way, the system is able to determine if these heavy-current-drawing system components are in use. These sensors and additional meta-data extraction allow automatic characterisation of the real-time behavioural profile of the vehicle whilst in operation.

TABLE 1. EQUIPMENT INSTALLED IN THE ROBOTIC VEHICLE TESTBED

Feature            | Purpose
CAN bus            | Internal communication
ZigBee             | External communication
WiFi               | Media streaming
Compass bearing    | Navigation correction
DC motors          | Movement
Ultrasonic rangers | Collision avoidance
To gather the data for off-line analysis, we use an external workstation. Sensor data from the vehicle is collected and stored in a knowledge base. Communication between the workstation and the vehicle is achieved using a dedicated ZigBee network. The ZigBee connection also enables us to transmit commands to the testbed (e.g. to initiate missions). The camera is a self-contained unit; its audio and video feeds are streamed using a standard WiFi protocol. An overview of the high-level communication architecture between the workstation and the robotic testbed vehicle can be seen in Figure 1.

The robotic vehicle is capable of accepting both simple remote commands, for navigation, camera streaming and the operation of the attached robotic arm, and complex missions uploaded to it. For security purposes, received commands are executed only if the sender is within a list of authorised ZigBee nodes and the command is in the correct format. The robotic vehicle testbed does not send any commands to any external nodes within the ZigBee network; it only periodically reports its instrumentation data to a verified connected workstation. The reporting period is 1 s, due to the low-bandwidth ZigBee protocol and the behaviour of the particular ZigBee ZE10 module. Higher-rate sample aggregation is therefore performed on-platform, on the sensor-hosting nodes, as sketched below.
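A plausible reading of this aggregation step, sketched in Python for clarity (the actual sensor-hosting nodes are AVR boards, and the exact statistics reported are not specified here):

```python
# Sketch: compress one 1-second window of high-rate samples into summary
# statistics small enough for the low-bandwidth ZigBee report.
import statistics

def aggregate_window(samples: list[float]) -> dict:
    """Summarise a window of samples for the periodic instrumentation report."""
    return {
        "min": min(samples),
        "max": max(samples),
        "mean": statistics.fmean(samples),
        "stdev": statistics.pstdev(samples),
    }
```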
For communication between system components, the testbed uses a CAN bus. This bus is used to share overall sensor data from data sources, including additional meta-data extracted during data analysis by the processing nodes. The internal communication architecture is shown in Figure 2. This data is retransmitted to other nodes through gateways and is collected at the reporting node, which transmits data to the workstation when appropriate.

Figure 2. Internal Communication: gateways connect different subsystems

The software structure of the robotic vehicle testbed uses a layered architecture, which separates the different levels of reasoning. The lowest, physical sensor level is represented by individual embedded nodes performing analogue-to-digital conversions that interpret raw signals into software-usable values. The next-higher level is the classification layer, where data is analysed using statistical approaches, such as exponential smoothing to determine trends in the data (sketched below). A level higher, we have the autonomic module controller layer, which controls actuating capabilities based on the data received from the lower layers of the model; it is a set of autonomic controllers that carry out their defined tasks, such as robotic arm movement or navigational control. A mission layer then collects knowledge from the autonomic controllers and evaluates whether the expected mission goal has been achieved. The layered software approach improves flexibility and maintainability in terms of software development for the robotic vehicle testbed, as all these layers are implemented as a set of libraries that can be extended further.
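The exponential smoothing named above admits a compact sketch (the smoothing factor is an assumption; the paper does not state one):

```python
# Sketch of the classification layer's trend extraction via simple
# exponential smoothing; alpha = 0.3 is an assumed, illustrative value.
def exponential_smoothing(values: list[float], alpha: float = 0.3) -> list[float]:
    """Higher alpha tracks the raw signal more closely; lower alpha smooths more."""
    smoothed = [values[0]]
    for v in values[1:]:
        smoothed.append(alpha * v + (1.0 - alpha) * smoothed[-1])
    return smoothed
```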
Figure 3. Robotic vehicle testbed
3. Experimental Environment
The training and testing phases of the experiments were conducted in the Queen Mary Building at the University of Greenwich. The irregular surface of the uneven stone flooring, with its dents and lumps (Figure 4), provides the desired stochasticity for the different data sources, as well as a challenging environment for the mobility of the vehicle. At the same time, it is a controlled environment where we can ensure repeatability of the experiments, without any foreign objects or weather modifying the parameters of each iteration. This also allows us to identify the behavioural profile of an environment based on the data source information. The corridor has a set of inset door openings on either side, which facilitate physical observation of the effects of different attacks and of the periodic behaviour of the sensors (especially the ultrasonic ones) as the vehicle passes by.

Figure 4. Experimental environment: an old corridor with an irregular surface of uneven stone flooring at the University of Greenwich.

The corridor is 28 m long and a constant 2 m wide. The experiments were repeated eight times to ensure that the collected data set is representative and can be used for the creation and evaluation of the behavioural profile. The behavioural profile is built using patterns of the variation and background noise in data sources; mainly we look at the spikiness of the data variations and the variety of deviations. The experimental environment facilitates repeatability and contains static elements that can be used as guideline features during analysis of the gathered data, but it also introduces significant stochastic elements, which are essential for understanding the normal levels of noise and variability in sensor signals.
The experimental scenario evaluated in this paper is a mission in which the robotic vehicle testbed has to reach the end of the corridor using its own sensing capabilities. The complexity of such a mission is not immediately obvious: the unevenness of the flooring surface disrupts the direction of the vehicle, forcing it to continuously adapt the speed of its motors and its direction, and to ensure that it maintains a safe distance from the walls during operation. The scenario was chosen because, given the structural characteristics of the vehicle, it exercises all of its sensor capabilities. The experiment is organised in two phases. The first is a training phase, where over several runs a learning data set is collected that allows us to create a "normal" behavioural profile. The second phase evaluates the recognition of this profile.
4. Methodology
4.1. Signature of normal behavioural profile
In [13], we described how signatures are formed and can be used for anomaly or threat identification. This learning phase, shown as "L1" in Figure 5, is based on an initial signature generation that establishes the normal behaviour profile of the sensors on the system. The data from the sensors is transformed into a generic data source format that allows the system to reason about all sources identically. The learning phase forms a normal behaviour profile based on the signature characteristics of each data source, together with the normal behaviour variation that is used during the validation phase.

After the learning phase, dynamically detected values are compared with the learnt normal behavioural profile signatures. The term "anomaly" denotes that a signal characteristic has been measured to be outside its expected normal range. The signature is formed of 11 characteristics which facilitate learning the normal value range limits, as shown in Table 2; deviations beyond multiples of the standard deviation are called spikes. The validation phase, shown as "V" in Figure 5, classifies behaviour based on an overall anomaly index, represented by the number of anomalies in the system.
TABLE 2. SIGNATURE CHARACTERISTICS MONITORED IN REAL-TIME ONBOARD THE VEHICLE

Value Type            | Characteristics
Raw                   | Minimum; Maximum
Exponential Smoothing | Minimum; Maximum; Lowest Difference; Highest Difference
Deviation             | Standard Deviation
Spike Areas           | 50%-100%; 100%-150%; 150%-200%; Over 200%
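One way to compute such a signature is sketched below. This is our reading of Table 2, not the authors' code: in particular, we assume the spike areas count samples by their deviation from the mean in multiples of the standard deviation.

```python
# Sketch of the 11-characteristic signature of Table 2 (assumed definitions).
import statistics

def signature(raw: list[float], smoothed: list[float]) -> dict:
    diffs = [b - a for a, b in zip(smoothed, smoothed[1:])]
    mean, sd = statistics.fmean(raw), statistics.pstdev(raw)

    def spikes(lo: float, hi: float) -> int:
        # Count samples whose deviation from the mean lies in [lo, hi) x sd.
        return sum(1 for v in raw if lo * sd <= abs(v - mean) < hi * sd)

    return {
        "raw_min": min(raw), "raw_max": max(raw),
        "smooth_min": min(smoothed), "smooth_max": max(smoothed),
        "lowest_diff": min(diffs), "highest_diff": max(diffs),
        "std_dev": sd,
        "spike_50_100": spikes(0.5, 1.0), "spike_100_150": spikes(1.0, 1.5),
        "spike_150_200": spikes(1.5, 2.0), "spike_over_200": spikes(2.0, float("inf")),
    }
```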
Figure 5. Methodology workflow
Each deviation of a signature characteristic is counted to represent an outgoing level of threat from the data source. These deviations are summed to produce an anomaly index for the data source; this index represents the deviation level.
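In its unweighted form, the anomaly index can therefore be read as a simple count, for example:

```python
# Sketch of the unweighted anomaly index of a data source: the number of
# its signature characteristics outside their learnt normal ranges.
# `learnt` maps characteristic name -> (low, high) from the L1 phase.
def anomaly_index(current: dict, learnt: dict) -> int:
    return sum(
        1 for name, value in current.items()
        if not (learnt[name][0] <= value <= learnt[name][1])
    )
```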
4.2. Anomaly Weighting and Indicator Confidence
To strengthen the detection performance, we have introduced an additional learning phase, shown as "L2" in Figure 5, which tunes the system by assigning weights to sources according to their likelihood of appearing anomalous in some normal scenarios too. The focus of this phase is to learn the number of individual signature characteristic anomalies that may be encountered in a non-attack condition, arising due to environmental noise.

Figure 6. Matrix of the anomalies identified. Each row corresponds to a data source and each column to a signal characteristic measured for each source. The colour coding indicates anomalies.
In Figure 6, we demonstrate how we summarise anomalies by taking the system data source signatures from five non-attack situations (only two are shown here as a demonstration). To reduce the importance of anomalies that tend to occur in a non-attack environment, we calculate the weight of each signature characteristic anomaly sample $w(c_{ij})$ in the following way: for a number of $n$ scenarios $S_\ell$, $1 \leq \ell \leq n$, we take the complement of the mean of each signature characteristic anomaly sample $c_{ij(\ell)}$, where $i$ represents a data source and $j$ represents a signature characteristic:

$$w(c_{ij}) = 1 - \bar{c}_{ij} = 1 - \frac{\sum_{\ell=1}^{n} c_{ij(\ell)}}{n}$$

which produces the weight of an anomaly sample for the signature characteristic. This allows the system to derive a more precise score, taking into account the anomalies that tend to be less indicative of an attack as they persist in non-attack conditions.
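Directly transcribing the formula (with an assumed 0/1 encoding of whether characteristic $j$ of source $i$ was anomalous in scenario $\ell$):

```python
# Sketch of the weighting step: c[l][i][j] is 1 if characteristic j of data
# source i was anomalous in non-attack scenario l, else 0. The weight is
# the complement of its mean occurrence across the n scenarios.
def characteristic_weights(c: list) -> list:
    n = len(c)                       # number of non-attack scenarios
    n_sources, n_chars = len(c[0]), len(c[0][0])
    return [
        [1.0 - sum(c[l][i][j] for l in range(n)) / n for j in range(n_chars)]
        for i in range(n_sources)
    ]
```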
The calculated value represents the weight of a signature characteristic. If the system learns that a particular signature characteristic has a high probability of anomaly occurrence in a non-attack mission scenario, then the importance of such an anomaly is reduced. This generates a lower anomaly index for the data source's signature. The sum of all weighted anomalies generates an overall anomaly index that is used as a reference in the intrusion detection mechanism. To improve the methodology further, we introduce a dynamic variable that acts as a controller of the "normality" threshold. The "normality" variation is formed during the "L2" phase. The overall anomaly index generated from the non-attack experiments is used as a mean reference, and the dynamic variable controls the variation. This allows the detection mechanism to identify anomalous behaviour in two cases: when multiple anomalies are detected, generating a high overall anomaly index, and when expected anomalies are not detected, generating an abnormally low overall anomaly index. An overview of the workflow of the intrusion detection mechanism can be seen in Figure 7, followed by a sketch of the decision rule.
Figure 7. Intrusion detection mechanism
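A minimal sketch of this two-sided decision rule, assuming the dynamic variable simply scales a symmetric band around the non-attack mean (the paper does not commit to a specific functional form):

```python
# Sketch: flag an attack when the overall weighted anomaly index leaves the
# learnt "normality" band, i.e. it is abnormally high OR abnormally low.
# mean_ref and variation come from the L2 non-attack runs; k is the
# dynamic variable controlling the band width (our naming).
def is_under_attack(overall_index: float, mean_ref: float,
                    variation: float, k: float = 1.0) -> bool:
    return abs(overall_index - mean_ref) > k * variation
```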
When the learnt weights scheme is applied to the detected characteristic anomalies in several attack and mechanical failure experiments, the anomalies that have higher weight are accentuated and the importance of anomalies that tend to occur during a non-attack mission scenario is reduced. This reduces the anomaly score of the system in a non-attack scenario and ensures the score increases when abnormal circumstances occur on characteristics that should not otherwise change during non-attack experiments.

Figure 8. Matrices of anomalies spotted for each of the incidents in the experiments (compass manipulation, rogue node, replay packet injection, wheel failure)

References

Experimental Security Analysis of a Modern Automobile. Proceedings Article.

Comprehensive Experimental Analyses of Automotive Attack Surfaces. Proceedings Article.

Behavior Rule Specification-Based Intrusion Detection for Safety Critical Medical Cyber Physical Systems. Journal Article.

Adaptive Intrusion Detection of Malicious Unmanned Air Vehicles Using Behavior Rule Specifications. Journal Article.

George Loukas. Cyber-Physical Attacks: A Growing Invisible Threat. Book.