Context Inference for Mobile Applications in the UPCASE Project*

André C. Santos¹, Luís Tarrataca¹, João M. P. Cardoso², Diogo R. Ferreira¹, Pedro C. Diniz¹, and Paulo Chainho³

¹ IST, Technical University of Lisbon, Taguspark, Oeiras, Portugal
² FEUP, Faculty of Engineering, University of Porto, Portugal
³ PT Inovação S.A., Portugal

Corresponding author: jmpc@acm.org
Abstract. The growing processing capabilities of mobile devices coupled with portable and wearable sensors have enabled the development of context-aware services tailored to the user environment and its daily activities. The problem of determining the user context at each particular point in time is one of the main challenges in this area. In this paper, we describe the approach pursued in the UPCASE project, which makes use of sensors available in the mobile device as well as sensors externally connected via Bluetooth. We describe the system architecture from raw data acquisition to feature extraction and context inference. As a proof of concept, the inference of contexts is based on a decision tree to learn and identify contexts automatically and dynamically at runtime. Preliminary results suggest that this is a promising approach for context inference in several application scenarios.

Key words: Context-aware services, context inference, smartphones, wearable sensors, decision trees.
1 Introduction
There is a growing desire among telecommunication operators to increase traffic volume even further by offering value-added services to customers in addition to traditional voice and data communication. These services can be enabled or disabled depending on the specific user context. For example, when caught in rush-hour traffic, a service could automatically estimate the delay for the user to reach a child's school. In case of excessive delay, it would notify an alternate adult for pick-up. Other examples include anti-theft or near-emergency services. Using sensors, it might be possible to determine whether an elderly person has fallen at home and has been immobile for some time, thus triggering an emergency call.

To enable such services, mobile devices must be able to clearly identify the specific contexts the user goes through [12, 27]. For this purpose, mobile devices must include sensors that yield data such as position, lighting or sound conditions from which user contexts can be determined. Accurate context inference, however, is notoriously difficult, as there exist various sources of data signals with possibly very distinct patterns which need to be captured and processed in a timely fashion. Furthermore, the amount of raw sensor data can overwhelm the resources of even the most sophisticated mobile devices. A possible solution would require each mobile device to acquire and transmit sensor data to a centralized server for processing. Although conceptually simple, this centralized solution is infeasible: it would require constant communication with a centralized server, as most sensors need to operate in real time; it would require excessive computing power for each device to constantly transmit a possibly high volume of sensor data; and, on the server side, fusing sensor data from millions of devices would require tremendous computing power. Instead, each mobile unit should be able to infer user context by processing data originating from its own sensors, possibly communicating with network services to obtain additional information such as traffic or weather conditions.

* This work was partially funded by PT Inovação S.A.
In this paper, we describe the architecture and operation of a proof-of-concept system for context inference based on a smartphone augmented with an array of sensors connected via Bluetooth². This system is part of the UPCASE project (User-Programmable Context-Aware Services), an industry-funded R&D project. The architecture of the system consists of three main layers: (1) the acquisition layer, which is concerned with sensor data acquisition and preprocessing; (2) the feature extraction layer, which assigns specific categories to the preprocessed sensor data; and (3) the context inference layer, which uses decision-tree induction techniques to uncover the user context.
This paper is organized as follows. We begin with an overview of related work and describe the developed system. Next, we present the various sensors used in connection with the smartphone and the experimental results of context inference in a simple scenario of daily activities. Lastly, we describe two potential application scenarios in which the developed system can identify meaningful contexts, namely elderly care and emergency management.
2 Background and Related Work
Context identification has been recognized as an enabling technology for proactive applications and context-aware computing [4, 11]. Sensor networks can be used to capture intelligence (see, e.g., the e-SENSE³ project [17]), providing sensing capabilities from the environment and opening opportunities for context-aware computing.
Early context-aware applications were predominantly based on user location, defined as typical user places (e.g., "at home", "in a museum", "in a shopping center"). Projects such as GUIDE [3] and Cyberguide [1] addressed the use of information about location and situation to guide the user when visiting touristic city spots. Recently, researchers have studied techniques to identify a richer set of contexts or activities. These include simple user activities (e.g., "walking", "running", "standing"), environment characteristics (e.g., "cold", "warm"), or even the emotional condition of the user (e.g., "happy", "sad", "nervous").

² The Official Bluetooth Technology Info Site (www.bluetooth.com/bluetooth).
³ http://www.ist-esense.org/
In the SenSay [23] project, researchers developed a smartphone prototype able to exploit the user context to improve its usability. For example, if the user is occupied and wishes not to be interrupted, the smartphone can reply automatically with an SMS. The SenSay prototype uses a smartphone and a sensor unit consisting of a 3-axis accelerometer, two microphones (one to capture sound from the environment and the other to capture voice from the user), and a light sensor. The prototype makes use of simple techniques such as averaging sensor readings over a given window and applying a numeric threshold to identify each activity.
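The windowed-average-plus-threshold scheme described above can be sketched in a few lines of plain Java; the threshold value and the activity labels here are illustrative, not SenSay's actual parameters:

```java
public class ThresholdActivity {
    /** Mean of accelerometer magnitudes over a sampling window. */
    static double windowMean(double[] window) {
        double sum = 0;
        for (double v : window) sum += v;
        return sum / window.length;
    }

    /** Classify activity by comparing the window mean to a fixed threshold. */
    static String classify(double[] window, double threshold) {
        return windowMean(window) > threshold ? "moving" : "idle";
    }

    public static void main(String[] args) {
        double[] still  = {0.02, 0.01, 0.03, 0.02};
        double[] moving = {0.8, 1.1, 0.9, 1.2};
        System.out.println(classify(still, 0.5));   // prints "idle"
        System.out.println(classify(moving, 0.5));  // prints "moving"
    }
}
```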
Generally, the identification of contexts is done in stages. Processing raw data from sensors may require a wide variety of techniques such as noise reduction, mean and variance calculation, time- and frequency-domain transformations, estimation of time series, or sensor fusion. Data collected from sensors is catalogued (a process known as feature extraction) and the context-inference stage makes use of features rather than raw data. Context inference has been addressed using different techniques such as Kohonen Self-Organizing Maps (KSOMs) [14], k-Nearest Neighbors [15], Neural Networks [20], and Hidden Markov Models (HMMs) [24]. Some approaches even combine several of these techniques, as in [13].
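As an illustration of these stages, the sketch below computes a variance statistic over a raw sensor window and discretizes a light reading into a categorical feature; the cut-off values are made up for the example:

```java
public class FeatureExtraction {
    /** Population variance of a window of raw readings (a typical preprocessing step). */
    static double variance(double[] w) {
        double mean = 0;
        for (double v : w) mean += v;
        mean /= w.length;
        double var = 0;
        for (double v : w) var += (v - mean) * (v - mean);
        return var / w.length;
    }

    /** Discretize a normalized light reading (0..1) into a categorical feature. */
    static String lightFeature(double level) {
        if (level < 0.2) return "dark";
        if (level < 0.6) return "dim";
        return "bright";
    }

    public static void main(String[] args) {
        System.out.println(variance(new double[]{1, 1, 1}));  // prints 0.0
        System.out.println(lightFeature(0.7));                // prints "bright"
    }
}
```

The inference stage then consumes categorical features such as "bright" rather than the raw signal values.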
Regarding the inference of user activities such as "walking" or "running", researchers have used a plethora of approaches, ranging from simple processing steps and threshold operations [8, 22, 27] to the use of neural networks as the clustering algorithm [20], or even non-supervised time-series segmentation [9]. As an example, the work presented in [12] infers activities such as "walking", "running", "standing", and "sitting" with a single 3-axis accelerometer, claiming an accuracy of 96%.
In our approach, we extract signal features using techniques similar to those described in [22, 27]. For context inference, we combine signal-processing and machine-learning techniques, using decision trees [18] to fuse features and determine user activities. All data preprocessing and context inference is performed on the mobile device. The results are sent to a server in order to monitor activities, thus allowing for more advanced and possibly non-local context inference.
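A decision tree that fuses features into a context might look like the hand-built toy below. In the actual approach the tree is induced from training data rather than hard-coded, and the feature and context labels here are illustrative:

```java
public class ContextTree {
    /** A toy decision tree fusing a motion feature and a light feature into a context. */
    static String infer(String motion, String light) {
        if (motion.equals("moving")) {
            // Inner node: motion detected, split on the light feature.
            return light.equals("bright") ? "walking outdoors" : "walking indoors";
        }
        // Inner node: no motion, split on the light feature.
        return light.equals("dark") ? "sleeping" : "working";
    }

    public static void main(String[] args) {
        System.out.println(infer("moving", "bright")); // prints "walking outdoors"
        System.out.println(infer("idle", "dark"));     // prints "sleeping"
    }
}
```

Because each path from root to leaf is just a conjunction of feature tests, such a tree evaluates in a handful of comparisons, which suits the limited resources of a mobile device.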
3 The UPCASE Project
The UPCASE project aims at uncovering user contexts using a set of sensors connected to the user's mobile phone. These sensors can be embedded into personal clothes or items such as backpacks or purses. Sensors include accelerometers, light, sound, and temperature sensors, as well as virtual sensors that acquire information such as the time of day or approximate location via external services.

A goal of the project is the development of robust algorithmic approaches to accurately determine user context. Specifically, we employ supervised learning techniques: during a training phase, the system collects a sufficient number of data samples for context derivation using decision-tree-based techniques. After this training phase, the system operates autonomously and unobtrusively, automatically deriving contexts.
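Decision-tree induction (e.g., ID3/C4.5-style algorithms [18]) typically selects, at each node, the feature whose split maximizes information gain, which in turn rests on the entropy of the label distribution in the training samples. A minimal entropy computation is sketched below:

```java
public class Entropy {
    /** Shannon entropy (in bits) of a discrete label distribution given by counts. */
    static double entropy(int[] counts) {
        int total = 0;
        for (int c : counts) total += c;
        double h = 0;
        for (int c : counts) {
            if (c == 0) continue;            // lim p->0 of p*log p is 0
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(entropy(new int[]{5, 5}));  // prints 1.0 (maximally mixed)
        System.out.println(entropy(new int[]{10, 0})); // prints 0.0 (pure node)
    }
}
```

Information gain for a candidate feature is the parent node's entropy minus the weighted average entropy of the children produced by splitting on that feature; the induction step picks the feature with the largest gain.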
3.1 System sensors and prototype
Figure 1(a) depicts the main system components used in the prototype we have developed: the mobile device, a sensor node, and a set of sensors. The black box contains the batteries (the 1-Euro coin is shown to provide an idea of scale). Figure 1(b) depicts an experimental setup where the components are embedded in a backpack used for testing purposes. In this early prototype, we have deliberately not concealed the sensors, in order to experiment with sensor sensitivity to environment conditions. The prototype has been tested on a backpack and also on a vest, ensuring that the sensors experience the same conditions as the user. The only requirement is that some sensors must be exposed to allow for more accurate sound, temperature, and light measurements.

Fig. 1. The system components (a) and an experimental setup in a backpack (b).
The system prototype comprises a Sony Ericsson W910i smartphone⁴ and a BlueSentry external sensor node⁵. A sound sensor⁶, a temperature sensor⁷, and a light sensor⁸ are wired to the sensor node. The BlueSentry sensor node communicates with the smartphone via Bluetooth to provide sensor readings, thus avoiding the need for a physical connection between the two. In addition to these, there are two other sensors being used: the internal accelerometer of the smartphone and a virtual time sensor to provide the time of day. It is also possible to connect a second accelerometer to the BlueSentry node.

⁴ http://www.sonyericsson.com/cws/products/mobilephones/overview/w910i
⁵ http://www.rovingnetworks.com/bluesentry.htm
⁶ http://www.inexglobal.com/products.php?model=zxsound
⁷ http://www.phidgets.com/products.php?product_id=1124
⁸ http://www.phidgets.com/products.php?product_id=1127
3.2 System architecture
The overall system architecture is presented in Figure 2. The application layer has been developed using the Java ME platform⁹, a technology that is widely used due to its recognized portability across many mobile phone devices. At the lowest level, the sensors gather data from the environment and provide it as raw analog signals to the sensor node, which in turn converts them to digital form and transmits the digital representations to the mobile phone via Bluetooth. The mobile phone runs a proprietary operating system (OS) which supports the execution of Java code. We have developed a MIDP (Mobile Information Device Profile) application that acquires raw sensor data from the BlueSentry node and supports the extraction of features to be used in the upper layers of the system architecture.
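On the phone side, the MIDP application must turn the bytes received over the Bluetooth link into sensor values before feature extraction can begin. The sketch below parses a hypothetical comma-separated frame such as "0.31,22.5,0.80" (sound, temperature, light); the actual BlueSentry wire format may well differ, so both the frame layout and the field order are assumptions for illustration:

```java
public class SensorFrame {
    final double sound, temperature, light;

    SensorFrame(double sound, double temperature, double light) {
        this.sound = sound;
        this.temperature = temperature;
        this.light = light;
    }

    /** Parse a comma-separated frame, e.g. "0.31,22.5,0.80" (hypothetical format). */
    static SensorFrame parse(String line) {
        String[] parts = line.trim().split(",");
        return new SensorFrame(Double.parseDouble(parts[0]),
                               Double.parseDouble(parts[1]),
                               Double.parseDouble(parts[2]));
    }

    public static void main(String[] args) {
        SensorFrame f = parse("0.31,22.5,0.80");
        System.out.println(f.temperature); // prints 22.5
    }
}
```

In the real application the input line would come from the Bluetooth serial-port stream (JSR 82) rather than a string literal.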
Fig. 2. The UPCASE system architecture: sensor data acquisition feeds the Pre-Processing Engine (PPE) for feature extraction, whose output drives the Context Recognition/Identification/Inference Engine (CIE); both run as a MIDP application (using the JSR 256 Mobile Sensor API and the JSR 82 Bluetooth API) on the Java virtual machine (J2ME) over the device operating system (e.g., Symbian), with inferred contexts published to a context server (e.g., a high-level context identification engine). Pre-defined sensors (XML) and context rules with associated activities configure the engines.

⁹ http://java.sun.com/javame/

References (partial, as rendered)

J. R. Quinlan. C4.5: Programs for Machine Learning. (Book.)
J. R. Quinlan. Induction of Decision Trees. (Journal article, 1986.)
Programs for Machine Learning. (Review of Quinlan's C4.5; journal article.)
Programs for Machine Learning, Part I. (Journal article.)