
Proceedings ArticleDOI

Adding some smartness to devices and everyday things

07 Dec 2000-pp 3-10

TL;DR: This work discusses augmentation of mobile artifacts with diverse sets of sensors and perception techniques for awareness of context beyond location, and reports experience from two projects, one on augmenting mobile phones with awareness technologies, and the other on embedding of awareness technology in everyday non-digital artifacts.

Abstract: In mobile computing, context-awareness indicates the ability of a system to obtain and use information on aspects of the system environment. To implement context-awareness, mobile system components have to be augmented with the ability to capture aspects of their environment. Recent work has mostly considered location-awareness, and hence augmentation of mobile artifacts with locality. We discuss augmentation of mobile artifacts with diverse sets of sensors and perception techniques for awareness of context beyond location. We report experience from two projects, one on augmentation of mobile phones with awareness technologies, and the other on embedding of awareness technology in everyday non-digital artifacts.

Topics: Context awareness (63%), Mobile computing (61%), Mobile telephony (53%), Location awareness (53%)

Summary

1. Introduction

  • It is now widely acknowledged that some awareness of the context in which mobile systems are used can produce added value and foster innovation in many application domains.
  • Secondly, components may be equipped with explicit location sensors, i.e. receivers for specific location services, such as GPS.
  • Their focus in this paper is on augmentation of mobile system components for awareness of context beyond location.
  • Diverse sets of sensors and perception techniques are integrated to the end of shifting complexity in context-awareness from the algorithmic level to the architectural level.
  • In the subsequent sections, the authors will briefly discuss related work on sensor-augmented mobile artifacts, and then report experience first from the TEA project and secondly from Mediacup work.

3. TEA - an add-on device for context-awareness

  • The general motivation underlying the TEA project is to make personal mobile devices smarter.
  • The assumption is that the more a device knows about its user, its environment and the situations in which it is used the better it can provide assistance.
  • The cornerstones of the TEA device concept are: integration of diverse sensors, assembled for acquisition of multi-sensor data independently of any particular application; association of multi-sensor data with situations in which the host device is used; and implementation of hardware (sensors and processing environment) and software (methods for computing situational context from sensor data) in an embedded device.
  • A specific objective underlying sensor integration is to address the kind of context that cannot be derived from location information at all, for example situations that can occur anywhere.
  • The aim is to derive more context from a group of sensors than the sum of what can be derived from individual sensors.

3.1. TEA architecture

  • TEA is based on a layered architecture for sensor-based computation of context as illustrated in figure 1, with separate layers for raw sensor data, for features extracted from individual sensors (‘cues’), and for context derived from cues.
  • The data supplied by sensors can be very different, ranging from slow sensors that supply scalars (e.g. temperature sensor) to fast and complex sensors that provide a large amount of more or less structured data (e.g. a camera or a microphone); also the update time varies from sensor to sensor.
  • This way, the cue layer strictly separates the sensor layer and context layer which means context can be modeled in abstraction from sensor technologies and properties of specific sensors.
  • Again, the architecture does not prescribe the methods for calculation of context from cues; rule-based algorithms, statistical methods and neural networks may for instance be used.
  • The context calculation, i.e. the reasoning about cues to derive context, may be described explicitly, e.g. when cues are known to be relevant indicators of a certain real world situation, or implicitly in methods that learn context from example data.
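The sensor–cue–context layering summarized above can be illustrated with a minimal sketch. This is not TEA's implementation: the sensor names, the two cues and the rule thresholds are all invented for the example; it only shows how fusion happens above the cue layer.

```python
# Minimal sketch of the TEA layering: raw sensor windows are reduced
# to per-sensor cues, and only the cues are fused into a context.
from statistics import mean, stdev

def cue_layer(sensor_windows):
    """One cue set per sensor; generic and application-independent."""
    return {name: {"mean": mean(w), "std": stdev(w)}
            for name, w in sensor_windows.items()}

def context_layer(cues):
    """Fusion across sensors happens only here, above the cue layer."""
    if cues["accel"]["std"] > 0.3:
        return "user is walking"
    if cues["audio"]["std"] > 0.3:
        return "in a conversation"
    return "stationary and quiet"

windows = {"audio": [0.1, 0.9, -0.8, 0.7, -0.6],   # pretend microphone
           "accel": [0.02, 0.03, 0.01, 0.02, 0.02]}  # pretend accelerometer
print(context_layer(cue_layer(windows)))
```

Because the context layer sees only cues, a sensor (or its feature extraction) can be swapped out without touching the context rules, which is the point of the separation.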

3.2. Initial exploration of the approach

  • To study the TEA approach, the authors have developed two generations of prototype devices and used them for exploration of multi-sensor data, and for a validation of TEA as add-on device for mobile phones.
  • The TEA device was developed in two generations.
  • The first generation device was developed for exploration of a wide range of sensors and their contribution to context-awareness.
  • For this study a number of situations that the authors considered relevant for personal mobile devices were defined (e.g. user is walking, user is in a conversation, other people are around, user is driving a car, etc.).
  • The data was then subjected to statistical analysis to determine for each sensor or sensor group whether its inclusion increased the probability of recognizing situations.

3.3. Prototype implementation and validation

  • The initial exploration of sensors and their contribution to awareness of typical real-world situations served to inform development of the second generation device optimized for smaller packaging, and shown in figure 2.
  • The sensors are read by a microcontroller that also calculates the cues and in some applications also the contexts.
  • Typical cues for audio that are calculated on the fly are the number of zero crossings of the signal in a certain time (an indicator of the frequency) and the number of direction changes of the signal (together with the zero crossings, an indicator of the noise in the audio signal).
  • The prototype is independent of any specific host and has been used in conjunction with a palmtop computer, a wearable computer and mobile phones.
  • The TEA device has been added to a mobile phone to automate activation of such profiles, which otherwise have to be activated manually by the user.

3.4. Application in mobile telephony

  • An interesting application domain for context-aware mobile phones as enabled by TEA is the sharing of context between caller and callee.
  • To study context-enhanced communication, the authors have implemented the WAP-based application “context-call”.
  • The application however does not establish the call straightaway but instead looks up the context of the callee and provides this information to the caller.

3.5. Discussion of TEA experience

  • The authors' experience gathered in the TEA project supports the case for investigation of context beyond location, and for fusion of diverse sensors as an approach to obtain such context.
  • The authors have used the approach for obtaining strictly location-independent context such as “in a meeting”, “in a conversation”, and “user is walking”, which cannot be derived from location information.
  • This initial experience is valuable; however, it is clearly not sufficient to derive a methodology for systematic application of sensor fusion in context-aware applications.
  • From this experience the authors can derive some indication as to which sensors are of particular interest for the overall objective of capturing real-world situations.
  • In addition, the authors found that perception can be improved by using not just diverse sensors but also multiple sensors of the same kind, in particular microphones and light sensors with different orientations.

4. Mediacup – embedding awareness technology in everyday artifacts

  • The Mediacup project was conducted in parallel to TEA, and while also investigating embedded awareness technology it is motivated differently.
  • TEA is about making artifacts smarter, i.e. improving the functionality the artifact offers its user.
  • In contrast, the Mediacup project is about using artifacts to collect context information transparently, i.e. without changing the function and use of the artifact.
  • The core idea is that by embedding awareness technology in the everyday things people use, the authors can obtain context on everyday activity at the source, so to speak.
  • This approach assumes a distributed system in which some artifacts are augmented to collect context information, while other artifacts are computationally augmented to use such context.

4.1. Aware artifacts model

  • The context-awareness model investigated in the Mediacup project is based on the following concepts: Artifacts are augmented with an awareness of their own local context.
  • To this end artifacts are equipped with sensors but also with a processing environment and software for autonomous calculation of artifact-specific context from sensor data.
  • Artifacts broadcast their context in their local environment.
  • To this end aware artifacts are augmented with basic communication capabilities.
  • Any applications, appliances or information artifacts in the environment can use the locally available context, without further knowledge of the artifacts from which the context originates.
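The broadcast-and-consume pattern in these concepts can be sketched with plain UDP. This is an illustration, not the Mediacup protocol: the port number and JSON message format are invented, and the message is sent to localhost here, whereas a real artifact would send to its subnet broadcast address.

```python
# Sketch: an aware artifact announcing its own context on the local
# network, and a listener consuming it without knowing the sender.
import json
import socket

PORT = 5005  # arbitrary port chosen for this example

def broadcast_context(artifact_id, context):
    """Send one context announcement as a UDP datagram."""
    msg = json.dumps({"artifact": artifact_id, "context": context}).encode()
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    # localhost for the demo; a real artifact would use the broadcast address
    s.sendto(msg, ("127.0.0.1", PORT))
    s.close()

def receive_one_context(timeout=1.0):
    """Any application in the environment can pick up announcements."""
    r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    r.settimeout(timeout)
    r.bind(("", PORT))
    data, _ = r.recvfrom(1024)
    r.close()
    return json.loads(data)
```

The key property mirrored here is that the receiver needs no knowledge of the sending artifact; it only interprets whatever context messages appear locally.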

4.2. Mediacup – Awareness embedded in coffee cups

  • For exploration of the aware artifacts model the authors have augmented coffee cups, representing non-digital everyday artifacts, with awareness technology.
  • The Mediacups, as the authors call the augmented mugs, contain hardware and software for sensing, processing and communicating the state of the cup as context information.
  • Also, the adaptation speed of the sensor is very slow, and therefore it is read only every two seconds.
  • The transceivers are based on HP’s HSDL-1001 IrDA chip and have a footprint of about 1.5 m².
  • They are connected through a CAN bus (Controller Area Network) and a gateway to the local Ethernet, on which collected context is broadcast in UDP packets.

4.3. Experience from design and use

  • Like TEA, the Mediacup project served to gather extensive experience with sensor-based context- awareness.
  • The Mediacup provides substantial experience on several issues: the embedding of awareness technology in ‘unpowered’ artifacts, the transparency of technology, and a paradigm shift in the use of sensors for context-awareness.
  • The microcontroller used runs at a reduced clock speed of only 1 MHz; this reduces the current draw to below 2 mA at 5.5 V in processing mode.
  • Another example, which emerged from experience with an early battery-powered prototype, is that power provision needs to be transparent: either by fitting a battery that lasts for the lifetime of the cup, or by recharging the cup without additional attention from the user.

5. Discussion and conclusion

  • In the TEA and Mediacup projects the authors have gathered substantial experience with sensor-based context-awareness and embedding of awareness technology in mobile artifacts.
  • The authors have gained important insights into sensor fusion for awareness of situational context, into architectural issues, into embedded design of awareness technology, and into a new perspective on context-enabled environments and applications.
  • The authors' work to date was not specifically focused on architectural issues.
  • However, their experience highlights substantial challenges for perception techniques to perform in low-end computing environments.
  • The aware artifacts model is a first exploration in this direction, studying a shift from context-aware applications with sensor periphery to dynamic systems of specialized appliances and artifacts, some of which are augmented to capture context while others are augmented to use context.


Adding Some Smartness to Devices and Everyday Things
Hans-W. Gellersen, Albrecht Schmidt and Michael Beigl
TecO, University of Karlsruhe
Vincenz-Prießnitz-Str. 1, 76131 Karlsruhe, GERMANY
Phone +49 (721) 6902-49
{hwg | albrecht | michael}@teco.edu
Abstract
In mobile computing, context-awareness indicates the
ability of a system to obtain and use information on
aspects of the system environment. To implement context-
awareness, mobile system components have to be
augmented with the ability to capture aspects of their
environment. Recent work has mostly considered location-
awareness, and hence augmentation of mobile artifacts
with locality. In this paper we discuss augmentation of
mobile artifacts with diverse sets of sensors and
perception techniques for awareness of context beyond
location. We report experience from two projects, one on
augmentation of mobile phones with awareness
technologies, and the other on embedding of awareness
technology in everyday non-digital artifacts.
1. Introduction
It is now widely acknowledged that some awareness of
the context in which mobile systems are used can produce
added value and foster innovation in many application
domains. In mobile computing, the notion of context is
generally used in reference to aspects of the environment
in which a mobile system operates and to which the
system might adapt or respond with appropriate behavior.
While context is an open-ended concept, it is commonly
associated with straightforward aspects in mobile system
environments such as location of users, whereabouts of
system components, local availability of resources and
such like.
To facilitate awareness of context in mobile systems,
some system components have to be augmented with the
ability to capture aspects of the system environment, by
way of sensing or communicating. Often this is just one
component, for example a personal mobile device for
context-aware application access; in other systems this
may be many components, for example mobile physical
objects with location tags to assert overall system context.
Either way, each piece of context that enters a distributed
mobile system does so through an appropriately
augmented system component. Our concern in this paper
is how system components can be augmented
appropriately, i.e. how context-awareness can be added to
mobile devices and artifacts. The research we report is
based on a device-centric view, in which context is
primarily associated with a device. For our discussion it is
secondary that context may also be associated with the
user of a mobile device, or with applications that may run
on the device or elsewhere in a distributed mobile system.
Most of the context-aware mobile systems discussed to
date consider location as context, and from a device-
centric perspective they are based on adding location-
awareness to one or many of their system components.
Three general approaches can be distinguished. First there
are systems in which components utilize the mobile
communications infrastructure to obtain location
information, for example the cell-of-origin in cell-based
communications. For example, the GUIDE system for
tourists in Lancaster employs mobile computers that
derive their location from a WaveLAN network [3].
Secondly, components may be equipped with explicit
location sensors, i.e. receivers for specific location
services, such as GPS. For example, the stick-e-note
system for context-aware information access in fieldwork
is based on palmtops augmented with GPS receivers [9].
Thirdly, components may be augmented in ways that
allow surrounding infrastructure to assert their location.
In this case, components strictly speaking have no
awareness themselves but it is their augmentation that
enables awareness. Examples are name tags in the Active
Badge system [6], and the palmsize ParcTab terminals
[12], both augmented with infrared diodes that emit
signals from which the transceiver infrastructure derives
location.
Location is a rich concept, and often it is not the
location as such but also information associated with
locations that is exploited in location-aware mobile
systems. However we would argue that there is more to
context than we can capture through location, and our
focus in this paper is on augmentation of mobile system
components for awareness of context beyond location.
More specifically, we investigate the use of diverse sets of
sensors in mobile system components for context-
awareness. We report experience from two research
projects on sensor-based context-awareness, TEA and
Mediacup. The TEA project investigates Technologies for
Enabling Awareness and their application in mobile

telephony [13]. The Mediacup project studies capture and
communication of context in everyday environments [2].
The novel issues investigated in these projects are the
integration of diverse sensors and perception techniques,
and the embedding of autonomous awareness in mobile
artifacts.
Diverse sets of sensors and perception techniques are
integrated to the end of shifting complexity in context-
awareness from algorithmic level to architectural level.
This is done by considering deliberately simple sensors
and feature extraction methods as opposed to expensive
hardware and algorithms. Advanced context-awareness is
then achieved through fusion of information obtained from
diverse sensors, employing suitable architectures. The
approach somewhat contrasts for example vision-based
approaches that tend to be compute-intensive, and is
geared toward implementation with embedded
technologies.
The second issue highlighted in the work we report is
the embedding of autonomous awareness in mobile
artifacts. It is straightforward to add awareness technology,
i.e. sensors and perception algorithms, to general-purpose
computing platforms such as laptops, personal digital
assistants and wearable computers. Both the TEA and the
Mediacup project however investigate the adding of
awareness technology to artifacts that do not provide any
platform ready for extension with hardware and software.
In the case of TEA, the artifact considered is a mobile
phone, which is based on digital technology but still self-
contained and not open for extension. In the Mediacup
project the challenge is taken further by considering an
ordinary coffee cup, representing everyday artifacts. In
both projects, artifacts have been augmented and studied
in test environments.
In the subsequent sections, we will briefly discuss
related work on sensor-augmented mobile artifacts, and
then report experience first from the TEA project and
secondly from Mediacup work. This will be followed by
discussion that sums up our experience with adding
context-awareness to mobile artifacts, also pointing out
issues and directions for further research.
2. Related work
In a wide range of projects mobile artifacts have been
augmented to enable awareness of their location. While
three general approaches can be distinguished as discussed
in the introduction, artifacts actually fall into two groups.
First, artifacts that have general-purpose computing
platforms, ranging from smallest-scale (consider for
instance ParcTabs) to high-end wearable PCs. Secondly,
artifacts explicitly designed for being located, such as the
Active Badge infrared sender and the Active Bat
ultrasound emitter. Our work in TEA and Mediacup in
contrast is concerned with augmenting artifacts that are
neither general-purpose computing platforms nor non-
functional beyond support of locality.
In handheld computing, there is some related work on
adding sensor technologies beyond location to personal
mobile devices. For example, Rekimoto added tilt sensors
to a handheld to obtain context about the handling of the
device [10]. Similarly, we have explored integration of
orientation sensors in a handheld computer [14]. In this
line of work, the context obtained from sensors is used as
user interface extension. This is to be distinguished from
context-awareness in mobile computing which is focused
on using context to relate a mobile device to its
surrounding environment.
While handheld computers generally still remain
shielded from their surroundings, a stronger interest in
situating devices is pursued in many wearable computing
developments. A key motivation for wearable computers is
to support their users in improved and proactive ways on
the grounds of being permanently with the user. A
precondition is a suitable understanding of the user’s
situation, and in this context a range of projects have
investigated sensor integration to obtain information on
both user and environment. For example, cameras and
computer vision have been integrated with wearable
computers for visual context-awareness [16]. While there
has been some research into lower-cost vision techniques,
this still assumes a suitably powerful computing platform.
Beyond vision, the use of other sensors has been explored
in a range of wearable computing applications. For
instance, the Oregon wearable was equipped with sensors
for object presence in a collaborative field engineering
application [1], and in the Startlecam application bio-
sensors were employed to the end of recognizing extreme
user situations [7]. However, these are applications with
task focus, and sensor integration is not generalized for
wider applicability.
In wearable computing, two projects come close in
spirit to our work. Paradiso has investigated sensor
integration in footwear with a range of applications [8].
While the project was primarily concerned with enabling
shoes as an expressive user interface, this is still related to
our Mediacup work as it also augments a non-digital
artifact. In both expressive footwear and Mediacup the
approach is to obtain information from ordinary use:
expressive footwear generates information as the user
moves around, and likewise the Mediacup generates
information in the course of being used as an ordinary
coffee cup. In different ways close to our work is that of
Golding and Lesh, who investigated integration of diverse
sensors as alternative location technique for indoor
navigation [5]. Like we did in the TEA project, they
focused on integration of deliberately simple sensors. In
their method, multi-sensor data is associated with
locations, while in TEA it is associated with a more
general notion of context beyond location.

3. TEA - an add-on device for context-
awareness
The general motivation underlying the TEA project is
to make personal mobile devices smarter. The assumption
is that the more a device knows about its user, its
environment and the situations in which it is used the
better it can provide assistance. The objective of TEA is to
arrive at a generic solution for making devices smarter,
and the approach taken is to integrate awareness
technology, both hardware and software, in a self-
contained device conceived as a plug-in for any personal
appliance, which from a TEA perspective is called the host.
The cornerstones of the TEA device concept are:
Integration of diverse sensors, assembled for
acquisition of multi-sensor data independently of any
particular application.
Association of multi-sensor data with situations in
which the host device is used, for instance being in a
meeting.
Implementation of hardware, i.e. sensors and
processing environment, and software, i.e. methods
for computing situational context from sensor data, in
an embedded device.
A specific objective underlying sensor integration is to
address the kind of context that can not be derived from
location information at all, for example situations that can
occur anywhere. While it seems obvious that there is
context that cannot be inferred from location information,
most work in context-awareness has actually served to
show that rich context can be derived from location,
provided location semantics beyond specification of
position are available.
Another specific issue investigated in TEA is sensor
fusion. The aim is to derive more context from a group of
sensors than the sum of what can be derived from
individual sensors.
3.1. TEA architecture
TEA is based on a layered architecture for sensor-based
computation of context as illustrated in figure 1, with
separate layers for raw sensor data, for features extracted
from individual sensors (‘cues’), and for context derived
from cues.
The sensor layer is defined by an open array of sensors
including both environmental sensors for perception of the
real world and logical sensors for monitoring of conditions
in the virtual world, for instance logical state of the host
device. The data supplied by sensors can be very different,
ranging from slow sensors that supply scalars (e.g.
temperature sensor) to fast and complex sensors that
provide a large amount of more or less structured data (e.g.
a camera or a microphone); also the update time varies
from sensor to sensor.
The cue layer introduces cues as abstraction from raw
sensor data. Each cue is a feature extracted from the data
stream of a single sensor, and many diverse cues can be
derived from the same sensor. This abstraction from
sensors to cues is generic, i.e. independent of any specific
application. This process of preprocessing sensor data has
also been referred to as cooking sensors [5], and serves to
reduce the amount of data substantially before further
abstraction. Just as the architecture does not prescribe any
specific set of sensors, it also does not prescribe specific
methods for feature extraction in this layer. However, in
accordance with the philosophy of shifting complexity
from algorithms to architecture it is assumed that cue
calculation will be based on comparatively simple
methods. The calculation of cues from sensor values may
for instance be based on simple statistics over time (e.g.
average over the last second, standard deviation of the
signal, quartile distance, etc.) or on somewhat more
complex mappings and algorithms (e.g. calculation of the
main frequencies from an audio signal over the last second,
pattern of movement based on acceleration values). The
cue layer hides the sensor interfaces from the context layer
it serves, and instead provides a smaller and uniform
interface defined as set of cues describing the sensed
system environment. This way, the cue layer strictly
separates the sensor layer and context layer which means
context can be modeled in abstraction from sensor
technologies and properties of specific sensors. Separation
of sensors and cues also means that both sensors and
feature extraction methods can be developed and replaced
independently of each other.
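The simple statistics the paper names as typical cues (averages, standard deviation, quartile distance) are easy to compute per sensor window; the sketch below is an illustration in which the window size and the sample values are assumptions.

```python
# Generic, application-independent cues over one sensor window,
# in the spirit of the TEA cue layer (simple statistics, no heavy
# signal processing).
from statistics import mean, quantiles, stdev

def cues_for_window(samples):
    """Reduce one window of raw samples to a small cue dictionary."""
    q1, _, q3 = quantiles(samples, n=4)   # first and third quartile
    return {
        "mean": mean(samples),             # average over the window
        "std": stdev(samples),             # spread of the signal
        "quartile_distance": q3 - q1,      # robust spread measure
    }

window = [20.1, 20.3, 20.2, 20.4, 20.2, 20.3]  # e.g. temperature samples
print(cues_for_window(window))
```

The data reduction is the point: downstream layers see three numbers per window instead of the raw stream, regardless of which sensor produced it.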
The context layer introduces a set of contexts which are
abstractions of real world situations, each as function of
available cues. It is only at this level of abstraction, after
feature extraction and data reduction in the cue layer, that
information from different sensor is fused in the process of
calculating context. While cues are assumed to be generic,
context is considered to be more closely related to the host
device and the specific situations in which it is used.
Again, the architecture does not prescribe the methods for
calculation of context from cues; rule-based algorithms,
statistical methods and neural networks may for instance
be used. Conceptually, context is calculated from all
[Figure 1. TEA is based on a layered architecture for abstraction from raw sensor data to multi-sensor-based context.]

available cues. In a rule set, however, cues known to be
irrelevant may simply be neglected, and in a neural network
their weights would be reduced accordingly. The context
calculation, i.e. the reasoning about cues to derive context,
may be described explicitly, e.g. when cues are known to
be relevant indicators of a certain real world situation, or
implicitly in methods that learn context from example
data.
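The contrast between explicit and implicit context calculation can be made concrete. In the sketch below, the cue names, the rule, and the weights are all invented for illustration (the weights are made up, not learned from TEA data): an explicit rule simply omits irrelevant cues, while a learned linear model keeps every cue but assigns irrelevant ones a near-zero weight.

```python
# Two styles of deriving one context ("in a meeting") from a cue vector.
cues = {"audio_zero_crossings": 120, "accel_std": 0.02, "temp_mean": 21.0}

# 1. Explicit rule: only cues known to be relevant appear at all.
def in_meeting_rule(c):
    return c["audio_zero_crossings"] > 100 and c["accel_std"] < 0.05

# 2. Implicit / learned style: every cue contributes, weighted.
#    The irrelevant cue (temperature) just gets a tiny weight.
weights = {"audio_zero_crossings": 0.01, "accel_std": -5.0, "temp_mean": 0.001}
bias = -1.0

def in_meeting_score(c):
    return sum(weights[k] * c[k] for k in c) + bias

print(in_meeting_rule(cues), in_meeting_score(cues) > 0)
```

Both styles fit the architecture because it deliberately does not prescribe the method of calculating context from cues.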
The context layer hides lower interfaces from
applications, which are based on the context interface. In
the application, context can then be associated with
reactive behaviour.
3.2. Initial exploration of the approach
To study the TEA approach, we have developed two
generations of prototype devices and used them for
exploration of multi-sensor data, and for a validation of
TEA as add-on device for mobile phones. In parallel to
development of the first prototype we have also conducted
scenario-based requirements analysis to investigate our
assumption that there is useful context for personal mobile
devices that can not be derived from location but from
multi-sensor input. In this analysis, a range of scenarios
were developed for both mobile phones and personal
digital assistants (PDA), and it was found that the potential
for context beyond location was higher in communication-
related scenarios than in typical PDA applications which
led us to focus further studies on the domain of mobile
telephony.
The TEA device was developed in two generations. The
first generation device was developed for exploration of a
wide range of sensors and their contribution to context-
awareness. It contained common sensors such as
microphone, light sensor and accelerometers but also
sensors for example for air pressure, certain gas
concentration and so on. With several implementations of
the device, large amounts of raw sensor data were
collected independently at different sites for further
analysis of multi-sensor fusion following two strategies:
Analysis of the contribution of a sensor or group of
sensors to perception of a given context, i.e. a
specific real-world situation: For this study a number
of situations that we considered relevant for personal
mobile devices were defined (e.g. user is walking,
user is in a conversation, other people are around, user
is driving a car, etc.). Then data was collected for each
of these situations, with independent data collection at
three different sites. The data was then subjected to
statistical analysis to determine for each sensor or
sensor group whether its inclusion increased the
probability of recognizing situations.
Analysis of clusters in collected multi-sensor data:
Here the strategy was to carry the device over a longer
period of time so it accompanies a user in different
situations. Over the whole period of time, raw sensor
data was recorded and to be later analyzed to identify
clusters corresponding to situations that occurred
during recording time, e.g. situations such as user is
sitting at her desk, walking over to a colleague,
chatting, walking back, engaging in a phone
conversation and so on. This process was aimed at
identifying the sensors relevant to situations, and at
development of a clustering algorithm supporting
awareness of situations of interest.
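The clustering strategy described above could be prototyped with any standard algorithm. As a toy illustration (not the algorithm developed in TEA), the sketch below runs a minimal 2-means clustering over a hypothetical two-dimensional cue space of "motion" and "noise" values:

```python
def two_means(points, iters=20):
    """Minimal 2-means clustering over tuples; returns two centroids.
    Initialization is deterministic (first and last point) to keep the
    example reproducible."""
    centroids = [points[0], points[-1]]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            groups[0 if d[0] <= d[1] else 1].append(p)
        centroids = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

# Two obvious "situations" in (motion, noise) cue space:
sitting = [(0.01, 0.10), (0.02, 0.15), (0.01, 0.12)]
walking = [(0.80, 0.50), (0.90, 0.60), (0.85, 0.55)]
print(two_means(sitting + walking))
```

On day-long recordings the interesting step is then labeling the discovered clusters with situations ("at the desk", "walking", "chatting") after the fact, which is what the exploration aimed at.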
3.3. Prototype implementation and validation
The initial exploration of sensors and their contribution
to awareness of typical real-world situations served to
inform development of the second generation device
optimized for smaller packaging, and shown in figure 2.
The device integrates two light sensors, two microphones,
a two-axis accelerometer, a skin conductance sensor and a
temperature sensor. The sensors are read by a
microcontroller that also calculates the cues and in some
applications also the contexts. The system is designed to
minimize the energy consumption of the component. The
microcontroller (PIC16F877) has a number of analog and
digital inputs and communicates via serial line with the
host device. The calculation of cues and contexts is very
much restricted due to the limitations of the
microcontroller. Programs have to fit into 8K of EEPROM,
and have only 200 bytes of RAM available.
The feature extraction algorithms to generate the cues
have been designed to accommodate these limitations. Data
that has to be read with high speed such as audio is
directly analyzed and not stored. Typical cues for audio
that are calculated on the fly are the number of zero
crossings of the signal in a certain time (an indicator of the
frequency) and the number of direction changes of the signal
[Figure 2. The current implementation of the TEA awareness device is about the size of a mobile phone battery pack.]

(together with the zero crossings this is a indicator of the
noise in the audio signal). For acceleration and light basic
statistical methods and an estimation of the first derivative
are calculated. Slowly changing values temperature and
skin conductance are not further processed in the cue
layer (the cue function is the identity). The contexts are
calculated based on rules that were extracted off-line from
data recorded with the sensor board in different situations.
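The on-the-fly audio cues can be sketched as below. This Python version is illustrative only (the actual computation ran on the PIC), but it mirrors the constraint that high-rate data is processed sample by sample with only a few bytes of state, never stored.

```python
def audio_cues(samples):
    """Single-pass computation of two audio cues:
    zero crossings (a rough frequency indicator) and
    direction changes (together with crossings, a noise indicator).
    Only constant state is kept, as on the micro-controller."""
    zero_crossings = 0
    direction_changes = 0
    prev = None       # previous sample
    prev_delta = 0    # sign of last non-zero sample-to-sample change
    for s in samples:
        if prev is not None:
            # crossing between negative and non-negative values
            if (prev < 0 <= s) or (s < 0 <= prev):
                zero_crossings += 1
            delta = s - prev
            # a sign flip of the slope is a direction change
            if delta * prev_delta < 0:
                direction_changes += 1
            if delta != 0:
                prev_delta = delta
        prev = s
    return zero_crossings, direction_changes
```

An alternating signal such as [1, -1, 1, -1] yields many crossings and direction changes, while a monotone ramp yields none of either.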
The prototype is independent of any specific host and
has been used in conjunction with a palmtop computer, a
wearable computer, and mobile phones. Primarily, however,
the prototype is being applied in the area of mobile
telephony. State-of-the-art mobile phones support so-called
profiles to group settings, such as notification mode, input
and output modality, and reaction to incoming messages
and calls. Users can define profiles for different situations
(e.g. home, meeting, car, etc.) and specify behavior desired
in those situations. The TEA device has been added to a
mobile phone to automate activation of such profiles,
which otherwise have to be activated manually by the user.
The approach was validated in an experiment, in which the
TEA device was used to control a small set of typical
profiles [13].
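The automation described above amounts to a mapping from recognized contexts to profiles. The sketch below illustrates this; the context names, profile names, and mapping are hypothetical, not the set used in the experiment.

```python
# Hypothetical mapping from recognized contexts to phone profiles.
PROFILE_FOR_CONTEXT = {
    "in a meeting": "silent",
    "driving": "car",
    "at home": "home",
}

def select_profile(context, current_profile, default="general"):
    """Pick the profile matching the recognized context and report
    whether a switch is needed, so an unchanged context is a no-op
    rather than a repeated (and user-visible) profile activation."""
    target = PROFILE_FOR_CONTEXT.get(context, default)
    return target, target != current_profile
```

A host phone would call this whenever the TEA device reports a new context and activate the returned profile only when the change flag is set.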
3.4. Application in mobile telephony
An interesting application domain for context-aware
mobile phones as enabled by TEA is the sharing of context
between caller and callee. For a caller, context may be
helpful for instance to assess whether it is a good time to
call (in fact, “is it a good time to call” is quite commonly
asked when a phone conversation is initiated), and for a
callee it may help to assess the importance of an incoming
call (“is it important or can I phone back later”, a common
question when accepting a call). To study context-enhanced
communication, we have implemented the WAP-based
application “context-call”. In this application, a call is
initiated as usual by entering the number of the callee.
The application, however, does not establish the call
straightaway but instead looks up the context of the callee
and provides this information to the caller. The caller is
then prompted to decide how to proceed, for example whether
to use a voice service or a short message service.
A detailed discussion of the application is provided in
[15].
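The context-call flow can be sketched as follows. This is a hypothetical reconstruction in Python (the actual application was WAP-based), and the directory, number, and context strings are invented for illustration.

```python
def context_call(context_directory, callee_number, choose):
    """Sketch of the context-call flow: look up the callee's published
    context, present it to the caller, and proceed as the caller decides.
    `choose` models the caller's prompt and returns 'voice' or 'sms'."""
    callee_context = context_directory.get(callee_number, "unknown")
    if choose(callee_context) == "voice":
        return "establishing voice call"
    return "composing short message"

# Example: the caller defers to a message when the callee is in a meeting
# (number and context value are purely illustrative).
directory = {"+491701234567": "in a meeting"}
prefer_sms_in_meetings = lambda ctx: "sms" if ctx == "in a meeting" else "voice"
```

The key design point is that the connection decision moves from the network to the caller, informed by the callee's context.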
3.5. Discussion of TEA experience
Our experience gathered in the TEA project supports the
case for investigation of context beyond location, and for
fusion of diverse sensors as an approach to obtaining such
context. We have used the approach to obtain strictly
location-independent context such as “in a meeting”, “in a
conversation”, or “user is walking”, which cannot be derived
from location information. As for sensor fusion, our
analysis of the collected multi-sensor data showed that with
our approach, context can be derived beyond the sum of
what can be obtained from individual sensors. This initial
experience is valuable, but it is clearly not sufficient
to derive a methodology for the systematic application of
sensor fusion in context-aware applications. What we do
find generalizable, however, is the layered approach to
perception. The two-step abstraction first from sensors to
cues and then from cues to context proved to be a suitable
strategy for the perception process as such, and in addition
it also supports architectural qualities such as modularity
and separation of concerns.
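The two-step abstraction can be illustrated with a small sketch. The cue functions and threshold rules here are hypothetical stand-ins, but the structure mirrors the description: each sensor maps to cues independently, and a separate rule layer maps cues to a context.

```python
# Layer 1: sensors -> cues. Each sensor has its own cue function, so
# sensors can be added or removed without touching the context layer.
def light_cue(samples):
    return {"light_mean": sum(samples) / len(samples),
            "light_range": max(samples) - min(samples)}

def temperature_cue(samples):
    # slowly changing value: the cue function is the identity
    return {"temperature": samples[-1]}

CUE_FUNCTIONS = {"light": light_cue, "temperature": temperature_cue}

def cues_from_sensors(readings):
    """First abstraction step: raw sensor windows to named cues."""
    cues = {}
    for sensor, samples in readings.items():
        cues.update(CUE_FUNCTIONS[sensor](samples))
    return cues

# Layer 2: cues -> context, via simple rules (illustrative thresholds).
def context_from_cues(cues):
    if cues["light_mean"] < 10 and cues["light_range"] < 5:
        return "device in a dark, stable place"
    return "device in the open"
```

The separation of concerns is visible directly: swapping the rule set in `context_from_cues` leaves the cue layer untouched, and vice versa.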
In TEA, extensive experience was gained with a wide
range of sensors and their integration. From this
experience we can derive some indication as to which
sensors are of particular interest for the overall objective
of capturing real-world situations. We found that in
particular sensors for audio, movement and light provide
contributions to awareness in most settings while most
other sensors have rather specific applications in which
they are valuable. In addition we found that perception can
be improved by using not just diverse sensors but also
multiple sensors of the same kind, in particular
microphones and light sensors with different orientation.
More generally, it was found that placement substantially
influences the contribution of a sensor to multi-sensor based
awareness. In some ways, this challenges the approach of
tightly packing sensors. In the context of augmenting
personal mobile devices, an alternative would be
disaggregation and distribution of sensors for instance on
the user’s body or clothing, assuming a body area network
for data collection.
Last but not least, it should be noted that our experience
also extends to the exploration of practical applications
with commercial prospects, such as the context call we
briefly discussed. The community is currently debating
what the killer application of context-awareness might be,
and based on our research we would suggest that if there is
a killer application it will be in the area of interpersonal
communication.
4. Mediacup: embedding awareness technology in
everyday artifacts
The Mediacup project was conducted in parallel to
TEA, and while also investigating embedded awareness
technology, it is motivated differently. TEA is about
making artifacts smarter, i.e. improving the functionality
an artifact offers its user. In contrast, the Mediacup
project is about using artifacts to collect context
information transparently, i.e. without changing the
function and use of the artifact. The core idea is that by
embedding awareness technology in the everyday things
people use, we can obtain context on everyday activity, so
to speak, at the source. This approach assumes a distributed
system in which some artifacts are augmented to collect
context information, while other artifacts are
computationally augmented to use such context.

Citations

Journal ArticleDOI
TL;DR: The University of Florida's Mobile and Pervasive Computing Laboratory is developing programmable pervasive spaces in which a smart space exists as both a runtime environment and a software library.
Abstract: Research groups in both academia and industry have developed prototype systems to demonstrate the benefits of pervasive computing in various application domains. Unfortunately, many first-generation pervasive computing systems lack the ability to evolve as new technologies emerge or as an application domain matures. To address this limitation, the University of Florida's Mobile and Pervasive Computing Laboratory is developing programmable pervasive spaces in which a smart space exists as both a runtime environment and a software library. Service discovery and gateway protocols automatically integrate system components using generic middleware that maintains a service definition for each sensor and actuator in the space. The Gator Tech Smart House in Gainesville, Florida, is the culmination of more than five years of research in pervasive and mobile computing. The project's goal is to create assistive environments such as homes that can sense themselves and their residents and enact mappings between the physical world and remote monitoring and intervention services.

912 citations



Proceedings ArticleDOI
30 Sep 2001
TL;DR: This work proposes context proximity for selective artefact communication, using the context of artefacts for matchmaking, and suggests to empower users with simple but effective means to impose the same context on a number of artefacts.
Abstract: Ubiquitous computing is associated with a vision of everything being connected to everything. However, for successful applications to emerge, it will not be the quantity but the quality and usefulness of connections that will matter. Our concern is how qualitative relations and more selective connections can be established between smart artefacts, and how users can retain control over artefact interconnection. We propose context proximity for selective artefact communication, using the context of artefacts for matchmaking. We further suggest to empower users with simple but effective means to impose the same context on a number of artefacts. To prove our point we have implemented Smart-Its Friends, small embedded devices that become connected when a user holds them together and shakes them.

570 citations


Cites methods from "Adding some smartness to devices an..."

  • ...In our earlier work we have explored applications enabled by artefact-based context acquisition and sharing [3]....



Proceedings ArticleDOI
16 Mar 2003
TL;DR: This paper exploits the fact that the strength of the signals that a device will receive from different access points will vary with location, and builds a database of signal strength information for various locations, and uses this information to determine which location a given test data comes from.
Abstract: Wireless LANs are becoming increasingly popular today, particularly those based on IEEE 802.11b standard. We study the problem of determining the location of a mobile device, which is communicating through a WLAN. We exploit the fact that the strength of the signals that a device will receive from different access points will vary with location. We build a database of signal strength information for various locations, and use this information to determine which location a given test data comes from. The problem is complicated because RF signals are affected by the noise, interference, multi-path effect, and random movement in the environment. We find that in spite of this randomness, the signal information is sufficient to detect the position of mobile device with certain error margin.

245 citations


Cites background from "Adding some smartness to devices an..."

  • ...Detecting the location is one of the first step towards building context sensitive smart devices [2]....



Journal ArticleDOI
24 Jan 2007
TL;DR: The results indicate that common contextual variations can lead to dramatic changes in behavior and that interactions between contextual factors are also important to consider.
Abstract: Many real world mobile device interactions occur in context-rich environments. However, the majority of empirical studies on mobile computing are conducted in static or idealized conditions, resulting in a deficit of understanding of how changes in context impact users’ abilities to perform effectively. This paper attempts to address the disconnect between the actual use and the evaluation of mobile devices by varying contextual conditions and recording changes in behavior. A study was performed to investigate the specific effects of changes in motion, lighting, and task type on user performance and workload. The results indicate that common contextual variations can lead to dramatic changes in behavior and that interactions between contextual factors are also important to consider.

186 citations


Proceedings ArticleDOI
Paul Lukowicz1, Holger Junker1, Mathias Stäger1, T. von Buren, Gerhard Tröster 
29 Sep 2002
TL;DR: A distributed, multi-sensor system architecture designed to provide a wearable computer with a wide range of complex context information that devotes particular attention to sensor placement, system partitioning as well as resource requirements given by the power consumption, computational intensity and communication overhead.
Abstract: This paper describes a distributed, multi-sensor system architecture designed to provide a wearable computer with a wide range of complex context information. Starting from an analysis of useful high level context information we present a top down design that focuses on the peculiarities of wearable applications. Thus, our design devotes particular attention to sensor placement, system partitioning as well as resource requirements given by the power consumption, computational intensity and communication overhead. We describe an implementation of our architecture and initial experimental results obtained with the system.

108 citations


Cites background from "Adding some smartness to devices an..."

  • ...Gellersen et al. propose to use relatively simple sensors as a basis for the derivation of complex context information [ 7 ,8]....



References

Proceedings ArticleDOI
08 Dec 1994
TL;DR: This paper describes systems that examine and react to an individual's changing context, and describes four categories of context-aware applications: proximate selection, automatic contextual reconfiguration, contextual information and commands, and context-triggered actions.
Abstract: This paper describes systems that examine and react to an individual's changing context. Such systems can promote and mediate people's interactions with devices, computers, and other people, and they can help navigate unfamiliar places. We believe that a limited amount of information covering a person's proximate environment is most important for this form of computing since the interesting part of the world around us is what we can see, hear, and touch. In this paper we define context-aware computing, and describe four categories of context-aware applications: proximate selection, automatic contextual reconfiguration, contextual information and commands, and context-triggered actions. Instances of these application types have been prototyped on the PARCTAB, a wireless, palm-sized computer.

3,717 citations


Proceedings ArticleDOI
01 May 1999
TL;DR: This work introduces the concept of context widgets that mediate between the environment and the application in the same way graphical widgets mediate between the user and the application.
Abstract: Context-enabled applications are just emerging and promise richer interaction by taking environmental context into account. However, they are difficult to build due to their distributed nature and the use of unconventional sensors. The concept of toolkits and widget libraries in graphical user interfaces has been tremendously successful, allowing programmers to leverage existing building blocks to build interactive systems more easily. We introduce the concept of context widgets that mediate between the environment and the application in the same way graphical widgets mediate between the user and the application. We illustrate the concept of context widgets with the beginnings of a widget library we have developed for sensing presence, identity and activity of people and things. We assess the success of our approach with two example context-enabled applications we have built and an existing application to which we have added context-sensing capabilities.

1,326 citations


Journal ArticleDOI
TL;DR: A working model for context is introduced, mechanisms to acquire context beyond location, and application of context-awareness in ultra-mobile computing are discussed and fusion of sensors for acquisition of information on more sophisticated contexts are explored.
Abstract: Context is a key issue in interaction between human and computer, describing the surrounding facts that add meaning. In mobile computing, location is usually used to approximate context and to implement context-aware applications. We propose that ultra-mobile computing, characterized by devices that are operational and operated while on the move (e.g. PDAs, mobile phones, wearable computers), can significantly benefit from a wider notion of context. To structure the field, we introduce a working model for context, discuss mechanisms to acquire context beyond location, and application of context-awareness in ultra-mobile computing. We investigate the utility of sensors for context-awareness and present two prototypical implementations — a light-sensitive display and an orientation-aware PDA interface. The concept is then extended to a model for sensor fusion to enable more sophisticated context recognition. Based on an implementation of the model, an experiment is described and the feasibility of the approach is demonstrated. Further, we explore fusion of sensors for acquisition of information on more sophisticated contexts.

1,194 citations


"Adding some smartness to devices an..." refers methods in this paper

  • ...Similarly, we have explored integration of orientation sensors in a handheld computer [14]....



Journal ArticleDOI
Andy Harter1, Andy Hopper1
TL;DR: The article describes the technology of a system for locating people and equipment and the design of a distributed system service supporting access to that information, and the application interfaces made possible by or that benefit from this facility.
Abstract: Distributed systems for locating people and equipment will be at the heart of tomorrow's active offices. Computer and communications systems continue to proliferate in the office and home. Systems are varied and complex, involving wireless networks and mobile computers. However, systems are underused because the choices of control mechanisms and application interfaces are too diverse. It is therefore pertinent to consider which mechanisms might allow the user to manipulate systems in simple and ubiquitous ways, and how computers can be made more aware of the facilities in their surroundings. Knowledge of the location of people and equipment within an organization is such a mechanism. Annotating a resource database with location information allows location-based heuristics for control and interaction to be constructed. This approach is particularly attractive because location techniques can be devised that are physically unobtrusive and do not rely on explicit user action. The article describes the technology of a system for locating people and equipment, and the design of a distributed system service supporting access to that information. The application interfaces made possible by or that benefit from this facility are presented.

706 citations


"Adding some smartness to devices an..." refers background in this paper

  • ...Examples are name tags in the Active Badge system [6], and the palmsize ParcTab terminals [12], both augmented with infrared diodes that emit signals from which the transceiver infrastructure derives location....



Proceedings ArticleDOI
27 Sep 1999
TL;DR: A layered real-time architecture for this kind of context-aware adaptation based on redundant collections of low-level sensors, which has shown that it is feasible to recognize contexts using sensors and that context information can be used to create new interaction metaphors.
Abstract: Mobile information appliances are increasingly used in numerous different situations and locations, setting new requirements to their interaction methods. When the user's situation, place or activity changes, the functionality of the device should adapt to these changes. In this work we propose a layered real-time architecture for this kind of context-aware adaptation based on redundant collections of low-level sensors. Two kinds of sensors are distinguished: physical and logical sensors, which give cues from environment parameters and host information. A prototype board that consists of eight sensors was built for experimentation. The contexts are derived from cues using real-time recognition software, which was constructed after experiments with Kohonen's Self-Organizing Maps and its variants. A personal digital assistant (PDA) and a mobile phone were used with the prototype to demonstrate situational awareness. On the PDA, font size and backlight were changed depending on the demonstrated contexts, while in the mobile phone the active user profile was changed. The experiments have shown that it is feasible to recognize contexts using sensors and that context information can be used to create new interaction metaphors.

633 citations


"Adding some smartness to devices an..." refers methods in this paper

  • ...The approach was validated in an experiment, in which the TEA device was used to control a small set of typical profiles [13]....
