
Adding some smartness to devices and everyday things

TL;DR: This work discusses augmentation of mobile artifacts with diverse sets of sensors and perception techniques for awareness of context beyond location, and reports experience from two projects, one on augmenting mobile phones with awareness technologies, and the other on embedding of awareness technology in everyday non-digital artifacts.
Abstract: In mobile computing, context-awareness indicates the ability of a system to obtain and use information on aspects of the system environment. To implement context-awareness, mobile system components have to be augmented with the ability to capture aspects of their environment. Recent work has mostly considered location-awareness, and hence augmentation of mobile artifacts with locality. We discuss augmentation of mobile artifacts with diverse sets of sensors and perception techniques for awareness of context beyond location. We report experience from two projects, one on augmentation of mobile phones with awareness technologies, and the other on embedding of awareness technology in everyday non-digital artifacts.

Summary (4 min read)

1. Introduction

  • It is now widely acknowledged that some awareness of the context in which mobile systems are used can produce added value and foster innovation in many application domains.
  • Secondly, components may be equipped with explicit location sensors, i.e. receivers for specific location services, such as GPS.
  • Their focus in this paper is on augmentation of mobile system components for awareness of context beyond location.
  • Diverse sets of sensors and perception techniques are integrated to the end of shifting complexity in context-awareness from algorithmic level to architectural level.
  • In the subsequent sections, the authors will briefly discuss related work on sensor-augmented mobile artifacts, and then report experience first from the TEA project and secondly from Mediacup work.

3. TEA - an add-on device for context-awareness

  • The general motivation underlying the TEA project is to make personal mobile devices smarter.
  • The assumption is that the more a device knows about its user, its environment and the situations in which it is used the better it can provide assistance.
  • The cornerstones of the TEA device concept are: Integration of diverse sensors, assembled for acquisition of multi-sensor data independently of any particular application.
  • Implementation of hardware, i.e. sensors and processing environment, and software, i.e. methods for computing situational context from sensor data, in an embedded device. A specific objective underlying sensor integration is to address the kind of context that cannot be derived from location information at all, for example situations that can occur anywhere.
  • The aim is to derive more context from a group of sensors than the sum of what can be derived from individual sensors.

3.1. TEA architecture

  • TEA is based on a layered architecture for sensor-based computation of context as illustrated in figure 1, with separate layers for raw sensor data, for features extracted from individual sensors (‘cues’), and for context derived from cues.
  • The data supplied by sensors can be very different, ranging from slow sensors that supply scalars (e.g. temperature sensor) to fast and complex sensors that provide a large amount of more or less structured data (e.g. a camera or a microphone); also the update time varies from sensor to sensor.
  • This way, the cue layer strictly separates the sensor layer and context layer which means context can be modeled in abstraction from sensor technologies and properties of specific sensors.
  • Again, the architecture does not prescribe the methods for calculation of context from cues; rule-based algorithms, statistical methods and neural networks may for instance be used.
  • The context calculation, i.e. the reasoning about cues to derive context, may be described explicitly, e.g. when cues are known to be relevant indicators of a certain real world situation, or implicitly in methods that learn context from example data.

3.2. Initial exploration of the approach

  • To study the TEA approach, the authors have developed two generations of prototype devices and used them for exploration of multi-sensor data, and for a validation of TEA as add-on device for mobile phones.
  • The TEA device was developed in two generations.
  • The first generation device was developed for exploration of a wide range of sensors and their contribution to context-awareness.
  • For this study a number of situations that the authors considered relevant for personal mobile devices were defined (e.g. user is walking, user is in a conversation, other people are around, user is driving a car, etc.).
  • The data was then subjected to statistical analysis to determine for each sensor or sensor group whether its inclusion increased the probability of recognizing situations.

3.3. Prototype implementation and validation

  • The initial exploration of sensors and their contribution to awareness of typical real-world situations served to inform development of the second generation device optimized for smaller packaging, and shown in figure 2.
  • The sensors are read by a microcontroller that also calculates the cues and in some applications also the contexts.
  • Typical cues for audio that are calculated on the fly are the number of zero crossings of the signal in a certain time (an indicator of the frequency) and the number of direction changes of the signal (together with the zero crossings, an indicator of the noise in the audio signal).
  • The prototype is independent of any specific host and has been used in conjunction with a palmtop computer, a wearable computer and mobile phones.
  • The TEA device has been added to a mobile phone to automate activation of such profiles, which otherwise have to be activated manually by the user.

3.4. Application in mobile telephony

  • An interesting application domain for context-aware mobile phones as enabled by TEA is the sharing of context between caller and callee.
  • To study context-enhanced communication, the authors have implemented the WAP-based application “context-call”.
  • The application however does not establish the call straightaway but instead looks up the context of the callee and provides this information to the caller.

3.5. Discussion of TEA experience

  • The authors' experience gathered in the TEA project supports the case for investigation of context beyond location, and for fusion of diverse sensors as an approach to obtain such context.
  • The authors have used the approach for obtaining strictly location-independent context such as “in a meeting”, “in a conversation”, “user is walking” which can not be derived from location information.
  • This initial experience is valuable, however it is clearly not sufficient to derive any methodology for systematic application of sensor fusion for context-aware applications.
  • From this experience the authors can derive some indication as to which sensors are of particular interest for the overall objective of capturing real-world situations.
  • In addition the authors found that perception can be improved by using not just diverse sensors but also multiple sensors of the same kind, in particular microphones and light sensors with different orientation.

4. Mediacup – embedding awareness technology in everyday artifacts

  • The Mediacup project was conducted in parallel to TEA, and while also investigating embedded awareness technology it is motivated differently.
  • TEA is about making artifacts smarter, i.e. to improve the functionality the artifact offers its user.
  • In contrast, the Mediacup project is about using artifacts to collect context information transparently, i.e. without changing the function and use of the artifact.
  • The core idea is that by embedding awareness technology in the everyday things people use, the authors can obtain context on everyday activity, so to speak, at the source.
  • This approach assumes a distributed system in which some artifacts are augmented to collect context information, while other artifacts are computationally augmented to use such context.

4.1. Aware artifacts model

  • The context-awareness model investigated in the Mediacup project is based on the following concepts: Artifacts are augmented with an awareness of their own local context.
  • To this end artifacts are equipped with sensors but also with a processing environment and software for autonomous calculation of artifact-specific context from sensor data.
  • Artifacts broadcast their context in their local environment.
  • To this end aware artifacts are augmented with basic communication capabilities.
  • Any applications, appliances or information artifacts in the environment can use the locally available context, without further knowledge of the artifacts from which the context originates.

4.2. Mediacup – Awareness embedded in coffee cups

  • For exploration of the aware artifacts model the authors have augmented coffee cups, representing non-digital everyday artifacts, with awareness technology.
  • The Mediacups, as the authors call the augmented mugs, contain hardware and software for sensing, processing and communicating the state of the cup as context information.
  • Also, the adaptation speed of the sensor is very slow, and therefore it is read only every two seconds.
  • The transceivers are based on HP’s HSDL-1001 IrDA chip and have a footprint of about 1.5 m².
  • They are connected through a CAN bus (Controller Area Network) and a gateway to the local Ethernet, in which collected context is broadcast in UDP packets (a sketch of such a broadcast is given below).
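As a concrete illustration of the last point, the sketch below broadcasts an artifact's context as a UDP packet on the local network. The message format and port are our assumptions, not the Mediacup protocol.

```python
# Minimal sketch of broadcasting locally sensed context in UDP packets
# (illustrative message format and port, not the Mediacup protocol).
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 5555)   # hypothetical port

def broadcast_context(artifact_id: str, context: dict) -> None:
    """Send one artifact-specific context update to the local network."""
    message = json.dumps({
        "artifact": artifact_id,
        "time": time.time(),
        "context": context,
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, BROADCAST_ADDR)

# Example: a cup reporting that it has just been filled with a hot drink.
broadcast_context("mediacup-17", {"state": "filled", "temperature": "hot"})
```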

4.3. Experience from design and use

  • Like TEA, the Mediacup project served to gather extensive experience with sensor-based context-awareness.
  • The Mediacup provides substantial experience on different issues, i.e. on the embedding of awareness technology in ‘unpowered’ artifacts, on issues surrounding transparency of technology, and on a paradigm shift in use of sensors for context-awareness.
  • The microcontroller used runs at a reduced clock speed of only 1 MHz; this reduces the current consumption to below 2 mA at 5.5 V in processing mode.
  • Another example, which came up through use experience with an early battery-powered prototype, was that power provision needs to be transparent.
  • This can be achieved either by fitting a battery that lasts for the lifetime of the cup, or by recharging the cup without any additional attention from the user.

5. Discussion and conclusion

  • In the TEA and Mediacup projects the authors have gathered substantial experience with sensor-based context-awareness and embedding of awareness technology in mobile artifacts.
  • The authors have gained important insights into sensor fusion for awareness of situational context, into architectural issues, into embedded design of awareness technology, and into a new perspective on context-enabled environments and applications.
  • The authors' work to date was not specifically focused on architectural issues.
  • However, their experience highlights substantial challenges for perception techniques to perform in low-end computing environments.
  • The aware artifacts model is a first exploration in this direction, studying a shift from context-aware applications with sensor periphery to dynamic systems of specialized appliances and artifacts, some of which are augmented to capture context while others are augmented to use context.


Adding Some Smartness to Devices and Everyday Things
Hans-W. Gellersen, Albrecht Schmidt and Michael Beigl
TecO, University of Karlsruhe
Vincenz-Prießnitz-Str. 1, 76131 Karlsruhe, GERMANY
Phone +49 (721) 6902-49
{hwg | albrecht | michael}@teco.edu
Abstract
In mobile computing, context-awareness indicates the
ability of a system to obtain and use information on
aspects of the system environment. To implement context-
awareness, mobile system components have to be
augmented with the ability to capture aspects of their
environment. Recent work has mostly considered location-
awareness, and hence augmentation of mobile artifacts
with locality. In this paper we discuss augmentation of
mobile artifacts with diverse sets of sensors and
perception techniques for awareness of context beyond
location. We report experience from two projects, one on
augmentation of mobile phones with awareness
technologies, and the other on embedding of awareness
technology in everyday non-digital artifacts.
1. Introduction
It is now widely acknowledged that some awareness of
the context in which mobile systems are used can produce
added value and foster innovation in many application
domains. In mobile computing, the notion of context is
generally used in reference to aspects of the environment
in which a mobile system operates and to which the
system might adapt or respond with appropriate behavior.
While context is an open-ended concept, it is commonly
associated with straightforward aspects in mobile system
environments such as location of users, whereabouts of
system components, local availability of resources and
such like.
To facilitate awareness of context in mobile systems,
some system components have to be augmented with the
ability to capture aspects of the system environment, by
way of sensing or communicating. Often this is just one
component, for example a personal mobile device for
context-aware application access; in other systems this
may be many components, for example mobile physical
objects with location tags to assert overall system context.
Either way, each piece of context that enters a distributed
mobile system does so through an appropriately
augmented system component. Our concern in this paper
is how system components can be augmented
appropriately, i.e. how context-awareness can be added to
mobile devices and artifacts. The research we report is
based on a device-centric view, in which context is
primarily associated with a device. For our discussion it is
secondary that context may also be associated with the
user of a mobile device, or with applications that may run
on the device or elsewhere in a distributed mobile system.
Most of the context-aware mobile systems discussed to
date consider location as context, and from a device-
centric perspective they are based on adding location-
awareness to one or many of their system components.
Three general approaches can be distinguished. First there
are systems in which components utilize the mobile
communications infrastructure to obtain location
information, for example the cell-of-origin in cell-based
communications. For example, the GUIDE system for
tourists in Lancaster employs mobile computers that
derive their location from a WaveLAN network [3].
Secondly, components may be equipped with explicit
location sensors, i.e. receivers for specific location
services, such as GPS. For example, the stick-e-note
system for context-aware information access in fieldwork
is based on palmtops augmented with GPS receivers [9].
Thirdly, components may be augmented in ways that
allow surrounding infrastructure to assert their location.
In this case, components strictly speaking have no
awareness themselves but it is their augmentation that
enables awareness. Examples are name tags in the Active
Badge system [6], and the palmsize ParcTab terminals
[12], both augmented with infrared diodes that emit
signals from which the transceiver infrastructure derives
location.
Location is a rich concept, and often it is not the
location as such but rather information associated with
locations that is exploited in location-aware mobile
systems. However we would argue that there is more to
context than we can capture through location, and our
focus in this paper is on augmentation of mobile system
components for awareness of context beyond location.
More specifically, we investigate the use of diverse sets of
sensors in mobile system components for context-
awareness. We report experience from two research
projects on sensor-based context-awareness, TEA and
Mediacup. The TEA project investigates Technologies for
Enabling Awareness and their application in mobile

telephony [13]. The Mediacup project studies capture and
communication of context in everyday environments [2].
The novel issues investigated in these projects are the
integration of diverse sensors and perception techniques,
and the embedding of autonomous awareness in mobile
artifacts.
Diverse sets of sensors and perception techniques are
integrated to the end of shifting complexity in context-
awareness from algorithmic level to architectural level.
This is done by considering deliberately simple sensors
and feature extraction methods as opposed to expensive
hardware and algorithms. Advanced context-awareness is
then achieved through fusion of information obtained from
diverse sensors, employing suitable architectures. The
approach somewhat contrasts with, for example, vision-based
approaches that tend to be compute-intensive, and is
geared toward implementation with embedded
technologies.
The second issue highlighted in the work we report is
the embedding of autonomous awareness in mobile
artifacts. It is straightforward to add awareness technology,
i.e. sensors and perception algorithms, to general-purpose
computing platforms such as laptops, personal digital
assistants and wearable computers. Both the TEA and the
Mediacup project however investigate the adding of
awareness technology to artifacts that do not provide any
platform ready for extension with hardware and software.
In the case of TEA, the artifact considered is a mobile
phone, which is based on digital technology but still self-
contained and not open for extension. In the Mediacup
project the challenge is taken further by considering an
ordinary coffee cup, representing everyday artifacts. In
both projects, artifacts have been augmented and studied
in test environments.
In the subsequent sections, we will briefly discuss
related work on sensor-augmented mobile artifacts, and
then report experience first from the TEA project and
secondly from Mediacup work. This will be followed by
discussion that sums up our experience with adding
context-awareness to mobile artifacts, also pointing out
issues and directions for further research.
2. Related work
In a wide range of projects mobile artifacts have been
augmented to enable awareness of their location. While
three general approaches can be distinguished as discussed
in the introduction, artifacts actually fall into two groups.
First, artifacts that have general-purpose computing
platforms, ranging from the smallest scale (consider for
instance ParcTabs) to high-end wearable PCs. Secondly,
artifacts explicitly designed for being located, such as the
Active Badge infrared sender, and the Active Bat
ultrasound emitter. Our work in TEA and Mediacup in
contrast is concerned with augmenting artifacts that are
neither general-purpose computing platforms nor non-
functional beyond support of locality.
In handheld computing, there is some related work on
adding sensor technologies beyond location to personal
mobile devices. For example, Rekimoto added tilt sensors
to a handheld to obtain context about the handling of the
device [10]. Similarly, we have explored integration of
orientation sensors in a handheld computer [14]. In this
line of work, the context obtained from sensors is used as
user interface extension. This is to be distinguished from
context-awareness in mobile computing which is focused
on using context to relate a mobile device to its
surrounding environment.
While handheld computers generally still remain
shielded from their surroundings, a stronger interest in
situating devices is pursued in many wearable computing
developments. A key motivation for wearable computers is
to support their users in improved and proactive ways on
the grounds of being permanently with the user. A
precondition is a suitable understanding of the user’s
situation, and in this context a range of projects have
investigated sensor integration to obtain information on
both user and environment. For example, cameras and
computer vision have been integrated with wearable
computers for visual context-awareness [16]. While there
has been some research into lower-cost vision techniques,
this still assumes a suitably powerful computing platform.
Beyond vision, the use of other sensors has been explored
in a range of wearable computing applications. For
instance, the Oregon wearable was equipped with sensors
for object presence in a collaborative field engineering
application [1], and in the StartleCam application bio-
sensors were employed to the end of recognizing extreme
user situations [7]. However, these are applications with
task focus, and sensor integration is not generalized for
wider applicability.
In wearable computing, two projects come close in
spirit to our work. Paradiso has investigated sensor
integration in footwear with a range of applications [8].
While the project was primarily concerned with enabling
shoes as an expressive user interface, this is still related to
our Mediacup work as it also augments a non-digital
artifact. In both expressive footwear and Mediacup the
approach is to obtain information from ordinary use:
expressive footwear generates information as the user
moves around, and likewise the Mediacup generates
information in the course of being used as an ordinary
coffee cup. Close to our work in a different way is that of
Golding and Lesh, who investigated integration of diverse
sensors as alternative location technique for indoor
navigation [5]. Like we did in the TEA project, they
focused on integration of deliberately simple sensors. In
their method, multi-sensor data is associated with
locations, while in TEA it is associated with a more
general notion of context beyond location.

3. TEA - an add-on device for context-awareness
The general motivation underlying the TEA project is
to make personal mobile devices smarter. The assumption
is that the more a device knows about its user, its
environment and the situations in which it is used the
better it can provide assistance. The objective of TEA is to
arrive at a generic solution for making devices smarter,
and the approach taken is to integrate awareness
technology both hardware and software in a self-
contained device conceived as plug-in for any personal
appliance which from a TEA perspective is called host.
The cornerstones of the TEA device concept are:
Integration of diverse sensors, assembled for
acquisition of multi-sensor data independently of any
particular application.
Association of multi-sensor data with situations in
which the host device is used, for instance being in a
meeting.
Implementation of hardware, i.e. sensors and
processing environment, and software, i.e. methods
for computing situational context from sensor data, in
an embedded device
A specific objective underlying sensor integration is to
address the kind of context that can not be derived from
location information at all, for example situations that can
occur anywhere. While it seems obvious that there is
context that can not be inferred from location information,
most work in context-awareness has actually served to
show that rich context can be derived from location,
provided location semantics beyond specification of
position are available.
Another specific issue investigated in TEA is sensor
fusion. The aim is to derive more context from a group of
sensors than the sum of what can be derived from
individual sensors.
3.1. TEA architecture
TEA is based on a layered architecture for sensor-based
computation of context as illustrated in figure 1, with
separate layers for raw sensor data, for features extracted
from individual sensors (‘cues’), and for context derived
from cues.
The sensor layer is defined by an open array of sensors
including both environmental sensors for perception of the
real world and logical sensors for monitoring of conditions
in the virtual world, for instance logical state of the host
device. The data supplied by sensors can be very different,
ranging from slow sensors that supply scalars (e.g.
temperature sensor) to fast and complex sensors that
provide a large amount of more or less structured data (e.g.
a camera or a microphone); also the update time varies
from sensor to sensor.
The cue layer introduces cues as abstraction from raw
sensor data. Each cue is a feature extracted from the data
stream of a single sensor, and many diverse cues can be
derived from the same sensor. This abstraction from
sensors to cues is generic, i.e. independent of any specific
application. This process of preprocessing sensor data has
also been referred to as cooking sensors [5], and serves to
reduce the amount of data substantially before further
abstraction. Just as the architecture does not prescribe any
specific set of sensors, it also does not prescribe specific
methods for feature extraction in this layer. However, in
accordance with the philosophy of shifting complexity
from algorithms to architecture it is assumed that cue
calculation will be based on comparatively simple
methods. The calculation of cues from sensor values may
for instance be based on simple statistics over time (e.g.
average over the last second, standard deviation of the
signal, quartile distance, etc.) or on somewhat more
complex mappings and algorithms (e.g. calculation of the
main frequencies from an audio signal over the last second,
pattern of movement based on acceleration values). The
cue layer hides the sensor interfaces from the context layer
it serves, and instead provides a smaller and uniform
interface defined as set of cues describing the sensed
system environment. This way, the cue layer strictly
separates the sensor layer and context layer which means
context can be modeled in abstraction from sensor
technologies and properties of specific sensors. Separation
of sensors and cues also means that both sensors and
feature extraction methods can be developed and replaced
independently of each other.
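To make the cue abstraction concrete, the following sketch shows one possible shape of such a cue layer. It is our illustration rather than the TEA implementation; the sensor names, cue functions and fixed sample windows are assumptions.

```python
# Illustrative sketch of a cue layer in the spirit of TEA (not the
# project's code): each cue is a simple feature computed from the
# recent samples of a single sensor, and several cues may be derived
# from the same sensor.
from statistics import mean, stdev
from typing import Callable, Dict, List

Cue = Callable[[List[float]], float]

def average(window: List[float]) -> float:
    return mean(window)

def deviation(window: List[float]) -> float:
    return stdev(window) if len(window) > 1 else 0.0

def quartile_distance(window: List[float]) -> float:
    s = sorted(window)
    return s[(3 * len(s)) // 4] - s[len(s) // 4]  # crude inter-quartile range

# Generic, application-independent mapping from sensors to cues
# (hypothetical sensor and cue names).
CUES_PER_SENSOR: Dict[str, Dict[str, Cue]] = {
    "light":        {"light_avg": average, "light_std": deviation},
    "acceleration": {"acc_std": deviation, "acc_iqr": quartile_distance},
    "temperature":  {"temp_avg": average},
}

def cue_layer(windows: Dict[str, List[float]]) -> Dict[str, float]:
    """Map raw per-sensor sample windows to a flat, uniform set of cues."""
    cues: Dict[str, float] = {}
    for sensor, window in windows.items():
        for name, feature in CUES_PER_SENSOR.get(sensor, {}).items():
            cues[name] = feature(window)
    return cues
```

The context layer described next only ever sees the resulting cue dictionary, which is what allows sensors and feature extractors to be replaced independently.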
The context layer introduces a set of contexts which are
abstractions of real world situations, each as function of
available cues. It is only at this level of abstraction, after
feature extraction and data reduction in the cue layer, that
information from different sensors is fused in the process of
calculating context. While cues are assumed to be generic,
context is considered to be more closely related to the host
device and the specific situations in which it is used.
Again, the architecture does not prescribe the methods for
calculation of context from cues; rule-based algorithms,
statistical methods and neural networks may for instance
be used. Conceptually, context is calculated from all
available cues. In a rule set, however, cues known to be
irrelevant may simply be neglected, and in a neural network
their weight would be reduced accordingly. The context
calculation, i.e. the reasoning about cues to derive context,
may be described explicitly, e.g. when cues are known to
be relevant indicators of a certain real world situation, or
implicitly in methods that learn context from example
data.

Figure 1. TEA is based on a layered architecture for abstraction
from raw sensor data to multi-sensor-based context.
The context layer hides lower interfaces from
applications, which are based on the context interface. In
the application, context can then be associated with
reactive behaviour.
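As an illustration of the context layer, the sketch below fuses cues from different sensors into a situational context with hand-written rules. The cue names, thresholds and context labels are our assumptions, not the rules used in TEA.

```python
# Rule-based context calculation over the cue interface (illustrative
# thresholds and labels; statistical methods or neural networks could
# be substituted without changing the layering).
from typing import Dict

def context_layer(cues: Dict[str, float]) -> str:
    """Fuse cues from several sensors into one situational context."""
    noisy  = cues.get("audio_zero_crossings", 0.0) > 50.0
    dark   = cues.get("light_avg", 0.0) < 10.0
    moving = cues.get("acc_std", 0.0) > 0.5

    if moving:
        return "user is walking"
    if noisy and not moving:
        return "user is in a conversation"
    if dark and not noisy:
        return "device is idle, e.g. in a pocket"
    return "unknown"

# An application then associates the returned context with reactive
# behaviour, for instance switching the phone profile.
```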
3.2. Initial exploration of the approach
To study the TEA approach, we have developed two
generations of prototype devices and used them for
exploration of multi-sensor data, and for a validation of
TEA as add-on device for mobile phones. In parallel to
development of the first prototype we have also conducted
scenario-based requirements analysis to investigate our
assumption that there is useful context for personal mobile
devices that can not be derived from location but from
multi-sensor input. In this analysis, a range of scenarios
were developed for both mobile phones and personal
digital assistants (PDA), and it was found that the potential
for context beyond location was higher in communication-
related scenarios than in typical PDA applications which
led us to focus further studies on the domain of mobile
telephony.
The TEA device was developed in two generations. The
first generation device was developed for exploration of a
wide range of sensors and their contribution to context-
awareness. It contained common sensors such as
microphone, light sensor and accelerometers but also
sensors for example for air pressure, certain gas
concentration and so on. With several implementations of
the device, large amounts of raw sensor data were
collected independently at different sites for further
analysis of multi-sensor fusion following two strategies:
Analysis of the contribution of a sensor or group of
sensors to perception of a given context, i.e. a
specific real-world situation: For this study a number
of situations that we considered relevant for personal
mobile devices were defined (e.g. user is walking,
user is in a conversation, other people are around, user
is driving a car, etc.). Then data was collected for each
of these situations, with independent data collection at
three different sites. The data was then subjected to
statistical analysis to determine for each sensor or
sensor group whether its inclusion increased the
probability of recognizing situations.
Analysis of clusters in collected multi-sensor data:
Here the strategy was to carry the device over a longer
period of time so it accompanies a user in different
situations. Over the whole period of time, raw sensor
data was recorded, to be later analyzed to identify
clusters corresponding to situations that occurred
during recording time, e.g. situations such as user is
sitting at her desk, walking over to a colleague,
chatting, walking back, engaging in a phone
conversation and so on. This process was aimed at
identifying the sensors relevant to situations, and at
development of a clustering algorithm supporting
awareness of situations of interest.
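The cluster-analysis strategy could be prototyped roughly as follows. This is a generic sketch assuming NumPy and scikit-learn are available, not the clustering algorithm developed in the project, and the number of situations is an arbitrary choice.

```python
# Generic sketch of clustering recorded cue vectors so that recurring
# clusters can later be matched against situations such as "sitting at
# the desk" or "walking" (assumes NumPy and scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

def cluster_recordings(cue_vectors: np.ndarray, n_situations: int = 5):
    """cue_vectors: one row per time window, one column per cue."""
    # Normalise each cue so that large-valued sensors do not dominate.
    scaled = (cue_vectors - cue_vectors.mean(axis=0)) / (cue_vectors.std(axis=0) + 1e-9)
    model = KMeans(n_clusters=n_situations, n_init=10, random_state=0)
    labels = model.fit_predict(scaled)
    return labels, model.cluster_centers_
```

The resulting clusters would then be inspected manually and matched against the situations that occurred during recording time.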
3.3. Prototype implementation and validation
The initial exploration of sensors and their contribution
to awareness of typical real-world situations served to
inform development of the second generation device
optimized for smaller packaging, and shown in figure 2.
The device integrates two light sensors, two microphones,
a two-axis accelerometer, a skin conductance sensor and a
temperature sensor. The sensors are read by a micro-
controller that also calculates the cues and in some
applications also the contexts. The system is designed to
minimize the energy consumption of the component. The
micro-controller (PIC16F877) has a number of analog and
digital inputs and communicates via serial line with the
host device. The calculation of cues and contexts is very
much restricted due to the limitations of the micro-
controller. Programs have to fit into 8K of EEPROM, and
have only 200 bytes of RAM available.
The feature extraction algorithms to generate the cues
have been designed to accommodate these limitations. Data
that has to be read at high speed, such as audio, is
directly analyzed and not stored. Typical cues for audio
that are calculated on the fly are the number of zero
crossings of the signal in a certain time (an indicator of
the frequency) and the number of direction changes of the
signal (together with the zero crossings, an indicator of
the noise in the audio signal). For acceleration and light,
basic statistical methods and an estimation of the first
derivative are calculated. Slowly changing values,
temperature and skin conductance, are not further processed
in the cue layer (the cue function is the identity). The
contexts are calculated based on rules that were extracted
off-line from data recorded with the sensor board in
different situations.

Figure 2. The current implementation of the TEA awareness
device is about the size of a mobile phone battery pack.
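The on-the-fly audio cues lend themselves to a streaming computation that never stores the signal, which matches the memory budget mentioned above. The following sketch is our illustration of the idea rather than the PIC firmware, and it assumes signed samples centred around zero.

```python
# Streaming computation of the two audio cues described above:
# zero crossings approximate the dominant frequency, and direction
# changes (together with zero crossings) indicate how noisy the
# signal is. Samples are consumed one at a time and never stored.
from typing import Iterable, Tuple

def audio_cues(samples: Iterable[int]) -> Tuple[int, int]:
    zero_crossings = 0
    direction_changes = 0
    prev_sample = None
    prev_delta = 0
    for sample in samples:
        if prev_sample is not None:
            if (sample >= 0) != (prev_sample >= 0):
                zero_crossings += 1
            delta = sample - prev_sample
            if delta * prev_delta < 0:      # slope changed sign
                direction_changes += 1
            if delta != 0:
                prev_delta = delta
        prev_sample = sample
    return zero_crossings, direction_changes
```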
The prototype is independent of any specific host and
has been used in conjunction with a palmtop computer, a
wearable computer and mobile phones. Primarily however
the prototype is being applied in the area of mobile
telephony. State of the art mobile phones support so-called
profiles to group settings, such as notification mode, input
and output modality, and reaction to incoming messages
and calls. Users can define profiles for different situations
(e.g. home, meeting, car, etc.) and specify behavior desired
in those situations. The TEA device has been added to a
mobile phone to automate activation of such profiles
which otherwise have to be activated manually by the user.
The approach was validated in an experiment, in which the
TEA device was used to control a small set of typical
profiles [13].
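A minimal sketch of this profile automation follows, assuming a simple mapping from recognized context to a phone profile; the labels and the set_profile call are hypothetical, not the interface used in the experiment.

```python
# Hypothetical mapping from recognized context to a phone profile.
PROFILE_FOR_CONTEXT = {
    "in a meeting": "silent",
    "user is walking": "outdoor",        # louder ring, vibration on
    "user is driving a car": "car",      # hands-free operation
}

def activate_profile(phone, context: str) -> None:
    """Activate the profile for the current context instead of
    requiring the user to switch it manually."""
    phone.set_profile(PROFILE_FOR_CONTEXT.get(context, "general"))
```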
3.4. Application in mobile telephony
An interesting application domain for context-aware
mobile phones as enabled by TEA is the sharing of context
between caller and callee. For a caller, context may be
helpful for instance to assess whether it is a good time to
call (in fact, “is it a good time to call” is quite commonly
asked when a phone conversation is initiated), and for a
callee it may help to assess importance of an incoming call
(“is it important or can I phone back later?” is a common
question when accepting a call). To study context-enhanced
communication, we have implemented the WAP-based
application “context-call”. In this application, a call is
initiated as usual by entering the number of the callee.
The application however does not establish the call
straightaway but instead looks up the context of the callee
and provides this information to the caller. The caller is
then prompted to decide how to proceed, for example
whether to use a voice service or a short message service.
A detailed discussion of the application is provided in
[15].
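To illustrate the flow, the sketch below walks through a context-call as described above. It is our simplification, not the WAP implementation, and lookup_context and prompt_caller stand in for services whose details are not given here.

```python
# Simplified context-call flow (illustrative placeholders, not the
# WAP-based implementation described in [15]).
def context_call(callee_number: str, lookup_context, prompt_caller) -> str:
    # The call is not established straightaway; the callee's current
    # context is looked up first and shown to the caller.
    context = lookup_context(callee_number)      # e.g. "in a meeting"
    return prompt_caller(
        f"Callee is currently: {context}. How do you want to proceed?",
        options=("voice call", "short message", "cancel"),
    )

# A caller who sees "in a meeting" might, for instance, fall back to a
# short message rather than establishing the voice call.
```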
3.5. Discussion of TEA experience
Our experience gathered in the TEA project supports the
case for investigation of context beyond location, and for
fusion of diverse sensors as an approach to obtain such
context. We have used the approach for obtaining strictly
location-independent context such as “in a meeting”, “in a
conversation”, “user is walking” which can not be derived
from location information. As for sensor fusion, our
analysis of collected multi-sensor data showed that with
our approach context can be derived beyond the sum of
what can be obtained from individual sensors. This initial
experience is valuable; however, it is clearly not sufficient
to derive any methodology for systematic application of
sensor fusion for context-aware applications. However,
what we find generalizable is the layered approach to
perception. The two-step abstraction first from sensors to
cues and then from cues to context proved to be a suitable
strategy for the perception process as such, and in addition
it also supports architectural qualities such as modularity
and separation of concerns.
In TEA, extensive experience was gained with a wide
range of sensors and their integration. From this
experience we can derive some indication as to which
sensors are of particular interest for the overall objective
of capturing real-world situations. We found that in
particular sensors for audio, movement and light provide
contributions to awareness in most settings while most
other sensors have rather specific applications in which
they are valuable. In addition we found that perception can
be improved by using not just diverse sensors but also
multiple sensors of the same kind, in particular
microphones and light sensors with different orientation.
More generally, it was found that placement substantially
influences the contribution of a sensor to multi-sensor-based
awareness. In some ways, this challenges the approach of
tightly packing sensors. In the context of augmenting
personal mobile devices, an alternative would be
disaggregation and distribution of sensors for instance on
the user’s body or clothing, assuming a body area network
for data collection.
Last but not least, it should be noted that our experience
also extends to the exploration of practical applications
with commercial prospect such as the context call we
briefly discussed. The community is currently debating
what the killer application of context-awareness might be,
and based on our research we would suggest that if there is
a killer application it will be in the area of interpersonal
communication.
4. Mediacup – embedding awareness technology in everyday artifacts
The Mediacup project was conducted in parallel to
TEA, and while also investigating embedded awareness
technology it is motivated differently. TEA is about
making artifacts smarter, i.e. to improve the functionality
the artifact offers its user. In contrast, the Mediacup
project is about using artifacts to collect context
information transparently, i.e. without changing the
function and use of the artifact. The core idea is that by
embedding awareness technology in the everyday things
people use, we can obtain context on everyday activity, so
to speak, at the source. This approach assumes a distributed
system in which some artifacts are augmented to collect
context information, while other artifacts are
computationally augmented to use such context.

References
Proceedings ArticleDOI
08 Dec 1994
TL;DR: This paper describes systems that examine and react to an individual's changing context, and describes four categories of context-aware applications: proximate selection, automatic contextual reconfiguration, contextual information and commands, and context-triggered actions.
Abstract: This paper describes systems that examine and react to an individual's changing context. Such systems can promote and mediate people's interactions with devices, computers, and other people, and they can help navigate unfamiliar places. We believe that a limited amount of information covering a person's proximate environment is most important for this form of computing since the interesting part of the world around us is what we can see, hear, and touch. In this paper we define context-aware computing, and describe four categories of context-aware applications: proximate selection, automatic contextual reconfiguration, contextual information and commands, and context-triggered actions. Instances of these application types have been prototyped on the PARCTAB, a wireless, palm-sized computer.


Proceedings ArticleDOI
01 May 1999
TL;DR: This work introduces the concept of context widgets that mediate between the environment and the application in the same way graphical widgets mediate between the user and the application.
Abstract: Context-enabled applications are just emerging and promise richer interaction by taking environmental context into account. However, they are difficult to build due to their distributed nature and the use of unconventional sensors. The concepts of toolkits and widget libraries in graphical user interfaces have been tremendously successful, allowing programmers to leverage off existing building blocks to build interactive systems more easily. We introduce the concept of context widgets that mediate between the environment and the application in the same way graphical widgets mediate between the user and the application. We illustrate the concept of context widgets with the beginnings of a widget library we have developed for sensing presence, identity and activity of people and things. We assess the success of our approach with two example context-enabled applications we have built and an existing application to which we have added context-sensing capabilities.


Journal ArticleDOI
TL;DR: A working model for context is introduced; mechanisms to acquire context beyond location and the application of context-awareness in ultra-mobile computing are discussed, and fusion of sensors for acquisition of information on more sophisticated contexts is explored.


Journal ArticleDOI
Andy Harter1, Andy Hopper1
TL;DR: The article describes the technology of a system for locating people and equipment and the design of a distributed system service supporting access to that information, and the application interfaces made possible by or that benefit from this facility.
Abstract: Distributed systems for locating people and equipment will be at the heart of tomorrow's active offices. Computer and communications systems continue to proliferate in the office and home. Systems are varied and complex, involving wireless networks and mobile computers. However, systems are underused because the choices of control mechanisms and application interfaces are too diverse. It is therefore pertinent to consider which mechanisms might allow the user to manipulate systems in simple and ubiquitous ways, and how computers can be made more aware of the facilities in their surroundings. Knowledge of the location of people and equipment within an organization is such a mechanism. Annotating a resource database with location information allows location-based heuristics for control and interaction to be constructed. This approach is particularly attractive because location techniques can be devised that are physically unobtrusive and do not rely on explicit user action. The article describes the technology of a system for locating people and equipment, and the design of a distributed system service supporting access to that information. The application interfaces made possible by or that benefit from this facility are presented.


Proceedings ArticleDOI
27 Sep 1999
TL;DR: A layered real-time architecture for this kind of context-aware adaptation based on redundant collections of low-level sensors, which has shown that it is feasible to recognize contexts using sensors and that context information can be used to create new interaction metaphors.
Abstract: Mobile information appliances are increasingly used in numerous different situations and locations, setting new requirements to their interaction methods. When the user's situation, place or activity changes, the functionality of the device should adapt to these changes. In this work we propose a layered real-time architecture for this kind of context-aware adaptation based on redundant collections of low-level sensors. Two kinds of sensors are distinguished: physical and logical sensors, which give cues from environment parameters and host information. A prototype board that consists of eight sensors was built for experimentation. The contexts are derived from cues using real-time recognition software, which was constructed after experiments with Kohonen's Self-Organizing Maps and its variants. A personal digital assistant (PDA) and a mobile phone were used with the prototype to demonstrate situational awareness. On the PDA, font size and backlight were changed depending on the demonstrated contexts, while in the mobile phone the active user profile was changed. The experiments have shown that it is feasible to recognize contexts using sensors and that context information can be used to create new interaction metaphors.
