Journal ArticleDOI

A context-aware decision engine for content adaptation

01 Jul 2002-IEEE Pervasive Computing (IEEE Educational Activities Department)-Vol. 1, Iss: 3, pp 41-49
TL;DR: This quality-of-service-aware decision engine automatically negotiates for the appropriate adaptation decision for synthesizing an optimal content version for mobile devices.
Abstract: Building a good content adaptation service for mobile devices poses many challenges. To meet these challenges, this quality-of-service-aware decision engine automatically negotiates for the appropriate adaptation decision for synthesizing an optimal content version.

Summary (4 min read)

Content adaptation

  • The process begins when someone uses a mobile device to submit a request to the system, that is, to the content provider via an intermediary proxy server.
  • After the system identifies the user, it inputs context information to the decision engine, which resides on the proxy server.

Building a good content adaptation service for mobile devices poses many challenges. To meet these challenges, this quality-of-service-aware decision engine automatically negotiates for the appropriate adaptation decision for synthesizing an optimal content version.

  • On the basis of some scoring scheme applied to different content versions, the decision engine then executes an algorithm that computes the optimal version that is renderable with the current client device and network characteristics.
  • "Version" at this stage means a set of desired settings such as color depth, scaling factor, and presentation format.
  • The decision engine sends the results to the transcoder, which generates the desired content version.
  • The intermediate proxy then sends the adapted content to the target device for rendering (a sketch of this end-to-end flow appears after this list).
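A minimal sketch of this request flow follows; every helper name in it (identify_user, gather_context, decide, transcode, fetch) is a hypothetical placeholder, not the authors' actual interface.

    # Illustrative proxy-side flow for one request. All helpers are passed in
    # as parameters because they are hypothetical placeholders.

    def handle_request(request, fetch, identify_user, gather_context, decide, transcode):
        user = identify_user(request)            # explicit userid or cookie
        context = gather_context(user, request)  # device, network, and preference profiles
        original = fetch(request)                # content plus its metadata

        # The decision engine (on the proxy) negotiates the optimal renderable
        # version: a set of settings such as color depth, scaling factor, format.
        settings = decide(context, original["metadata"])

        adapted = transcode(original, settings)  # transcoder synthesizes that version
        return adapted                           # proxy returns it to the device for rendering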

Contextualization

  • To design a good adaptation service, the authors must understand the environment sufficiently.
  • The authors can gain such an understanding through a contextualization framework that facilitates the expression and capturing of context information.
  • The CC/PP can describe the capabilities of a client device, called a user agent, and the user's specified preferences within the user agent's set of options.
  • For J2ME (Java 2 Platform, Micro Edition; http://java.sun.com/j2me) developers, the Connected Limited Device Configuration defines a standard Java platform for small, resource-constrained, connected devices and enables the dynamic delivery of Java applications and content to those devices.
  • The only profile currently developed for the CLDC configuration is the Mobile Information Device Profile.

Transcoding techniques

  • On a related front, considerable research has addressed techniques for different transcoding methods.
  • Little research, however, has addressed having the decision engine provide quality-of-service-sensitive decisions to compensate for or minimize the losses due to transcoding.
  • A gap seems to exist between the declarative specification of the client characteristics (such as the CC/PP) and what the various transcoding techniques can achieve.
  • The authors propose a negotiation model that the decision engine would use to bridge this gap.
  • These operations should be managed according to strategies synthesized from all related contextual information sources.

Device- and user-specific preferences

  • Timothy Bickmore and Bill Schilit's research on device-independent access offers good insight on handling client device variability by staying away as much as possible from creating content versions specifically for individual device types. A client device's characteristics and capabilities are part of the context of a client environment where Web content rendering occurs.
  • Context includes any information that can characterize an entity's situation.
  • Armando Fox and Eric Brewer's research is in the same vein and suggests that clients generally vary along three important dimensions: network, hardware, and software.
  • Also, forward-and-backward navigation access should be limited to the presenter only.
  • Applying this to content adaptation, the authors can imagine creating different views of the same content for different users according to their preferences.

Qualitative user preference and quantitative content value

  • Content adaptation systems' decisions can take into account numeric values associated with different content versions or transcoding strategies.
  • The user merely needs to specify the preference without any exact quantification.
  • To reduce the user's workload, assigning a numeric score to a particular content version should be automatic.
  • This value depends on the client device's resources that can be used to render the content.
  • Such a content score should lead to the best user satisfaction, because quality of service (QoS) is a user-oriented property.

The decision engine

  • The authors' decision engine aims to increase users' satisfaction in subscribing to Internet content in a constrained mobile computing environment.
  • A separate paper describes the transcoding part of the system.
  • The decision engine tries to arrive at the best trade-off for content adaptation while minimizing content degradation due to lossy transcoding.
  • It is aware of different types of context information such as the user's preferences, the device's rendering capability, and the network connection's characteristics.
  • That is, the authors desire "zero administration" on the client side.

Preprocessing

  • This stage occurs before the user request arrives.
  • A handheld device could display a PDF document in its original PDF format.
  • Also, different quality axes will have different QoS characteristics; such diversity makes capturing all the relevant characteristics quantitatively a nontrivial task.
  • Second-order modeling applies saturation, whereby qv levels off near the far end of the qs scale.
  • In contrast, increments of colors near the value of 2 colors will more greatly affect the perceived quality.

Score evaluation and representation.

  • Using quality axes, users can easily indicate their preferences.
  • A user with a weak sensitivity to color but a strong sensitivity to dimensional size, for example, will rank the color quality axis lower than the scaling axis (a weighting sketch follows this list).
  • These score nodes also contain the adaptation settings (the qs's) for possible subsequent generation of this content version.
  • The actual content comes into the picture only during transcoding.
  • At initialization time, the decision engine creates a search space consisting of all possible score nodes, which covers all the possible adaptation decisions that the decision engine can make.
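One plausible way to turn a qualitative ranking into per-axis weights and an aggregate score for a candidate version is sketched below; the linear rank-to-weight rule and the axis names are illustrative assumptions, since the exact formula is not reproduced here.

    # Hypothetical sketch: derive axis weights from a user's ranking and
    # aggregate per-axis quality values (qv) into one score.

    def weights_from_ranking(ranking):
        # ranking: quality axes ordered from most to least important
        n = len(ranking)
        raw = {axis: n - i for i, axis in enumerate(ranking)}   # simple linear rank weights
        total = sum(raw.values())
        return {axis: w / total for axis, w in raw.items()}     # normalize to sum to 1

    def aggregate_score(qv, weights):
        # qv: per-axis quality values in [0, 1] for one candidate version
        return sum(weights[axis] * qv.get(axis, 0.0) for axis in weights)

    weights = weights_from_ranking(
        ["scaling", "modality", "downloading_time", "color", "segment"])
    print(aggregate_score({"scaling": 0.8, "modality": 1.0, "downloading_time": 0.6,
                           "color": 0.2, "segment": 0.5}, weights))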

Decision logic and score node selection.

  • A user's score nodes capture all the possible combinations of preference values in various quality domains.
  • The decision logic aims to find the best scoring node corresponding to a version of the content that is renderable given those parameters.
  • This is a negotiation process between the data structure containing the user's preference information and a decision engine's decision function.
  • In some cases, however, identifying a metric with the ordered-relation property is difficult.

SLL

  • The obvious choice of data structure for score nodes is a linked list where the elements are in descending order of scores.
  • To determine the optimal version of content with the highest score, the score linked list negotiation algorithm applies a simple linear search in which the decision function yields either True or False at each node (see the sketch after this list).
  • The resultant score node is optimal in that it has the highest score among all that are feasible to render in the present context.
  • The SLL algorithm is easy to implement and requires little housekeeping.
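A minimal sketch of the SLL idea, assuming score nodes are plain dictionaries sorted by descending score and that a feasible() callback stands in for the decision function:

    # SLL sketch: linear scan over score nodes in descending-score order.
    # The "feasible" test stands in for the paper's decision() function.

    def sll_search(score_nodes, feasible):
        # score_nodes: list of dicts sorted by node["score"], highest first
        for node in score_nodes:
            if feasible(node):          # True: renderable in the current context
                return node             # highest-scoring feasible node
        return None                     # nothing renderable

    nodes = sorted(
        [{"score": 0.9, "color_depth": 24}, {"score": 0.7, "color_depth": 8},
         {"score": 0.4, "color_depth": 1}],
        key=lambda n: n["score"], reverse=True)

    best = sll_search(nodes, lambda n: n["color_depth"] <= 8)   # device supports 8-bit color
    print(best)                                                 # -> the 0.7 node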

Score tree

  • The next two negotiation algorithms use a balanced binary tree (for example, a red-black tree) to reduce the worst-case search complexity to O(lg n) or O(tree height).
  • The score tree's main advantage over the score list is reduced real-time processing overhead.
  • Deploying the adaptation service involves frequent accesses of the data structure, so the fewer score nodes the decision engine must traverse, the better its performance.
  • Fortunately, in adaptation applications, once the tree is initialized, it seldom requires modification.
  • The initialization might incur some overhead, but this should not be too significant because initialization occurs only once.

ORST

  • The ordered-relation score tree negotiation algorithm works when the authors can identify an ordered-relation property for the decision logic.
  • It works similarly to the classic binary tree search but with the decision function dictating tree traversal.
  • During preprocessing, this algorithm marks each subtree with a value indicating the minimum resource that a node in that subtree requires.
  • The decision function can decide whether to visit a subtree by comparing this value with the client device's acceptable resource level.
  • This can lead to savings from not having to visit all the nodes (see the sketch after this list).
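The sketch below illustrates the ORST idea under an assumed single resource metric: score nodes sit in a binary search tree keyed by the resource a version requires, each subtree is annotated during preprocessing with the minimum requirement it contains, and the search skips subtrees the device cannot afford. The node layout and the specific metric are assumptions, not the authors' exact structure.

    # ORST sketch, assuming the ordered-relation property: a version's score
    # grows with the resource it needs, so score nodes can sit in a binary
    # search tree keyed by required resource. min_req marks the smallest
    # requirement in each subtree, letting the search prune infeasible subtrees.

    class Node:
        def __init__(self, required, score, left=None, right=None):
            self.required, self.score = required, score
            self.left, self.right = left, right
            self.min_req = required             # filled in by annotate()

    def annotate(node):
        # Preprocessing pass: mark each subtree with its minimum requirement.
        if node is None:
            return float("inf")
        node.min_req = min(node.required, annotate(node.left), annotate(node.right))
        return node.min_req

    def orst_search(node, available):
        # Best (highest-score) node whose requirement fits the available resource.
        best = None
        while node is not None:
            if node.min_req > available:        # whole subtree infeasible: prune
                break
            if node.required <= available:
                best = node                     # feasible; try costlier, better versions
                node = node.right
            else:
                node = node.left                # too demanding; try cheaper versions
        return best

    root = Node(50, 0.6,
                Node(20, 0.3, Node(10, 0.1)),
                Node(80, 0.9, Node(60, 0.7), Node(120, 1.0)))
    annotate(root)
    hit = orst_search(root, available=70)
    print(hit.required, hit.score)              # -> 60 0.7

With the ordered relation in place, each request costs O(tree height) rather than a full scan of the score nodes.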

NORST

  • When determining whether a metric has the ordered-relation property is difficult or impossible, the authors cannot model the decision logic with a comparison relation.
  • Using the linked-list data structure and the linked-list traversal in this case would guarantee the best score for the adaptation, but the traversal overhead would be O(n).
  • To achieve better efficiency, the authors allow a trade-off between the traversal overhead and the optimization's accuracy.
  • The authors can derive its optimization accuracy (of finding the optimal score node) and a bound on the probability that the optimal node is not returned (1 − optimization accuracy).

SLL-NORST

  • If the authors need guaranteed higher accuracy, they can use a mixed algorithm: the score linked list-nonordered-relation score tree (SLL-NORST) negotiation algorithm.
  • SLL-NORST can preserve SLL's optimization accuracy while exploiting NORST's reduced overhead over a series of requests.
  • The authors can define a threshold for the optimization accuracy, A_threshold (for example, 70 percent), such that when NORST's accuracy level falls below A_threshold, the system will automatically switch to SLL (see the sketch after this list).
  • This guarantees that the resulting optimization's accuracy will be bounded by this threshold value.
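A rough sketch of the switching policy follows. The text here does not detail how NORST itself traverses the tree, so norst_search below is only a stand-in that inspects a bounded number of nodes, and its accuracy estimate and the 70 percent threshold are illustrative assumptions.

    # SLL-NORST sketch: serve requests with a cheap, bounded NORST-style search
    # while its estimated accuracy stays above a threshold; fall back to the
    # exhaustive SLL scan otherwise. norst_search is only a stand-in.

    A_THRESHOLD = 0.70              # required optimization accuracy (illustrative)

    def sll_search(nodes, feasible):
        # nodes sorted by score, highest first; guarantees the optimal feasible node
        return next((n for n in nodes if feasible(n)), None)

    def norst_search(nodes, feasible, budget=8):
        # Stand-in: inspect at most `budget` nodes, keep the best feasible one.
        best = None
        for n in nodes[:budget]:
            if feasible(n) and (best is None or n["score"] > best["score"]):
                best = n
        accuracy = min(1.0, budget / max(len(nodes), 1))   # crude accuracy estimate
        return best, accuracy

    def negotiate(nodes, feasible):
        result, accuracy = norst_search(nodes, feasible)
        if accuracy < A_THRESHOLD:          # below threshold: switch to SLL to
            return sll_search(nodes, feasible)   # guarantee the bounded accuracy
        return result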

The PDF Document Content Adaptation System

  • The authors' PDF Document Content Adaptation System is aware of the user context in five quality domains: color, downloading time, scaling, modality, and segment.
  • The scaling domain has four values corresponding to the output format: WML, HTML, bitmap, and PDF.
  • For the network context, the authors use parameters such as bandwidth and round-trip time of some popular communication channels-for example, Code Division Multiple Access, General Packet Radio Service, and Cellular Digital Packet Data.
  • In practice, techniques for automatically discovering the client device type (through, for example, HTTP headers), networking characteristics, and some means of client identification (explicit userid or cookies) would generate the necessary context information (a sketch of such context profiles follows this list).
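Concretely, the context the prototype works from can be pictured as a few small profiles like those below; all field names and numbers are illustrative assumptions, not the authors' actual configuration or measurements.

    # Illustrative context profiles for the quality domains and the
    # device/network context; values are made-up examples.

    user_preference_ranking = ["downloading_time", "modality", "scaling", "color", "segment"]

    device_profile = {
        "screen": (150, 150),          # pixels
        "buffer_size_kb": 512,
        "color_depths": [1, 8],        # supported bit depths
        "syntaxes": ["WML", "HTML"],   # markup the device can render
    }

    network_profile = {                # assumed figures for a GPRS-class channel
        "bandwidth_kbps": 40,
        "round_trip_ms": 600,
    }

    def estimated_download_seconds(size_kb, net):
        # First-cut estimate used to check the downloading-time domain.
        return size_kb * 8 / net["bandwidth_kbps"] + net["round_trip_ms"] / 1000.0

    print(estimated_download_seconds(120, network_profile))   # ~24.6 s for a 120-KB version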

Network characteristics

  • The authors tested the system's adaptability to different network characteristics by varying bandwidth while keeping the other factors constant.
  • As Figure 7c shows, the system switches modality to suit the connection's current bandwidth to keep downloading time within the allowed tolerance.

Device capability

  • To test the system's ability to handle heterogeneous devices, the authors adjusted the device's memory buffer size to see whether the system will automatically return the optimal content version.
  • The results were similar to those for the network characteristics and closely agreed with their expectations.
  • Such function adaptation is much more complex than content adaptation.
  • A mixed approach can yield a productive balance between these two modes, leading to the most costeffective methodology for content synthesis, with the decision engine's guidance.



A Context-Aware Decision Engine for Content Adaptation

Wai Yip Lum and Francis C.M. Lau, University of Hong Kong

Mobile-device users often wish that they could access the variety of rich hypermedia content that exists or will exist on the Web. Given, however, these devices' constrained computational and rendering power and cellular networks' limited bandwidth, effective Web content presentation will require new computation patterns. The mismatch between rich multimedia content and constrained client capability presents a research challenge.

Mobile devices' variety also increases the difficulty of accessing content. For example, mobile devices are conveniently sized to fit in a pocket, but this size constrains their display area.[1] Creating trimmed versions of content could get around this constraint, but differences in display capabilities would easily make a device-specific authoring approach too costly to be practical.[2] Examples of device differences include screen sizes ranging from 20 × 5 characters to thousands of pixels, and color depths ranging from two-line black-and-white display to full-color display.[3] Web content can also be encoded in many different modes, such as JPEG, which best suits PCs, and the 1-bit WBMP format for Wireless Application Protocol (WAP) devices. Furthermore, mobile devices support many different markup languages, including HTML, HDML (Handheld Device Markup Language), and WML (Wireless Markup Language). If the client device subscribes to a Web site that uses a presentational mode that the device cannot render, information loss might result.

While users complain about the "World Wide Wait" problem, owing partly to slow last-mile speeds, cellular networks work at far lower data rates, which work fine for plain text but are far from adequate for Web pages.[4] Much Internet content and various other types of multimedia information that mobile applications running on PDAs or notebooks use, such as for identifying locations (for example, the nearest restaurant) or describing product displays (for stock inventory), are unsuitable for smaller devices such as WAP phones.

To tackle these problems, we propose a content adaptation system. Such a system decides the optimal content version for presentation and the best strategy for deriving that version, and then generates that version. The system's most crucial component is the decision engine, which negotiates for that strategy. The engine takes into account the entire computing context (see the "Context Awareness" sidebar), focusing particularly on the user's preferences. A prototype PDF document adaptation system demonstrates our approach's viability.

Content adaptation

The process begins when someone uses a mobile device to submit a request to the system—that is, to the content provider via an intermediary proxy server (see Figure 1). After the system identifies the user, it inputs context information to the decision engine, which resides on the proxy server. On the basis of some scoring scheme applied to different content versions, the decision engine then executes an algorithm that computes the optimal version that is renderable with the current client device and network characteristics. "Version" at this stage means a set of desired settings such as color depth, scaling factor, and presentation format. The decision engine sends the results to the transcoder, which generates the desired content version. The intermediate proxy then sends the adapted content to the target device for rendering.
Contextualization
To design a good adaptation service, we must understand the environment sufficiently. We can gain such an understanding through a contextualization framework that facilitates the expression and capturing of context information.

One such framework is the Resource Description Framework-based Composite Capability/Preference Profile (www.w3.org/Mobile/CCPP). The CC/PP can describe the capabilities of a client device, called a user agent, and the user's specified preferences within the user agent's set of options. It offers a mechanism for retrieving capability and preference profiles via the Web, from hardware or software vendors. This approach reduces the amount of information that the user agent must directly send through a limited-bandwidth communication channel.

For J2ME (Java 2 Platform, Micro Edition; http://java.sun.com/j2me) developers, the Connected Limited Device Configuration defines a standard Java platform for small, resource-constrained, connected devices and enables the dynamic delivery of Java applications and content to those devices. The only profile currently developed for the CLDC configuration is the Mobile Information Device Profile.[5]

Both the CC/PP and MIDP can provide the declarative semantics needed to determine the adaptation settings for transcoding to produce the optimized content for a specified user agent. So, this article is concerned mainly with the procedural semantics.
Transcoding techniques
On a related front, considerable research has addressed techniques for different transcoding methods. Little research, however, has addressed having the decision engine provide quality-of-service-sensitive decisions to compensate for or minimize the losses due to transcoding. A gap seems to exist between the declarative specification of the client characteristics (such as the CC/PP) and what the various transcoding techniques can achieve. We propose a negotiation model that the decision engine would use to bridge this gap. Negotiation happens between a representation of the user preferences and a decision function based on real-time parameters.

In existing transcoding methods, operations such as compression, color depth reduction, image scaling, and so on are lossy[6] and cannot be blindly applied in all application scenarios. These operations should be managed according to strategies synthesized from all related contextual information sources.
Device- and user-specific preferences
Timothy Bickmore and Bill Schilit's research on device-independent access offers good insight on handling client device variability by staying away as much as possible from creating content versions specifically for individual device types.[2]
Sidebar: Context awareness

A client device's characteristics and capabilities are part of the context of a client environment where Web content rendering occurs. Context includes any information that can characterize an entity's situation.[1] An entity could be a person, place, or object that is relevant to interaction between a user and an application. The user and the application themselves are such entities.

This definition makes designing the relevant context easier because we don't have to examine the context's implicit and explicit nature.[2] Unlike human-human interaction, the distinction between implicit and explicit context information (for example, nodding the head versus saying "Yes, I will drive you to the bank") is blurred or irrelevant for human-machine interaction because of the semantic gap between machines and humans. Instead, the concepts of qualitative and quantitative context information are more applicable, as we discuss in the main article.

A system is context aware if it uses contexts to provide relevant information or services to the user, where relevancy depends on the user's task.[1] This definition is more general than the ones that Richard Hull and his colleagues[3] and Jason Pascoe and his colleagues[4] provided, whereby context-aware applications must detect, interpret, and respond to contexts. Here, we consider only the interpretation of and response to contexts. We leave the detection mechanism's sensor design to other discovery systems, which we can cleanly separate from the main context-aware process.

REFERENCES

1. G.D. Abowd and A.K. Dey, "Towards a Better Understanding of Context and Context-Awareness," Proc. 1st Int'l Symp. Handheld and Ubiquitous Computing (HUC 99), Lecture Notes in Computer Science, no. 1707, Springer-Verlag, Heidelberg, Germany, 1999, pp. 304-307.

2. A. Schmidt, "Implicit Human Computer Interaction through Context," Personal Technologies, vol. 4, nos. 2-3, June 2000, pp. 191-199.

3. R. Hull, P. Neaves, and J. Bedford-Roberts, "Towards Situated Computing," Proc. 1st Int'l Symp. Wearable Computers, IEEE CS Press, Los Alamitos, Calif., 1997, pp. 56-63.

4. J. Pascoe, "Adding Generic Contextual Capabilities to Wearable Computers," Proc. 2nd Int'l Symp. Wearable Computers, IEEE CS Press, Los Alamitos, Calif., 1998, pp. 92-99.

Armando Fox and Eric Brewer's research is in the same vein and suggests that clients generally vary along three important dimensions: network, hardware, and software.[7] The practical approach in response to such research assumes that the application scenario's basis is the client device's possible variations. In this article, we also examine client variability along the line of user perception.

Richard Han, Veronique Perret, and Mahmoud Naghshineh have discussed the concept of multiuser and multidevice browsing, focusing on the specific application scenario of content browsing in a lecture.[8] This scenario required creating two different content views because the presenter might wish to view his or her notes for each slide but prevent the audience from viewing them. Also, forward-and-backward navigation access should be limited to the presenter only. These separate views point to the need for user-specific requirements in addition to device-specific ones. Applying this to content adaptation, we can imagine creating different views of the same content for different users according to their preferences. Such a user-centric approach and the resulting content should increase user satisfaction.
Qualitative user preference and quantitative content value

Content adaptation systems' decisions can take into account numeric values associated with different content versions or transcoding strategies. Not all user preferences, however, are easily expressible in terms of numeric values in a certain context, for example, a user's perception of color. On the other hand, providing qualitative information on a user's preference for one quality domain over another would be trivial. The user merely needs to specify the preference without any exact quantification. A preference relation (for example, ranking) that connects different quality domains can describe the qualitative information.

To reduce the user's workload, assigning a numeric score to a particular content version should be automatic. To automate this assignment, researchers have proposed resource-based content value.[9,10] This value depends on the client device's resources that can be used to render the content. In this article, however, we follow a different direction; we base the content value on user preferences. Such a content score should lead to the best user satisfaction, because quality of service (QoS) is a user-oriented property.[11]
The decision engine
Our decision engine aims to increase users' satisfaction in their subscribing to Internet contents in a constrained mobile computing environment. The engine automatically negotiates for the appropriate content adaptation decisions that the transcoder will use to generate the optimal content version. A separate paper describes the transcoding part of the system.[12]

Because our QoS-sensitive approach compensates for transcoding's lossy nature,[6] it reduces the chance of serious loss of quality in various domains. The decision engine tries to arrive at the best trade-off for content adaptation while minimizing content degradation due to lossy transcoding. It is aware of different types of context information such as the user's preferences, the device's rendering capability, and the network connection's characteristics (see Figure 2).

The engine relies on a user's indication of preferences according to his or her perception in different quality domains. On the basis of these preferences, the engine can devise

  • A method to express quantitatively any given content's quality along various quality axes
  • Algorithms to negotiate for an optimal content version with some guarantee on the returned objects' QoS

Figure 1. Content adaptation's overall structure. (A client device reaches an intermediary proxy server over a wireless connection and gateway; the proxy hosts the decision engine, transcoder, and caching proxy, draws on the user context, content profile, and network context, and connects to the content provider over HTTP.)
The critical issue in designing a content adaptation system is how to determine a trade-off that guarantees the desired QoS by interpolating from context information, without modifying the underlying system, including the client device and the content provider. That is, we desire "zero administration"[6] on the client side. In our decision engine, content negotiation performs this trade-off. This process takes into account factors such as processing overhead, optimization accuracy, and context awareness.

Content negotiation

Content negotiation has two stages: preprocessing and real-time processing (see Figure 3).
Preprocessing
This stage occurs before the user request arrives.

Data-type analysis. The set of quality axes for different types of multimedia content forms the working properties that precisely define the QoS for any multimedia content. We view the quality of a specific version of an object as a point in an n-dimensional space, where n is the number of different qualities. For example, take the QoS of a PDF document:

    QoS_pdf-document = (color, scaling, segment, downloading-time, modality)

Modality addresses the change in the content's presentation scheme to render the content in diverse devices (see Figure 4). For example, a handheld device could display a PDF document in its original PDF format. However, considering the cellular network's constrained bandwidth and the handheld device's limited resources, it is advisable to convert the document to a format that the device can comfortably render, such as WML, which many WAP devices support.
Quantitative analysis of quality. To facilitate the expression and automatic processing of QoS parameters, we need a quantitative approach to characterizing QoS in any axis. We define a metric based on quality value (qv). Take the color quality axis as an example. We assign an 8-bit color image a larger qv than that of a 1-bit black-and-white image. However, expressing or measuring quantitatively the loss of qv in terms of colors is not as straightforward. Also, different quality axes will have different QoS characteristics; such diversity makes capturing all the relevant characteristics quantitatively a nontrivial task.

Figure 2. From contextual information to transcoding strategies. (The negotiation process takes as input a user preference profile: color, scaling, segment, modality, and timing perception plus a threshold time; content metadata: spatial size, content purpose, available modalities, and available color depths; a device capability profile: buffer size, supported color depth, encoding modes, syntaxes, and screen dimension; and networking parameters: bandwidth and round-trip time. From these it selects transcoding strategies such as PDF to HTML, PDF to image, BMP to WBMP, HTML to WML, color depth reduction, image scaling, image cropping, and HTML segmentation.)

Figure 3. Content negotiation's two stages: preprocessing and real-time processing. (Preprocessing covers data-type analysis, score evaluation, and score node representation scheme initialization from user preferences and content metadata; real-time processing covers the decision logic and the score node selection algorithm, driven by device capabilities and networking parameters, and outputs adaptation strategies to the transcoding modules.)

We use first-order or second-order modeling to monitor the change of qv against the quantization step (qs). A quantization step is a step in a scale covering the possible settings of a particular quality axis, for example, 2, 16, 256, and so on for color. For first-order modeling, qv increases or decreases linearly as qs increases or decreases. Second-order modeling characterizes the modeling curve as a second-order equation. This modeling applies saturation whereby qv will attain saturation near the far end of qs. At or near saturation, qv will insignificantly increase or decrease when qs changes. Consider again the color quality axis. When qs is at 16-bit colors (65,536 colors), adding more colors will insignificantly affect qv. In contrast, increments of colors near the value of 2 colors will more greatly affect the perceived quality. We expect most user preferences will exhibit such a saturation pattern. To model quality domains that do not have such a saturation behavior, for example, the modality quality axis, we can use first-order equations.

Together, these two models can capture most if not all of the behavior of the most common quality axes. If users have a strong sensitivity to a quality axis that these two models do not cover, they can input their desired modeling curves in the system.
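As a minimal illustration of this modeling, the sketch below uses a linear curve for axes without saturation and a concave quadratic whose slope vanishes at the top of the qs scale; the exact curves and coefficients the system uses are not given here, so these parameterizations are assumptions.

    # Illustrative qv(qs) models. qs is normalized to [0, 1] over an axis's
    # quantization-step scale (e.g., 1-bit ... 24-bit color); qv is in [0, 1].

    def qv_first_order(qs):
        # Linear: quality value grows proportionally with the quantization step.
        return qs

    def qv_second_order(qs):
        # Concave quadratic: large gains at the low end, saturation near the top.
        return 2 * qs - qs ** 2

    for qs in (0.0, 0.1, 0.5, 0.9, 1.0):
        print(qs, round(qv_first_order(qs), 2), round(qv_second_order(qs), 2))
    # Near qs = 1.0 the second-order curve barely changes (saturation);
    # near qs = 0.0 the same step in qs buys much more perceived quality.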
Score evaluation and representation. Using quality axes, users can easily indicate their preferences. For example, a user might have a weak sensitivity to color but a strong sensitivity to difference in dimensional size. So, the user will rank the color quality axis lower than the scaling axis. Users can input their specific preference (through ranking) for each quality axis, and each axis's relative weight can then be calculated. An aggregate score can then be computed based on these weights.

Having the user manually assign the score to each possible version of content would be difficult and impractical. Also, using only resource utilization measures to determine the score is unreasonable.[9,10] We offer a mechanism that automatically assigns a score to a content version, taking into account user perceptions in different quality domains. The set of aggregate scores serves as the main input for the decision engine to determine the optimal version.

Scores corresponding to different versions must be stored in some organized structure to facilitate efficient searching. This structure is either a linked list or a tree where each node represents a content version and stores the corresponding score. These score nodes also contain the adaptation settings (the qs's) for possible subsequent generation of this content version. A score node is not tied to any specific Web content; neither does it replicate any actual content from the content provider. It is generic and applicable to any content. The actual content comes into the picture only during transcoding. The same is true of the global structure containing all the score nodes.

At initialization time, the decision engine creates a search space consisting of all possible score nodes, which covers all the possible adaptation decisions that the decision engine can make. The engine computes each node's score during preprocessing when the user registers and specifies his or her preferences. This computation can occur during preprocessing because of the score-version association's static nature. That is, we expect that users will rarely change their preferences for the various quality domains. The decision engine preprocesses the established score node space into a suitable data structure such that when the adaptation system receives a request, the real-time heuristic search will be effective and efficient. The construction and initialization of this structure represents the end of preprocessing.
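The preprocessing just described can be pictured as enumerating every combination of per-axis settings, scoring each combination once with the user's weights, and sorting the result. The axis settings, the qv mapping, and the weights below are illustrative assumptions.

    # Illustrative preprocessing: enumerate all score nodes (one per combination
    # of per-axis settings), score each with the user's weights, sort by score.
    from itertools import product

    axes = {                                  # assumed quantization steps per axis
        "color":    [1, 8, 24],               # bit depth
        "scaling":  [0.25, 0.5, 1.0],         # scale factor
        "modality": ["WML", "HTML", "PDF"],
    }

    def qv_of(axis, setting):
        # Assumed mapping from a setting to a normalized quality value in [0, 1].
        steps = axes[axis]
        return (steps.index(setting) + 1) / len(steps)

    def build_score_nodes(weights):
        nodes = []
        for combo in product(*axes.values()):
            settings = dict(zip(axes.keys(), combo))
            score = sum(weights[a] * qv_of(a, s) for a, s in settings.items())
            nodes.append({"score": score, "settings": settings})
        nodes.sort(key=lambda n: n["score"], reverse=True)   # ready for SLL or tree building
        return nodes

    nodes = build_score_nodes({"color": 0.2, "scaling": 0.5, "modality": 0.3})
    print(len(nodes), nodes[0])               # 27 nodes; highest-score node first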
Real-time processing
This stage processes the user’s request.
Decision logic and score node selection. A user's score nodes capture all the possible combinations of preference values in various quality domains. They do not include values of real-time parameters such as the characteristics (metadata) of the Web object being requested, the network connection's characteristics, and the device's capability. The decision logic aims to find the best scoring node corresponding to a version of the content that is renderable given those parameters. This is a negotiation process between the data structure containing the user's preference information and a decision engine's decision function (see Figure 5).

During negotiation, the negotiation algorithm systematically traverses the score nodes. This iterative heuristic search tries to find the optimal score node. To locate the optimal node, the negotiation algorithm examines each score node and generates a binary decision (True or False) based on the client device's capability, the network parameters, the adaptation settings in the score node, and the content itself:

    T || F = decision(score-node, content-metadata, network-parameters, device-capability)

Score-node provides the settings of the quality axes. The binary decision True indicates that if the adaptation system transcodes the content according to the score node's adaptation settings, the target device will be able to render it in the current network environment. The binary decision False indicates otherwise. The decision function, decision(), negotiates iteratively with the score node data structure until it finds a satisfactory score node with a True decision.
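A hedged sketch of what such a decision function might check for one score node appears below; the individual feasibility tests (markup support, color depth, buffer size, download-time tolerance) and all thresholds are assumptions drawn from the quality domains named in the article, not the authors' exact rules.

    # Illustrative decision(): True if the version described by the score node's
    # adaptation settings is renderable on the device over the current network.

    def decision(score_node, content_metadata, network, device):
        s = score_node["settings"]
        if s["modality"] not in device["syntaxes"]:
            return False                                   # device cannot parse this markup
        if s.get("color_depth", 1) > max(device["color_depths"]):
            return False                                   # unsupported color depth
        size_kb = content_metadata["size_kb"] * s.get("scale", 1.0)
        if size_kb > device["buffer_size_kb"]:
            return False                                   # version would not fit in memory
        seconds = size_kb * 8 / network["bandwidth_kbps"] + network["round_trip_ms"] / 1000.0
        return seconds <= score_node.get("max_download_s", 30)   # within time tolerance

    ok = decision(
        {"settings": {"modality": "WML", "color_depth": 1, "scale": 0.25}},
        {"size_kb": 400},
        {"bandwidth_kbps": 40, "round_trip_ms": 600},
        {"syntaxes": ["WML"], "color_depths": [1], "buffer_size_kb": 512})
    print(ok)                                              # -> True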
Figure 4. The concept of modality. (The figure arranges a document along three levels, semantics, encoding mode, and syntax, with a document's encoding modes including PDF encoding, text, and image, and syntaxes including PDF, HTML, WML, JPEG, WBMP, and BMP.)

Citations
Journal ArticleDOI
TL;DR: In this paper, the authors draw on social exchange theory and heuristic-systematic model to examine how peer-to-peer (P2P) lending firms can enhance their customer acquisition by achieving mobile social media popularity.
Abstract: The purpose of this paper is to draw on social exchange theory and heuristic-systematic model to examine how peer-to-peer (P2P) lending firms can enhance their customer acquisition by achieving mobile social media popularity. Content data collected from multiple sources (websites and mobile applications) were employed to validate the research model. The mobile social media popularity of P2P lending firms positively influences their customer acquisition. Furthermore, the heuristic cues (i.e. source credibility and content freshness) and the systematic cue (i.e. transaction relevance) potentially affect the firms' mobile social media popularity. Mobile social media is not only a platform for firms' image-building but a critical means of acquiring actual customers. The appropriate use of heuristic-systematic cues in a mobile interface is useful for firms to achieve high user popularity despite the challenges derived from the mobile context. To achieve higher user popularity in the competitive online world, firms should dedicate greater effort in determining the adequate heuristic-systematic cues designed for the interface of their mobile social media account. The effect of popularity can then help the firms acquire more customers. This study extends the understanding of social exchange in the context of mobile social media accounts and enriches the knowledge on business value of mobile social media popularity. This paper also contributes to the literature by relating heuristic-systematic cues to firms' mobile social media popularity.

9 citations

Proceedings ArticleDOI
29 Nov 2009
TL;DR: An algorithm called Multimedia Adaptation Graph Generator (MAGG) is presented that composes distributed multimedia adaptation services that are tested in a Distributed Content Adaptation Framework (DCAF) prototype and its experimental result is presented.
Abstract: Content adaptation is an attractive and effective solution to resolve the mismatch of resources and properties between the delivery context and the multimedia content in heterogeneous environments. The problem with multimedia content adaptation is that there is no a single complete software solution that can satisfy all types of required adaptation needs. In order to solve this problem adaptation tools can be developed as services for example using Web Services and make them accessible via standard web protocols. Since adaptation is a multi-step process, an adaptation need can be realized as composition of a number of adaptation services. However, it becomes difficult when there are several services to realize an adaptation need which leads to different composition possibilities. In this paper we present an algorithm called Multimedia Adaptation Graph Generator (MAGG) that composes distributed multimedia adaptation services. The algorithm is tested in a Distributed Content Adaptation Framework (DCAF) prototype and its experimental result is presented.

9 citations


Cites background from "A context-aware decision engine for..."

  • ...In general these works can be categorized into two main approaches: static and dynamic [15]....


Proceedings ArticleDOI
23 Sep 2008
TL;DR: This paper presents a context-driven content adaptation planner, which dynamically transforms requested Web content into a proper format conforming to receiving contexts, and applies description logics to formally define context profiles and requirements and automate content adaptation decision.
Abstract: This paper presents our design and development of a context-driven content adaptation planner, which dynamically transforms requested Web content into a proper format conforming to receiving contexts (e.g., access condition, network connection, and receiving device). Aiming to establish a semantic foundation for content adaptation, we apply description logics (DLs) to formally define context profiles and requirements and automate content adaptation decision. In addition, the computational overhead caused by content adaptation can be moderately decreased through the reduction of the size of adapted content.

9 citations


Cites methods from "A context-aware decision engine for..."

  • ...A number of context-based adaptation methods [18][12][19][20][21][16][22] are proposed to...


Journal ArticleDOI
TL;DR: The experimental results show that the RESP framework can approximate the optimal cache replacement with much lower execution time for processing user queries.
Abstract: The technology advance in network has accelerated the development of multimedia applications over the wired and wireless communication. To alleviate network congestion and to reduce latency and workload on multimedia servers, the concept of multimedia proxy has been proposed to cache popular contents. Caching the data objects can relieve the bandwidth demand on the external network, and reduce the average time to load a remote data object to local side. Since the effectiveness of a proxy server depends largely on cache replacement policy, various approaches are proposed in recent years. In this paper, we discuss the cache replacement policy in a multimedia transcoding proxy. Unlike the cache replacement for conventional web objects, to replace some elements with others in the cache of a transcoding proxy, we should further consider the transcoding relationship among the cached items. To maintain the transcoding relationship and to perform cache replacement, we propose in this paper the RESP framework (standing for REplacement with Shortest Path). The RESP framework contains two primary components, i.e., procedure MASP (standing for Minimum Aggregate Cost with Shortest Path) and algorithm EBR (standing for Exchange-Based Replacement). Procedure MASP maintains the transcoding relationship using a shortest path table, whereas algorithm EBR performs cache replacement according to an exchanging strategy. The experimental results show that the RESP framework can approximate the optimal cache replacement with much lower execution time for processing user queries.

8 citations


Additional excerpts

  • ...Such a technique is regarded as content adaptation [16]....


Dissertation
01 Jan 2006
TL;DR: The image quality of historical black-and-white films differs significantly from that of current videos, so reliable analysis with existing methods is often not possible.
Abstract: The transition from analog to digital video has led to major changes within film archives in recent years. The digitization of films in particular opens up new possibilities for the archives. Wear or aging of the film reels is ruled out, so the quality remains unchanged. In addition, network-based and thus considerably simpler access to the videos in the archives becomes possible. Additional services are available to archivists and users that provide extended search capabilities and ease navigation during playback. Searching within the video archives is carried out with the help of metadata that provides further information about the videos. A large part of the metadata is entered manually by archivists, which involves a great deal of time and high costs. Computer-aided analysis of a digital video makes it possible to reduce the effort of generating metadata for video archives. The first part of this dissertation presents new methods for recognizing important semantic content in videos. In particular, newly developed algorithms for shot detection, camera motion analysis, object segmentation and classification, text recognition, and face recognition are presented. The automatically derived semantic information is very valuable because it eases work with digital video archives. The information not only supports searching in the archives but also leads to the development of new applications, which are presented in the second part of the dissertation. For example, computer-generated video summaries can be produced, or videos can be automatically adapted to the properties of a playback device. A further focus of this dissertation is the analysis of historical films. Four European film archives provided a large number of historical video documentaries that were shot in the early to mid twentieth century and digitized in recent years. Owing to storage and wear of the film reels over several decades, many videos are very noisy and contain clearly visible image defects. The image quality of the historical black-and-white films differs significantly from the quality of current videos, so reliable analysis with existing methods is often not possible. This dissertation presents new algorithms that enable reliable recognition of semantic content in historical videos as well.

8 citations


Cites background from "A context-aware decision engine for..."

  • ...Adaptation is performed on a server [359, 208, 387], on a proxy [186, 335], or directly on the client [301]....


References
Proceedings ArticleDOI
27 Sep 1999
TL;DR: Some of the research challenges in understanding context and in developing context-aware applications are discussed, which are increasingly important in the fields of handheld and ubiquitous computing, where the user?s context is changing rapidly.
Abstract: When humans talk with humans, they are able to use implicit situational information, or context, to increase the conversational bandwidth. Unfortunately, this ability to convey ideas does not transfer well to humans interacting with computers. In traditional interactive computing, users have an impoverished mechanism for providing input to computers. By improving the computer’s access to context, we increase the richness of communication in human-computer interaction and make it possible to produce more useful computational services. The use of context is increasingly important in the fields of handheld and ubiquitous computing, where the user?s context is changing rapidly. In this panel, we want to discuss some of the research challenges in understanding context and in developing context-aware applications.

4,842 citations

Journal ArticleDOI
TL;DR: In this article, an XML-based language to describe implicit human-computer interaction (HCI) is proposed, using contextual variables that can be grouped using different types of semantics as well as actions that are called by triggers.
Abstract: In this paper the term “implicit human-computer interaction” is defined. It is discussed how the availability of processing power and advanced sensing technology can enable a shift in HCI from explicit interaction, such as direct manipulation GUIs, towards a more implicit interaction based on situational context. In the paper, an algorithm is given based on a number of questions to identify applications that can facilitate implicit interaction. An XML-based language to describe implicit HCI is proposed. The language uses contextual variables that can be grouped using different types of semantics as well as actions that are called by triggers. The term of perception is discussed and four basic approaches are identified that are useful when building context-aware applications. Two examples, a wearable context awareness component and a sensor-board, show how sensor-based perception can be implemented. It is also discussed how situational context can be exploited to improve input and output of mobile devices.

685 citations

Journal ArticleDOI
TL;DR: This work presents a system that adapts multimedia Web documents to optimally match the capabilities of the client device requesting it using a representation scheme called the InfoPyramid that provides a multimodal, multiresolution representation hierarchy for multimedia.
Abstract: Content delivery over the Internet needs to address both the multimedia nature of the content and the capabilities of the diverse client platforms the content is being delivered to. We present a system that adapts multimedia Web documents to optimally match the capabilities of the client device requesting it. This system has two key components. 1) A representation scheme called the InfoPyramid that provides a multimodal, multiresolution representation hierarchy for multimedia. 2) A customizer that selects the best content representation to meet the client capabilities while delivering the most value. We model the selection process as a resource allocation problem in a generalized rate distortion framework. In this framework, we address the issue of both multiple media types in a Web document and multiple resource types at the client. We extend this framework to allow prioritization on the content items in a Web document. We illustrate our content adaptation technique with a web server that adapts multimedia news stories to clients as diverse as workstations, PDA's and cellular phones.

652 citations

Proceedings ArticleDOI
Jason Pascoe
19 Oct 1998
TL;DR: A prototype application has been constructed to explore how some of the contextual capabilities of the Contextual Information Service could be deployed in a wearable system designed to aid an ecologist's observations of giraffe in a Kenyan game reserve.
Abstract: Context-awareness has an increasingly important role to play in the development of wearable computing systems. In order to better define this role we have identified four generic contextual capabilities: sensing, adaptation, resource discovery, and augmentation. A prototype application has been constructed to explore how some of these capabilities could be deployed in a wearable system designed to aid an ecologist's observations of giraffe in a Kenyan game reserve. However, despite the benefits of context-awareness demonstrated in this prototype, widespread innovation of these capabilities is currently stifled by the difficulty in obtaining the contextual data. To remedy this situation the Contextual Information Service (CIS) is introduced. Installed on the user's wearable computer, the CIS provides a common point of access for clients to obtain, manipulate and model contextual information independently of the underlying plethora of data formats and sensor interface mechanisms.

615 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: Digestor is a software system which automatically re-authors arbitrary documents from the world-wide web to display appropriately on small screen devices such as PDAs and cellular phones, providing device-independent access to the web.
Abstract: Digestor is a software system which automatically re-authors arbitrary documents from the world-wide web to display appropriately on small screen devices such as PDAs and cellular phones, providing device-independent access to the web. Digestor is implemented as an HTTP proxy which dynamically re-authors requested web pages using a heuristic planning algorithm and a set of structural page transformations to achieve the best looking document for a given display size.

404 citations

Frequently Asked Questions (1)
Q1. What have the authors contributed in "A context-aware decision engine for content adaptation"?

To tackle these problems, the authors propose a content adaptation system. A prototype PDF document adaptation system demonstrates their approach's viability.