Journal ArticleDOI

A quality-of-service specification for multimedia presentations

01 Nov 1995 - Multimedia Systems (Springer-Verlag New York, Inc.) - Vol. 3, Iss. 5, pp. 251-263
TL;DR: It is shown how to define formal QOS constraints from a specification of ideal presentation outputs, and this definition enables meaningful requests for end-to-end service guarantees, while leaving the database system free to optimize resource management.
Abstract: The bandwidth limitations of multimedia systems force trade-offs between presentation-data fidelity and real-time performance. For example, digital video is commonly encoded with lossy compression to reduce bandwidth, and frames may be skipped during playback to maintain synchronization. These trade-offs depend on device performance and physical data representations that are hidden by a database system. If a multimedia database is to support digital video and other continuous media data types, we argue that the database should provide a quality-of-service (QOS) interface to allow application control of presentation timing and information-loss trade-offs. This paper proposes a data model for continuous media that preserves device and physical data independence. We show how to define formal QOS constraints from a specification of ideal presentation outputs. Our definition enables meaningful requests for end-to-end service guarantees, while leaving the database system free to optimize resource management. We propose one set of QOS parameters that constitute a complete model for presentation error, and we show how this error model extends the opportunities for resource optimization.

Summary (2 min read)

1 Introduction

  • The next section defines their terminology in terms of an architectural model for multimedia presentations.
  • Sections 3 and 4 describe a data model for the specification of content and view respectively for a presentation.


  • The Interval schema gives a start position and an interval extent.
  • The authors use this information to specify both clipping intervals and linear transformations.
  • This schema must contain the maximal set of dimensions for all output types.
  • When used for audio specifications, the authors simply ignore the x and y intervals.
  • The Space schema specifies intervals for the t, x, and y coordinate dimensions and a z interval for the output range (a sketch of these schemas follows below).

[Schema excerpt: start, end, duration : Content]
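Only fragments of the paper's Z schemas survive in the excerpt above. As a rough sketch only, with field names that are assumptions rather than the paper's definitions, the Interval and Space structures could be rendered as:

    # Rough sketch only: approximates the Interval and Space schemas
    # described above; field names are assumptions, not the paper's Z text.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        start: float    # start position in a logical dimension
        extent: float   # length of the interval

        @property
        def end(self) -> float:
            return self.start + self.extent

    @dataclass
    class Space:
        t: Interval     # logical time interval
        x: Interval     # horizontal interval (ignored for audio)
        y: Interval     # vertical interval (ignored for audio)
        z: Interval     # interval for the output value range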

  • For a given content specification, the logical function returns a relation between a point in the logical output space and the acceptable output values for that point.
  • Note that specifications reduce the set of acceptable values and where nothing is specified, all values are acceptable.
  • Each cat construct specifies a single logical output with a sequence of clip constructs.
  • Each clip specifies a portion of a transform construct and each transform construct defines the logical dimensions of a basic media source (see the sketch after this list).
  • This definition of content satisfies their goal of a data model for complex presentations except that the authors have no way to relate the logical content to actual presentation outputs.
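Continuing the sketch above (again with assumed names, not the paper's definitions), the cat/clip/transform composition might look like:

    # Rough sketch only, reusing Interval and Space from the sketch above.
    # A transform assigns logical dimensions to a basic media source, a clip
    # selects a portion of a transform, and a cat concatenates clips into a
    # single logical output.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Transform:
        source: str          # identifier of a basic media source (e.g. a video file)
        dimensions: Space    # logical dimensions assigned to that source

    @dataclass
    class Clip:
        transform: Transform
        portion: Interval    # portion of the transform's logical time that is used

    @dataclass
    class Cat:
        clips: List[Clip]    # played in sequence as one logical output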

[Schema excerpt: device types AudioDev, VideoDev]

  • The logical dimensions in a content specification are generally not the same as the physical dimensions of the view.
  • The Output schema declares a field tr that defines the transformation from logical to view output dimensions and a field clip that defines clipping bounds for view outputs (sketched below).
  • The clipping bounds for both audio and video match the full range of the transformed content.
  • This asymmetry is necessary to preserve the content synchronization while allowing flexibility in the display of multiple logical outputs.
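A correspondingly rough sketch of the Output declaration; tr and clip are the field names mentioned above, everything else is an assumption:

    # Rough sketch only, reusing Space from the earlier sketch. A view output
    # maps logical dimensions to physical device dimensions (tr) and bounds
    # what is displayed or heard (clip).
    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class AxisMap:
        scale: float     # logical-to-view scaling for one dimension
        offset: float    # logical-to-view translation for one dimension

    @dataclass
    class Output:
        device: str               # an AudioDev or VideoDev identifier
        tr: Dict[str, AxisMap]    # per-dimension transformation, logical -> view
        clip: Space               # clipping bounds for view outputs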

[Definition excerpt: a relation over tuples (d, x, y, t, z)]

  • The implementation of a presentation plan uniquely determines the value for every device at every point and time.
  • The vVal function takes a VideoDev and integer values for the clock, x, and y coordinates, and returns the integer value at that pixel.
  • The authors define a function actual that takes a particular presentation and returns a relation representing these output values.
  • The authors are assuming that they can observe only one output value per clock tick and that the output value is constant over the duration of a clock cycle.
  • The relation actual(P) and the relation ideal(c, v) have the same type (see the sketch below).
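A minimal sketch of the sampling idea in the bullets above, assuming a presentation object that exposes a vVal accessor (an assumption, not the paper's interface):

    # Rough sketch only: observe one output value per device pixel per clock
    # tick, yielding a relation with the same shape as the ideal relation.
    def actual(presentation, video_devices, ticks, width, height):
        """Return a set of (device, x, y, t, value) tuples for a presentation."""
        observed = set()
        for d in video_devices:
            for t in range(ticks):
                for x in range(width):
                    for y in range(height):
                        v = presentation.vVal(d, t, x, y)   # assumed accessor
                        observed.add((d, x, y, t, v))
        return observed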

5 Quality Specification

  • To calibrate this quality function to approximate user perception, the authors can adjust the values returned by the calib and synchCalib functions in the quality specification.
  • When an error component equals the corresponding critical error value, the quality is at most e^-1, or approximately 0.37 (see the sketch after this list).
  • Temporal shift, jitter, res, and synch are measured in seconds.
  • Comparable values have been reported by other researchers for synchronization error [16] , but more experimentation is needed to determine how these values depend on the content, task, and the person who rates the quality.
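The bullets above imply an exponential calibration in which quality falls to e^-1 when a single error component reaches its critical value. One function with that property, sketched here as an illustration (the combination rule and the calib/synchCalib interface are assumptions, not the paper's exact definition):

    import math

    # Rough sketch only. critical[] stands in for the calib/synchCalib values;
    # if one error equals its critical value and the others are zero, the
    # result is exactly exp(-1), roughly 0.37.
    def quality(errors, critical):
        """errors, critical: dicts keyed by error component (shift, jitter, res, synch)."""
        exponent = sum(errors[k] / critical[k] for k in errors)
        return math.exp(-exponent)

    print(quality({"shift": 0.1}, {"shift": 0.1}))   # ~0.3679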

6 Using Quality Specifications for Resource Reservation

  • Analysis of a QOS specification can identify a range of presentation plans that might satisfy the specification as illustrated above.
  • A multimedia player can perform this analysis automatically in response to playback requests.
  • To guarantee that a particular presentation plan will satisfy a QOS specification, a player must reserve resources for storage access, decompression, mixing, and presentation processes.
  • The admission test may invoke resource reservation protocols for network and file system resources with resource-level QOS parameters derived from the process timing requirements.
  • If the player cannot find a presentation plan that both satisfies the QOS requirements and meets the admission test, then the QOS requirements must be renegotiated (see the sketch after this list).
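A minimal sketch of that plan-selection and renegotiation loop; the three callables are placeholders for machinery the paper leaves to the player and resource manager, not its API:

    # Rough sketch only: search for a plan that satisfies the QOS spec and
    # passes the admission test, relaxing the spec if none is found.
    def schedule_presentation(qos_spec, candidate_plans, admission_test, renegotiate):
        while qos_spec is not None:
            for plan in candidate_plans(qos_spec):     # plans that might satisfy the spec
                if admission_test(plan):               # resources reservable for every task?
                    return plan
            qos_spec = renegotiate(qos_spec)           # relax quality requirements and retry
        return None                                    # request rejected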

7 Conclusions

  • The authors have implemented simple playback systems that make use of this QOS specification method.
  • More work is planned to investigate algorithms for translating QOS specifications into feasible presentation plans.


Portland State University
PDXScholar
Computer Science Faculty Publications and Presentations
Computer Science
11-1995

Quality of Service Specification for Multimedia Presentations

Richard Staehli
Oregon Graduate Institute of Science & Technology
Jonathan Walpole
Oregon Graduate Institute of Science & Technology
David Maier
Oregon Graduate Institute of Science & Technology

Follow this and additional works at: https://pdxscholar.library.pdx.edu/compsci_fac
Part of the Computer Engineering Commons, and the Databases and Information Systems Commons
Let us know how access to this document benefits you.

Citation Details
Staehli, Richard; Walpole, Jonathan; and Maier, David, "Quality of Service Specification for Multimedia Presentations" (1995). Computer Science Faculty Publications and Presentations. 68.
https://pdxscholar.library.pdx.edu/compsci_fac/68

This Post-Print is brought to you for free and open access. It has been accepted for inclusion in Computer Science Faculty Publications and Presentations by an authorized administrator of PDXScholar. Please contact us if we can make this document more accessible: pdxscholar@pdx.edu.

Quality of Service Specification for Multimedia Presentations*

Richard Staehli, Jonathan Walpole and David Maier
{staehli, walpole, maier}@cse.ogi.edu
Department of Computer Science & Engineering
Oregon Graduate Institute of Science & Technology
20000 N.W. Walker Rd., PO Box 91000
Portland, OR 97291-1000
ABSTRACT

The bandwidth limitations of multimedia systems force tradeoffs between presentation data fidelity and real-time performance. For example, digital video is commonly encoded with lossy compression to reduce bandwidth and frames may be skipped during playback to maintain synchronization. These tradeoffs depend on device performance and physical data representations that are hidden by a database system. If a multimedia database is to support digital video and other continuous media data types, we argue that the database should provide a Quality of Service (QOS) interface to allow application control of presentation timing and information loss tradeoffs. This paper proposes a data model for continuous media that preserves device and physical data independence. We show how to define formal QOS constraints from a specification of ideal presentation outputs. Our definition enables meaningful requests for end-to-end service guarantees while leaving the database system free to optimize resource management. We propose one set of QOS parameters that constitute a complete model for presentation error and show how this error model extends the opportunities for resource optimization.

Keywords: Data Model, Synchronization, Resource Reservations.
1 Introduction

Multimedia database systems are being extended to support presentations of continuous media [8], such as video and audio, as well as synthetic compositions such as slide shows and computer-generated music. We call these presentations time-based because they communicate part of their information content through presentation timing. While applications with text and numeric data types expect correct results from database queries, the real-time constraints of time-based presentations commonly make it impossible to return complete and correct results. Some information loss is also inevitable in any conversion of continuous media between analog and digital representations.
Consider the reproduction of NTSC video in a digital multimedia system. The video stream is typically captured at 640x480 24-bit samples/frame and 30 frames/second, but it is rarely stored or played back at this bandwidth. Instead, lossy compression algorithms such as the MPEG encoding [7] are used to reduce the bandwidth requirements in exchange for some loss in quality. In addition, if the display window is smaller than 640x480 then the presentation will lose even more of the encoded data resolution. Contention for shared resources between applications also contributes to bandwidth restrictions and timing errors. Real-time MPEG players commonly drop late frames rather than delay the remainder of the presentation.

* This research is supported by NSF Grants IRI-9223188 and IRI-9111008, and by funds from Tektronix, Inc. and the Oregon Advanced Computing Institute.
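For scale, a back-of-the-envelope calculation (not from the paper) of the raw data rate implied by the capture format above:

    # Rough illustration only: uncompressed 640x480, 24-bit, 30 frames/second video.
    width, height = 640, 480
    bytes_per_pixel = 3                              # 24-bit samples
    frames_per_second = 30
    raw_rate = width * height * bytes_per_pixel * frames_per_second
    print(raw_rate / 1_000_000)                      # ~27.6 MB/s before compression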
Since the usefulness of time-based presentations depends on the accuracy of both timing and data, computing the result of a query in a multimedia database is a question of quality rather than correctness. Where database design has traditionally been concerned with the delivery of correct results with acceptable delay, multimedia systems present a new challenge: to deliver results with acceptable quality in real-time. But how accurate must a presentation be to be acceptable, and how can we guarantee that a presentation achieves that accuracy? This paper helps to answer the first question by giving a formal definition of presentation quality that measures both accuracy of timing and the accuracy of output values. This definition of presentation quality is then used to specify presentation-level QOS requirements. The second question can be answered by a variety of the techniques found in existing systems [15]. However, we argue that current systems take an ad hoc approach to presentation quality. Without a specification of presentation QOS requirements, multimedia systems have no way of saying whether a presentation is acceptable or not. As a result, these systems tend either to be overly conservative, wasting resources in an attempt to guarantee maximal quality; or overly liberal, accommodating resource shortages by unconstrained degradation of quality.
The most common multimedia presentation tools use a best-effort approach, which aggressively consumes resources to present all data as promptly as possible. When resources are overloaded, a best-effort presentation will lose information. Many researchers have demonstrated best-effort systems that maintain approximate synchronization despite variable latencies and resource availability [6, 13, 2]. These systems show that a presentation can be acceptable even when quality degradation is noticeable, but we observe two problems with a best-effort approach. First, if perfect presentation is not necessary, why should a multimedia system expend extra "effort" for the best quality? Second, how much quality degradation can be allowed when many real-time presentations compete for scarce resources? If any application is to be guaranteed acceptable service, some information is needed about presentation QOS requirements.
Performance guarantees are an essential feature of real-time systems, including time-based presentations. While best-effort approaches offer only weak guarantees for synchronization, strong QOS guarantees for continuous media presentations have been demonstrated through reservation of processor, memory, network and storage system resources [11, 19, 1, 9, 12, 3]. The resource reservations are derived from low-level QOS parameters such as throughput, delay and jitter requirements for stream processing. We call this a guaranteed-best approach when reservations are based on requirements for a best-quality presentation. The primary problem with this approach is that it is often too expensive. For example, an instructional video with slow and deliberate motions may be digitized and stored at 30 frames/second, but playback at 15 frames/second is adequate for the purpose of instruction. The requirements for presentation quality depend not on the data type or view, but on the purpose of a presentation.
Others have recognized that best-quality presentations are often too expensive and unnecessary. The Capacity-Based Session Reservation Protocol (CBSRP) [17] supports reservation of processor bandwidth from the specification of a range of acceptable spatial and temporal resolutions for video playback requests. The resolution parameters are intended only to provide a few classes of service based on resource requirements and not to completely capture presentation quality requirements. Hutchinson et al. [5] suggest a framework of categories for high-level QOS specifications that include reliability, timeliness, volume, criticality, quality of perception and even cost. They provide only a partial list of QOS parameters to show that current QOS support in OSI and CCITT standards is severely limited. The error model we describe in this paper extends their approach to provide a complete set of parameters to constrain presentation quality. We define presentation quality to include only factors that affect perception of the information content of a time-based presentation.
Database technology offers many benefits for multimedia applications, such as high-level query languages, concurrency, and device and physical data independence. But current database systems do not adequately support time-based presentations. Relational data manipulation languages have demonstrated the value of letting the application specify what is wanted, and letting the database plan how to retrieve it. To support time-based presentations, a data manipulation language for a multimedia database should also allow the application to specify when, where, and how precisely the data should be delivered [10]. These constraints on delivery are an example of a QOS-based interface.

None of the proposed data models for time-based multimedia that we are aware of support queries for imprecise results. For example, Gibbs describes a data model that captures the structure and synchronization relationships of complex time-based multimedia presentations [4]. This model includes media descriptors that attach a quality factor, such as "VHS quality" or "CD quality", to each media object, but these labels describe the quality of the representation rather than the presentation. Without the notion of presentation quality in the data model, one would presume that all information would be preserved in the result of a query. In practice, information loss in a time-based presentation is inevitable and unconstrained by current data models.
This paper defines a methodology for presentation QOS specification. The definitions are intended to be general enough to apply to presentations in any multimedia system. In particular, our methodology supports the following goals:

Model user perception of quality. Just as modern compression algorithms exploit knowledge of human perception [18], a multimedia system can better optimize playback resources if it knows which optimizations have the least effect on perceived quality.

Formal semantics. We would like to be able to prove that a multimedia system can satisfy a QOS specification.

Complex data model. QOS specifications can be defined for a large class of complex multimedia presentations.

The next section defines our terminology in terms of an architectural model for multimedia presentations. Sections 3 and 4 describe a data model for the specification of content and view respectively for a presentation. We then define quality in Section 5 as a function of a presentation's fidelity to the content and view specification, in the context of an error model. We define one possible error model, and suggest in Section 6 how a formal QOS specification can be used to optimize resource usage in a presentation. Section 7 gives our conclusions.
2 Architectural Model

In our architectural model, shown in Figure 1, multimedia data come from live sources or from storage. Digital audio and video data have default content specifications associated with them that specify the sample size and rate for normal playback. A time-based media editor may be used to create complex presentations from simple content. A player is used to browse and play back content specified by the editor. A user may control a player's view parameters, such as window size and playback rate, as well as quality parameters such as spatial and temporal resolution. The combination of content, view, and quality specifications constitutes a QOS specification. When a user chooses to begin a presentation, the player needs to verify that a presentation plan consisting of real-time tasks will satisfy the QOS specification. A presentation plan is feasible if guarantees can be obtained from a Resource Manager for the real-time presentation tasks that transport and transform the multimedia data from storage or other data sources to the system outputs.
This architecture is similar to other research systems that provide QOS guarantees based on an admission test [13]. However, our definition of QOS is novel in that we make strong distinctions between content, view, and quality specifications. A content specification defines a set of logical image and audio output values as a function of logical time. A view specification maps content onto a set of physical display regions and audio output devices over a real-time interval. Quality is a measure of how well an actual presentation matches the ideal presentation of content on a view, and a quality specification defines a minimum acceptable quality measure. We will refer to quality when we mean the measure, and QOS when we mean the combination of content, view, and quality specifications.

Figure 1: An architecture for editing and viewing multimedia presentations.

Figure 2: Timeline view of content specification for a presentation of bicycling video with audio. [Timeline: the video track plays cam1 (source 100-105), then cam2 (source 50-53), then cam1 (source 108-115); the audio track plays mic1 (source 10-25); the presentation time axis is marked at 0, 5, 8, and 15 seconds.]
By allowing independent control of content, view and quality, a multimedia system can offer a wider range of services that take advantage of the flexibility of computer platforms. To illustrate these services, consider the presentation of video and audio as described in Figure 2. The first video clip refers to 5 seconds of a digital video file. The video file is named cam1 because it was captured with the first of two cameras recording the same bicycle racing event. The digital video for cam1 has a resolution of 320x240 pixels. A second video file named cam2 shows another view of the bicycling event and has a higher resolution of 640x480 pixels. The video presentation cuts from cam1 to cam2 for 3 seconds, and then back to cam1 for the last 7 seconds. The audio clip file mic1 contains a digital audio sound track recorded at the same time as the video clips. After selecting this content for presentation, a user should be able to choose view parameters and quality levels independently. For example, if the user chooses a view with a 640x480 pixel display window, but a quality specification that requires only 320x240 pixels of resolution, then the player may be able to avoid generating the full-resolution images from cam2. The quality specification allows the user to indirectly control resource usage independent of the content and view selections. The player can optimize resource usage so long as the presentation exceeds the minimum quality specification. Users might also like to specify an upper bound on cost for resource usage, but since cost is independent of information loss, constraints on cost are beyond the scope of this paper.
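As a concrete illustration of these independent selections (not from the paper; structure, field names, and units are assumptions), the Figure 2 timeline and the choices just described might be written down as:

    # Rough sketch only: the Figure 2 content specification plus one possible
    # view and quality selection for it. Names and units are assumptions.
    content = [
        # (track,  source, source interval (s), presentation interval (s))
        ("video", "cam1", (100, 105), (0, 5)),
        ("video", "cam2", (50, 53),   (5, 8)),
        ("video", "cam1", (108, 115), (8, 15)),
        ("audio", "mic1", (10, 25),   (0, 15)),
    ]

    view = {"window": (640, 480), "playback_rate": 1.0}

    # Only 320x240 of resolution is required, so the player may avoid
    # generating full-resolution frames from the 640x480 cam2 source.
    quality = {"resolution": (320, 240)}

    qos_spec = {"content": content, "view": view, "quality": quality}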

Citations
01 Jan 2002
TL;DR: The main elements of languages that support QoS specifications are the constructors of QoS-aware models; the most common include extensions of Interface Description Languages, UML extensions and metamodels, and mathematical models.
Abstract: This paper introduces the main elements of languages that support QoS specifications. These elements are the constructors of QoS-aware models. Different types of languages are used to specify QoS systems; the most common include extensions of Interface Description Languages, UML extensions and metamodels, and mathematical models. These are different approaches, although they use some common key elements. These QoS specification methods support the description of QoS concepts that are used for different purposes: i) generation of code for the management of QoS concepts (e.g., negotiation, access to resource managers), ii) specification of QoS-aware architectures, and iii) management of QoS information in QoS reflective infrastructures (e.g., QoS adaptable systems).
Book ChapterDOI
01 Jan 2002
TL;DR: This chapter summarizes research issues and the state-of-the-art technologies of MDBMSs from the perspective of multimedia presentations.
Abstract: Publisher Summary Multimedia computing and networking changes the style of interaction between computer and human. With the growth of the Internet, multimedia applications such as educational software, electronic commerce applications, and video games have brought a great impact on the way humans think of, use, and rely on computers/networks. One of the most important technologies to support these applications is distributed multimedia-database management system (MDBMS). This chapter summarizes research issues and the state-of-the-art technologies of MDBMSs from the perspective of multimedia presentations. Multimedia presentations are used widely in several forms, from instruction delivery to advertisement and electronic commerce, and in different software architectures, from a standalone computer to a local area networked computer and World Wide Web servers. These varieties of architectures result in different organization of MDBMSs.
Proceedings ArticleDOI
06 Jul 1999
TL;DR: A new Web documentation database is proposed as a supporting environment of the Multimedia Micro-University project and facilitates a Web documentation development paradigm, which allows the pre-broadcast of course materials.
Abstract: We propose a new Web documentation database as a supporting environment of the Multimedia Micro-University project. The design of this database facilitates a Web documentation development paradigm that we have proposed earlier. From a script description to its implementation as well as testing records, the database and its interface allow the user to design Web documents as virtual courses to be used in a Web-savvy virtual library. The database supports object reuse and sharing, as well as referential integrity and concurrence. In order to allow real-time course demonstration, we also propose a simple course distribution mechanism, which allows the pre-broadcast of course materials. The system is implemented as a three-tier architecture which runs under MS Windows and other platforms.
Book ChapterDOI
29 Jul 1996
TL;DR: A multi-agent system (MAT system) for integrated management of resource distribution is constructed and the case study overview is discussed in which the MAT is being used to assess the impact of the quality of presentation decline from the perceived QoS on student's learning process.
Abstract: Recent developments in distributed multimedia courseware technology and in the integration of a variety of media have the potential to change the nature of dissemination of learning material. The capability to deliver continuous media to the workstation and meet its real-time processing requirements is now recognised as the central element of future distributed multimedia courseware. In this process, user perception of quality of service plays an important role. Commonly, the quality of service (QoS) is expressed as a set of quantitative and qualitative parameters (media dimensions) that a multimedia presentation has to meet. Since people have different expectations we concluded that it is important to ensure that the courseware package meets the user perceived QoS rather than commonly defined QoS parameters. In order to investigate this hypothesis further, we have constructed a multi-agent system (MAT system) for integrated management of resource distribution. In MAT, the QoS is managed as one composite value — the user satisfaction with the quality of a multimedia presentation. In this paper, we present the overview of the MAT architecture. In addition, we discuss the case study overview in which the MAT is being used to assess the impact of the quality of presentation decline from the perceived QoS on student's learning process.
References
Journal ArticleDOI
TL;DR: The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.
Abstract: For the past few years, a joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG’s proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for “lossy’’ compression, and a predictive method for “lossless’’ compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This article provides an overview of the JPEG standard, and focuses in detail on the Baseline method.

3,944 citations


"A quality-of-service specification ..." refers background in this paper

  • ...Just as modern compression algorithms exploit knowledge of human perception [18], a multimedia system can better optimize playback resources if it knows which optimizations least affect perceived quality....

    [...]

Book
01 Jun 1992
TL;DR: Tutorial introduction; background; the Z language; the mathematical tool-kit; sequential systems; syntax summary.
Abstract: Tutorial introduction, background, the Z language, the mathematical tool-kit, sequential systems, syntax summary.

3,547 citations

Journal ArticleDOI
TL;DR: Design of the MPEG algorithm presents a difficult challenge since quality requirements demand high compression that cannot be achieved with only intraframe coding, and the algorithm’s random access requirement is best satisfied with pure intraframe coding.
Abstract: The Moving Picture Experts Group (MPEG) standard addresses compression of video signals at approximately 1.5 Mbits/second. MPEG is a generic standard and is independent of any particular applications. Applications of compressed video on digital storage media include asymmetric applications such as electronic publishing, games and entertainment. Symmetric applications of digital video include video mail, video conferencing, videotelephone and production of electronic publishing. Design of the MPEG algorithm presents a difficult challenge since quality requirements demand high compression that cannot be achieved with only intraframe coding. The algorithm’s random access requirement, however, is best satisfied with pure intraframe coding. MPEG uses predictive and interpolative coding techniques to answer this challenge. Extensive details are presented.

2,447 citations


"A quality-of-service specification ..." refers methods in this paper

  • ...Instead, lossy compression algorithms such as the MPEG encoding [7] are used to reduce the bandwidth requirements in exchange for some loss in quality....

    [...]

Proceedings ArticleDOI
15 May 1994
TL;DR: The authors designed a processor capacity reservation mechanism that isolates programs from the timing and execution characteristics of other programs in the same way that a memory protection system isolates them from outside memory accesses.
Abstract: Multimedia applications have timing requirements that cannot generally be satisfied using the time-sharing scheduling algorithms of general purpose operating systems. The authors provide the predictability of real-time systems while retaining the flexibility of a time-sharing system. They designed a processor capacity reservation mechanism that isolates programs from the timing and execution characteristics of other programs in the same way that a memory protection system isolates them from outside memory accesses. In the paper, they describe a scheduling framework that supports reservation and admission control, and introduce a novel reserve abstraction, specifically designed for the microkernel architecture, for measuring and controlling processor usage. The authors have implemented processor capacity reserves in Real-Time Mach, and they describe the performance of their system on several types of applications.

451 citations


"A quality-of-service specification ..." refers background in this paper

  • ...While best-effort approaches offer only weak guarantees for synchronization, strong QOS guarantees for continuous media presentations have been demonstrated through reservation of processor, memory, network, and storage system resources [1, 3, 9, 11, 12, 19]....

    [...]

Journal ArticleDOI
TL;DR: This work uses simulation to compare different design choices in the Continuous Media File System, CMFS, and addresses several interrelated design issues: real-time semantics of sessions, disk layout, an acceptance test for new sessions, and disk scheduling policy.
Abstract: The Continuous Media File System, CMFS, supports real-time storage and retrieval of continuous media data (digital audio and video) on disk. CMFS clients read or write files in “sessions,” each with a guaranteed minimum data rate. Multiple sessions, perhaps with different rates, and non-real-time access can proceed concurrently. CMFS addresses several interrelated design issues: real-time semantics of sessions, disk layout, an acceptance test for new sessions, and disk scheduling policy. We use simulation to compare different design choices.

330 citations