
Showing papers by "Arif Ghafoor published in 1993"


Journal ArticleDOI
TL;DR: N-ary and reverse temporal relations are introduced and defined along with their temporal constraints to ensure a property of monotonically increasing playout deadlines, facilitating both real-time deadline-driven playout scheduling and optimistic interval-based process playout.
Abstract: Multimedia data often have time dependencies that must be satisfied at presentation time. To support a general-purpose multimedia information system, these timing relationships must be managed to provide utility to both the data presentation system and the multimedia author. New conceptual models for capturing these timing relationships, and managing them as part of a database, are proposed. Specifically, n-ary and reverse temporal relations are introduced and defined along with their temporal constraints. These new relations are a generalization of earlier temporal models and establish the basis for conceptual database structures and temporal access control algorithms to facilitate forward, reverse, and partial-interval evaluation during multimedia object playout. The proposed relations are defined to ensure a property of monotonically increasing playout deadlines, facilitating both real-time deadline-driven playout scheduling and optimistic interval-based process playout. A translation of the conceptual models to a structure suitable for a relational database is presented.
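The monotone-deadline property can be illustrated with a small sketch. All names, timings, and the `PlayoutInterval` structure below are invented for illustration; they are not from the paper:

```python
# Hypothetical sketch: deadline-driven playout of timed media intervals.
# Monotonically increasing deadlines mean objects can be played out in
# submission order without re-sorting at presentation time.
from dataclasses import dataclass

@dataclass
class PlayoutInterval:
    name: str
    start: float     # scheduled presentation time (seconds)
    duration: float  # playout duration (seconds)

    @property
    def deadline(self) -> float:
        return self.start + self.duration

def schedule(intervals):
    """Return playout order; with monotone deadlines this is a single pass."""
    order = sorted(intervals, key=lambda iv: iv.deadline)
    # If the input already satisfies monotonicity, sorting is a no-op.
    assert [iv.deadline for iv in order] == sorted(iv.deadline for iv in order)
    return [iv.name for iv in order]

clips = [PlayoutInterval("audio", 0.0, 5.0),
         PlayoutInterval("video", 0.0, 6.0),
         PlayoutInterval("caption", 4.0, 3.0)]
print(schedule(clips))  # deadlines 5.0, 6.0, 7.0
```

A real scheduler would also handle reverse and partial-interval evaluation, which this sketch omits.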

323 citations


Journal ArticleDOI
Arif Ghafoor1, J. Yang1
TL;DR: A general management framework for distributed heterogeneous supercomputing systems (DHSSs) that is based on an application-characterization technique is presented, and the use of network caching to reduce the complexity associated with the scheduling decisions is discussed.
Abstract: A general management framework for distributed heterogeneous supercomputing systems (DHSSs) that is based on an application-characterization technique is presented. The technique uses code profiling and analytical benchmarking of supercomputers. Optimal scheduling of tasks in these systems is an NP-complete problem. The use of network caching to reduce the complexity associated with scheduling decisions is discussed. An experimental prototype of a DHSS management system is described.

130 citations


Proceedings ArticleDOI
01 Sep 1993
TL;DR: This model specifies all the necessary functions for the communication and presentation of multimedia objects, and implements several synchronization schemes that provide mechanisms to handle synchronization anomalies during the presentation of multimedia objects.
Abstract: As the demand for multimedia information systems increases and network technology provides the capability to handle multimedia data, effective communication and synchronization mechanisms are needed in a distributed environment to realize the successful retrieval, composition and presentation of multimedia objects. In this paper, we propose a specification model for the communication and synchronization of multimedia objects in a distributed environment. This model specifies all the necessary functions for the communication and presentation of multimedia objects. We also implement several synchronization schemes in our model that provide mechanisms to handle synchronization anomalies during the presentation of multimedia objects.

92 citations


Proceedings ArticleDOI
16 Aug 1993
TL;DR: This paper proposes two techniques, called augmented code profiling and augmented analytical benchmarking, to characterize applications and architectures in a DHSS, respectively, and to provide a detailed architecture-dependent characterization of DHSS applications.
Abstract: For managing tasks efficiently in a Distributed Heterogeneous Supercomputing System (DHSS), we require a thorough understanding of applications and their intelligent scheduling within the system. For this purpose, an accurate estimation of the execution time of applications on various architectures is needed. In this paper we present a framework to address this issue. We propose two techniques, called augmented code profiling and augmented analytical benchmarking, to characterize applications and architectures in a DHSS, respectively. These techniques are based on code profiling and analytical benchmarking, respectively, and provide a detailed architecture-dependent characterization of DHSS applications.
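A rough sketch of how profiling and benchmarking data can combine into per-architecture execution-time estimates. The computation-type taxonomy, the rates, and all numbers below are invented for illustration; the paper's augmented techniques are more detailed:

```python
# Hypothetical sketch: estimate execution time per architecture by
# combining a code profile (work per computation type) with analytical
# benchmark rates (sustained speed per type, per architecture).

# Code profile: operation count (millions of ops) per computation type.
profile = {"vectorizable": 800.0, "scalar": 150.0, "io": 50.0}

# Analytical benchmark: sustained rate (millions of ops/sec) per type,
# for each candidate architecture.
benchmarks = {
    "vector_machine": {"vectorizable": 400.0, "scalar": 20.0, "io": 10.0},
    "workstation":    {"vectorizable": 40.0,  "scalar": 30.0, "io": 10.0},
}

def estimate_seconds(profile, rates):
    """Sum of (work for each type) / (architecture's rate for that type)."""
    return sum(work / rates[kind] for kind, work in profile.items())

for arch, rates in benchmarks.items():
    print(arch, round(estimate_seconds(profile, rates), 1), "s")
```

With these invented numbers the vector machine wins on vectorizable work but loses badly on the scalar fraction, which is exactly the kind of trade-off a DHSS scheduler must weigh.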

58 citations


Book ChapterDOI
01 Jan 1993
TL;DR: This work discusses various conceptual models for specifying one key requirement for multimedia databases, the process of temporal synchronization, and highlights their capabilities and limitations.
Abstract: One of the requirements of multimedia database systems is that they will need a data model more powerful than the relational model, without compromising its advantages. The relational data model exhibits limitations in terms of complex object support, type system, and object management. To address these issues, we have emphasized one key requirement for multimedia databases, which is the process of temporal synchronization. We have discussed various conceptual models to specify this requirement and have highlighted their capabilities and limitations. Still, a number of other issues need to be faced before multimedia database systems become a reality.

10 citations


Proceedings ArticleDOI
TL;DR: A top-down data placement methodology for a large interactive multimedia information system (MMIS) on a single-spindle multi-disk environment such as a Jukebox is presented, and various clustering and storage layout algorithms are proposed.
Abstract: In this paper, a top-down data placement methodology for a large interactive multimedia information system (MMIS) on a single-spindle multi-disk environment such as a Jukebox is presented. The objective of this work is to minimize average disk seek time as well as the number of platter switches for the Jukebox. A large data placement problem can be divided into a number of small data placement problems by weighted graph decomposition. The Kernighan-Lin partitioning algorithm is recursively applied for this purpose. Once the graph is fully partitioned, the objects in the same subgraph are assigned to the same disk. The data placement within a disk is divided into two stages, global data placement and detailed data placement. The expected access patterns of global data placement are modeled as a time-homogeneous ergodic Markov chain, from which the stationary probability for each node of the browsing graph can be found. Based on these probabilities, we define an expected access cost. Then, the problem of global data placement is posed as an optimization problem, and various clustering and storage layout algorithms are proposed.
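As a minimal illustration of the global placement stage, the stationary probabilities of a small browsing graph can be found by power iteration. The three-node transition matrix below is invented; the paper works with much larger graphs and a specific access-cost model:

```python
# Sketch: stationary probabilities of a time-homogeneous ergodic Markov
# chain modelling browsing among three media objects, via power iteration.

def stationary(P, iters=10_000):
    """Iterate pi <- pi * P from the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Row-stochastic transition matrix for an invented 3-node browsing graph.
P = [[0.0, 0.7, 0.3],
     [0.5, 0.0, 0.5],
     [0.4, 0.6, 0.0]]

pi = stationary(P)
# Objects with high stationary probability would be clustered together to
# minimise expected seek and platter-switch cost.
print([round(p, 3) for p in pi])
```

Here the exact stationary vector is (70/223, 88/223, 65/223), so the middle object is the hottest and anchors its cluster.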

4 citations


01 Jan 1993
TL;DR: A framework for evaluating the performance of scheduling pre-orchestrated multimedia information over broadband integrated networks is presented, and a set of Quality Of Presentation parameters which quantify the quality of multimedia presentation from the user's point of view is proposed.
Abstract: In this report we present a framework for evaluating the performance of scheduling pre-orchestrated multimedia information over broadband integrated networks. We propose a set of Quality Of Presentation (QOP) parameters which quantify the quality of multimedia presentation from the user's point of view. The communication of multimedia data in a networked environment can affect the desired QOP parameters due to jitter delays in the network. We evaluate trade-offs between QOP parameters and the system resources, including channel utilization and buffering at the destination. These trade-offs can be used to develop an optimal transmission schedule for multimedia information. Keywords: multimedia communication, multimedia presentation, temporal synchronization, pre-orchestrated multimedia information, quality of presentation, fluid flow models, timed Petri-nets.
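The buffering-versus-jitter trade-off can be sketched as follows. The frame period, jitter samples, and startup delays are invented for illustration, and a missed playout instant stands in for a degraded QOP parameter:

```python
# Hypothetical sketch: larger startup (destination) buffering delay
# absorbs more network jitter, at the cost of a later presentation start.

def playout_gaps(arrivals, period, startup_delay):
    """Count frames that arrive after their scheduled playout instant."""
    misses = 0
    for i, arrival in enumerate(arrivals):
        playout_time = startup_delay + i * period
        if arrival > playout_time:
            misses += 1
    return misses

# Frame i is sent at i*period; the network adds a jitter delay (seconds).
period = 0.033                                      # ~30 frames/s
jitter = [0.010, 0.080, 0.020, 0.120, 0.015, 0.060] # invented samples
arrivals = [i * period + d for i, d in enumerate(jitter)]

for delay in (0.02, 0.08, 0.15):
    print(f"startup {delay:.2f}s -> {playout_gaps(arrivals, period, delay)} misses")
```

A frame misses exactly when its jitter exceeds the startup delay, so sweeping the delay traces out the trade-off curve between presentation latency and playout continuity.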

1 citation


Proceedings ArticleDOI
Arif Ghafoor1
16 Aug 1993
TL;DR: This work proposes a general technique to study the performance of random routing in a hypercube system, based on association schemes instead of representation theory.
Abstract: We propose a general technique to study the performance of random routing in a hypercube system. The proposed method is based on association schemes instead of representation theory.
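For concreteness, the kind of random routing whose performance is being analysed can be simulated as Valiant-style two-phase routing: route to a random intermediate node, then to the destination, correcting one address bit per hop. This generic sketch is not the paper's method, which concerns the performance analysis itself:

```python
# Sketch: two-phase random routing in a d-dimensional hypercube.
import random

def greedy_path(src, dst, d):
    """Hops that correct differing address bits in dimension order 0..d-1."""
    path, node = [src], src
    for bit in range(d):
        if (node ^ dst) & (1 << bit):
            node ^= 1 << bit
            path.append(node)
    return path

def random_route(src, dst, d, rng):
    """Phase 1: src -> random intermediate node. Phase 2: on to dst."""
    mid = rng.randrange(2 ** d)
    return greedy_path(src, mid, d) + greedy_path(mid, dst, d)[1:]

rng = random.Random(0)
path = random_route(0b0000, 0b1111, d=4, rng=rng)
print([format(n, "04b") for n in path])
```

The random intermediate node spreads traffic uniformly, which is what makes the worst-case congestion of oblivious routing tractable to analyse.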

1 citation