
Showing papers on "Queue management system published in 1994"


Patent
12 Jul 1994
TL;DR: In this article, an improved estimated waiting time arrangement is proposed to derive a more accurate estimate of how long a call that is or may be enqueued in a particular queue will have to wait before being serviced by an agent, by using the average rate of advance of calls through positions of the particular queue.
Abstract: In an automatic call distribution (ACD) system, an improved estimated waiting time arrangement derives a more accurate estimate of how long a call that is or may be enqueued in a particular queue will have to wait before being serviced by an agent, by using the average rate of advance of calls through positions of the particular queue. For a dequeued call, the arrangement determines the call's individual rate of advance from one queue position to the next toward the head of the queue. It then uses this individual rate to recompute a weighted average rate of advance through the queue derived from calls that preceded the last-dequeued call through the queue. To derive a particular call's estimated waiting time, the arrangement multiplies the present weighted average rate of advance by the particular call's position number in the queue. The arrangement may be called upon to update the derivation at any time before or while the call is in queue. Also, the arrangement performs the estimated waiting time derivation separately and individually for each separate queue. The arrangement advantageously takes into consideration the effect of ACD features that affect the estimated waiting time, including changes in the numbers of agents that are serving the queue due to agent login and logout, multiple split/skill queuing, agents with multiple skills or in multiple splits, priority queuing, interflow, intraflow, and call-abandonment rates.

210 citations
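The patent's core calculation — blend each dequeued call's observed rate of advance into a weighted average, then multiply by a call's queue position — can be sketched in Python. This is a hypothetical illustration; the class name, method names, and the 0.8 weighting are assumptions, not taken from the patent:

```python
class WaitEstimator:
    """Sketch of an estimated-waiting-time arrangement based on the
    weighted average rate of advance through queue positions."""

    def __init__(self, weight=0.8):
        self.weight = weight               # weight given to the historical average
        self.avg_secs_per_position = None  # running weighted average

    def record_dequeue(self, secs_per_position):
        # Blend the departing call's individual rate of advance
        # into the weighted average derived from preceding calls.
        if self.avg_secs_per_position is None:
            self.avg_secs_per_position = secs_per_position
        else:
            self.avg_secs_per_position = (
                self.weight * self.avg_secs_per_position
                + (1 - self.weight) * secs_per_position
            )

    def estimate(self, position):
        # Estimated wait = average advance time per position * queue position.
        if self.avg_secs_per_position is None:
            return None
        return self.avg_secs_per_position * position
```

Per the patent, one such estimator would be kept separately for each queue, and `estimate` may be called at any time before or while a call is enqueued.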


Patent
28 Apr 1994
TL;DR: In this article, a cell queuing circuit determines service states for queue channels according to bandwidth allocation parameters, the service states comprising a serve-now state, a serve-ok state, and a no-serve state.
Abstract: A mechanism for buffering communication cells in a communication controller, wherein a cell queuing circuit provides a cell loss priority mechanism, and wherein the cell queuing circuit determines service states for queue channels according to bandwidth allocation parameters. The service states comprise a serve-now state, a serve-ok state, and a no-serve state, such that a queue channel is in the serve-now state if the queue channel must be serviced to maintain a minimum information rate parameter for the queue channel, in the serve-ok state if the queue channel can be serviced without exceeding a peak information rate parameter for the queue channel, and otherwise in the no-serve state.

148 citations


Patent
14 Dec 1994
TL;DR: In this article, a plurality of multifunction devices each having a facsimile function are connected together in a networking arrangement, and each device checks its queues to determine whether there are any current facsimile jobs in the queues which would delay the transmission of the just-received job.
Abstract: A plurality of multifunction devices each having a facsimile function are connected together in a networking arrangement. Upon receipt of a job requiring a facsimile function, such a device initially checks its queues to determine whether there are any current facsimile jobs in the queues which would delay the transmission of the just-received job. If there are none, the job is transmitted in accordance with the device job priority control arrangement. If prior facsimile jobs are present in the queue, the device checks, via its network connection, other networked devices having facsimile capability from a preprogrammed list thereof, to determine whether any of the devices are not busy. In such a case, the job, with facsimile control instruction and data, is transferred to the non-busy facsimile for transmission.

124 citations


Patent
24 Aug 1994
TL;DR: In this paper, a method for temporarily storing data packets is provided, in which incoming data packets (D1, D2, D3) are distributed to and temporarily stored in two or more logic queues (QU1, QU2) on the basis of data (P1, P2) contained in the incoming data packets.
Abstract: A method is provided for temporarily storing data packets, in which incoming data packets (D1, D2, D3) are distributed to and temporarily stored in two or more logic queues (QU1, QU2) on the basis of data (P1, P2) contained in the incoming data packets (D1, D2, D3), and in which all of the logic queues (QU1, QU2) share a common buffer memory (MEM) having locations that are dynamically allocated to the logic queues (QU1, QU2) only when required. The method features the steps of rejecting individual data packets (D1, D2, D3) if proper treatment is not ensured for all data packets, determining queue length data on the lengths of the logic queues (QU1, QU2), determining queue allocation data indicating to which of the logic queues (QU1, QU2) an incoming data packet (D1, D2, D3) will be allocated, and selecting the incoming data packets (D1, D2, D3) to be rejected on the basis of the queue length data and the queue allocation data.

104 citations
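The rejection decision — logic queues sharing one dynamically allocated buffer, with packets refused based on queue-length and queue-allocation data — can be sketched as follows. This is a hypothetical Python illustration; the `prio` routing field, the capacity figures, and the per-queue limit rule are assumptions standing in for the patent's unspecified selection policy:

```python
class SharedBufferQueues:
    """Logic queues sharing one buffer memory; incoming packets are
    rejected based on queue-length data and queue-allocation data."""

    def __init__(self, capacity, per_queue_limit):
        self.capacity = capacity              # total shared buffer locations
        self.per_queue_limit = per_queue_limit
        self.queues = {}                      # logic-queue id -> list of packets

    def classify(self, packet):
        # Queue-allocation data: route on a field carried in the packet.
        return packet["prio"]

    def offer(self, packet):
        qid = self.classify(packet)
        total = sum(len(q) for q in self.queues.values())
        queue = self.queues.setdefault(qid, [])
        # Reject when the shared buffer is full or this logic queue is too long.
        if total >= self.capacity or len(queue) >= self.per_queue_limit:
            return False
        queue.append(packet)
        return True
```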


Patent
28 Apr 1994
TL;DR: In this paper, a queue and search module is provided to select cells or packets for transmission based on these tags, enabling simple and fast implementations of a wide variety of scheduling algorithms, including algorithms for supporting communication traffic with real-time requirements, continuous media such as audio and video, and traffic requiring very fast response.
Abstract: A switch for digital communication networks includes a queuing system capable of implementing a broad class of scheduling algorithms for many different applications and purposes, with the queuing system including means for providing numerical tags to incoming cells or packets, the values of the tags being calculated when incoming cells or packets arrive at the switch. A queue and search module is provided to select cells or packets for transmission based on these tags. The combination of the tags and the queue and search module enables simple and fast implementations of a wide variety of scheduling algorithms, including algorithms for supporting communication traffic with real-time requirements, continuous media such as audio and video, and traffic requiring very fast response. Furthermore, multiple classes of traffic are supported in a single network switch, each class having its own scheduling algorithm and policy. The queue and search module is designed for VLSI implementation, and in one embodiment supports an ATM switch with 16 ports, each port operating at 622 megabits per second.

100 citations
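The tag-and-search idea — compute a numerical tag on arrival, then always transmit the cell with the smallest tag — can be modeled in software with a heap standing in for the hardware queue-and-search module. A hypothetical sketch (names and the tie-breaking rule are assumptions); the scheduling policy is just the tag function, so earliest-deadline-first, static priorities, or virtual-time fair queueing all fit the same structure:

```python
import heapq

class TagQueue:
    """Software model of a queue-and-search module: cells are tagged
    on arrival and the lowest-tagged cell is selected for transmission."""

    def __init__(self, tag_fn):
        self.tag_fn = tag_fn  # the scheduling policy: cell -> numeric tag
        self._heap = []
        self._seq = 0         # arrival order breaks ties among equal tags

    def arrive(self, cell):
        heapq.heappush(self._heap, (self.tag_fn(cell), self._seq, cell))
        self._seq += 1

    def serve(self):
        return heapq.heappop(self._heap)[2]

# Earliest-deadline-first scheduling falls out of a deadline tag function:
edf = TagQueue(tag_fn=lambda cell: cell["deadline"])
```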


Patent
02 Dec 1994
TL;DR: In this paper, a system and method for queueing and selective pushout are described for a packet communications module such as a shared-memory asynchronous transfer mode (ATM) switch.
Abstract: A system for queueing and selective pushout and method are disclosed for a packet communications module such as a shared memory asynchronous transfer mode (ATM) switch. The shared memory stores packets in queues, each packet having a field and at most two pointers. Within each queue, the packets having respective space priorities are stored in subqueues each having the respective space priorities. The packets are stored in these priority subqueues using a first pointer pointing to the next packet of the same space priority in the queue. The second pointer associated with a stored packet points to the previous packet of greater than or equal space priority in the FIFO order in the queue. The field of a packet is used to store the priority value corresponding to the next packet in FIFO order in the queue, and this field is used by a processor to decide which priority subqueue to serve next. The packets are stored in the queues in a FIFO order using the two pointers and the fields of the packets. The processor controls the selective pushout to push out a packet and uses the two pointers and the fields of the packets to restore the FIFO order. A method is also disclosed including the steps of storing packets in a queue, with each of the queued packets associated with the two pointers and a field; serving the queue; pushing out packets from the queue; and maintaining queue-lengths and a state information table.

94 citations


Journal ArticleDOI
TL;DR: The paper introduces the notion of cell-blocking, wherein a fuzzy thresholding function, based on Zadeh's (1965) fuzzy set theory, is utilized to deliberately refuse entry to a fraction of incoming cells from other switches.
Abstract: High-performance cell-based communications networks have been conceived to carry asynchronous traffic sources and support a continuum of transport rates ranging from low bit-rate to high bit-rate traffic. When a number of bursty traffic sources add cells, the network is inevitably subject to congestion. Traditional approaches to congestion management include admission control algorithms, smoothing functions, and the use of finite-sized buffers with queue management techniques. Most queue management schemes, reported in the literature, utilize "fixed" thresholds to determine when to permit or refuse entry of cells into the buffer. The aim is to achieve a desired tradeoff between the number of cells carried through the network, propagation delays of the cells, and the number of discarded cells. While binary thresholds are excessively restrictive, the rationale underlying the use of a large number of priorities appears to be ad hoc, unnatural, and unclear. The paper introduces the notion of cell-blocking, wherein a fuzzy thresholding function, based on Zadeh's (1965) fuzzy set theory, is utilized to deliberately refuse entry to a fraction of incoming cells from other switches. The blocked cells must be rerouted by the sending switch to other switches and, in the process, they may incur delays. The fraction of blocked cells is a continuous function of the current buffer occupancy level, unlike the abrupt all-or-nothing behavior of fixed thresholds. The fuzzy cell-blocking scheme is simulated on a computer. Fuzzy queue management adapts superbly to sharp changes in cell arrival rates and maximum burstiness of bursty traffic sources.

82 citations
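The key contrast with fixed thresholds is that the blocked fraction varies continuously with buffer occupancy. A minimal sketch, assuming a piecewise-linear membership function (the paper's actual fuzzy thresholding function may differ; the `low`/`high` breakpoints are hypothetical):

```python
def blocking_fraction(occupancy, low, high):
    """Fuzzy-threshold sketch: fraction of incoming cells refused entry,
    as a continuous (piecewise-linear) function of buffer occupancy.
    occupancy, low, high are fractions of buffer capacity in [0, 1]."""
    if occupancy <= low:
        return 0.0          # lightly loaded buffer: admit everything
    if occupancy >= high:
        return 1.0          # (nearly) full buffer: block everything
    return (occupancy - low) / (high - low)  # ramp in between
```

A binary threshold is the degenerate case `low == high`; spreading the breakpoints apart is what lets the scheme react smoothly to bursty arrivals.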


Journal ArticleDOI
TL;DR: Simple queues with Poisson input and exponential service times are considered to illustrate how well suited Bayesian methods are to handling the common inferential aims that arise when dealing with queueing problems.
Abstract: Simple queues with Poisson input and exponential service times are considered to illustrate how well suited Bayesian methods are to handling the common inferential aims that arise when dealing with queueing problems. The emphasis will mainly be placed on prediction; in particular, we study the predictive distribution of the usual measures of effectiveness in an M/M/1 queue system, such as the number of customers in the queue and in the system, the waiting time in the queue and in the system, the length of an idle period and the length of a busy period.

75 citations
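As a concrete reference for the "usual measures of effectiveness" mentioned above, the standard M/M/1 formulas are shown below; a Bayesian predictive analysis would average these quantities over the posterior distribution of the arrival and service rates rather than plug in point estimates. The function name is illustrative, not from the paper:

```python
def mm1_measures(lam, mu):
    """Classical M/M/1 effectiveness measures for arrival rate lam and
    service rate mu (requires lam < mu for stability)."""
    rho = lam / mu                 # traffic intensity
    if rho >= 1:
        raise ValueError("unstable queue: lam must be < mu")
    return {
        "L":  rho / (1 - rho),         # mean number in system
        "Lq": rho ** 2 / (1 - rho),    # mean number in queue
        "W":  1 / (mu - lam),          # mean time in system
        "Wq": rho / (mu - lam),        # mean wait in queue
    }
```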


Patent
Jeffery L. Swarts, Gary L. Rouse
22 Aug 1994
TL;DR: A queue pointer manager contained in an integrated data controller is capable of controlling high speed data transfers between a high speed controlled data channel, a local processor bus and a dedicated local data bus as mentioned in this paper.
Abstract: A queue pointer manager contained in an integrated data controller is capable of controlling high speed data transfers between a high speed controlled data channel, a local processor bus and a dedicated local data bus. The overall design utilizes enhanced features of the Micro Channel architecture and data buffering to achieve maximum burst rates of 80 megabytes per second and to allow communications with 8, 16, 32 and 64 bit Micro Channel devices. Queued demands allow flexible programming of the Micro Channel master operations and reporting of completion statuses. The hardware control of command and status queuing functions increases the processing speed of control operations and reduces the need for software queuing. Extensive error checking/reporting, programming parameters, and an internal wrap self-test capability give the integrated data controller advanced functions as an input/output processor. The queue pointer manager also manages queue read and write pointers.

67 citations


Journal ArticleDOI
TL;DR: The paper gives the rate of growth of the number of customers in the queue, as well as the asymptotic behavior of the residual service times, described in terms of a renormalized point process.
Abstract: We analyze the transient behavior of the single server queue under the processor sharing discipline. Under fairly general assumptions, we give the rate of growth of the number of customers in the queue as well as the asymptotic behavior of the residual service times described in terms of a renormalized point process.

55 citations


Patent
Patel Bipin, Jeanne Ichnowski, Mark E Kaminsky, Roberto Perelman, Chris Yuan
24 Jan 1994
TL;DR: In this article, a queue manager allocates and reallocates a number of processing queues, less than the number of client types, to match different ones of said client types.
Abstract: In a queue management system for servicing of a number of clients representing different client types, a controlling queue queues clients in a predetermined order. A queue manager allocates and reallocates a number of processing queues, less than the number of client types, to match different ones of said client types. The queue manager then places successive ones of the clients in the controlling queue into a processing queue matching the client type if there is a matching processing queue and allocates or reallocates an empty or emptied processing queue to the client type if there is no matching processing queue but there is an empty processing queue. A server empties the processing queues in batches. In the environment of a telephone system the clients are messages and the client types are codings in the messages for various destinations. The queue manager dedicates each of a number of processing queues to one of the destinations in the controlling queue, accesses the top message in the controlling queue, places the message in a processing queue matching the destination code of the message if there is a match, and dedicates an empty processing queue to the target destination if there is no matching processing queue but there is an empty processing queue.
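The allocation logic can be sketched compactly. This is a hypothetical Python illustration: clients are represented by their type codes, and the policy of emptying the oldest-allocated queue in a batch when no queue matches and none is free is an assumption standing in for the patent's batch-serving server:

```python
from collections import deque

def schedule(clients, num_queues):
    """Move typed clients from a controlling queue into a small set of
    processing queues, each (re)allocated to one client type on demand.
    Returns the batches in the order the server empties them."""
    controlling = deque(clients)
    queues = {}      # client type -> list of queued clients
    batches = []
    while controlling:
        ctype = controlling[0]
        if ctype in queues:
            # Matching processing queue exists: place the client there.
            queues[ctype].append(controlling.popleft())
        elif len(queues) < num_queues:
            # A free processing queue exists: allocate it to this type.
            queues[ctype] = [controlling.popleft()]
        else:
            # All queues busy with other types: the server empties the
            # oldest-allocated queue in a batch, freeing it for reallocation.
            victim = next(iter(queues))
            batches.append(queues.pop(victim))
    batches.extend(queues.values())   # server drains the remaining queues
    return batches
```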

Journal ArticleDOI
TL;DR: A criterion for the validity of a geometric product form equilibrium distribution is given for these extended networks by allowing customer arrivals to the network, or the transfer between queues of a single positive customer in the network to trigger the creation of a batch of negative customers at the destination queue.
Abstract: Gelenbe et al. [1, 2] consider single server Jackson networks of queues which contain both positive and negative customers. A negative customer arriving to a nonempty queue causes the number of customers in that queue to decrease by one, and has no effect on an empty queue, whereas a positive customer arriving at a queue will always increase the queue length by one. Gelenbe et al. show that a geometric product form equilibrium distribution prevails for this network. Applications for these types of networks can be found in systems incorporating resource allocations and in the modelling of decision making algorithms, neural networks and communications protocols.
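The positive/negative customer dynamics described above are simple to state in code. A minimal sketch (function name and the event encoding are assumptions): positive arrivals always increment the queue length, while a negative arrival removes one customer from a nonempty queue and has no effect on an empty one.

```python
def g_network_queue(events):
    """Single queue with positive and negative customers (G-network rule):
    +1 = positive arrival (always increments the queue length),
    -1 = negative arrival (decrements a nonempty queue, else no effect).
    Returns the final queue length."""
    n = 0
    for sign in events:
        if sign > 0:
            n += 1
        elif n > 0:
            n -= 1
    return n
```

Under this rule with Poisson arrival streams, the networks considered by Gelenbe et al. admit the geometric product-form equilibrium the paper extends to batch-triggered negative customers.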

Patent
22 Dec 1994
TL;DR: In this article, a shared memory queue is used for interprocess communication between concurrently executing, cooperating sequential processes in a digital computer system using hierarchical queuing, which allows a sending process to collect multiple message segments as entries in a local sub-queue, which is enqueued as a single entity to the shared memory when all message segments are present.
Abstract: A system and method for interprocess communication between concurrently executing, cooperating sequential processes in a digital computer system uses a shared memory queue as a mechanism for message passing and process synchronization. Data to be transferred from a sending process to a receiving process is stored in a queue entry on the shared memory queue. Hierarchical queuing allows a sending process to collect multiple message segments as entries in a local sub-queue, which is enqueued as a single entity to the shared memory queue when all message segments are present. The receiving process dequeues the sub-queue in one operation, thereby increasing the efficiency of message transfer while preventing the erroneous dequeuing of message segments when multiple receiving processes are waiting on the same shared memory queue. In this manner, the logical maximum size of a message being passed between processes is expanded.
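The hierarchical-queuing idea — collect message segments locally, then enqueue them to the shared queue as one atomic entity so no other receiver can interleave its dequeues — can be sketched as follows. Names are hypothetical; real shared-memory queues would also need cross-process synchronization, which is omitted here:

```python
from collections import deque

class SharedQueue:
    """Sketch of hierarchical queuing on a shared memory queue: a sender
    gathers message segments in a local sub-queue and enqueues them as a
    single entry; the receiver dequeues the whole message in one operation."""

    def __init__(self):
        self.entries = deque()

    def enqueue_message(self, segments):
        # One atomic enqueue of the entire sub-queue of segments, so
        # concurrent receivers cannot dequeue partial messages.
        self.entries.append(list(segments))

    def dequeue_message(self):
        # The receiver takes the whole sub-queue in one operation.
        return self.entries.popleft()
```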

Proceedings ArticleDOI
01 May 1994
TL;DR: The present authors' queue manager treats multiple delay and loss priorities simultaneously; a cell-discarding strategy called push-out, which allows the buffer to be completely shared by all service classes, has been adopted in the queue manager.
Abstract: The asynchronous transfer mode (ATM) technique has been widely accepted as a flexible and effective scheme to transport various traffic over the future broadband network. To fully utilize network resources while still providing satisfactory quality of service (QOS) to all network users, prioritizing users' traffic according to their service requirements becomes necessary. During the call setup, each service can be assigned a service class determined by a delay priority and a loss priority. A queue manager in ATM network nodes will schedule ATM cells' departing and discarding sequence based on their delay and loss priorities. Most queue management schemes that have been proposed so far only consider either one of these two priority types. The present authors' queue manager treats multiple delay and loss priorities simultaneously. Moreover, a cell-discarding strategy, called push-out, which allows the buffer to be completely shared by all service classes, has been adopted in the queue manager. The authors propose a practical architecture to implement the queue manager, facilitated by a new VLSI chip, an enhanced version of the existing sequencer VLSI chip.
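The interaction of the two priority types can be sketched in software: departures follow delay priority, while a full buffer pushes out the stored cell with the worst loss priority. This is a hypothetical Python model (linear scans, lower number = higher priority, and the exact victim rule are assumptions; the paper's design is a VLSI sequencer, not software):

```python
class PushOutBuffer:
    """Shared buffer with delay and loss priorities and push-out discarding.
    Lower numbers mean more urgent (delay) / more valuable (loss)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cells = []   # (delay_prio, loss_prio, payload)

    def arrive(self, delay_prio, loss_prio, payload):
        if len(self.cells) >= self.capacity:
            victim = max(self.cells, key=lambda c: c[1])  # worst loss priority
            if victim[1] <= loss_prio:
                return False            # new cell is no more valuable: discard it
            self.cells.remove(victim)   # push out the stored low-value cell
        self.cells.append((delay_prio, loss_prio, payload))
        return True

    def depart(self):
        cell = min(self.cells, key=lambda c: c[0])        # most urgent delay class
        self.cells.remove(cell)
        return cell[2]
```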

Patent
13 Jun 1994
TL;DR: In this paper, a processing system for generating memory requests at a first clock rate and outputting data at a second clock rate is presented, where a memory system stores and retrieves data in response to the memory requests, and the memory system outputs data in response to read requests received from input queuing circuitry.
Abstract: A processing system is provided which includes circuitry for generating memory requests at a first clock rate. Input queuing circuitry which includes at least one queue receives the memory requests from the circuitry at the first clock rate and outputs such memory requests at a second clock rate. A memory system stores and retrieves data in response to the memory requests, the memory system outputting data in response to read requests received from input queuing circuitry. An output queue is provided which receives data output from memory at the second clock rate and outputs such data at the first clock rate. Queuing control circuitry is provided which prevents overflow of the output queue by controlling the number of memory requests sent in bursts from the input queuing system to the memory system and by controlling the wait time between such bursts.

Patent
22 Feb 1994
TL;DR: A queue memory system as discussed by the authors provides a flexible memory transfer system which uses a single transaction to either store a memory value in a queue or to retrieve the memory value from the queue.
Abstract: A queue memory system (10) provides a flexible memory transfer system which uses a single transaction to either store a memory value in a queue or to retrieve the memory value from the queue. A queue controller (20) controls the transfer of data between a queue memory (18) and the peripheral devices (22, 24). The queue controller generally includes a register (52, 62) which indicates an address to be accessed and a direction control signal. Additionally, each peripheral device has a queue control register which is configured to access a selected channel of the queue memory. The queue memory system described herein also efficiently uses the cycle time of a central processing unit (12) of the system to perform queue accesses without disrupting more general processing steps. For example, the queue memory system will wait (for up to thirty-two timing cycles) for a timing cycle in which the central processing unit does not require use of a bus. At that time, the queue memory system will transfer data between the queue and a peripheral device. An alternate option is to force the central processing unit to freeze operation so that data will be transferred immediately.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the limiting behaviour of the queue length, sojourn time and random measures describing attained and residual processing times of customers present in GI/G/1 processor sharing queues with traffic intensity tending to 1.
Abstract: Consider GI/G/1 processor sharing queues with traffic intensity tending to 1. Using the theory of random measures and the theory of branching processes we investigate the limiting behaviour of the queue length, sojourn time and random measures describing attained and residual processing times of customers present.

Patent
30 Dec 1994
TL;DR: In this article, a shared queue is provided to allow any of a plurality of systems to process messages received by clients of a data processing environment, where a received message is enqueued onto the shared queue.
Abstract: A shared queue is provided to allow any of a plurality of systems to process messages received by clients of a data processing environment. A received message is enqueued onto the shared queue. Any of the plurality of systems having available processing capacity can retrieve the message from the shared queue and process the message. A response to the message, where appropriate, is enqueued onto the shared queue for delivery back to the client. A unique list structure is provided to implement the queue. The list structure is comprised of a plurality of sublists, or queue types. Each queue type is divided into a plurality of list headers. List entries, containing data from the received messages, are chained off of the list headers. A common queue server is used to interface to the queue and to store messages thereon. The common queue server stores message data in storage buffers, and then transfers this data to the list entries. Thus, the common queue server coordinates the enqueuing of data onto the shared queue.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a finite capacity queuing system in which arrivals are governed by a Markovian arrival process and obtained numerically stable expressions for the steady-state queue length densities at arrivals and at arbitrary time points, and the Laplace-Stieltjes transform of the stationary waiting time distribution of an admitted customer at points of arrivals.
Abstract: In this paper we consider a finite capacity queuing system in which arrivals are governed by a Markovian arrival process. The system is attended by two exponential servers, who offer services in groups of varying sizes. The service rates may depend on the number of customers in service. Using Markov theory, we study this finite capacity queuing model in detail by obtaining numerically stable expressions for (a) the steady-state queue length densities at arrivals and at arbitrary time points; (b) the Laplace-Stieltjes transform of the stationary waiting time distribution of an admitted customer at points of arrivals. The stationary waiting time distribution is shown to be of phase type when the interarrival times are of phase type. Efficient algorithmic procedures for computing the steady-state queue length densities and other system performance measures are discussed. A conjecture on the nature of the mean waiting time is proposed. Some illustrative numerical examples are presented.

Journal ArticleDOI
TL;DR: Conditions under which the smart customer can lower its expected sojourn time in the system by waiting and observing rather than immediately joining the shortest queue are found.
Abstract: We consider a queueing system with two servers, each with its own queue. The interarrival times are generally distributed. The service time for each server is exponentially distributed but the rates may be different. No jockeying between the two queues is allowed. We consider situations in which a smart customer can delay joining a queue until some arrivals or service departures have been observed. All other customers join the shortest queue. We find conditions under which the smart customer can lower its expected sojourn time in the system by waiting and observing rather than immediately joining the shortest queue.

Journal ArticleDOI
TL;DR: In this article, the authors complemented two previous studies by indicating the extent to which characteristics of a general stationary point process, taken as the arrival process of a single-server queue, influence light-traffic limit theorems for the two essentially distinct schemes of dilation and thinning as routes to the limit.

Patent
13 Dec 1994
TL;DR: In this paper, a messaging facility is described that enables the passing of packets of data from one processing element to another in a globally addressable, distributed memory multiprocessor without having an explicit destination address in the target processing element's memory.
Abstract: A messaging facility is described that enables the passing of packets of data from one processing element to another in a globally addressable, distributed memory multiprocessor without having an explicit destination address in the target processing element's memory. A message is a special cache-line-size write that has as its destination a pre-defined queue area in the memory of the receiving processing element. Arriving messages are placed in the queue in the order that they appear at the node by hardware queue management mechanisms. Flow control between processors is usually accomplished by the queue management hardware, with software intervention necessary to deal with the error cases caused by queue overflows, etc.

Proceedings ArticleDOI
P. Landsberg, C. Zukowski
12 Jun 1994
TL;DR: A VLSI generic queue scheduler that can emulate a wide variety of queue scheduling methods is proposed; it is implemented using a VLSI system synthesis tool, OASIS, and measurements explore the size and speed of various system configurations.
Abstract: Quality of service and network controllability are issues that have yet to be clearly defined for broadband packet switching networks. One method of influencing the quality of service and the controllability of packet switched networks is queue scheduling. Unfortunately, defining an optimal queue scheduling method is highly dependent upon the cost function chosen for a particular switch implementation. Therefore, the authors propose a VLSI generic queue scheduler that can emulate a wide variety of queue scheduling methods. They implement the VLSI generic queue scheduler using a VLSI system synthesis tool, OASIS, and through measurements explore the size and speed of various system configurations. The process of employing some scheduling methods in this architecture is also detailed.

Patent
Adrian L. Carbine1, Gary L. Brown1, Bradley D. Hoyt1, Donald D. Parker1, Raghavan Kumar1 
01 Mar 1994
TL;DR: In this article, a split queue system is presented for a decoder that supplies one or more micro-operations and data associated with the micro-operations. The main queue is coupled to a shadow queue, which receives the data associated with a micro-operation in the same cycle that the micro-operation is supplied to the main queue.
Abstract: A split queue system for a decoder that supplies one or more micro-operations and data associated with the micro-operations. A main queue is coupled to receive one or more micro-operations from the decoder and supply them to a next processing stage to provide a processed micro-operation. A shadow queue is coupled to receive the data associated with a micro-operation in the same cycle that the micro-operation is supplied to the main queue. A control circuit is coupled to the main queue for issuing a micro-operation from the main queue into the next processing stage in a first cycle, and issuing the processed micro-operation therefrom in a second cycle. Also in the second cycle, the control circuit issues the data associated with the micro-operation from the shadow queue, so that the data is synchronized with its associated processed micro-operation.

Journal ArticleDOI
TL;DR: This work proposes an algorithm that converges to the optimum, while inducing users to reveal information relating to their benefits and costs truthfully, and balances the manager's budget.
Abstract: The following problem is considered. There are several users who send jobs to an M/M/1 queue and have privately observed information relating to their benefits from the rate of job submissions and their costs due to waiting in the queue. Each user's benefits and costs are unknown to the queue manager and to other users. The manager's objective is to achieve "optimal" flow control, where the optimality depends on arriving at an appropriate trade-off between delay and the job arrival rate assigned to each user: the allocations should be such that no user can be made better off by a reallocation without hurting at least one other user. Since the optimality calculation requires knowledge of the users' private information, we propose an algorithm that converges to the optimum, while inducing users to reveal information relating to their benefits and costs truthfully, and balances the manager's budget. Earlier work on this problem has produced a flow control algorithm that requires the queue manager to incur a potentially huge deficit; this leads to several theoretical and practical problems.


01 Jan 1994
TL;DR: A general purpose queue architecture for an ATM switch supports both real-time and non-real-time communication; it is programmable because the tag calculations are done in microprocessors at the interface to each physical link connected to the switch.
Abstract: This paper describes a general purpose queue architecture for an ATM switch capable of supporting both real-time and non-real-time communication. The central part of the architecture is a kind of searchable, self-timed FIFO circuit into which are merged the queues of all virtual channels in the switch. Arriving cells are tagged with numerical values indicating the priorities, deadlines, or other characterizations of the order of transmission, then they are inserted into the queue. Entries are selected from the queue both by destination and by tag, with the earliest entry being selected from among a set of equals. By this means, the switch can schedule virtual channels independently, but without maintaining separate queues for each one. This architecture supports a broad class of scheduling algorithms at ATM speeds, so that guaranteed qualities of service can be provided to real-time applications. It is programmable because the tag calculations are done in microprocessors at the interface to each physical link connected to the switch.

Book ChapterDOI
TL;DR: The performance of Token Ring Protocols with constraints on the cycle times is analyzed, which approximates both the FDDI protocol (Fiber Distributed Data Interface) and the IEEE 802.4 Token Bus Standard.
Abstract: We analyze in this paper the performance of Token Ring Protocols with constraints on the cycle times. Each station may have a different cycle time constraint, and a different number of buffers. We consider two types of service discipline in the different stations: (i) the 1-limited case, where at most one packet may be transmitted from each queue at each visit of the token, provided that the cycle time constraints are satisfied, and (ii) the exhaustive service discipline where in each queue, packets are transmitted till either the queue empties or the cycle time constraint in that queue is violated. The system we analyze approximates both the FDDI protocol (Fiber Distributed Data Interface) and the IEEE 802.4 Token Bus Standard. Based on a power-series algorithm, we obtain the expected throughput and delay in every station, as well as the first two moments of the queue lengths.

Journal ArticleDOI
TL;DR: In this article, a general single-server bulk queueing system is considered in which the server waits until the queue reaches level r before it starts processing customers; the input stream is assumed to be a compound Poisson process modulated by a semi-Markov process, with multilevel control of the service time.
Abstract: This article deals with a general single-server bulk queueing system in which the server waits until the queue reaches level r before it starts processing customers. If at least r customers are available, the server takes a batch of fixed size for service. The input stream is assumed to be a compound Poisson process modulated by a semi-Markov process, with multilevel control of the service time.

DOI
22 Sep 1994
TL;DR: The paper identifies a number of issues that are believed to be important for hardware/software codesign and suggests a combined hardware/software realization of the priority queue, a data structure with a simple interface which in many applications is a performance bottleneck.
Abstract: The paper identifies a number of issues that are believed to be important for hardware/software codesign. The issues are illustrated by a small comprehensible example: a priority queue. Based on simulations of a real application, we suggest a combined hardware/software realization of the priority queue. A priority queue is a data structure with a simple interface which in many applications is a performance bottleneck.
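The interface in question is small — insert and extract-minimum — which is what makes the priority queue a clean codesign example: a hardware accelerator only has to implement these two operations. A software reference model of that interface (names are illustrative; a heap stands in for whatever realization the hardware side would use):

```python
import heapq

class PriorityQueue:
    """Software reference model of the priority-queue interface: the two
    operations a combined hardware/software realization would accelerate."""

    def __init__(self):
        self._heap = []

    def insert(self, priority, item):
        # Lower priority value = served earlier.
        heapq.heappush(self._heap, (priority, item))

    def extract_min(self):
        # Remove and return the item with the smallest priority value.
        return heapq.heappop(self._heap)[1]
```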