
Showing papers on "Latency (engineering) published in 1998"


Proceedings ArticleDOI
09 Aug 1998
TL;DR: In this paper, a system for wireless networking utilizing code division multiple access (CDMA) in conjunction with spread spectrum (SS) modulation is presented that is capable of simultaneously providing high-bandwidth and low-latency communications.
Abstract: Ad-hoc wireless networking presents challenges that are different from those of tethered networks in several significant ways. In addition to high error rates and constantly varying channels, mobile communication imposes new constraints, including limited energy supplies and the need for portability. A system for wireless networking utilizing code division multiple access (CDMA) in conjunction with spread spectrum (SS) modulation is presented. By combining SS, automatic power control, and local coordination, a "collisionless," energy- and spectrum-efficient system can be created which is capable of simultaneously providing high-bandwidth and low-latency communications. A new routing method, minimum consumed energy routing, is evaluated. This new method is shown to reduce latency by 75%, reduce power consumption by 15%, and avoid congestion, in comparison with minimum transmitted energy routing. A simulator, SSNetSim, was developed to simulate the performance of these networks. By taking into account factors such as station placement, traffic patterns, routing strategies, and path loss, network performance, in terms of SNR, throughput, latency, and power consumption, is computed.
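The distinction between the two routing methods is that minimum transmitted energy counts only transmit energy per hop, while minimum consumed energy also charges the receiver's energy, which discourages routes made of many short hops. A minimal sketch of the consumed-energy variant as a shortest-path search, assuming a simple additive energy model (the link structure, the constant `rx_energy`, and the function names are illustrative, not from the paper):

```python
import heapq

def min_consumed_energy_route(links, src, dst, rx_energy):
    """links: {node: [(neighbor, tx_energy), ...]}.
    Dijkstra over consumed energy = transmit + receive per hop."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, n = heapq.heappop(pq)
        if n == dst:
            return d
        if d > dist.get(n, float("inf")):
            continue  # stale queue entry
        for m, tx in links.get(n, []):
            nd = d + tx + rx_energy  # receiver energy charged on every hop
            if nd < dist.get(m, float("inf")):
                dist[m] = nd
                heapq.heappush(pq, (nd, m))
    return float("inf")
```

Dropping the `rx_energy` term recovers minimum transmitted energy routing, which is why the two methods can choose very different paths.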

223 citations


Journal ArticleDOI
03 Dec 1998-Nature
TL;DR: It has been proposed that the perceived position of a moving object is extrapolated forwards in time to compensate for the delay in visual processing.
Abstract: The time it takes to transmit information along the human visual pathways introduces a substantial delay in the processing of images that fall on the retina. This visual latency might be expected to cause a moving object to be perceived at a position behind its actual one, disrupting the accuracy of visually guided motor actions such as catching or hitting, but this does not happen. It has been proposed that the perceived position of a moving object is extrapolated forwards in time to compensate for the delay in visual processing [1,2,3].

214 citations


Patent
Craig A. Link1, Hoon Im1
23 Apr 1998
TL;DR: In this article, a method and system for determining network latency between clients in a computer network, such as in a gaming zone environment, is presented, where a first client places first time information such as a timestamp into a (ping) data packet and sends the packet to a second client, who places second time information into the packet and sends it back to the first client as a response packet.
Abstract: A method and system for determining network latency between clients in a computer network, such as in a gaming zone environment. Each client determines the network latency between each other client via a ping, response, and response-response protocol. To this end, a first client places first time information such as a timestamp into a (ping) data packet and sends the packet to the second client, who places second time information into the packet, and sends the packet as a response packet back to the first client. The first client determines a first network latency based on its current time and the first time information returned in the response packet. The first client then sends the packet back to the second client as a response to the response packet. The second client determines a second latency based on the current time information at the second client and the second time information received in the response-response packet. For multiple clients such as in a gaming zone environment, each local client sorts the IP addresses of the other remote clients into sets of clients, and pings the remote client or clients in each set once per predetermined period, thereby distributing the pinging operation to balance incoming and outgoing network traffic.
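A minimal sketch of the ping/response/response-response exchange described above; the packet layout and the assumption that one-way latency is half the measured round trip are illustrative, not spelled out in this summary:

```python
import time

def make_ping():
    # first client stamps t1 and sends the packet to the second client
    return {"t1": time.monotonic()}

def make_response(pkt):
    # second client adds t2 and returns the packet as a response
    pkt["t2"] = time.monotonic()
    return pkt

def first_client_latency(pkt):
    # on receiving the response: round trip measured against t1,
    # halved to estimate one-way latency (an assumed symmetry)
    return (time.monotonic() - pkt["t1"]) / 2

def second_client_latency(pkt):
    # on receiving the response-response: same computation against t2
    return (time.monotonic() - pkt["t2"]) / 2
```

The response-response step is what lets both sides obtain an estimate from a single three-packet exchange instead of two independent ping/response pairs.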

206 citations


Proceedings ArticleDOI
11 Jun 1998
TL;DR: A family of semi-dynamic and dynamic edge-triggered flip-flops, to be used with static and dynamic circuits respectively, is described; the circuits are used in the UltraSPARC-III microprocessor.
Abstract: Describes a family of semi-dynamic and dynamic edge-triggered flip-flops to be used with static and dynamic circuits, respectively. The flip-flops provide both short latency and the capability of incorporating logic functions with minimum delay penalty, properties which make them very attractive for high-performance microprocessor design. The circuits described are used in the UltraSPARC-III microprocessor.

192 citations


Book ChapterDOI
TL;DR: Although VZV is a member of the α-herpesvirus family, it appears that its program of latency is unique with respect to HSV and BHV-1.
Abstract: Herpesviruses have been identified that infect nearly all groups of vertebrates. This chapter focuses on the latency of α-herpesviruses. Several mammalian viruses belong to this group: equine herpes virus 1 (EHV-1), pseudorabies virus (PRV), bovine herpes virus 1 (BHV-1), herpes simplex virus type 1 (HSV-1), herpes simplex virus type 2 (HSV-2), and varicella zoster virus (VZV). Although most latency studies have been performed using HSV-1, significant contributions have been made using the animal viruses, and thus studies related to BHV-1 are included in this chapter. In general, it is believed that sensory neurons within ganglia are the primary site of latency. In latently infected sensory neurons, the only abundant viral gene product that is transcribed is LAT (latency-associated transcript; HSV-1 or HSV-2) or LRT (latency-related transcript; BHV-1). Consequently, it has been hypothesized that LAT or LRT regulates some aspect of latency. Although VZV is a member of the α-herpesvirus family, it appears that its program of latency is unique with respect to HSV and BHV-1. VZV is present in many sensory ganglia throughout the body and the central nervous system.

186 citations



Proceedings ArticleDOI
01 Nov 1998
TL;DR: The authors' measurements reveal that to produce IPC values within 8% of the ideal memory system, between 1% and 62% of loads need to be satisfied within a single cycle and that up to 84% can be satisfied in as many as 32 cycles, depending on the benchmark and processor configuration.
Abstract: This paper provides quantitative measurements of load latency tolerance in a dynamically scheduled processor. To determine the latency tolerance of each memory load operation, our simulations use flexible load completion policies instead of a fixed memory hierarchy that dictates the latency. Although our policies delay load completion as long as possible, they produce performance (instructions committed per cycle (IPC)) comparable to an ideal memory system where all loads complete in one cycle. Our measurements reveal that to produce IPC values within 8% of the ideal memory system, between 1% and 62% of loads need to be satisfied within a single cycle and that up to 84% can be satisfied in as many as 32 cycles, depending on the benchmark and processor configuration. Load latency tolerance is largely determined by whether an unpredictable branch is in the load's data dependence graph and the depth of the dependence graph. Our results also show that up to 36% of all loads miss in the level one cache yet have latency demands lower than second level cache access times. We also show that up to 37% of loads hit in the level one cache even though they possess enough latency tolerance to be satisfied by lower levels of the memory hierarchy.

92 citations


Patent
22 Dec 1998
TL;DR: In this paper, the authors present a system and method for managing multiple frame buffers, which reduces the risk of dropped frames by estimating a latency of a frame that is yet to be rendered and determining whether the latency is greater than a target latency.
Abstract: A system and method for managing multiple frame buffers. The system includes multiple frame buffers, and thus reduces the risk of dropped frames. The system controls and bounds render-to-display latency, and provides an application friendly and effective interface to the frame buffers. The system operates by estimating a latency of a frame that is yet to be rendered. The system determines whether the latency is greater than a target latency. If the latency is greater than the target latency, then the system blocks the application that is responsible for rendering the frame before rendering of the frame commences. As a result, render-to-display latency is bounded to the target latency. The system addresses the naming issue by providing the application with access to only the front buffer and the back buffer. In particular, the present system maintains a queue of one or more frame buffers. The newest frame buffer appended to the queue is considered to be the front buffer. The oldest frame buffer in the queue is displayed. A frame buffer not in the queue is considered to be the back buffer. Rendering is enabled to the back buffer. Once rendering to the back buffer is complete, the back buffer is appended to the queue and becomes the new front buffer.
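The queue discipline described above is compact enough to sketch: buffers not in the queue are back-buffer candidates, the newest queued buffer is the front buffer, the oldest is on display, and the application is held back whenever the estimated render-to-display latency would exceed the target. The interface and the latency estimator below are assumptions, not the patent's actual API (blocking is simulated by forcing display steps):

```python
from collections import deque

class FrameQueue:
    def __init__(self, buffers, target_latency_s, frame_time_s):
        self.free = list(buffers)     # back-buffer candidates
        self.queue = deque()          # oldest entry is being displayed
        self.target = target_latency_s
        self.frame_time = frame_time_s

    def estimated_latency(self):
        # a new frame waits behind everything already queued (assumed model)
        return len(self.queue) * self.frame_time

    def acquire_back_buffer(self):
        # stands in for blocking the app before rendering commences
        while self.estimated_latency() > self.target:
            self.display_next()
        return self.free.pop()

    def submit(self, buf):
        self.queue.append(buf)        # rendered buffer becomes the new front

    def display_next(self):
        if self.queue:
            self.free.append(self.queue.popleft())
```

Because the wait happens before rendering starts rather than after, render-to-display latency is bounded by the target instead of growing with queue depth.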

91 citations


Journal ArticleDOI
TL;DR: Following infection, herpes simplex virus establishes latency in the nervous system and recurrences of lytic replication occur periodically.

59 citations


Patent
12 Mar 1998
TL;DR: In this paper, "fill" requests from any one of the multiprocessor CPUs are mapped such that the information requested is acquired through the crossbar switch from the same memory module to which the "victim" data in that CPU's cache must be rewritten.
Abstract: Data coherency in a multiprocessor system is improved and data latency minimized through the use of data mapping: "fill" requests from any one of the multiprocessor CPUs are mapped such that the information requested is acquired through the crossbar switch from the same memory module to which the "victim" data in that CPU's cache must be rewritten. With such an arrangement, rewrite latency periods for victim data within the crossbar switch are minimized and the "ships crossing in the night" problem is avoided.
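One way to guarantee that a fill and its victim writeback target the same memory module is to draw the module-select bits from inside the cache set-index field, so any two addresses that conflict in a cache set necessarily live in the same module. A hedged sketch of that address-mapping idea; the bit widths and interleaving scheme are assumptions, not taken from the patent:

```python
BLOCK = 64        # cache block size in bytes (assumed)
SETS = 4096       # number of cache sets (assumed)
MODULES = 8       # memory modules behind the crossbar (assumed)

def set_index(addr):
    return (addr // BLOCK) % SETS

def module_of(addr):
    # module bits are the low bits of the block address, a subset of the
    # set-index bits: same cache set therefore implies same memory module
    return (addr // BLOCK) % MODULES

# a fill that evicts a victim from the same set targets the same module
a, b = 0x10000, 0x10000 + SETS * BLOCK    # two addresses in one cache set
assert set_index(a) == set_index(b) and module_of(a) == module_of(b)
```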

56 citations


Patent
02 Mar 1998
TL;DR: In this article, a digital programmable delay which provides a series of pulses that are programmed in both pulse latency and trigger latency to control the operation of a memory module test system is presented.
Abstract: A digital programmable delay which provides a series of pulses that are programmed in both pulse latency and trigger latency to control the operation of a memory module test system.

Journal ArticleDOI
TL;DR: This work develops maximum likelihood and least squares estimators of stimulus response latency and presents a comparison of the performance of these methods with estimators commonly used in the neuroscience literature.

Patent
14 Dec 1998
TL;DR: A software agent is a functional part of a user-interactive software application running on a data processing system, which creates a user-perceptible effect in order to mask latency present in delivery of data to the user as discussed by the authors.
Abstract: A software agent is a functional part of a user-interactive software application running on a data processing system. The agent creates a user-perceptible effect in order to mask latency present in delivery of data to the user. The agent creates the effect employing cinematographic techniques.

Proceedings ArticleDOI
01 Jun 1998
TL;DR: An abstract pipeline model is developed that reveals a crucial performance tradeoff involving the effects of the overhead of the bottleneck stage and the bandwidth of the remaining stages and is exploited to develop a suite of fragmentation algorithms designed to minimize message latency.
Abstract: In this paper, we study how to minimize the latency of a message through a network that consists of a number of store-and-forward stages. This research is especially relevant for today's low overhead communication systems that employ dedicated processing elements for protocol processing. We develop an abstract pipeline model that reveals a crucial performance tradeoff involving the effects of the overhead of the bottleneck stage and the bandwidth of the remaining stages. We exploit this tradeoff to develop a suite of fragmentation algorithms designed to minimize message latency. We also provide an experimental methodology that enables the construction of customized pipeline algorithms that can adapt to the specific system characteristics and application workloads. By applying this methodology to the Myrinet-GAM system, we have improved its latency by up to 51%. Our theoretical framework is also applicable to pipelined systems beyond the context of high speed networks.
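The tradeoff the pipeline model exposes: more fragments let stages overlap (helping bandwidth-limited stages) but pay the bottleneck stage's per-fragment overhead more often. A rough sketch of that model, assuming the standard store-and-forward pipeline formula; the stage parameters and search bound are illustrative, not the paper's calibrated values:

```python
def pipeline_latency(msg_bytes, fragments, stages):
    """stages: list of (per_fragment_overhead_sec, bandwidth_bytes_per_sec)."""
    frag = msg_bytes / fragments
    per_stage = [o + frag / b for o, b in stages]
    # first fragment crosses every stage; the rest stream out behind
    # the slowest (bottleneck) stage
    return sum(per_stage) + (fragments - 1) * max(per_stage)

def best_fragment_count(msg_bytes, stages, max_frag=64):
    return min(range(1, max_frag + 1),
               key=lambda n: pipeline_latency(msg_bytes, n, stages))

# e.g. a 3-stage path whose middle stage has high per-fragment overhead
stages = [(5e-6, 100e6), (20e-6, 400e6), (5e-6, 100e6)]
print(best_fragment_count(64 * 1024, stages))
```

Too few fragments waste overlap; too many multiply the bottleneck overhead, so the minimum sits in between, exactly the balance the fragmentation algorithms exploit.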

Journal ArticleDOI
TL;DR: The results demonstrate the differential motor latency of the radial nerve to be a sensitive electrodiagnostic tool in patients with radial tunnel syndrome.
Abstract: A modification of the standard electrodiagnostic test was developed in an effort to provide a more sensitive electrodiagnostic evaluation in radial tunnel syndrome. Radial motor nerve latency recordings were obtained in 3 different forearm positions: neutral, passive supination, and passive pronation. The maximal difference in these recordings, the differential latency, in 25 patients with radial tunnel syndrome of greater than 6 months duration (test group) was compared with those in 25 asymptomatic volunteers (control group). Differential latency recordings were obtained in all patients in the test group before and after surgery. Radial nerves that were compressed demonstrated a significantly greater differential latency (0.44±0.12 ms) versus controls (0.12±0.008 ms). Following radial nerve decompression, differential motor latencies in the test group decreased below control values, demonstrating a resolution of the provoked electrical response with a postoperative differential latency of 0.07±0.05 ms. Our results demonstrate the differential motor latency of the radial nerve to be a sensitive electrodiagnostic tool in patients with radial tunnel syndrome. A differential latency of ≥0.30 ms was considered indicative of radial tunnel syndrome.
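Since the differential latency is the maximal difference among the three forearm-position recordings, the decision rule reduces to a range check against the 0.30 ms cutoff. A small sketch (the function name and sample values are illustrative):

```python
def differential_latency(neutral_ms, supination_ms, pronation_ms):
    # maximal difference among the three recordings, in milliseconds
    recordings = [neutral_ms, supination_ms, pronation_ms]
    return max(recordings) - min(recordings)

# per the study, >= 0.30 ms was considered indicative of radial tunnel syndrome
diff = differential_latency(2.10, 2.54, 2.16)
print(diff, diff >= 0.30)   # 0.44 True
```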

Proceedings ArticleDOI
Byung Kook Kim1
27 Oct 1998
TL;DR: The author reveals that the control performance depends not only on the control period but also on the feedback latency (the delay from sensing, through computation, to actuation), which is shown to have the greater impact.
Abstract: A new task-scheduling algorithm with feedback latency is suggested for real-time control systems, which considers both points of view: control-theoretic and real-time computing. Building a real-time control system has two steps in general. In the controller design stage, a control performance index is defined and a controller is designed which optimizes the given performance index while maintaining stability and rejecting disturbances. In the implementation stage, a set of controllers constitutes multiple control tasks, which are scheduled to run on microprocessors and should be schedulable with limited computing resources. The author reveals that the control performance depends not only on the control period but also on the feedback latency (the delay from sensing, through computation, to actuation), which is shown to have the greater impact. We formulate a new task-scheduling problem with a suitable control performance index that includes the feedback latency. An iterative search algorithm based on a feedback latency computation method is suggested. An illustrative example demonstrates the applicability of the proposed method.
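To see why feedback latency can matter more than the period, one can simulate a sampled controller whose command reaches the actuator only after the latency has elapsed, and integrate a quadratic performance index. A toy sketch under assumed plant and gain values; none of these numbers come from the paper:

```python
def control_cost(period, latency, horizon=5.0, dt=1e-3):
    a, b, k = 1.0, 1.0, 3.0   # plant x' = a*x + b*u, control law u = -k*x
    x, u, cost = 1.0, 0.0, 0.0
    t, next_sample = 0.0, 0.0
    pending, act_time = None, None
    while t < horizon:
        if t >= next_sample:          # sample the state once per period
            pending = -k * x
            act_time = t + latency    # actuation waits out the latency
            next_sample += period
        if act_time is not None and t >= act_time:
            u = pending               # command finally reaches the actuator
            act_time = None
        x += (a * x + b * u) * dt     # integrate the plant
        cost += x * x * dt            # quadratic performance index
        t += dt
    return cost

print(control_cost(0.05, 0.00))   # baseline: no feedback latency
print(control_cost(0.05, 0.04))   # same period, added feedback latency
```

Comparing the two printed costs shows the degradation caused by latency alone, with the sampling period held fixed.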

01 Jan 1998
TL;DR: This research is the first to model the execution of processing graphs with the real-time RBE model, and appears to be the first to identify and quantify inherent latency in processing graphs.
Abstract: Complex digital signal processing systems are commonly developed using directed graphs called processing graphs. Processing graphs are large grain dataflow graphs in which nodes represent processing functions and graph edges depict the flow of data from one node to the next. When sufficient data arrives, a node executes its function from start to finish without synchronization with other nodes, and appends data to the edge connecting it to a consumer node. We combine software engineering techniques with real-time scheduling theory to solve the problem of transforming a processing graph into a predictable real-time system in which latency can be managed. For signal processing graphs, real-time execution means processing signal samples as they arrive without losing data. Latency is defined as the time between when a sample of sensor data is produced and when the graph outputs the processed signal. We study a processing graph method, called PGM, developed by the U.S. Navy for embedded signal processing applications. We present formulae for computing node execution rates, techniques for mapping nodes to tasks in the rate-based-execution (RBE) task model, and conditions to verify the schedulability of the resulting task set under a rate-based, earliest-deadline-first scheduling algorithm. Furthermore, we prove upper and lower bounds for the total latency any sample will encounter in the system. We show that there are two sources of latency in real-time systems created from processing graphs: inherent and imposed latency. Inherent latency is the latency defined by the dataflow attributes and topology of the processing graph. Imposed latency is the latency imposed by the scheduling and execution of nodes in the graph. We demonstrate our synthesis method and the management of latency using three applications from the literature and industry: a synthetic aperture radar application, an INMARSAT mobile satellite receiver application, and an acoustic signal processing application from the ALFS anti-submarine warfare system. This research is the first to model the execution of processing graphs with the real-time RBE model, and appears to be the first to identify and quantify inherent latency in processing graphs.
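For dataflow graphs of this kind, a node's steady-state execution rate follows from its producer's rate and the produce/consume amounts on the connecting edge. A simplified sketch of that rate propagation; the PGM/RBE formulae also account for thresholds and deadlines, which are omitted here:

```python
def node_rates(edges, source_rates):
    """edges: (producer, consumer, produce, consume) tuples.
    source_rates: {node: executions_per_sec} for the graph inputs."""
    rates = dict(source_rates)
    changed = True
    while changed:
        changed = False
        for prod, cons, produce, consume in edges:
            if prod in rates and cons not in rates:
                # consumer fires once per `consume` tokens received
                rates[cons] = rates[prod] * produce / consume
                changed = True
    return rates

# a sensor at 1000 samples/s feeding a 4-into-1 filter, then a 2-into-1 sink
edges = [("sensor", "filter", 1, 4), ("filter", "sink", 1, 2)]
print(node_rates(edges, {"sensor": 1000.0}))  # filter: 250/s, sink: 125/s
```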

Proceedings Article
01 Jan 1998
TL;DR: Performance measurements for some current operating systems, including NT4, Windows95, and Irix 6.4 are presented and it is found that NT4 and Windows95 suffer from both process scheduling delays and high audio output latency.
Abstract: Operating systems are often the limiting factor in creating low-latency interactive computer music systems. Real-time music applications require operating system support for memory management, process scheduling, media I/O, and general development, including debugging. We present performance measurements for some current operating systems, including NT4, Windows95, and Irix 6.4. While Irix was found to give rather good real-time performance, NT4 and Windows95 suffer from both process scheduling delays and high audio output latency. The addition of WDM Streaming to NT and Windows offers some promise of lower latency, but WDM Streaming may actually make performance worse by circumventing priority-based scheduling.

Proceedings ArticleDOI
13 Jul 1998
TL;DR: This work explores the balance of data ports in the cache memory hierarchy, and the effects of load and store aliasing in wide superscalar machines.
Abstract: Load execution latency is dependent on memory access latency, pipeline depth, and data dependencies. Through load effective address prediction, both data dependencies and deep pipeline effects can potentially be removed from the overall execution time. If a load effective address is correctly predicted, the data cache can be speculatively accessed prior to execution, thus effectively reducing the latency of load execution. A hybrid load effective address prediction technique is proposed, using three basic predictors: Last Address Predictor (LAP), Stride Predictor (SP), and Global Dynamic Predictor (GDP). In addition to improving load address prediction accuracy, this work explores the balance of data ports in the cache memory hierarchy, and the effects of load and store aliasing in wide superscalar machines. Results: Using a realistic hybrid load address predictor, load address prediction rates range from 32% to 77% averaging 51% for SPECint95 and 60% to 96% averaging 87% for SPECfp95. For a wide superscalar machine with a significant number of execution resources, this prediction rate increases IPC by 12% and 19% for SPECint95 and SPECfp95, respectively. It is also shown that load/store aliasing decreases the average IPC by 33% for SPECint95 and 24% for SPECfp95.
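Of the three component predictors, the stride predictor (SP) is the simplest to sketch: a table indexed by load PC holds the last effective address and the last observed stride, and predicts last address plus stride. A minimal sketch (the table organization is simplified, and the hybrid selection logic and the LAP/GDP components are omitted):

```python
class StridePredictor:
    def __init__(self):
        self.table = {}               # pc -> (last_addr, stride)

    def predict(self, pc):
        entry = self.table.get(pc)
        if entry is None:
            return None               # no history for this load yet
        last_addr, stride = entry
        return last_addr + stride     # speculative effective address

    def update(self, pc, actual_addr):
        last_addr, _ = self.table.get(pc, (actual_addr, 0))
        self.table[pc] = (actual_addr, actual_addr - last_addr)

sp = StridePredictor()
for addr in (0x1000, 0x1008, 0x1010):   # a stride-8 load in a loop
    print(sp.predict(0x400), hex(addr))
    sp.update(0x400, addr)
# after two updates, predict(0x400) returns 0x1018
```

A correct prediction lets the cache be probed before the add completes; a wrong one costs a recovery, which is why the hybrid scheme gates each component by its accuracy.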

Journal ArticleDOI
TL;DR: In this paper, engineering constraints that may be encountered when implementing interactive virtual acoustic displays are examined, and the impact that system parameters such as the update rate and total system latency may have on perception is discussed.
Abstract: Engineering constraints that may be encountered when implementing interactive virtual acoustic displays are examined. In particular, system parameters such as the update rate and total system latency are defined and the impact they may have on perception is discussed. For example, examination of the head motions that listeners used to aid localization in a previous study suggests that some head motions may be as fast as about 175°/s for short time periods. Analysis of latencies in virtual acoustic environments (VAEs) suggests that: (1) commonly specified parameters such as the audio update rate determine only the best-case latency possible in a VAE, (2) total system latency and individual latencies of system components, including head-trackers, are frequently not measured by VAE developers, and (3) typical system latencies may result in undersampling of relative listener-source motion of 175°/s as well as positional instability in the simulated source. To clearly specify the dynamic performance of a particular VAE, users and developers need to measure average system latency, update rate, and their variability using standardized rendering scenarios. Psychoacoustic parameters such as the minimum audible movement angle can then be used as target guidelines to assess whether a given system meets perceptual requirements.
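The undersampling concern is simple arithmetic: at a given update rate, motion at 175°/s sweeps a fixed angle between successive updates, and that step can exceed the minimum audible movement angle. A quick illustration (the 20 Hz and 60 Hz rates are examples, not values from the paper):

```python
def angle_per_update(update_rate_hz, velocity_deg_per_s=175.0):
    # angular displacement of relative listener-source motion per update
    return velocity_deg_per_s / update_rate_hz

print(angle_per_update(20))   # 8.75 degrees between updates
print(angle_per_update(60))   # ~2.9 degrees between updates
```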

Proceedings ArticleDOI
16 Apr 1998
TL;DR: A new technique used in the UltraSPARC III microprocessor, Sum-Addressed Memory (SAM), which performs true addition using the decoder of the RAM array, with very low latency is introduced, and other methods for reducing the add part of load latency are compared.
Abstract: Load latency contributes significantly to execution time. Because most cache accesses hit, cache-hit latency becomes an important component of expected load latency. Most modern microprocessors have base+offset addressing loads; thus effective cache-hit latency includes an addition as well as the RAM access. This paper introduces a new technique used in the UltraSPARC III microprocessor, Sum-Addressed Memory (SAM), which performs true addition using the decoder of the RAM array, with very low latency. We compare SAM with other methods for reducing the add part of load latency. These methods include sum-prediction with recovery, and bitwise indexing with duplicate-tolerance. The results demonstrate the superior performance of SAM.
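The enabling trick behind sum-addressed decoding is that "does base + offset equal this wordline's index?" can be answered without a carry-propagate addition: derive the carry each bit position would need for the sum to equal the candidate index, then check it against the carry those operand bits would actually generate. A sketch of that carry-free equality test, a well-known formulation; the UltraSPARC III circuit details are not reproduced here:

```python
def sam_match(base, offset, line, width=12):
    """True iff (base + offset) mod 2**width == line, computed without
    a carry-propagate adder; each wordline can check this in parallel."""
    mask = (1 << width) - 1
    need = (base ^ offset ^ line) & mask   # carries the sum would require
    gen = (((base & offset) | ((base ^ offset) & need)) << 1) & mask
    return need == gen                     # carry chain self-consistent?

# every wordline tests its own index; only the true sum matches
base, offset = 0x3A7, 0x05C
assert sam_match(base, offset, (base + offset) & 0xFFF)
assert not sam_match(base, offset, ((base + offset) + 1) & 0xFFF)
```

Because each wordline's test is constant-depth and independent, the decoder effectively performs the addition as part of the decode, which is where the latency saving comes from.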

Patent
Dean A. Klein1
14 Oct 1998
TL;DR: In this article, a method for controlling data transfer operations between a main memory and other devices in a computer system is described, where each of the latency identification values corresponds with a maximum time interval in which to service the respective data transfer request.
Abstract: A method is described for controlling data transfer operations between a main memory and other devices in a computer system. Data transfer request signals and associated latency identification values are received. Each of the latency identification values corresponds with a maximum time interval in which to service the respective data transfer request. The latency identification values are periodically modified and compared to indicate the current highest priority request. In the event that service of a particular requested data transfer operation must be provided imminently, priority override functionality is provided. In this way, those devices having particular latency requirements can be provided with timely access to the main memory, and need not have separately dedicated memory or buffers.
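The selection logic amounts to aging: each pending request carries a remaining-latency allowance that shrinks as time passes, the smallest allowance wins arbitration, and an allowance that runs out triggers the priority override. A software analogy of that behavior; the patent describes counters and comparators in hardware, and the names here are illustrative:

```python
def arbitrate(requests, elapsed):
    """requests: {req_id: remaining_latency_allowance_in_cycles}.
    Returns the request to service next, or None if idle."""
    if not requests:
        return None
    for req in requests:
        requests[req] -= elapsed          # periodic modification: age everyone
    overdue = [r for r, left in requests.items() if left <= 0]
    if overdue:
        return overdue[0]                 # priority override: due imminently
    return min(requests, key=requests.get)  # tightest deadline wins

queue = {"isochronous_audio": 8, "disk_dma": 120, "nic_rx": 40}
print(arbitrate(queue, elapsed=4))        # isochronous_audio
```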

Journal ArticleDOI
TL;DR: Functional impairment of the cortico-striato-pallido-thalamo-cortical pathways from vascular disease, implicated in late-life depressive disorders, may explain not only deficits in initiation and errors in perseveration but also longer P300 latency in depressed elderly patients.
Abstract: OBJECTIVE: The purpose of this study was to determine if P300 latency is prolonged in geriatric depression and if longer P300 latency and deficits in initiation and errors of perseveration in depressed elderly patients are related to risk factors for vascular disease. METHOD: Geriatric patients with unipolar depression (N=43) and elderly comparison subjects (N=24) were assessed for depressive symptoms, cognitive functions, risk factors for vascular disease, and P300 latency. RESULTS: Depressed elderly patients had longer P300 latency than normal elderly subjects. In the depressed patients, P300 latency was related to deficits in initiation and errors in perseveration. Risk factors for vascular disease were associated not only with P300 latency but also with deficits in initiation and errors in perseveration. CONCLUSIONS: Functional impairment of the cortico-striato-pallido-thalamo-cortical pathways from vascular disease, implicated in late-life depressive disorders, may explain not only deficits in initiation and errors in perseveration but also longer P300 latency in depressed elderly patients.

Proceedings ArticleDOI
31 Jan 1998
TL;DR: This paper compares shared memory with and without prefetching, message passing with interrupts and with polling, and bulk transfer via DMA on the MIT Alewife multiprocessor to gain insight into the relative performance of communication mechanisms as bisection bandwidth and network latency vary.
Abstract: The goal of this paper is to gain insight into the relative performance of communication mechanisms as bisection bandwidth and network latency vary. We compare shared memory with and without prefetching, message passing with interrupts and with polling, and bulk transfer via DMA. We present two sets of experiments involving four irregular applications on the MIT Alewife multiprocessor. First, we introduce I/O cross-traffic to vary bisection bandwidth. Second, we change processor clock speeds to vary relative network latency. We establish a framework from which to understand a range of results. On Alewife, shared memory provides good performance, even on producer-consumer applications with little data-reuse. On machines with lower bisection bandwidth and higher network latency, however, message-passing mechanisms become important. In particular, the high communication volume of shared memory threatens to become difficult to support on future machines without expensive, high-dimensional networks. Furthermore, the round-trip nature of shared memory may not be able to tolerate the latencies of future networks.

Patent
24 Jul 1998
TL;DR: In this paper, a two-stage address generation method using pipelining was proposed to avoid one level of latency in certain address-generation situations, such as simple address generation and dependent address generation.
Abstract: The present invention is an apparatus and method for two-stage address generation that uses pipelining to avoid one level of latency in certain address-generation situations. The first stage of the present invention contains redundant three-level hardware that performs pre-add logic on 32-bit or 16-bit operands. The pre-add logic circuit for 32-bit operands comprises three carry-save adders. For 16-bit operands, the pre-add logic circuit comprises a four-port three-level 16-bit adder. The second stage comprises a three-logic-level adder that adds two operands. The method of the present invention avoids one level of latency for simple address generation, although both stages are always utilized. For complex address generation, both latency cycles are required. Regarding dependent address generation, the present invention provides a single-cycle-latency bypass datapath that also avoids one level of latency.
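A carry-save adder takes three operands and produces a sum word and a carry word with no carry propagation across bit positions, which is what makes it fast enough for pre-add logic. A minimal sketch of one level, plus the final carry-propagate add that the second stage performs; bit widths and staging are simplified relative to the patent:

```python
def carry_save_add(a, b, c):
    # per bit: sum = a^b^c, carry = majority(a, b, c), shifted into place
    return a ^ b ^ c, ((a & b) | (b & c) | (a & c)) << 1

def add3(a, b, c):
    s, carry = carry_save_add(a, b, c)   # stage 1: no carry propagation
    return s + carry                     # stage 2: one carry-propagate add

assert add3(5, 3, 6) == 14   # matches 5 + 3 + 6
```

Reducing three operands to two in constant depth is why the slow carry-propagate step happens only once, in the second stage.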

Journal ArticleDOI
TL;DR: This quadratic regression simplifies the application of P300 latency across the life-span in the management of disorders affecting cognition, such as Traumatic Brain Injury, Attention Deficit-Hyperactivity Disorder, and Obstructive Sleep Apnea.
Abstract: The use of P300 latency to demonstrate cognitive dysfunction is important. P300 latency decreases with age in children and then increases with age in adults. It has been debated whether the relationship between age and P300 latency is linear or quadratic. If the relationship is linear, then at least two regression equations in opposite directions are required for children and for adults, and perhaps a third for the elderly. This is a report of data from an age-stratified sample of 97 normal individuals ages 5 through 85. The best regression equation is quadratic, using log transformed age, with accurate projection of 95% confidence limits for P300 latency by age. This quadratic regression simplifies the application of P300 latency across the life-span in the management of disorders affecting cognition, such as Traumatic Brain Injury, Attention Deficit-Hyperactivity Disorder, and Obstructive Sleep Apnea.
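The reported model is a single quadratic in log-transformed age, replacing piecewise linear fits for children, adults, and the elderly. A sketch of fitting and applying such a curve; the log base is assumed to be 10, and the paper's fitted coefficients and confidence-limit table are not reproduced here:

```python
import numpy as np

def fit_p300_model(ages_years, latencies_ms):
    x = np.log10(ages_years)
    coeffs = np.polyfit(x, latencies_ms, 2)   # quadratic in log(age)
    return np.poly1d(coeffs)

def predict_latency(model, age_years):
    return model(np.log10(age_years))

# usage: model = fit_p300_model(sample_ages, sample_latencies)
#        predict_latency(model, 40.0)   # expected P300 latency at age 40
```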

Journal Article
TL;DR: It is shown that at least a quarter of the total elapsed time is spent on establishing TCP connections with HTTP/1.0, and that a single stand-alone squid proxy cache does not always reduce response time for the authors' workloads.
Abstract: It is critical to understand WWW latency in order to design better HTTP protocols. In this study we characterize Web response time and examine the effects of proxy caching, network bandwidth, traffic load, persistent connections for a page, and periodicity. Based on studies with four workloads, we show that at least a quarter of the total elapsed time is spent on establishing TCP connections with HTTP/1.0. The distributions of connection time and elapsed time can be modeled using Pearson, Weibull, or Log-logistic distributions. Response times display strong daily and weekly patterns. We also characterize the effect of a user's network bandwidth on response time. Average connection time from a client via a 33.6K modem is two times longer than that from a client via switched Ethernet. We estimate the elapsed time savings from using persistent connections for a page to vary from about a quarter to a half. This study finds that a proxy caching server is sensitive to traffic loads. Contrary to the typical thought about Web proxy caching, this study also finds that a single stand-alone squid proxy cache does not always reduce response time for our workloads. Implications of these results to future versions of the HTTP protocol and to Web application design are discussed.
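The connection-time versus elapsed-time split is straightforward to reproduce: time the TCP handshake separately from the full request. A small sketch using an HTTP/1.0 GET (host and path are placeholders):

```python
import socket
import time

def time_http10_get(host, path="/", port=80):
    t0 = time.perf_counter()
    sock = socket.create_connection((host, port))   # TCP handshake
    t_connect = time.perf_counter() - t0
    sock.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    while sock.recv(4096):                          # drain until server closes
        pass
    sock.close()
    t_elapsed = time.perf_counter() - t0
    return t_connect, t_elapsed

# t_connect / t_elapsed is the fraction HTTP/1.0 pays per object fetched,
# the overhead persistent connections were introduced to amortize
```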

Dissertation
22 Apr 1998
TL;DR: This study characterize Web response time and examines the effects of proxy caching, network bandwidth, traffic load, persistent connections for a page, and periodicity, finding that a proxy caching server is sensitive to traffic loads.
Abstract: It is critical to understand WWW latency in order to design better HTTP protocols. In this study we characterize Web response time and examine the effects of proxy caching, network bandwidth, traffic load, persistent connections for a page, and periodicity. Based on studies with four workloads, we show that at least a quarter of the total elapsed time is spent on establishing TCP connections with HTTP/1.0. The distributions of connection time and elapsed time can be modeled using Pearson, Weibull, or Log-logistic distributions. We also characterize the effect of a user's network bandwidth on response time. Average connection time from a client via a 33.6K modem is two times longer than that from a client via switched Ethernet. We estimate the elapsed time savings from using persistent connections for a page to vary from about a quarter to a half. Response times display strong daily and weekly patterns. This study finds that a proxy caching server is sensitive to traffic loads. Contrary to the typical thought about Web proxy caching, this study also finds that a single stand-alone squid proxy cache does not always reduce response time for our workloads. Implications of these results to future versions of the HTTP protocol and to Web application design also are discussed.

Patent
Dean A. Klein1
14 Oct 1998
TL;DR: In this paper, an apparatus for controlling data transfer operations between a main memory and other devices in a computer system is described, where a memory controller receives data transfer request signals and associated latency identification values, each corresponding with a maximum time interval in which to service the respective data transfer requests.
Abstract: An apparatus is described for controlling data transfer operations between a main memory and other devices in a computer system. A memory controller receives data transfer request signals and associated latency identification values, each corresponding with a maximum time interval in which to service the respective data transfer requests. The latency identification values are periodically modified and compared to indicate the current highest priority request. In the event that service of a particular requested data transfer operation must be provided imminently, priority override circuitry is provided. In this way, those devices having particular latency requirements can be provided with timely access to the main memory, and need not have separately dedicated memory or buffers.

Patent
17 Apr 1998
TL;DR: In this paper, a PCI bus system comprising an initiator and a target, wherein data is transferred from the target via a PCI-bus in response to access from the initiator, a time interval period required from access to data transfer is stored as latency information in the target.
Abstract: In a PCI bus system comprising an initiator and a target, wherein data is transferred from the target via a PCI bus in response to access from the initiator, the time interval required from access to data transfer is stored as latency information in the target. The latency information is transferred from the target to the initiator in response to access requests from the initiator. The initiator determines the next access timing from the relevant latency information. Thereby, the PCI bus occupation time due to repeated access requests can be shortened.