
Showing papers on "Latency (engineering)" published in 1997


Proceedings ArticleDOI
01 Oct 1997
TL;DR: This study investigates a novel multicast technique, called Skyscraper Broadcasting (SB), for video-on-demand applications; SB achieves the low latency of Pyramid Broadcasting (PB) while using only 20% of the buffer space required by Permutation-Based Pyramid Broadcasting (PPB).
Abstract: We investigate a novel multicast technique, called Skyscraper Broadcasting (SB), for video-on-demand applications. We discuss the data fragmentation technique, the broadcasting strategy, and the client design. We also show the correctness of our technique, and derive mathematical equations to analyze its storage requirement. To assess its performance, we compare it to the latest designs known as Pyramid Broadcasting (PB) and Permutation-Based Pyramid Broadcasting (PPB). Our study indicates that PB offers excellent access latency. However, it requires very large storage space and disk bandwidth at the receiving end. PPB is able to address these problems. However, this is accomplished at the expense of a larger access latency and more complex synchronization. With SB, we are able to achieve the low latency of PB while using only 20% of the buffer space required by PPB.
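To make the storage/latency trade-off concrete, here is a minimal sketch of the general segmented periodic-broadcast idea behind PB/PPB/SB-style schemes; the segment sizes below are hypothetical placeholders, not SB's actual fragmentation series, and the helper names are our own.

```python
# Toy model of segmented periodic broadcasting (pyramid/skyscraper-style).
# Segment sizes are hypothetical; the actual SB series is defined in the paper.

def startup_latency(segment_lengths, channel_rate=1.0):
    """Worst-case wait before playback can begin.

    Each segment loops on its own channel whose bandwidth is `channel_rate`
    times the playback rate, so a client waits at most one broadcast period
    of the *first* segment.
    """
    return segment_lengths[0] / channel_rate

def server_bandwidth(segment_lengths, channel_rate=1.0):
    """One dedicated channel per segment, expressed in playback-rate units."""
    return len(segment_lengths) * channel_rate

# Example: a 120-minute video split into geometrically growing segments.
segments = [2, 4, 8, 16, 32, 58]   # minutes of video per segment
print(startup_latency(segments), "minutes worst-case startup delay")
print(server_bandwidth(segments), "x playback rate of server bandwidth")
```

In this simplified model, startup latency is bounded by the first segment's broadcast period, so such schemes keep the first segment small; roughly speaking, the differences between PB, PPB, and SB lie in how the later segments grow and how much the client must buffer.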

581 citations


Patent
01 Apr 1997
TL;DR: In this paper, a system is provided for transmitting data over a high latency communication link, where a data packet is transmitted from a first device to a second device over a low-latency communication link.
Abstract: A system is provided for transmitting data over a high latency communication link. The system transmits a data packet from a first device to a second device over a low latency communication link. The second device acknowledges receipt of the data packet to the first device over the low latency communication link. The second device then transmits the data packet over the high latency communication link. The high latency communication link may be a satellite communication link. The low latency communication link may be a Transmission Control Protocol/Internet Protocol (TCP/IP) communication link. The system may also acknowledge receipt of the data packet before completing transmission of the data packet over the high latency communication link. The system is also capable of transmitting data over a high bandwidth communication link or an asymmetrical communication link.
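A minimal sketch of the forwarding pattern the abstract describes, under the assumption of a simple send/receive link API (the class and method names are hypothetical, not taken from the patent): the second device acknowledges each packet over the low latency link before relaying it over the high latency satellite link.

```python
# Sketch of split acknowledgment over mixed-latency links (hypothetical API).
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int
    payload: bytes

class SecondDevice:
    def __init__(self, low_latency_link, high_latency_link):
        self.low = low_latency_link     # e.g. a local TCP/IP link
        self.high = high_latency_link   # e.g. a satellite uplink

    def receive(self, pkt: Packet) -> None:
        # Acknowledge immediately over the fast link so the first device
        # is not stalled by the satellite round trip.
        self.low.send(("ACK", pkt.seq))
        # Relay the packet over the slow, high-latency link.
        self.high.send(pkt)

class LoopbackLink:
    """Stand-in link that just records what was sent."""
    def __init__(self): self.sent = []
    def send(self, msg): self.sent.append(msg)

low, high = LoopbackLink(), LoopbackLink()
dev2 = SecondDevice(low, high)
dev2.receive(Packet(seq=1, payload=b"hello"))
print(low.sent)   # [('ACK', 1)]  -- ack returned before satellite delivery
print(high.sent)  # [Packet(seq=1, payload=b'hello')]
```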

106 citations


Patent
01 May 1997
TL;DR: In this paper, a system and method for communication of information using channels of different latency combine a high-latency communication channel with a low latency communication channel to reduce the communication delay perceived by a user.
Abstract: A system and method for communication of information using channels of different latency combine a high latency communication channel with a low latency communication channel to reduce the communication delay perceived by a user. The system and method includes separating information into first and second components based on a parameter correlated to the perceived delay, communicating the first component via a first channel, communicating the second component via a second channel having a communication delay greater than the first channel, and generating a representation of the information based on the first component. The second component of the information may be used to augment or modify the information represented by the first component. In one embodiment, the invention is applied to video teleconferencing where voice information and basis image information is transmitted via the low latency channel while background and other non-real-time information is communicated via the high latency communication channel.
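A rough sketch of the separation step, assuming a per-component "delay sensitivity" score stands in for the parameter correlated to perceived delay (the field names and scores are illustrative only):

```python
# Sketch of splitting information across channels of different latency.
# Field names and the delay-sensitivity scores are hypothetical.

DELAY_SENSITIVITY = {          # higher = more perceptible if delayed
    "voice": 1.0,
    "base_image": 0.8,
    "background": 0.2,
    "document_attachment": 0.1,
}

def split_components(frame: dict, threshold: float = 0.5):
    """Partition a frame by how strongly each part affects perceived delay."""
    low = {k: v for k, v in frame.items() if DELAY_SENSITIVITY.get(k, 0) >= threshold}
    high = {k: v for k, v in frame.items() if DELAY_SENSITIVITY.get(k, 0) < threshold}
    return low, high

frame = {"voice": b"...", "base_image": b"...", "background": b"..."}
low_channel_data, high_channel_data = split_components(frame)
# low_channel_data goes over the fast channel and is rendered immediately;
# high_channel_data arrives later and augments the initial representation.
print(sorted(low_channel_data), sorted(high_channel_data))
```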

54 citations


Proceedings ArticleDOI
01 Nov 1997
TL;DR: BubbleUp significantly reduces the initial latency for new requests, as well as for fast-scan requests, and it may even provide better throughput than mechanisms based on elevator disk scheduling.
Abstract: Interactive multimedia applications require fast response time. Traditional disk scheduling schemes can incur high latencies, and caching data in memory to reduce latency is usually not feasible, especially if fast-scans need to be supported. In this study we propose a disk-based solution called BubbleUp. It significantly reduces the initial latency for new requests, as well as for fast-scan requests. The throughput of the scheme is comparable to that of traditional schemes, and it may even provide better throughput than mechanisms based on elevator disk scheduling. BubbleUp incurs a slight disk storage overhead, but we argue that through effective allocation, this cost can be minimized.

51 citations


Patent
Jared L. Zerbe1
18 Jul 1997
TL;DR: In this paper, the authors presented a receiver circuit having both a source-follower pair of MOS transistors and a source-coupled pair of MOS transistors, with their connecting node coupled to a sense amplifier for fast amplification of the low-swing input to full-rail CMOS.
Abstract: The present invention achieves the stated input receiver goals by merging many of the different functions required into a single unit instead of serializing them in the more traditional fashion. The present invention provides a receiver circuit having both a source-follower pair of MOS transistors, and a source-coupled pair of MOS transistors. The connecting node between these two pairs is coupled to a sense amplifier. The simultaneous use of the source-follower pair, the source-coupled pair and the sense-amplifier transistors allows for fast amplification of the low-swing input to full-rail CMOS, when triggered by a CMOS input clock.

50 citations


Patent
29 Aug 1997
TL;DR: In this article, a system for fast, efficient and reliable communication of object state information among a group of processes combines the use of a fast, but lossy and thus unreliable communications channel to the group of processes and a server coupled with the group for providing data which has been lost in the multicasting.
Abstract: A system for fast, efficient and reliable communication of object state information among a group of processes combines the use of a fast, but lossy and thus unreliable communications channel to the group of processes and a server coupled to the group for providing data which has been lost in the multicasting. In one embodiment, a central server supports reliability and rapid joining while using UDP multicast messaging to achieve rapid interaction and low bandwidth. Differential messages are sent over the lossy channel to compactly describe how to compute the new state of an object from any of several previous states. Such a description can be interpreted even if some number of prior descriptions were not received, greatly reducing the need for explicit, round-trip message repairs while also conserving bandwidth. In one embodiment, the central server communicates with each member of the group over a reliable channel to robustly detect and repair objects affected by lost messages.
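One plausible way to realize such loss-tolerant differential messages is a sliding window of recently changed fields, so a receiver that missed at most WINDOW-1 prior messages can still apply the latest one. The sketch below illustrates that idea with hypothetical names; it is not the patent's actual wire format.

```python
# Sketch of loss-tolerant differential state updates (hypothetical scheme):
# each message carries every field changed within the last WINDOW updates,
# so a receiver that missed up to WINDOW-1 messages can still apply it.
from collections import deque

WINDOW = 4

class Sender:
    def __init__(self):
        self.state, self.seq = {}, 0
        self.recent = deque(maxlen=WINDOW)   # fields touched per update

    def update(self, changes: dict) -> dict:
        self.seq += 1
        self.state.update(changes)
        self.recent.append(set(changes))
        fields = set().union(*self.recent)
        return {"seq": self.seq,
                "delta": {f: self.state[f] for f in fields}}

class Receiver:
    def __init__(self):
        self.state, self.seq = {}, 0

    def apply(self, msg) -> bool:
        if msg["seq"] - self.seq > WINDOW:
            return False          # gap too large: fall back to the repair server
        self.state.update(msg["delta"])
        self.seq = msg["seq"]
        return True

s, r = Sender(), Receiver()
m1 = s.update({"x": 1})
m2 = s.update({"y": 2})          # lost on the multicast channel
m3 = s.update({"x": 3})
r.apply(m1); r.apply(m3)         # m2 never arrives
print(r.state)                   # {'x': 3, 'y': 2} -- state still consistent
```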

47 citations


Journal ArticleDOI
28 Feb 1997
TL;DR: The preliminary performance measures obtained on GAMMA show how competitive such a cheap NOW is; its Active Messages layer supplies virtualization of the network interface close enough to the raw hardware to guarantee good performance.
Abstract: The cost of high-performance parallel platforms prevents parallel processing techniques from spreading in present applications. Networks of Workstations (NOW) exploiting off-the-shelf communication hardware, high-end PCs and standard communication software provide much cheaper but poorly performing parallel platforms. In our NOW prototype called GAMMA (Genoa Active Message MAchine) every node is a PC running a Linux operating system kernel enhanced with efficient communication mechanisms based on the Active Message paradigm. Active Messages supply virtualization of the network interface close enough to the raw hardware to guarantee good performance. The preliminary performance measures obtained by GAMMA show how competitive such a cheap NOW is.
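The Active Message paradigm mentioned above can be illustrated with a small sketch (this shows the general idea only, not GAMMA's Linux kernel interface): each message names a handler that the receiving node invokes directly on arrival, avoiding intermediate buffering and scheduling.

```python
# Sketch of the Active Message idea (not GAMMA's real kernel API):
# a message carries a handler index plus arguments, and the receiving
# node invokes that handler directly when the message arrives.

HANDLERS = {}

def handler(idx):
    """Register a function under a small integer handler index."""
    def register(fn):
        HANDLERS[idx] = fn
        return fn
    return register

@handler(0)
def deposit_word(dest: list, offset: int, value: int) -> None:
    dest[offset] = value          # e.g. write into a receive buffer

def deliver(message):
    """What the network layer does on arrival: run the named handler."""
    idx, args = message
    HANDLERS[idx](*args)

receive_buffer = [0] * 8
deliver((0, (receive_buffer, 3, 42)))    # "send" an active message locally
print(receive_buffer)                    # [0, 0, 0, 42, 0, 0, 0, 0]
```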

16 citations


16 Jul 1997
TL;DR: Performance measurements have shown that this implementation of replicated distributed objects in asynchronous environments prone to node failures and network partitions incurs low latency and achieves high throughput while providing globally consistent replicated state machine semantics.
Abstract: This paper presents an implementation of replicated distributed objects in asynchronous environments prone to node failures and network partitions. This implementation has several appealing properties: It guarantees that progress will be made whenever a majority of replicas can communicate with each other; it allows minority partitions to continue providing service for idempotent requests; it offers the application the choice between optimistic or safe message delivery. Performance measurements have shown that our implementation incurs low latency and achieves high throughput while providing globally consistent replicated state machine semantics. The paper discusses both the protocols and interfaces to support efficient object replication at the application level.
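A minimal sketch of the majority-progress rule and the idempotent-request exception described above (the function names and replica count are assumptions, not the paper's interfaces):

```python
# Sketch of partition handling for a replicated object (hypothetical API).

TOTAL_REPLICAS = 5

def can_update(reachable_replicas: int) -> bool:
    """State-changing requests make progress only when a majority of
    replicas can communicate with each other."""
    return reachable_replicas > TOTAL_REPLICAS // 2

def can_serve(reachable_replicas: int, idempotent: bool) -> bool:
    """Minority partitions may still answer idempotent requests,
    since repeating them later cannot corrupt the replicated state."""
    return idempotent or can_update(reachable_replicas)

print(can_serve(2, idempotent=True))    # True  -- idempotent request in a minority
print(can_serve(2, idempotent=False))   # False -- update must wait for a majority
print(can_serve(3, idempotent=False))   # True  -- 3 of 5 is a majority
```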

12 citations


Proceedings ArticleDOI
19 Mar 1997
TL;DR: To close the gap between off-the-shelf microprocessors and the communication system a highly sophisticated processor interface implements atomic start of communication, MMU support, and a flexible event scheduling scheme.
Abstract: Fast and efficient communication is one of the major design goals not only for parallel systems but also for clusters of workstations. The proposed model of the high performance communication device ATOLL features very low latency for the start of communication operations and reduces the software overhead for communication specific functions. To close the gap between off-the-shelf microprocessors and the communication system a highly sophisticated processor interface implements atomic start of communication, MMU support, and a flexible event scheduling scheme. The interconnectivity of ATOLL provided by four independent network ports combined with cut-through routing allows the configuration of a large variety of network topologies. A software transparent error correction mechanism significantly reduces the required protocol overhead. The presented simulation results promise high performance and low-latency communication.

11 citations


Patent
30 May 1997
TL;DR: In this paper, the authors proposed a communication system for communicating low latency data in a fading channel environment using a data structure having data frames modulated on a subcarrier signal and on a commercial radio channel bandwidth.
Abstract: A communication system for communicating low latency data in a fading channel environment using a data structure having data frames modulated on a subcarrier signal and on a commercial radio channel bandwidth. Low latency data is updated relatively rapidly at the low latency data generator. Because of this constant changing of low latency data at the point of generation in the transmitter end, the present invention communicates low latency data from the transmitter end to the receiver end of the communication system with relatively low delay. This communication system encodes and decodes low latency data using a single block encoding/decoding scheme. Such an encoding/decoding scheme introduces relatively less delay during the encoding/decoding processes while ensuring satisfactory data integrity for data communication in a fading channel environment. Using such an encoding/decoding scheme allows more recently generated low latency data to be transmitted at the transmitter end, reduces the time required for the decoding process, and permits a simpler decoder structure at the receiver end.

8 citations


Proceedings ArticleDOI
05 Aug 1997
TL;DR: The PDSS network interface provides a low-latency interface between the network and the processing nodes that allows unprivileged code to initiate network operations while maintaining a high level of protection.
Abstract: The Packaging-Driven Scalable Systems multicomputer (PDSS) project uses several innovative interconnect and routing techniques to construct a low-latency, high-bandwidth (1.3 GB/s) multicomputer network. The PDSS network interface provides a low-latency interface between the network and the processing nodes that allows unprivileged code to initiate network operations while maintaining a high level of protection. The interface design exploits processor-bus cache coherence protocols to deliver very-low-latency cache-to-cache communications between processing nodes. Network operations include a variety of transfers of cache-line-sized packets, including remote read and write, and a distributed barrier-synchronization mechanism. Despite performance-limiting flaws, the initial single-chip implementation of the network router and interface achieves gigabit/s bandwidth and microsecond cache-to-cache latencies between nodes using commodity processor and memory components.
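The distributed barrier-synchronization mechanism mentioned above can be illustrated with a generic sense-reversing barrier sketch; this conveys the semantics only and says nothing about PDSS's cache-coherence-based hardware implementation.

```python
# Generic sense-reversing barrier sketch (semantics only; not PDSS hardware).
import threading

class Barrier:
    def __init__(self, n: int):
        self.n = n                      # number of participants
        self.count = 0                  # arrivals in the current phase
        self.sense = False              # flips each time the barrier opens
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            arrival_sense = self.sense
            self.count += 1
            if self.count == self.n:    # last arrival releases everyone
                self.count = 0
                self.sense = not self.sense
                self.cond.notify_all()
            else:
                while self.sense == arrival_sense:
                    self.cond.wait()

barrier = Barrier(4)

def worker(i):
    # ... per-node compute phase would go here ...
    barrier.wait()                      # nobody proceeds until all have arrived
    print(f"worker {i} past barrier")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```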

Journal ArticleDOI
TL;DR: This work examines how network interfaces that provide users with very low-latency access to the memory of remote machines affect performance.
Abstract: Recent technological advances have produced network interfaces that provide users with very low-latency access to the memory of remote machines. We examine the impact of such networks on the implem...

Proceedings ArticleDOI
11 Aug 1997
TL;DR: The one way protected message passing latency on the SNOW prototype for a 64-byte message is about 9 μs.
Abstract: This paper describes the implementation of a low latency protected message passing facility and a low latency barrier synchronization mechanism for an experimental, tightly-coupled network of workstations called SNOW. SNOW uses multiprocessing SPARC 20s, running Solaris 2.4, as computing nodes, and uses semi-custom network interface cards (NICs) that connect these nodes in a 212 Mbits/sec. unidirectional ring. The NICs include field-programmable gate array logic devices that allow for experimentation with the nature and level of hardware support for tight coupling. The one way protected message passing latency on the SNOW prototype for a 64-byte message is about 9 μs, comparable to latencies of low-end to medium range multiprocessors.
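One-way latencies such as the 9 μs figure above are often estimated from ping-pong round trips; the sketch below shows that generic measurement pattern over a local socket pair, which is only an assumption about methodology and not the SNOW NIC path.

```python
# Generic ping-pong latency estimate (methodology assumption, not the SNOW path):
# one-way latency is commonly reported as half the measured round-trip time.
import socket
import time

def pingpong(iters=1000, size=64):
    a, b = socket.socketpair()          # local stand-in for the network
    msg = b"x" * size
    for _ in range(10):                 # warm-up round trips
        a.sendall(msg); b.sendall(b.recv(size)); a.recv(size)
    start = time.perf_counter()
    for _ in range(iters):
        a.sendall(msg)                  # ping
        b.sendall(b.recv(size))         # echo (pong)
        a.recv(size)
    elapsed = time.perf_counter() - start
    a.close(); b.close()
    return elapsed / iters / 2          # estimated one-way latency in seconds

print(f"estimated one-way latency: {pingpong() * 1e6:.1f} microseconds (64-byte messages)")
```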

Proceedings ArticleDOI
04 Apr 1997
TL;DR: In this article, the authors discuss the design of a possible opto-electronic implementation of the SOME-Bus along with an optical power budget analysis, as well as a low cost novel means of interconnecting 10 to 120 processors.
Abstract: Low latency, high bandwidth interconnecting networks that directly link arbitrary pairs of processing elements without contention are very desirable for parallel computers. The simultaneous optical multiprocessor exchange bus (SOME-Bus) based on a fiber optic interconnect is such a network. The SOME-Bus provides a dedicated channel for each processor for data output and thus eliminates global arbitration. Each processor can receive data simultaneously from all other processors in the system using an array of receivers. The architecture allows for simultaneous multicast and broadcast messages using several processors with zero setup time and no global scheduling. In this paper, we discuss the design of a possible opto-electronic implementation of the SOME-Bus along with an optical power budget analysis. Slant Bragg fiber gratings arranged to couple light out of a fiber ribbon cable into an array of amorphous silicon detectors vertically integrated on silicon are presented as a low cost novel means of interconnecting 10 to 120 processors.
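A toy model of the contention-free property described above (a software simplification, not the optical design): because each node writes only to its own dedicated channel and every node monitors all channels through its receiver array, simultaneous sends and broadcasts never require arbitration.

```python
# Toy model of the SOME-Bus receiver-array idea (a simplification):
# node i owns output channel i; every node has one receiver per channel,
# so simultaneous sends and broadcasts never contend for a shared bus.

N = 8   # number of processors

# inbox[receiver][channel] holds messages arriving on that channel.
inbox = [[[] for _ in range(N)] for _ in range(N)]

def broadcast(sender: int, payload) -> None:
    """A sender writes only to its own channel; all receivers see it at once."""
    for receiver in range(N):
        inbox[receiver][sender].append(payload)

# All processors broadcast in the same cycle -- no arbitration is needed
# because no two senders ever share an output channel.
for p in range(N):
    broadcast(p, f"state of node {p}")

print(len(inbox[0]))            # 8 channels monitored by node 0
print(inbox[0][3])              # ['state of node 3'] received on channel 3
```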


Proceedings ArticleDOI
22 Sep 1997
TL;DR: In this paper, an all-optical regenerative memory with variable storage threshold and amplitude restoration was demonstrated, which should be scalable to high data rates approaching 100 Gbit/s and to low latency memory stores using integrated TOAD/SLALOM nonlinear optical switching devices.
Abstract: We demonstrate an all-optical memory which can discriminate between input data pulses of differing amplitude. Only optical pulses with amplitudes above a predetermined threshold are stored and their amplitudes are equalised. We have demonstrated an all-optical regenerative memory with variable storage threshold and amplitude restoration. This functionality should be scalable to high data rates approaching 100 Gbit/s and to low latency memory stores using integrated TOAD/SLALOM nonlinear optical switching devices.
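The thresholding and amplitude-equalisation behaviour described above can be mimicked numerically; the sketch below is a toy model of the function, not of the TOAD/SLALOM device physics, and the threshold value is arbitrary.

```python
# Toy numeric model of regenerative storage with a decision threshold:
# pulses below the threshold are discarded, pulses above it are stored
# with their amplitude restored to a fixed level (a stand-in for the
# optical gate; not a model of the nonlinear switching devices).

THRESHOLD = 0.5      # arbitrary decision level
RESTORED = 1.0       # amplitude every stored pulse is equalised to

def regenerate(pulse_amplitudes):
    return [RESTORED if a >= THRESHOLD else 0.0 for a in pulse_amplitudes]

noisy_input = [0.9, 0.3, 0.7, 0.1, 1.2, 0.55]
print(regenerate(noisy_input))   # [1.0, 0.0, 1.0, 0.0, 1.0, 1.0]
```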

Journal ArticleDOI
TL;DR: This work presents the fault tolerance features of a multiprocessor system called SPAX (Scalable Parallel Architecture based on X-bar network), which is designed to eliminate potential single points of failure and has been implemented at ETRI.