
Showing papers on "Transmission delay published in 2001"


Proceedings ArticleDOI
22 Apr 2001
TL;DR: A capacity estimation methodology, implemented in a tool called pathrate, is presented; the effect of cross traffic on the dispersion of long packet trains is derived, showing that the ADR is not the available bandwidth in the path, as was assumed in previous work.
Abstract: The packet pair technique estimates the capacity of a path (bottleneck bandwidth) from the dispersion (spacing) experienced by two back-to-back packets. We demonstrate that the dispersion of packet pairs in loaded paths follows a multimodal distribution, and discuss the queueing effects that cause the multiple modes. We show that the path capacity is often not the global mode, and so it cannot be estimated using standard statistical procedures. The effect of the size of the probing packets is also investigated, showing that the conventional wisdom of using maximum sized packet pairs is not optimal. We then study the dispersion of long packet trains. Increasing the length of the packet train reduces the measurement variance, but the estimates converge to a value, referred to as the asymptotic dispersion rate (ADR), that is lower than the capacity. We derive the effect of the cross traffic in the dispersion of long packet trains, showing that the ADR is not the available bandwidth in the path, as was assumed in previous work. Putting all the pieces together, we present a capacity estimation methodology that has been implemented in a tool called pathrate.
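The core conversion behind packet-pair probing is C = L/Δ: packet size over measured dispersion. The minimal Python sketch below (all numbers and names are invented for illustration, not pathrate's actual algorithm) shows why taking the global mode of the dispersion distribution can return a cross-traffic mode rather than the capacity mode on a loaded path:

```python
from collections import Counter

def capacity_from_dispersions(dispersions_s, packet_size_bits, bin_width_s=1e-4):
    """Estimate path capacity from packet-pair dispersions.

    Simplified pathrate-style idea: bin the measured dispersions, take
    the global mode, and convert it to bandwidth as C = L / delta.
    On loaded paths this can pick a cross-traffic mode instead of the
    capacity mode, which is exactly the problem the paper analyzes.
    """
    bins = Counter(round(d / bin_width_s) for d in dispersions_s)
    mode_bin, _ = max(bins.items(), key=lambda kv: kv[1])
    return packet_size_bits / (mode_bin * bin_width_s)

# Two back-to-back 1500-byte packets on a 10 Mb/s bottleneck with no
# cross traffic give a dispersion of L/C = 12000/10e6 = 1.2 ms.
clean = [12000 / 10e6] * 80
# Cross traffic queued between the pair stretches some dispersions,
# creating an extra, larger mode at a lower apparent bandwidth.
loaded = clean + [2.4e-3] * 120
print(capacity_from_dispersions(clean, 12000))   # ~10 Mb/s
print(capacity_from_dispersions(loaded, 12000))  # global mode now misleads
```

On the `loaded` trace the global mode yields roughly half the true capacity, which is why the paper argues against standard mode-picking statistics.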

610 citations


Patent
04 May 2001
TL;DR: In this article, a system and method for facilitating packet transformation of multi-protocol, multi-flow, streaming data is presented, where packet portions subject to change are temporarily stored and acted upon through processing of protocol-dependent instructions, resulting in a protocol-dependent modification of the temporarily stored packet information.
Abstract: A system and method for facilitating packet transformation of multi-protocol, multi-flow, streaming data. Packet portions subject to change are temporarily stored, and acted upon through processing of protocol-dependent instructions, resulting in a protocol-dependent modification of the temporarily stored packet information. Validity tags are associated with different segments of the temporarily-stored packet, where the state of each tag determines whether its corresponding packet segment will form part of the resulting modified packet. Only those packet segments identified as being part of the resulting modified packet are reassembled prior to dispatch of the packet.

378 citations


Proceedings ArticleDOI
22 Apr 2001
TL;DR: A lazy online algorithm that varies transmission times according to backlog and is shown to be more energy efficient than a deterministic schedule that guarantees stability for the same range of arrival rates is devised.
Abstract: The paper considers the problem of minimizing the energy used to transmit packets over a wireless link via lazy schedules that judiciously vary packet transmission times. The problem is motivated by the following key observation: in many channel coding schemes, the energy required to transmit a packet can be significantly reduced by lowering the transmission power and transmitting the packet over a longer period of time. However, information is often time-critical or delay-sensitive and transmission times cannot be made arbitrarily long. We therefore consider packet transmission schedules that minimize energy subject to a deadline or a delay constraint. Specifically, we obtain an optimal offline schedule for a node operating under a deadline constraint. An inspection of the form of this schedule naturally leads us to an online schedule which is shown, through simulations, to be energy-efficient. Finally, we relax the deadline constraint and provide an exact probabilistic analysis of our offline scheduling algorithm. We then devise a lazy online algorithm that varies transmission times according to backlog and show that it is more energy efficient than a deterministic schedule that guarantees stability for the same range of arrival rates.
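The key observation, that transmission energy is convex and decreasing in transmission time, can be checked numerically. The sketch below uses a generic AWGN-style energy model (an assumption for illustration, not the paper's exact expressions) to show that, for packets all available at time 0 with a common deadline, an equal-duration schedule beats a skewed one:

```python
def packet_energy(tau, bits=1000.0, bandwidth=1e4, n0=1.0):
    """Energy to send `bits` in time `tau` over an AWGN-style channel.

    From C = W*log2(1 + P/(N0*W)): the required power grows
    exponentially as tau shrinks, so energy w(tau) = P*tau is convex
    and decreasing in tau.  (Illustrative model only.)
    """
    rate = bits / tau
    power = n0 * bandwidth * (2 ** (rate / bandwidth) - 1)
    return power * tau

def schedule_energy(durations):
    return sum(packet_energy(t) for t in durations)

# With all M packets available at time 0 and a common deadline T,
# convexity of w() makes the equal-split schedule optimal offline.
M, T = 5, 1.0
equal = [T / M] * M
skewed = [0.4, 0.2, 0.15, 0.15, 0.1]
print(schedule_energy(equal) < schedule_energy(skewed))  # True
```

This is the convexity argument behind "lazy" scheduling: stretching the shortest transmissions saves more energy than compressing the longest ones costs.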

371 citations


Proceedings ArticleDOI
06 Aug 2001
TL;DR: A new technique, called modulation scaling, is introduced, which exhibits benefits similar to those of voltage scaling and allows us to trade off energy against transmission delay and as such introduces the notion of energy awareness in communications.
Abstract: In systems that require low energy consumption, voltage scaling is an invaluable circuit technique. It also offers energy awareness, trading off energy and performance. In wireless handheld devices, the communication portion of the system is a major power hog. We introduce a new technique, called modulation scaling, which exhibits benefits similar to those of voltage scaling. It allows us to trade off energy against transmission delay and as such introduces the notion of energy awareness in communications. Throughout our discussion, we emphasize the analogy with voltage scaling. As an example application, we present an energy aware wireless packet scheduling system.
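The modulation-scaling tradeoff can be illustrated with the common approximation that, at a fixed symbol rate and target error rate, energy per bit grows roughly as (2^b - 1)/b with constellation size b while packet delay shrinks as 1/b. All constants below are placeholders, not values from the paper:

```python
def qam_tradeoff(bits_per_symbol, packet_bits=8000, symbol_rate=1e6, c=1.0):
    """Energy/delay point for b-bit QAM at a fixed symbol rate.

    Common approximation: for a fixed error-rate target, energy per
    bit scales roughly as (2**b - 1)/b, while packet delay is
    B/(b*Rs).  Constants are folded into c; illustrative only.
    """
    b = bits_per_symbol
    energy_per_bit = c * (2 ** b - 1) / b
    delay_s = packet_bits / (b * symbol_rate)
    return energy_per_bit, delay_s

for b in (2, 4, 6, 8):
    e, d = qam_tradeoff(b)
    print(f"b={b}: energy/bit={e:.1f}, delay={d*1e3:.2f} ms")
```

Scaling b down trades delay for energy, exactly analogous to scaling supply voltage down to trade clock speed for energy.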

283 citations


Patent
15 May 2001
TL;DR: In this article, a packet interceptor/processor is coupled with the network so as to be able to intercept and process packets flowing over the network and provides external connectivity to other devices that wish to intercept packets as well.
Abstract: An apparatus and method for enhancing the infrastructure of a network such as the Internet is disclosed. A packet interceptor/processor apparatus is coupled with the network so as to be able to intercept and process packets flowing over the network. Further, the apparatus provides external connectivity to other devices that wish to intercept packets as well. The apparatus applies one or more rules to the intercepted packets which execute one or more functions on a dynamically specified portion of the packet and take one or more actions with the packets. The apparatus is capable of analyzing any portion of the packet including the header and payload. Actions include releasing the packet unmodified, deleting the packet, modifying the packet, logging/storing information about the packet or forwarding the packet to an external device for subsequent processing. Further, the rules may be dynamically modified by the external devices.
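A toy version of the rule pipeline can be sketched in a few lines; the field offsets, rule encoding, and action names below are invented for the demo and are not taken from the patent:

```python
def apply_rules(packet, rules):
    """Run each rule's predicate over a dynamically chosen slice of the
    packet; the first matching rule's action decides the packet's fate.

    A rule is (offset, length, predicate, action); an action returns
    'drop' or a (possibly modified) packet.  Unmatched packets are
    released unmodified.
    """
    for offset, length, predicate, action in rules:
        field = packet[offset:offset + length]
        if predicate(field):
            return action(packet)
    return packet  # no rule matched: release unmodified

# Example rules: drop packets whose byte 4 is 0xFF, and rewrite a
# destination port of 80 (bytes 2-3 here, purely for illustration).
rules = [
    (4, 1, lambda f: f == b"\xff", lambda p: "drop"),
    (2, 2, lambda f: f == (80).to_bytes(2, "big"),
     lambda p: p[:2] + (8080).to_bytes(2, "big") + p[4:]),
]
print(apply_rules(b"\x00\x00\x00\x50\xff", rules))  # dropped
print(apply_rules(b"\x00\x00\x00\x50\x01", rules))  # port rewritten
```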

237 citations


Journal ArticleDOI
TL;DR: This paper analyzes the behavior of packet-switched communication networks in which packets arrive dynamically at the nodes and are routed in discrete time steps across the edges, and provides the first examples of a protocol that is stable for all networks, and a protocol that is not stable for all networks.
Abstract: In this paper, we analyze the behavior of packet-switched communication networks in which packets arrive dynamically at the nodes and are routed in discrete time steps across the edges. We focus on a basic adversarial model of packet arrival and path determination for which the time-averaged arrival rate of packets requiring the use of any edge is limited to be less than 1. This model can reflect the behavior of connection-oriented networks with transient connections (such as ATM networks) as well as connectionless networks (such as the Internet). We concentrate on greedy (also known as work-conserving) contention-resolution protocols. A crucial issue that arises in such a setting is that of stability: will the number of packets in the system remain bounded as the system runs for an arbitrarily long period of time? We study the universal stability of networks (i.e., stability under all greedy protocols) and the universal stability of protocols (i.e., stability in all networks). Once the stability of a system is granted, we focus on the two main parameters that characterize its performance: the maximum queue size required and the maximum end-to-end delay experienced by any packet. Among other things, we show: (i) There exist simple greedy protocols that are stable for all networks. (ii) There exist other commonly used protocols (such as FIFO) and networks (such as arrays and hypercubes) that are not stable. (iii) The n-node ring is stable for all greedy routing protocols (with maximum queue size and packet delay that is linear in n). (iv) There exists a simple distributed randomized greedy protocol that is stable for all networks and requires only polynomial queue size and polynomial delay. Our results resolve several questions posed by Borodin et al., and provide the first examples of (i) a protocol that is stable for all networks, and (ii) a protocol that is not stable for all networks.
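Result (iii), stability of the ring under any greedy protocol, is easy to observe in simulation. The sketch below runs FIFO forwarding on a unidirectional ring with random injections at an average rate below 1 per edge and reports the largest queue seen; all parameters are arbitrary choices for the demo:

```python
import random
from collections import deque

def simulate_ring(n=8, rate=0.8, steps=2000, seed=1):
    """Greedy FIFO routing on a unidirectional n-node ring.

    Each time step, every edge forwards at most one packet (greedy:
    an edge never idles while its queue is nonempty), then a new
    packet is injected with probability `rate` at a random node with
    a random destination.  With per-edge load below 1, queues stay
    bounded, as the ring-stability result predicts.
    """
    rng = random.Random(seed)
    queues = [deque() for _ in range(n)]  # queue feeding edge i -> i+1
    max_queue = 0
    for _ in range(steps):
        arrivals = [[] for _ in range(n)]
        for i in range(n):
            if queues[i]:
                dest = queues[i].popleft()
                nxt = (i + 1) % n
                if nxt != dest:            # not yet at its destination
                    arrivals[nxt].append(dest)
        for i in range(n):
            queues[i].extend(arrivals[i])
        if rng.random() < rate:
            src, dest = rng.randrange(n), rng.randrange(n)
            if src != dest:
                queues[src].append(dest)
        max_queue = max(max_queue, max(len(q) for q in queues))
    return max_queue

print(simulate_ring())  # stays small relative to the number of steps
```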

208 citations


01 Jan 2001
TL;DR: In this article, the authors studied the stability of NCSs with time-varying transmission period under an ideal transmission process, i.e., no transmission delay or packet dropout, and derived sufficient conditions on the transmission period that guarantee the NCS will be stable.
Abstract: Feedback control systems wherein the control loops are closed through a real-time network are called networked control systems (NCSs). The defining feature of an NCS is that information (reference input, plant output, control input, etc.) is exchanged using a network among control system components (sensors, controller, actuator, etc.). The primary advantages of an NCS are reduced system wiring, ease of system diagnosis and maintenance, and increased system agility. The insertion of the communication network in the feedback control loop makes the analysis and design of an NCS complex. Conventional control theories with many ideal assumptions, such as equal-distance sensing, synchronized control, and non-delayed sensing and actuation, must be reevaluated before they can be applied to NCSs. We first study the stability of NCSs with time-varying transmission period. Under an ideal transmission process, i.e., no transmission delay or packet dropout, we have derived sufficient conditions on the transmission period that guarantee the NCS will be stable. We also discuss methods to numerically search for such bounds on the transmission period. Following this, we study network scheduling when a set of NCSs are connected to the network and arbitrating for network bandwidth. We first define the basic concepts of network scheduling in NCSs. Then, we apply the rate monotonic scheduling algorithm to schedule a set of NCSs. We also formulate the optimal scheduling problem under both rate-monotonic-schedulability constraints and NCS-stability constraints, and give an example of how such optimization is carried out. Next, the assumptions of ideal transmission are relaxed. We study the stability of NCSs with network-induced delay. We first model NCSs with network-induced delay, and analyze the stability of such NCSs using the notion of stability region and a hybrid systems stability analysis technique. We also discuss a scheme to compensate for network-induced delay based on the time-domain solution of the closed-loop system. Experiments over a physical network are also carried out to validate the scheme. Finally, we study the stability of NCSs with data packet dropout and multiple-packet transmissions. We model these scenarios as asynchronous dynamical systems (ADSs) and extend stability conditions of ADSs to these setups. We also come back to the problem of network scheduling in NCSs.

205 citations


Patent
23 Aug 2001
TL;DR: In this article, an inter-chip communication system includes a series of event detectors (3062, 3063, 3064) that detect changes in signal values and packet schedulers (3036, 3037, 3038) which can then schedule the transfer of these changed signal values to another designated chip.
Abstract: The inter-chip communication system transfers signals across chip boundaries only when the signals change value. Thus no signals are wasted and every event signal has a fair chance of achieving communication across the boundary. The system includes a series of event detectors (3062, 3063, 3064) that detect changes in signal values and packet schedulers (3036, 3037, 3038) which can then schedule the transfer of these changed signal values to another designated chip. Working with a plurality of signal groups that represents signals at the separated connections, the event detectors (3062, 3063, 3064) detect events or changes in the signal values. When an event or signal change has been detected, the event detector alerts the packet scheduler (3036, 3037, 3038). The packet scheduler employs a token ring scheme. When the scheduler receives a token and detects an event, the packet scheduler grabs the token and schedules the transmission of this packet in the next packet cycle. If, however, the packet scheduler receives the token but does not detect an event, it will pass the token to the next scheduler. At the end of each packet cycle, the packet scheduler that grabbed the token will pass the token to the next logic associated with another packet.

183 citations


Proceedings ArticleDOI
10 Dec 2001
TL;DR: This paper proposes a receiver-driven transport protocol to coordinate simultaneous transmissions of video from multiple senders to achieve higher throughput, and to increase tolerance to packet loss and delay due to network congestion.
Abstract: With the explosive growth of video applications over the Internet, many approaches have been proposed to stream video effectively over packet switched, best-effort networks. A number of these use techniques from source and channel coding, or implement transport protocols, or modify system architectures in order to deal with delay, loss, and time-varying nature of the Internet. In this paper, we propose a framework for streaming video from multiple senders simultaneously to a single receiver. The main motivation in doing so is to exploit path diversity in order to achieve higher throughput, and to increase tolerance to packet loss and delay due to network congestion. In this framework, we propose a receiver-driven transport protocol to coordinate simultaneous transmissions of video from multiple senders. Our protocol employs two algorithms: rate allocation and packet partition. The rate allocation algorithm determines sending rate for each sender to minimize the packet loss, while the packet partition algorithm minimizes the probability of packets arriving late. Using NS and actual Internet experiments, we demonstrate the effectiveness of our proposed distributed transport protocol in terms of the overall packet loss rate, and compare its performance against a naïve distributed protocol.

180 citations


Patent
21 Feb 2001
TL;DR: A group packet encapsulation and compression method is proposed in this article, where packets queued at a node configured in accordance with the present invention are classified, grouped, and encapsulated into a single packet as a function of having another such configured node in their path.
Abstract: A group packet encapsulation and (optionally) compression system and method, including an encapsulation protocol, increases packet transmission performance between two gateways or host computers by reducing data-link layer framing overhead, reducing packet routing overhead in gateways, compressing packet headers in the encapsulation packet, and increasing the loss-less data compression ratio beyond that otherwise achievable in typical systems. Packets queued at a node configured in accordance with the present invention are classified, grouped, and encapsulated into a single packet as a function of having another such configured node in their path. The nodes exchange encapsulation packets, even though the packets within the encapsulation packet may ultimately have different destinations. Compression within an encapsulation packet may be performed on headers, payloads, or both.

161 citations


Patent
20 Dec 2001
TL;DR: In this paper, a system and method are disclosed for transparently proxying a connection to a protected machine, which includes monitoring a communication packet on a network at a proxy machine.
Abstract: A system and method are disclosed for transparently proxying a connection to a protected machine. The method includes monitoring a communication packet on a network at a proxy machine. The communication packet has a communication packet source address, a communication packet source port number, a communication packet destination address, and a communication packet destination port number. The proxy determines whether to intercept the communication packet based on whether the communication packet destination address and the communication packet destination port number correspond to a protected destination address and a protected destination port number stored in a proxy list. The proxy then determines whether to proxy a proxied connection associated with the communication packet based on the communication packet source address and the communication packet source port number. A protected connection is terminated from the proxy machine to a protected machine. The protected machine corresponds to the communication packet destination address and the communication packet destination port number. A response is formed to the communication packet under a network protocol by sending a responsive packet from the proxy machine. The responsive packet has a header having a responsive packet source address and a responsive packet source port number such that the responsive packet source address and the responsive packet source port number are the same as the communication packet destination address and the communication packet destination port number.
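The intercept decision reduces to a lookup of the packet's destination pair in the proxy list; a minimal sketch with invented field names:

```python
def should_intercept(packet, proxy_list):
    """Intercept only when the packet's destination (address, port)
    matches a protected entry in the proxy list; other traffic passes
    through untouched.  (Field names are illustrative.)"""
    return (packet["dst_addr"], packet["dst_port"]) in proxy_list

proxy_list = {("10.0.0.5", 443), ("10.0.0.9", 80)}
pkt = {"src_addr": "198.51.100.7", "src_port": 51000,
       "dst_addr": "10.0.0.5", "dst_port": 443}
print(should_intercept(pkt, proxy_list))  # True
```

The transparency comes from the response step: the proxy answers with the protected machine's address and port as its source, so the client never sees the proxy's own identity.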

Journal ArticleDOI
TL;DR: In this article, the packet loss and delay performance of an arrayed-waveguide-grating-based (AWG) optical packet switch developed within the EPSRC-funded project WASPNET (wavelength switched packet network) is analyzed.
Abstract: This paper analyzes the packet loss and delay performance of an arrayed-waveguide-grating-based (AWG) optical packet switch developed within the EPSRC-funded project WASPNET (wavelength switched packet network). Two node designs are proposed based on feedback and feed-forward strategies, using sharing among multiple wavelengths to assist in contention resolution. The feedback configuration allows packet priority routing at the expense of using a larger AWG. An analytical framework has been established to compute the packet loss probability and delay under Bernoulli traffic, justified by simulation. A packet loss probability of less than 10/sup -9/ was obtained with a buffer depth per wavelength of 10 for a switch size of 16 inputs-outputs, four wavelengths per input at a uniform Bernoulli traffic load of 0.8 per wavelength. The mean delay is less than 0.5 timeslots at the same buffer depth per wavelength.

Proceedings ArticleDOI
22 Apr 2001
TL;DR: It is concluded that priority queuing is the most appropriate scheduling scheme for the handling of voice traffic, while preemption of non-voice packets is strongly recommended for sub-10 Mbit/s links.
Abstract: In the future, voice communication is expected to migrate from the public switched telephone network (PSTN) to the Internet. Because of the particular characteristics (low volume and burstiness) and stringent delay and loss requirements of voice traffic, it is important to separate voice traffic from other traffic in the network by providing it with a separate queue. In this study, we conduct a thorough assessment of voice delay in this context. We conclude that priority queuing is the most appropriate scheduling scheme for the handling of voice traffic, while preemption of non-voice packets is strongly recommended for sub-10 Mbit/s links. We also find that per-connection custom packetization is in most cases futile, i.e. one packet size allows a good compromise between an adequate end-to-end delay and an efficient bandwidth utilization for voice traffic.
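Strict priority queuing, the scheme the study recommends for voice, can be sketched in a few lines: the voice queue is always drained before the data queue, so a voice packet's queueing delay is bounded by the voice backlog alone (an illustrative sketch, not the paper's simulator):

```python
from collections import deque

class PriorityLink:
    """Strict priority queueing: the voice queue is always served
    before the data queue, so low-volume voice traffic sees near-zero
    queueing delay even when data backlogs."""

    def __init__(self):
        self.voice = deque()
        self.data = deque()

    def enqueue(self, packet, is_voice):
        (self.voice if is_voice else self.data).append(packet)

    def dequeue(self):
        if self.voice:                 # voice has absolute priority
            return self.voice.popleft()
        if self.data:
            return self.data.popleft()
        return None

link = PriorityLink()
for i in range(3):
    link.enqueue(f"data{i}", is_voice=False)
link.enqueue("voice0", is_voice=True)
print(link.dequeue())  # voice0 jumps ahead of the backlogged data
```

The paper's stronger recommendation for slow links, preempting a data packet already in transmission, removes the one remaining delay term: the residual transmission time of the packet currently on the wire.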

Patent
18 Sep 2001
TL;DR: In this paper, a method for time-based synchronization of multiple media streams transmitted over a communications network, such as the Internet, by multiple, independent streaming media sources is presented.
Abstract: A method for time-based synchronization of multiple media streams transmitted over a communications network, such as the Internet, by multiple, independent streaming media sources. First and second media streams of data packets are received from first and second media sources. Timing data is parsed from the two media streams, and first and second transmission delay values are determined by comparing the timing data with a reference time. A synchronized media stream is created by combining the first and second media streams into a time-synchronized media stream with adjustments to correct for calculated transmission delay values. Feedback signals are sent to the media sources to control transmission variables such as stream length, transmission rate, and transmittal time to manage the variable delay at the media source. The first and second media streams are decoded into intermediate media streams compatibly formatted to allow mixing of the streams and data packets.
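The delay-equalization step can be sketched directly: compute each stream's transmission delay against the reference clock, then buffer the faster stream by the difference. Timestamp fields and units are invented for the demo:

```python
def transmission_delays(streams, reference_time_s):
    """Per-source delay = reference clock minus the timestamp parsed
    from each stream's latest packet (all fields illustrative)."""
    return [reference_time_s - s["timestamp"] for s in streams]

def synchronize(streams, reference_time_s):
    """Align the streams by delaying the faster ones so that all play
    out with the same total latency, as in the patent's combining
    step; returns the extra buffering needed per stream."""
    delays = transmission_delays(streams, reference_time_s)
    target = max(delays)
    return [target - d for d in delays]

streams = [{"timestamp": 10.00}, {"timestamp": 10.35}]
print(synchronize(streams, reference_time_s=10.50))
# stream 1 is ~0.35 s "ahead", so it gets ~0.35 s of extra buffering
```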

Journal ArticleDOI
W. Bux, W.E. Denzel, T. Engbersen, Andreas Herkersdorf, Ronald P. Luijten
TL;DR: The state of the art and the future of packet processing and switching are reviewed, and architectural and design issues that must be addressed to allow the evolution of packet switch fabrics to terabit-per-second throughput performance are discussed.
Abstract: We provide a review of the state of the art and the future of packet processing and switching. The industry's response to the need for wire-speed packet processing devices whose function can be rapidly adapted to continuously changing standards and customer requirements is the concept of special programmable network processors. We discuss the prerequisites of processing tens to hundreds of millions of packets per second and indicate ways to achieve scalability through parallel packet processing. Tomorrow's switch fabrics, which will provide node-internal connectivity between the input and output ports of a router or switch, will have to sustain terabit-per-second throughput. After reviewing fundamental switching concepts, we discuss architectural and design issues that must be addressed to allow the evolution of packet switch fabrics to terabit-per-second throughput performance.

Proceedings ArticleDOI
06 May 2001
TL;DR: This paper evaluates the optimum broadband packet wireless access scheme using a 50-100 MHz bandwidth to achieve high-speed packet transmission, taking into consideration the realization of high-speed and high-capacity packet transmission, the accommodation of variable-rate services with different quality requirements, and the extendibility for and commonality to IMT-2000.
Abstract: This paper evaluates the optimum broadband packet wireless access scheme using a 50-100 MHz bandwidth to achieve high-speed packet transmission, taking into consideration the realization of high-speed and high-capacity packet transmission, the accommodation of variable-rate services with different quality requirements, and the extendibility for and commonality to IMT-2000. In the forward link, the presented simulation results associated with the evaluation of key features show that multi-carrier CDMA (MC-CDMA) is the most promising candidate, since it achieves the highest-capacity packet transmission in a severe multipath fading channel due to a lower symbol rate with numerous sub-carriers and a channel coding gain accompanied by the frequency diversity effect. Meanwhile, single-carrier/DS-CDMA and multi-carrier/DS-CDMA (SC/DS-CDMA and MC/DS-CDMA) are severely degraded due to significantly increasing multipath interference, and orthogonal frequency division multiplexing (OFDM) lacks efficient cell deployment due to the necessity of frequency reuse in a multi-cell environment without complicated dynamic channel assignment (DCA). Furthermore, in the reverse link, we conclude that MC(SC)/DS-CDMA is a promising candidate because it has advantages such as an effective random access method, soft handover, and an inherent property for accurate location detection.

Patent
14 Dec 2001
TL;DR: A packet scheduler includes a packet manager interface, a policer, a congestion manager, a scheduler, and a virtual output queue (VOQ) handler as mentioned in this paper, which assigns a priority to each packet.
Abstract: A packet scheduler includes a packet manager interface, a policer, a congestion manager, a scheduler, and a virtual output queue (VOQ) handler. The policer assigns a priority to each packet. Depending on congestion levels, the congestion manager determines whether to send a packet based on the packet's priority assigned by the policer. The scheduler schedules packets in accordance with configured rates for virtual connections and group shapers. A scheduled packet is queued at a virtual output queue (VOQ) by the VOQ handler. In one embodiment, the VOQ handler sends signals to a packet manager (through the packet manager interface) to instruct the packet manager to transmit packets in a scheduled order.

Proceedings ArticleDOI
06 Jul 2001
TL;DR: It is proved that the greedy algorithm that drops the earliest packets among all low-value packets is the best greedy algorithm, and the competitive ratio of any online algorithm for a uniform bounded delay buffer is bounded away from 1, independent of the delay size.
Abstract: We consider two types of buffering policies that are used in network switches supporting QoS (Quality of Service). In the FIFO type, packets must be released in the order they arrive; the difficulty in this case is the limited buffer space. In the bounded-delay type, each packet has a maximum delay time by which it must be released, or otherwise it is lost. We study the cases where the incoming streams overload the buffers, resulting in packet loss. In our model, each packet has an intrinsic value; the goal is to maximize the total value of packets transmitted. Our main contribution is a thorough investigation of the natural greedy algorithms in various models. For the FIFO model we prove tight bounds on the competitive ratio of the greedy algorithm that discards the packets with the lowest value. We also prove that the greedy algorithm that drops the earliest packets among all low-value packets is the best greedy algorithm. This algorithm can be as much as 1.5 times better than the standard tail-drop policy, that drops the latest packets. In the bounded delay model we show that the competitive ratio of any online algorithm for a uniform bounded delay buffer is bounded away from 1, independent of the delay size. We analyze the greedy algorithm in the general case and in three special cases: delay bound 2; link bandwidth 1; and only two possible packet values. Finally, we consider the off-line scenario. We give efficient optimal algorithms and study the relation between the bounded-delay and FIFO models in this case.
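The winning greedy FIFO policy, drop a minimum-value packet and prefer the earliest among ties, is a one-liner over the buffer. In this sketch packets are (arrival, value) pairs and the buffer is a plain list:

```python
def fifo_admit(buffer, capacity, packet):
    """Greedy FIFO buffer management from the paper: on overflow,
    drop a packet of minimum value; among equal-value candidates
    prefer the earliest one (the variant shown to be the best greedy
    algorithm).  A packet is an (arrival_index, value) pair."""
    buffer.append(packet)
    if len(buffer) > capacity:
        victim = min(buffer, key=lambda p: (p[1], p[0]))  # (value, age)
        buffer.remove(victim)
    return buffer

buf = []
for pkt in [(0, 1), (1, 5), (2, 1), (3, 5)]:
    fifo_admit(buf, 2, pkt)
print(buf)  # both high-value packets survive the overload
```

Under tail-drop, the same arrival sequence would have kept a value-1 packet and dropped a value-5 one, which is the gap (up to a factor 1.5) the paper quantifies.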

Patent
17 Oct 2001
TL;DR: In this paper, the authors propose a random access channel (RACH) access apparatus for a mobile satellite communication system and a method therefor; the method includes the steps of receiving a preamble and a message successively transmitted with the preamble from a plurality of mobile stations, and transmitting an acquisition response signal corresponding to the preamble or the message to the mobile stations.
Abstract: The present invention relates to a Random Access Channel (RACH) access apparatus for a mobile satellite communication system and a method therefor. The method for accessing the random access channel (RACH) on a satellite system, the RACH carrying messages from a plurality of mobile stations to the satellite system, includes the steps of: receiving a preamble and a message successively transmitted with the preamble from the plurality of mobile stations; and transmitting an acquisition response signal corresponding to the preamble or the message to the plurality of mobile stations. Accordingly, the success rate of packet reception at the satellite system is improved and transmission delay is reduced.

Patent
26 Feb 2001
TL;DR: In this article, a sender converts a non-compressed packet, which is to be transmitted, into a full-header packet including a full header or a header compressed packet, and sends the converted packet to a receiver The receiver receives the packet transmitted from the sender, and converts the received packet into a decompressed packet.
Abstract: A sender converts a non-compressed packet, which is to be transmitted, into a full-header packet including a full header or a header-compressed packet including a compressed header, and sends the converted packet to a receiver. The receiver receives the packet transmitted from the sender and converts the received packet into a decompressed packet. In case the full-header packet or header-compressed packet is lost between the sender and receiver, the receiver keeps header-compressed packets received during the interval from the packet loss to the next earliest reception of a full-header packet, and decompresses the compressed headers of the kept header-compressed packets on the basis of the contents of the full header of that full-header packet.
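The receiver-side recovery logic can be sketched as a small state machine. The packet format here ('F' full header, 'C' compressed delta, an explicit 'LOSS' marker) is invented for the demo; real header compression carries deltas against a negotiated context:

```python
def decompress_stream(packets):
    """Receiver-side sketch of the recovery scheme.

    'F' packets carry a full header; 'C' packets carry only a delta.
    After a loss the context is invalid, so compressed packets are
    kept until the next full-header packet arrives and are then
    decompressed against it instead of being discarded.
    """
    context = None   # last full header seen
    pending = []     # compressed packets held across a loss
    out = []
    for kind, payload in packets:
        if kind == "LOSS":
            context = None               # context invalidated
        elif kind == "F":
            context = payload            # fresh full header
            out.append(payload)
            for delta in pending:        # recover the held packets
                out.append(context + delta)
            pending = []
        elif context is not None:        # 'C' with a valid context
            out.append(context + payload)
        else:
            pending.append(payload)      # 'C' after a loss: keep it
    return out

pkts = [("F", "H1"), ("C", "+a"), ("LOSS", None),
        ("C", "+b"), ("F", "H2"), ("C", "+c")]
print(decompress_stream(pkts))
```

Without the `pending` list, the packet sent between the loss and the next full header would simply be dropped; keeping it is the patent's point.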

Patent
30 Aug 2001
TL;DR: In this article, a protocol for differentiating congestion-related packet loss versus random packet loss in a wireless data connection is proposed, where packet loss is preceded by an increase in the queue length over two consecutive intervals.
Abstract: A protocol for differentiating congestion-related packet loss versus random packet loss in a wireless data connection. The protocol monitors changes in the length of a transmission queue in a wireless data connection over an interval substantially equal to the amount of time it takes to transmit a window of data packets and receive acknowledgements corresponding to all data packets transmitted in the window. If packet loss is preceded by an increase in the queue length over two consecutive intervals, the packet loss is designated as being due to congestion and a congestion avoidance algorithm is initiated. Otherwise, the packet loss is designated as random loss and the transmission window is maintained at its current size. The protocol reduces the transmission rate only when congestion is identified as the cause of lost packets; otherwise wireless losses can simply be quickly retransmitted without a reduction in the data transmission rate.
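The classification rule itself is tiny: a loss counts as congestion only if the monitored queue length grew over the two preceding intervals. A sketch over plain queue-length samples (the sampling interval, one window's round trip, is handled outside this function):

```python
def classify_loss(queue_lengths):
    """Classify a packet loss from recent queue-length samples, one
    per window-sized interval: loss preceded by queue growth over two
    consecutive intervals is congestion; anything else is treated as
    random wireless loss."""
    if len(queue_lengths) >= 3 and \
       queue_lengths[-1] > queue_lengths[-2] > queue_lengths[-3]:
        return "congestion"   # back off: run congestion avoidance
    return "random"           # retransmit; keep the window size

print(classify_loss([4, 7, 12]))  # congestion
print(classify_loss([9, 6, 6]))   # random
```

The payoff is on the "random" branch: a standard TCP would halve its window on any loss, while this protocol retransmits at full rate when the queue shows no congestion signature.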

Proceedings ArticleDOI
22 Apr 2001
TL;DR: A new definition of the expedited forwarding per-hop behaviour is proposed, called "packet scale rate guarantee", which preserves the spirit of RFC 2598, while allowing a number of reasonable implementations, and has very useful properties for per-node and end-to-end network engineering.
Abstract: We consider the definition of the expedited forwarding per-hop behaviour (EF PHB) as given in RFC 2598 (Jacobson et al., 1999), and its impact on worst case end-to-end delay jitter. On one hand, the definition in RFC 2598 can be used to predict extremely low end-to-end delay jitter, independent of the network scale. On the other hand, we find that the worst case delay jitter can be made arbitrarily large, while each flow traverses at most a specified number of hops, if we allow networks to become arbitrarily large; this is in contradiction with the previous statement. We analyze where the contradiction originates, and find the explanation. It resides in the fact that the definition in RFC 2598 is not easily implementable in schedulers we know of, mainly because it is not formal enough, and also because it does not contain an error term. We propose a new definition for the EF PHB, called "packet scale rate guarantee", which preserves the spirit of RFC 2598, while allowing a number of reasonable implementations, and has very useful properties for per-node and end-to-end network engineering. We show that this definition implies the rate-latency service curve guarantee. Then we propose some proven bounds on delay jitter for networks implementing this new definition, both in cases without loss and with loss.

Proceedings ArticleDOI
01 Nov 2001
TL;DR: The methodology of the experiment, the architecture of the NACK-based streaming application, end-to-end dynamics of 16 thousand ten-minute sessions (85 million packets), and the behavior of the following network parameters are described.
Abstract: In this paper, we analyze the results of a seven-month real-time streaming experiment, which was conducted between a number of unicast dialup clients, connecting to the Internet through access points in more than 600 major U.S. cities, and a backbone video server. During the experiment, the clients streamed low-bitrate MPEG-4 video sequences from the server over paths with more than 5,000 distinct Internet routers. We describe the methodology of the experiment and the architecture of our NACK-based streaming application, study the end-to-end dynamics of 16 thousand ten-minute sessions (85 million packets), and analyze the behavior of the following network parameters: packet loss, round-trip delay, one-way delay jitter, packet reordering, and path asymmetry. We also study the impact of these parameters on the quality of real-time streaming.
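Two of the trace metrics listed above can be sketched as follows (an illustrative sketch, not the authors' code; the `(seq, send_ts, recv_ts)` log format is an assumption):

```python
# Compute one-way delay jitter and packet reordering from a receiver-side
# log of (sequence number, sender timestamp, receiver timestamp) tuples,
# listed in arrival order.

def delay_jitter(packets):
    """Jitter as consecutive differences of one-way delay (recv - send).
    Clocks need not be synchronized: a constant offset cancels out."""
    delays = [recv - send for _, send, recv in packets]
    return [d2 - d1 for d1, d2 in zip(delays, delays[1:])]

def count_reordered(packets):
    """A packet is reordered if it arrives carrying a sequence number
    smaller than the highest sequence number already received."""
    highest, reordered = -1, 0
    for seq, _, _ in packets:
        if seq < highest:
            reordered += 1
        else:
            highest = seq
    return reordered
```

Because jitter is defined on delay differences, it is measurable over unsynchronized dialup paths, which is presumably why it appears in the study alongside round-trip delay rather than absolute one-way delay.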

Patent
07 Sep 2001
TL;DR: In this article, a hybrid ARQ scheme with incremental data packet combining employs three feedback signaling commands: ACK, NACK, and LOST, which provides both robustness and good performance.
Abstract: The invention relates to a hybrid ARQ scheme with incremental data packet combining. In an example embodiment, the hybrid ARQ scheme with incremental data packet combining employs three feedback signaling commands: ACK, NACK, and LOST. Using these three feedback commands, the hybrid ARQ scheme with incremental data packet combining provides both robustness and good performance. The invention is particularly advantageous in communication systems with unreliable communication channels, e.g., a fading radio channel, where forward error correction (FEC) codes are used, some of the code symbols being more important than other code symbols. The benefits of the invention are increased throughput and decreased delay of the packet data communication.
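A sender's reaction to the three feedback commands might look like the sketch below. The retransmission contents are assumptions in the spirit of incremental combining: a NACK (packet received but not decodable) is answered with extra parity to combine with the stored soft values, while LOST (packet missed entirely) is answered with a self-decodable retransmission, since the receiver has nothing to combine with.

```python
# Hedged sketch of a hybrid-ARQ sender's feedback handling.
# Command semantics are illustrative, not quoted from the patent.

def on_feedback(command, redundancy_version):
    """Return (action, next_redundancy_version) for one feedback command."""
    if command == "ACK":
        # Packet decoded: move on, reset redundancy version.
        return ("send_next_packet", 0)
    if command == "NACK":
        # Packet received but undecodable: send more parity to combine.
        return ("send_incremental_parity", redundancy_version + 1)
    if command == "LOST":
        # Packet never arrived: retransmit something self-decodable.
        return ("resend_self_decodable", 0)
    raise ValueError(f"unknown feedback command: {command}")
```

The third command is what distinguishes this from plain ACK/NACK schemes: it lets the sender avoid wasting a transmission on parity the receiver cannot use.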

Patent
Joseph Rinchiuso1
09 Jan 2001
TL;DR: In this article, an apparatus and method that schedules and allocates data transmissions over communication channels within a broad-band communications system is provided, where data transmissions are first scheduled with priority to data service users granted access to radio resources first, until more data transmissions require service than there are radio resources available.
Abstract: An apparatus and method that schedules and allocates data transmissions over communication channels within a broad-band communications system is provided. Data transmissions are first scheduled with priority to data service users granted access to radio resources first, until more data transmissions require service than there are radio resources available. Then, data transmissions are scheduled according to a resource scheduling priority (135) as determined by a resource scheduling function within a resource scheduling and allocation algorithm (300, 500). The resource scheduling function considers various communications parameters (e.g., frame count, transmission time, number of data frames queued, signal/noise ratio, frame error rate (FER), bit error rate (BER), transmission delay, jitter, etc.) and can be implemented to treat data service requests proportionately (algorithm 300) or disproportionately (algorithm 500). Once assigned, allocation of the radio resource to a given data transmission is based on a resource allocation parameter (140) (e.g., frame count, transmission time).
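A resource scheduling function of this kind could be sketched as a weighted score over the listed parameters. The weights, the parameter subset, and the exact proportional/disproportionate split below are assumptions for illustration, not the patent's formula:

```python
# Illustrative priority score in the spirit of the resource scheduling
# function. Higher score = scheduled first.

def scheduling_priority(frames_queued, wait_time, fer, proportional=True):
    """Combine queue depth and waiting time into a priority score.

    In 'proportional' mode every pending transmission earns priority at
    the same rate; in the disproportionate mode, users on lossy channels
    (high frame error rate) are penalized so that radio resources favor
    transmissions likely to succeed.
    """
    score = 2.0 * frames_queued + 1.0 * wait_time
    if not proportional:
        score *= (1.0 - fer)     # deprioritize lossy channels
    return score
```
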

Proceedings ArticleDOI
22 Apr 2001
TL;DR: It is shown that the maximal number of independent measurements that can be taken is smaller than the number of one-way delay unknowns, hence a procedure for estimating the one-way delays is proposed; the basic idea of the approach is to express the cyclic-path delays in terms of one-way delay variables.
Abstract: In this paper we present a novel approach for the estimation of one-way delays from cyclic-path delay measurements that does not require any kind of synchronization among the nodes of the network. Furthermore, this approach takes into account the asymmetric nature of the network, and the fact that traffic flows are not necessarily the same in both directions. Our approach is based on cyclic-path delay measurements, each of which is extracted using a single (source) clock and therefore is accurate. The basic idea of the approach is to express the cyclic-path delays in terms of one-way delay variables. If there were enough independent cyclic-path delay measurements, then one could solve explicitly for the one-way delays. We show that the maximal number of independent measurements that can be taken is smaller than the number of unknowns; hence, a procedure for estimating the one-way delays is proposed.
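The rank deficiency can be seen concretely on a tiny example. The sketch below (an illustration under stated assumptions, not the paper's algorithm) builds the measurement matrix for a 3-node network, where each row is one cyclic-path measurement expressed over the six directed one-way delays, and checks that its rank falls short of the number of unknowns:

```python
# Cyclic-path delay measurements only constrain sums of one-way delays.
# For 3 nodes there are 6 directed-edge unknowns (AB, BA, AC, CA, BC, CB),
# but the cycle measurements span a space of lower rank, so the one-way
# delays cannot be solved for exactly.

def matrix_rank(rows):
    """Rank by Gaussian elimination over floats."""
    m = [row[:] for row in rows]
    rank, col, n_cols = 0, 0, len(rows[0])
    while rank < len(m) and col < n_cols:
        pivot = next((r for r in range(rank, len(m))
                      if abs(m[r][col]) > 1e-9), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > 1e-9:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

# Columns: AB, BA, AC, CA, BC, CB. Each row marks the edges one cycle uses.
cycles = [
    [1, 1, 0, 0, 0, 0],   # A->B->A
    [0, 0, 1, 1, 0, 0],   # A->C->A
    [0, 0, 0, 0, 1, 1],   # B->C->B
    [1, 0, 0, 1, 1, 0],   # A->B->C->A
    [0, 1, 1, 0, 0, 1],   # A->C->B->A
]
```

Here the rank is 4 while there are 6 unknowns, so two degrees of freedom remain unresolved; this is the gap that the paper's estimation procedure addresses.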

Patent
23 Feb 2001
TL;DR: In this paper, a queue engine is described that is operable to reorder and reassemble data packets from network traffic into unfragmented and in order traffic flows for applications such as deep packet classification and quality of service determination.
Abstract: A queue engine is described that is operable to reorder and reassemble data packets from network traffic into unfragmented and in-order traffic flows for applications such as deep packet classification and quality of service determination. The queue engine stores incoming data packets in a packet memory that is controlled by a link list controller. A packet assembler extracts information from each data packet, particularly fields from the header information, and uses that information, among other things, to determine if the data packet is fragmented or out of order, and to associate the data packet with a session id. If the packet is determined to be out of order, the queue engine includes a reordering unit which is able to modify links with the link list controller to reorder data packets. A fragment reassembly unit is also included which is capable of taking fragments and reassembling them into complete unfragmented data packets. The reordered and reassembled data packets are then sent to an output where further operations such as deep packet classification can take place.
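The reordering step can be sketched as follows; a dictionary stands in for the patent's link-list controller over packet memory, and the class and field names are invented for illustration:

```python
# Hedged sketch: buffer out-of-order packets per session id and release
# them in sequence, the way the reordering unit restores in-order flows.

class SessionReorderer:
    def __init__(self):
        self.next_seq = {}     # session id -> next expected sequence
        self.buffered = {}     # session id -> {seq: packet}

    def push(self, session, seq, packet):
        """Accept one packet; return the list of packets now in order."""
        nxt = self.next_seq.setdefault(session, 0)
        buf = self.buffered.setdefault(session, {})
        if seq != nxt:
            buf[seq] = packet          # out of order: hold it
            return []
        released = [packet]
        nxt += 1
        while nxt in buf:              # drain packets this arrival unblocks
            released.append(buf.pop(nxt))
            nxt += 1
        self.next_seq[session] = nxt
        return released
```

Only in-order packets ever reach the output, which is what lets a downstream deep-packet classifier scan each flow as a contiguous byte stream.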

Patent
02 Jul 2001
TL;DR: In this paper, the authors describe a method for transmitting and forwarding packets over a switching network using time information, where the network switches maintain a common time reference, which is obtained either from an external source (such as GPS) or is generated and distributed internally.
Abstract: This invention describes a method for transmitting and forwarding packets over a switching network using time information. The network switches maintain a common time reference, which is obtained either from an external source (such as GPS, the Global Positioning System) or is generated and distributed internally. The time intervals are arranged in simple periodicity and complex periodicity (like the seconds and minutes of a clock). A data packet that arrives at an input port is switched to an output port based on its order or time position within the time interval in which it arrives at the switch. The time interval duration can be longer than the time required to transmit a data packet, in which case the exact position of a data packet in its forwarding time interval is predetermined. This invention provides congestion-free data packet switching for data packets for which capacity in their corresponding forwarding links and time intervals is reserved in advance. Furthermore, such data packets reach their destinations (which can be one or more, i.e., multicast) in predefined time intervals, which guarantees that the delay jitter is smaller than or equal to one time interval.
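The timing model can be illustrated with a small sketch. The interval duration and cycle length below are invented parameters, and the functions are assumptions about how a switch might map the common time reference onto the simple/complex periodicity described above:

```python
# Hedged sketch of the two-level time structure ("seconds and minutes"):
# map an arrival time from the common time reference (e.g. GPS) to a
# (cycle, interval-within-cycle) position, and state the jitter bound.

def interval_position(t, interval=0.5, intervals_per_cycle=10):
    """Return (cycle, interval) for arrival time t seconds: the complex
    and simple periodicity, like the minutes and seconds of a clock."""
    k = int(t / interval)
    return divmod(k, intervals_per_cycle)

def worst_case_jitter(interval_duration):
    """Delivery is pinned to a predefined interval, not to an instant,
    so delay jitter is bounded by one interval duration."""
    return interval_duration
```

Because forwarding is reserved per interval rather than per instant, congestion cannot arise for reserved packets, and the jitter bound follows directly from the interval width.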

Patent
31 Dec 2001
TL;DR: In this article, a tag module establishes a classification tag for a packet based on the packet's content by matching the tag with the regular expression and subexpressions of the packet.
Abstract: Packets are classified by content across a packet flow by sequencing packets according to packet flows through a content engine. A sequencer tracks packet flows, sending and buffering out-of-order packets so that missing packets can be resent. A regular expression engine determines matches of regular expressions and subexpressions: the regular expressions are encoded as non-deterministic finite automata in field-programmable gate arrays, while subexpression matches are computed with a hash and resolved through a hash look-up table. A tag module establishes a classification tag for a packet based on the packet's content, matching the tag to the regular expressions and subexpressions found in the packet.
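In software terms, the flow might look like the sketch below, where Python's `re` module stands in for the FPGA-hosted NFA engine and a plain dictionary stands in for the subexpression hash look-up table. The patterns, tags, and tokenization are invented for illustration:

```python
# Hedged sketch of content classification: a regex pass assigns the main
# tag, and a hash look-up over payload tokens flags subexpression matches.

import re

RULES = [
    (re.compile(rb"^GET /video"), "streaming"),
    (re.compile(rb"^GET "), "web"),
]
SUBEXPR_TAGS = {hash(b"Range:"): "partial-content"}   # hash look-up table

def classify(payload):
    """Return (tag, subexpression tags) for one reassembled packet."""
    tag = next((t for rx, t in RULES if rx.search(payload)), "unclassified")
    subs = [SUBEXPR_TAGS[h]
            for h in (hash(tok) for tok in payload.split(b"\r\n"))
            if h in SUBEXPR_TAGS]
    return tag, subs
```

The appeal of the hash table for subexpressions is constant-time look-up regardless of how many subexpressions are registered, while the NFA handles the patterns that cannot be reduced to exact-match tokens.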

Patent
07 Nov 2001
TL;DR: The storage link architecture as mentioned in this paper is a serial communications architecture for communicating between hosts and data store devices, which is specially adapted to support communications between multiple hosts and storage devices via a switching network, such as a storage area network.
Abstract: A serial communications architecture for communicating between hosts and data store devices. The Storage Link architecture is specially adapted to support communications between multiple hosts and storage devices via a switching network, such as a storage area network. The Storage Link architecture specifies various communications techniques that can be combined to reduce the overall cost and increase the overall performance of communications. The Storage Link architecture may provide packet ordering based on packet type, dynamic segmentation of packets, asymmetric packet ordering, packet nesting, variable-sized packet headers, and the use of out-of-band symbols to transmit control information. The Storage Link architecture may also specify encoding techniques to optimize transitions and to ensure DC-balance.
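One of the listed techniques, ordering by packet type, can be sketched as below. The type names, priority ranking, and class are assumptions for illustration; the abstract itself does not specify them:

```python
# Hedged sketch of per-type packet ordering: FIFO order is kept within
# each packet type, while packets of different types may be reordered
# relative to each other (e.g., a control packet overtaking bulk data).

from collections import deque

class TypeOrderedLink:
    PRIORITY = ["control", "status", "data"]   # assumed type ranking

    def __init__(self):
        self.queues = {t: deque() for t in self.PRIORITY}

    def send(self, ptype, packet):
        self.queues[ptype].append(packet)

    def next_packet(self):
        """Transmit from the highest-priority non-empty type queue."""
        for t in self.PRIORITY:
            if self.queues[t]:
                return self.queues[t].popleft()
        return None
```

Relaxing ordering to within-type only is what allows small control traffic to avoid queueing behind large storage transfers without breaking the ordering guarantees each type relies on.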