
Showing papers on "Transmission delay published in 2000"


Journal ArticleDOI
28 Aug 2000
TL;DR: A deterministic model of packet delay is described and used to derive both the packet pair property of FIFO-queueing networks and a new technique (packet tailgating) for actively measuring link bandwidths.
Abstract: We describe a deterministic model of packet delay and use it to derive both the packet pair [2] property of FIFO-queueing networks and a new technique (packet tailgating) for actively measuring link bandwidths. Compared to previously known techniques, packet tailgating usually consumes less network bandwidth, does not rely on consistent behavior of routers handling ICMP packets, and does not rely on timely delivery of acknowledgments.
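The packet pair property the authors build on can be illustrated with a minimal sketch (the numbers are hypothetical): two packets sent back-to-back through a bottleneck link of capacity C emerge spaced S/C apart, so a receiver can estimate C from the observed inter-arrival gap.

```python
def packet_pair_estimate(packet_size_bytes, gap_seconds):
    """Estimate bottleneck link capacity from the inter-arrival gap of
    two back-to-back packets: after the bottleneck, packets of size S
    are spaced S/C apart, so C = S / gap (the packet pair property)."""
    return packet_size_bytes * 8 / gap_seconds   # bits per second

# Hypothetical measurement: 1500-byte packets arriving 1.2 ms apart
# imply a bottleneck of roughly 10 Mb/s.
print(packet_pair_estimate(1500, 0.0012))
```

Packet tailgating refines this idea to use less probe traffic, but the receiver-side arithmetic is the same.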

428 citations


01 Jan 2000
TL;DR: The main features of optical burst switching (OBS) are identified, major differences and similarities between OBS and optical circuit- and packet-switching are discussed, and important QoS-related performance issues in OBS are addressed.
Abstract: In this article, we first explore design choices in burst switching and describe a new variation that is especially suitable for optical WDM networks. We then identify the main features of optical burst switching (OBS), discuss major differences and similarities between OBS and optical circuit- and packet-switching, and address important QoS-related performance issues in OBS.

224 citations


Patent
10 Aug 2000
TL;DR: In this article, a forward error correction (FEC) technique is proposed for interactive video transmission, which is based on the Recovery from Error Spread using Continuous Updates (RESCU) scheme.
Abstract: Real-time interactive video transmission in the current Internet has mediocre quality because of high packet loss rates. Loss of packets belonging to a video frame is evident not only in the reduced quality of that frame but also in the propagation of that distortion to successive frames. This error propagation problem is inherent in any motion-based video codec because of the interdependence of encoded video frames. Since packet losses in the best-effort Internet environment cannot be prevented, minimizing the impact of these packet losses to the final video quality is important. A new forward error correction (FEC) technique effectively alleviates error propagation in the transmission of interactive video. The technique is based on a recently developed error recovery scheme called Recovery from Error Spread using Continuous Updates (RESCU). RESCU allows transport level recovery techniques previously known to be infeasible for interactive video transmission applications to be successfully used in such applications. The FEC technique can be very useful when the feedback channel from the receiver is highly limited, or transmission delay is high. Both simulation and Internet experiments indicate that the FEC technique effectively alleviates the error spread problem and is able to sustain much better video quality than H.261 or other conventional FEC schemes under various packet loss rates.
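The transport-level recovery the abstract refers to rests on forward error correction. The sketch below shows a generic single-parity XOR code, the simplest FEC building block; it is used here purely for illustration and is not the RESCU-based scheme the patent describes.

```python
def xor_parity(packets):
    """Build one XOR parity packet over a block of equal-length media
    packets; any single lost packet in the block can be rebuilt from
    the survivors.  A generic FEC building block for illustration,
    not the RESCU-based scheme itself."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """XOR the surviving packets with the parity packet to rebuild
    the one missing packet."""
    return xor_parity(list(received) + [parity])

block = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(block)
# Packet 1 is lost in transit; rebuild it without any retransmission.
print(recover([block[0], block[2]], parity))  # b'efgh'
```

Because recovery needs no feedback from the receiver, this style of FEC suits exactly the high-delay, limited-feedback conditions the abstract mentions.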

201 citations


Journal ArticleDOI
TL;DR: This paper describes a framework for admission control for a packet-based network where the decisions are taken by edge devices or end-systems, rather than by resources within the network; the approach allows networks to be explicitly analyzed and, consequently, engineered.
Abstract: This paper describes a framework for admission control for a packet-based network where the decisions are taken by edge devices or end-systems, rather than resources within the network. The decisions are based on the results of probe packets that the end-systems send through the network, and require only that resources apply a mark to packets in a way that is load dependent. One application example is the Internet, where marking information is fed back via an ECN bit, and we show how this approach allows a rich QoS framework for flows or streams. Our approach allows networks to be explicitly analyzed, and consequently engineered.
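The edge decision rule can be sketched in a few lines. The marking probability and admission threshold below are illustrative assumptions, not values from the paper.

```python
def admit(probe_marks, threshold=0.05):
    """Edge-based admission decision: the end-system sends probe
    packets, routers mark them in a load-dependent way (e.g. via the
    ECN bit), and the flow is admitted only if the marked fraction of
    probes stays below a threshold.  Threshold value is illustrative."""
    mark_rate = sum(probe_marks) / len(probe_marks)
    return mark_rate < threshold

# A lightly loaded path marks 1 of 100 probes: admit the flow.
print(admit([True] + [False] * 99))       # True
# A congested path marks 20 of 100 probes: reject it.
print(admit([True] * 20 + [False] * 80))  # False
```

The point of the framework is that the routers stay stateless: they only mark, and all admission logic lives at the edge.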

195 citations


Patent
20 Sep 2000
TL;DR: In this paper, a signal processing system is presented that discriminates between voice signals and data signals modulated by a voiceband carrier, using a call discriminator to selectively enable a voice exchange and a data exchange between a switched circuit network and a packet based network.
Abstract: A signal processing system which discriminates between voice signals and data signals modulated by a voiceband carrier. The signal processing system includes a voice exchange, a data exchange and a call discriminator. The voice exchange is capable of exchanging voice signals between a switched circuit network and a packet based network. The signal processing system also includes a data exchange capable of exchanging data signals modulated by a voiceband carrier on the switched circuit network with unmodulated data signal packets on the packet based network. The data exchange is performed by demodulating data signals from the switched circuit network for transmission on the packet based network, and modulating data signal packets from the packet based network for transmission on the switched circuit network. The call discriminator is used to selectively enable the voice exchange and data exchange.

170 citations


Patent
Kenny C. Kwan
29 Aug 2000
TL;DR: In this paper, a signal processing system is presented that discriminates between voice signals and data signals modulated by a voiceband carrier, using a call discriminator to selectively enable a voice exchange and a data exchange between a switched circuit network and a packet based network.
Abstract: A signal processing system which discriminates between voice signals and data signals modulated by a voiceband carrier. The signal processing system includes a voice exchange, a data exchange and a call discriminator. The voice exchange is capable of exchanging voice signals between a switched circuit network and a packet based network. The signal processing system also includes a data exchange capable of exchanging data signals modulated by a voiceband carrier on the switched circuit network with unmodulated data signal packets on the packet based network. The data exchange is performed by demodulating data signals from the switched circuit network for transmission on the packet based network, and modulating data signal packets from the packet based network for transmission on the switched circuit network. The call discriminator is used to selectively enable the voice exchange and data exchange.

160 citations


Patent
22 Dec 2000
TL;DR: In this article, the authors propose an apparatus for switching data packet flows by assigning schedules to guaranteed delay and bandwidth traffic, where the switches (5, 6, 8, and 9) are given a schedule by the scheduling server.
Abstract: An apparatus (Fig.3) for switching data packet flows by assigning schedules to guaranteed delay and bandwidth traffic. The network switches (5, 6, 8, and 9) are given a schedule by the scheduling server (7). The scheduling server (7) computes the schedule information and transmits the schedule information to the switches (5, 6, 8, and 9) and the transmitting source and destination terminals (4 and 13). The switches (5, 6, 8, and 9) manage the real-time transmissions and receptions of packets based on the schedule information.

153 citations


Patent
25 Feb 2000
TL;DR: In this paper, a method and system for managing transmission resources in a wireless communications network includes receiving a packet and determining a time duration for transmission of the packet, and a power level for transmission over the time duration is further determined.
Abstract: A method and system for managing transmission resources in a wireless communications network includes receiving a packet and determining a time duration for transmission of the packet. A power level for transmission of the packet over the time duration is further determined. Based on the time duration and the power level, a wireless resource impact is determined for the packet. Transmission resources are allocated based on the wireless resource impact.

151 citations


Journal ArticleDOI
TL;DR: This study aims at showing that having a global common time reference, together with time-driven priority (TDP) and VBR MPEG video encoding, provides adequate end-to-end delay, which is below 10 ms, independent of the instantaneous network load, and independent of the connection rate.
Abstract: Videoconferencing is an important global application: it enables people around the globe to interact when distance separates them. In order for the participants in a videoconference call to interact naturally, the end-to-end delay should be below human perception; even though an objective and unique figure cannot be set, 100 ms is widely recognized as the desired one-way delay requirement for interaction. Since the global propagation delay can be about 100 ms, the actual end-to-end delay budget available to the system designer (excluding propagation delay) can be no more than 10 ms. We identify the components of the end-to-end delay in various configurations with the objective of understanding how it can be kept below the desired 10-ms bound. We analyze these components step-by-step through six system configurations obtained by combining three generic network architectures with two video encoding schemes. We study the transmission of raw video and variable bit rate (VBR) MPEG video encoding over (1) circuit switching; (2) synchronous packet switching; and (3) asynchronous packet switching. In addition, we show that constant bit rate (CBR) MPEG encoding delivers unacceptable delay, on the order of the group of pictures (GOP) time interval, when maximizing the quality for static scenes. This study aims at showing that having a global common time reference, together with time-driven priority (TDP) and VBR MPEG video encoding, provides adequate end-to-end delay, which is (1) below 10 ms; (2) independent of the instantaneous network load; and (3) independent of the connection rate. The resulting end-to-end delay (excluding propagation delay) can be smaller than the video frame period, which is better than what can be obtained with circuit switching.

148 citations


Patent
29 Feb 2000
TL;DR: In this paper, a system and method for transferring a packet received from a network to a host computer is described, where a flow key is generated to identify a communication flow that comprises the packet.
Abstract: A system and method are provided for transferring a packet received from a network to a host computer. A flow key is generated to identify a communication flow that comprises the packet. A code is generated to indicate how the packet should be transferred to host memory. Based on the code, a transfer engine stores the packet in one or more host buffers. If the packet conforms to a predetermined protocol, its data is added to a re-assembly buffer with data from other packets in the same flow and its header portion is stored in a header buffer. Otherwise, the packet is stored in the header buffer if it is smaller than a predetermined threshold or, if larger than the threshold, it is stored in another buffer. After a packet is stored, the transfer engine configures a descriptor with information concerning the packet and releases the descriptor to the host.

147 citations


Journal ArticleDOI
TL;DR: An ultra-low latency, high throughput Internet protocol (IP) over wavelength division multiplexing (WDM) packet switching technology for next-generation Internet (NGI) applications has been designed and demonstrated.
Abstract: An ultra-low latency, high throughput Internet protocol (IP) over wavelength division multiplexing (WDM) packet switching technology for next-generation Internet (NGI) applications has been designed and demonstrated. This method overcomes limitations of conventional optical packet switching, which requires buffering of packets and synchronization of bits, and of optical burst switching methods that require estimation of delays at each node and for each path. An optical label switching technique was developed to realize flexible bandwidth-on-demand packet transport on a reconfigurable WDM network. The aim was to design a network with simplified protocol stacks, scalability, and data transparency. This network will enable the NGI users to send their data applications at gigabit/second access speed with low and predictable latency (<1 μs per switch node), with a system capacity beyond multi-Tb/s. Packet forwarding utilizes WDM optical headers that are carried in-band on the same wavelength and modulated out-of-band in the frequency domain.

Journal ArticleDOI
TL;DR: The Concord algorithm is presented, which provides a delay-sensitive solution for playout buffering, and the use of aging techniques is explored to improve the effectiveness of the historical information and, hence, the delay predictions.
Abstract: Receiver synchronization of continuous media streams is required to deal with delay differences and variations resulting from delivery over packet networks such as the Internet. This function is commonly provided using per-stream playout buffers which introduce additional delay in order to produce a playout schedule which meets the synchronization requirements. Packets which arrive after their scheduled playout time are considered late and are discarded. In this paper, we present the Concord algorithm, which provides a delay-sensitive solution for playout buffering. It records historical information and uses it to make short-term predictions about network delay with the aim of not reacting too quickly to short-lived delay variations. This allows an application-controlled tradeoff of packet lateness against buffering delay, suitable for applications which demand low delay but can tolerate or conceal a small amount of late packets. We present a selection of results from an extensive evaluation of Concord using Internet traffic traces. We explore the use of aging techniques to improve the effectiveness of the historical information and hence, the delay predictions. The results show that Concord can produce significant reductions in buffering delay and delay variations at the expense of packet lateness values of less than 1%.
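The lateness-versus-buffering tradeoff the paper describes can be sketched as choosing a quantile of the observed delay history. This is a simplified illustration of the idea; the actual Concord algorithm additionally ages its history and makes short-term predictions.

```python
def playout_delay(delay_history, late_fraction=0.01):
    """Choose a playout delay from recorded network delays so that at
    most `late_fraction` of packets would arrive after their deadline.
    A simplified sketch of Concord-style delay-sensitive buffering;
    the real algorithm also ages its historical information."""
    ordered = sorted(delay_history)
    idx = min(len(ordered) - 1, int((1.0 - late_fraction) * len(ordered)))
    return ordered[idx]

# Hypothetical one-way delays (ms): mostly ~40 ms with one 95 ms spike.
history = [40, 41, 39, 42, 40, 43, 95, 41, 40, 42]
print(playout_delay(history))       # tolerating 1% lateness keeps 95
print(playout_delay(history, 0.2))  # tolerating 20% lateness allows 43
```

Raising the tolerated lateness shrinks the buffering delay, which is exactly the application-controlled tradeoff described above.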

Proceedings ArticleDOI
26 Mar 2000
TL;DR: It is theoretically possible for a PPS to emulate a FCFS (first come, first served) output-queued packet switch if each layer operates at a rate of approximately 2R/k, a result analogous to the Clos condition for a three-stage circuit switch to be strictly non-blocking.
Abstract: Our work is motivated by the desire to build a very high speed packet switch with extremely high line-rates. In this paper, we consider building a packet switch from multiple, lower speed packet switches operating independently and in parallel. In particular, we consider a (perhaps obvious) parallel packet switch (PPS) architecture in which arriving traffic is demultiplexed over k identical, lower speed packet switches, switched to the correct output port, then recombined (multiplexed) before departing from the system. Essentially, the packet switch performs packet-by-packet load-balancing, or "inverse-multiplexing", over multiple independent packet switches. Each lower-speed packet switch operates at a fraction of the line rate R; for example, if each packet switch operates at rate R/k, no memory buffers are required to operate at the full line-rate of the system. Ideally, a PPS would share the benefits of an output-queued switch; i.e., the delay of individual packets could be precisely controlled, allowing the provision of guaranteed qualities of service. In this paper, we ask the question: Is it possible for a PPS to precisely emulate the behavior of an output-queued packet switch with the same capacity and with the same number of ports? The main result of this paper is that it is theoretically possible for a PPS to emulate a FCFS (first come, first served) output-queued packet switch if each layer operates at a rate of approximately 2R/k. This simple result is analogous to the Clos condition for a three-stage circuit switch to be strictly non-blocking. We further show that the PPS can emulate any QoS queueing discipline if each layer operates at a rate of approximately 3R/k.
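The speedup results above translate directly into per-plane provisioning arithmetic; the line rate and plane count below are hypothetical.

```python
def pps_layer_rate(line_rate_bps, k, qos=False):
    """Per-layer rate for a parallel packet switch (PPS) with k planes
    to emulate an output-queued switch: approximately 2R/k for FCFS
    emulation and 3R/k for a general QoS queueing discipline, per the
    paper's main results."""
    speedup = 3 if qos else 2
    return speedup * line_rate_bps / k

# A 40 Gb/s line built from k = 8 planes: each plane must run at
# 2 * 40e9 / 8 = 10 Gb/s for FCFS emulation, 15 Gb/s for general QoS.
print(pps_layer_rate(40e9, 8))            # 10000000000.0
print(pps_layer_rate(40e9, 8, qos=True))  # 15000000000.0
```

The factor of two over the naive R/k is the price of precise output-queued emulation, mirroring the Clos speedup for non-blocking circuit switches.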

Patent
22 Dec 2000
TL;DR: The route switch packet architecture as mentioned in this paper provides external memory to a multi-thread packet processor which processes data packets using a multithreaded pipelined machine, where each instruction in the pipeline is executed for a different thread.
Abstract: External memory engine selectable pipeline architecture provides external memory to a multi-thread packet processor which processes data packets using a multi-threaded pipelined machine wherein no instruction depends on a preceding instruction because each instruction in the pipeline is executed for a different thread. The route switch packet architecture transfers a data packet from a flexible data input buffer to a packet task manager, dispatches the data packet from the packet task manager to a multi-threaded pipelined analysis machine, classifies the data packet in the analysis machine, modifies and forwards the data packet in a packet manipulator. The route switch packet architecture includes an analysis machine having multiple pipelines, wherein one pipeline is dedicated to directly manipulating individual data bits of a bit field, a packet task manager, a packet manipulator, a global access bus including a master request bus and a slave request bus separated from each other and pipelined, an external memory engine, and a hash engine.

Patent
28 Mar 2000
TL;DR: In this article, the authors propose an apparatus for and a method of dynamically prioritizing packets over a packet based network, where packets are dynamically prioritized on the basis of their "time to live" in the network as they travel from one network entity to another.
Abstract: An apparatus for and a method of dynamically prioritizing packets over a packet based network. Packets are dynamically prioritized on the basis of their ‘time to live’ in the network as they travel from one network entity to another. Packets are assigned a priority in accordance with how ‘old’ or ‘young’ they are. Packets with a relatively long time left to live are assigned lower priority than those with relatively little time left to live. A time to live (TTL) field is added to the packet as it travels from one network entity to another. The contents of the time to live (TTL) field represent how ‘young’ or ‘old’ the packet is and convey the time left before the packet is no longer of any use. Each network entity that receives the packet with a TTL field subtracts from it the time the packet spends passing through that entity. The field decreases as the packet hops from network entity to network entity until it reaches its destination or is discarded.
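The prioritization rule above can be sketched as a pair of small functions. The level count, TTL range, and discard convention are illustrative assumptions, not values from the patent.

```python
def priority(ttl_remaining_ms, levels=4, max_ttl_ms=200):
    """Map a packet's remaining time-to-live to a priority level,
    where 0 is the most urgent: packets with little time left get
    high priority, 'young' packets can wait.  Level count and TTL
    range are illustrative, not from the patent."""
    ttl = max(0, min(ttl_remaining_ms, max_ttl_ms))
    return min(levels - 1, ttl * levels // max_ttl_ms)

def forward(ttl_remaining_ms, time_in_node_ms):
    """Each network entity subtracts the time the packet spent inside
    it from the TTL field; a packet whose TTL reaches zero is
    discarded (modelled here by returning None)."""
    remaining = ttl_remaining_ms - time_in_node_ms
    return remaining if remaining > 0 else None

print(priority(190))     # young packet -> lowest priority (3)
print(priority(10))      # nearly expired -> highest priority (0)
print(forward(50, 12))   # 38 ms of life left after this hop
print(forward(5, 10))    # None: expired, discard
```

Note that this TTL is a time budget measured in milliseconds, unlike the hop-count TTL of the IP header.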

Patent
08 Dec 2000
TL;DR: In this paper, the authors propose a method for transmitting a frame synchronized message that includes receiving a non real-time frame reference marker in a receiver, timestamping the received frame reference marker with a reception time, and subsequently receiving a control node timing differential at the receiver.
Abstract: A communication apparatus that shares precise return channel uplink timing information includes a common symbol timing reference and one or more control stations that each transmit independent asynchronous DVB data streams which evenly share the common symbol timing. The control stations each include respective delay trackers to determine broadcast transmission delays associated with the particular control station and transmission path. Each broadcast data stream includes the same non real-time frame marker and a transmission delay message particular to the respective control station. A remote receiver receives one of the broadcast streams and timestamps the non real-time frame marker with a local time of receipt. A timing recovery circuit determines an upcoming return channel frame start time by adjusting the local time of receipt by the particular broadcast transmission delay and a unique receiver offset time. A local transmitter subsequently uplinks a TDMA message in a predetermined time-slot after the return channel frame start time. The method for transmitting a frame synchronized message includes receiving a non real-time frame reference marker in a receiver, timestamping the received frame reference marker with a reception time, and subsequently receiving a control node timing differential at the receiver. The local reception time of the non real-time frame marker is corrected to determine the proper return channel frame transmit start time by applying the control node timing differential and the local offset time. Users then uplink a message during an assigned period after the return channel frame transmit start time.

Patent
29 Feb 2000
TL;DR: In this article, a system and method for identifying related packets in a communication flow for the purpose of collectively processing them on a host computer is presented, and a dynamic packet batching module searches for a packet in the same flow as the packet being transferred.
Abstract: A system and method are provided for identifying related packets in a communication flow for the purpose of collectively processing them on a host computer. A packet received at a network interface is parsed to retrieve information from a protocol header. A flow key is generated to identify a communication flow that includes the packet, and is stored in a database of flow keys. When the packet is placed in a queue to be transferred to a host computer, the flow key and/or its index in the database is stored in a separated queue. A dynamic packet batching module searches for a packet in the same flow as the packet being transferred. If a related packet is located, the host computer is alerted and delays processing the transferred packet until the related packet is also received. By collectively processing the related packets, processor time is more efficiently utilized.

01 Jan 2000
TL;DR: A new method for temporal error resilience, called Video Redundancy Coding, is discussed, which is one possible usage of one of the optional modes of ITU-T's advanced video compression recommendation H.263.
Abstract: The forthcoming new version of ITU-T's advanced video compression recommendation H.263 [1] includes several optional modes for the support of packet networks. This paper gives a brief description of those modes and discusses in detail a new method for temporal error resilience, called Video Redundancy Coding, which is one possible usage of one of the optional modes. In conjunction with spatial error resilience mechanisms, Video Redundancy Coding has been proven to be a superior method for achieving high quality video transmission over non-guaranteed-QoS packet networks that have packet loss rates as high as 20%, with a minimal additional coding overhead.

Journal ArticleDOI
Jeong Geun Kim, Marwan Krunz
TL;DR: It is shown that ignoring the autocorrelations in the arrival process or the time-varying nature of the channel state can lead to significant underestimation of the delay performance, particularly at high channel error rates.
Abstract: In this paper, we analyze the mean delay experienced by a Markovian source over a wireless channel with time-varying error characteristics. The wireless link implements the selective-repeat automatic repeat request (ARQ) scheme for retransmission of erroneous packets. We obtain good approximations of the total delay, which consists of transport and resequencing delays. The transport delay, in turn, consists of queueing and transmission delays. In contrast to previous studies, our analysis accommodates both the inherent correlations between packet interarrival times (i.e., traffic burstiness) and the time-varying nature of the channel error rate. The probability generating function (PGF) of the queue length under the "ideal" SR ARQ scheme is obtained and combined with the retransmission delay to obtain the mean transport delay. For the resequencing delay, the analysis is performed under the assumptions of heavy traffic and small window sizes (relative to the channel sojourn times). The inaccuracy due to these assumptions is observed to be negligible. We show that ignoring the autocorrelations in the arrival process or the time-varying nature of the channel state can lead to significant underestimation of the delay performance, particularly at high channel error rates. Some interesting effects of key system parameters on the delay performance are observed.

Patent
15 Mar 2000
TL;DR: In this paper, a method for communicating data and control from a host computer to a device is provided, which includes generating a packet at the host computer and transmitting the packet to the device.
Abstract: A method for communicating data and control from a host computer to a device is provided. The method includes generating a packet at the host computer and transmitting the packet to the device. The device responding to the packet with a handshake, and the handshake includes one of an ACK, a NACK, and an ALERT. The ACK is indicative that the packet was received without errors and a next packet in a sequence of packets can be sent to the device, the NACK is indicative that the packet was received without errors but a re-transmission should be attempted, and the ALERT is indicative of an error condition at the device and a re-transmission should not be attempted. In this example, the packet has a packet format including: (a) a synchronization field; (b) a packet type (PT) field following the synchronization field; (c) a byte count (BC) field for defining a length of data for the packet; (d) a data type (DT) field for defining whether the data is one of link control, device control, and device data; and (e) a data field.

Patent
Naoki Oguchi
19 Apr 2000
TL;DR: In this paper, a packet processing device in which a receiving buffer free space notifying portion notifies a free space of the receiving buffer, an accumulation condition determining portion determines a size of a big packet based on the free space, and a reassembly buffer processor reassembles a plurality of receiving packets into a single big packet.
Abstract: A packet processing device in which a receiving buffer free space notifying portion notifies a free space of a receiving buffer, an accumulation condition determining portion determines a size of a big packet based on the free space, and a reassembly buffer processor reassembles a plurality of receiving packets into a single big packet to be transmitted to the receiving buffer. A backward packet inclusive information reading circuit for detecting the free space based on information within a backward packet from the upper layer may be used as the receiving buffer free space notifying portion. Also, an application layer may be used as the upper layer so that the big packet is transmitted not through a buffer of a transport layer but directly to the receiving buffer.

Patent
17 Mar 2000
TL;DR: In this article, a method for load balancing in a link aggregation environment is presented, which includes determining whether a packet flow in a network switch exceeds a predetermined threshold.
Abstract: A method for load balancing in a link aggregation environment, wherein the method includes the steps of determining if a packet flow in a network switch exceeds a predetermined threshold. Then the method includes the step of determining if the packet flow is a candidate for link switching from a first link to a second link if the packet flow exceeds the predetermined threshold. Additionally, the method includes switching the packet flow from the first link to the second link if the packet flow is determined to be a candidate for link switching. Additionally, a method for load balancing in a link aggregation environment including the steps of determining a length of a first frame and a length of a second frame entering the link aggregation environment. Thereafter, determining a flow rate of the first frame and the second frame entering the link aggregation environment. Then a step of determining if the flow rate exceeds a predetermined flow rate threshold is undertaken, and thereafter, a step of determining if the first frame and the second frame are candidates for link switching is completed. As a final step, the method switches a transmission link for the second frame from a first transmission link to a second transmission link.

Patent
09 Jun 2000
TL;DR: In this paper, a data packet is received over the network from a second node, the data packet including a network identifier for the second node and a Time-To-Live (TTL) field that has a value, with the value of the TTL field indicating a maximum additional number of hops that could have been made by the packet.
Abstract: Provided are techniques and apparatuses for determining the geographic location of a node on a network. In a representative embodiment, a data packet is received over the network from a second node, the data packet including a network identifier for the second node and a Time-To-Live (TTL) field that has a value, with the value of the TTL field for the data packet indicating a maximum additional number of hops that could have been made by the data packet. A probe packet addressed to the network identifier for the second node is then sent, the probe packet also including a TTL field. The initial value for the TTL field of the probe packet is set based on the value for the TTL field of the data packet.
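Setting the probe's initial TTL from a received packet's TTL relies on inferring how many hops that packet has already made. A common heuristic (an assumption of this sketch, consistent with the patent's approach) is that senders start from one of a few well-known initial TTL values.

```python
# Common initial TTL values used by major IP stacks (an assumption
# this sketch relies on, not a value stated in the patent).
COMMON_INITIAL_TTLS = (64, 128, 255)

def hops_travelled(observed_ttl):
    """Infer how many hops a received packet has made: assume the
    sender started from the smallest common initial TTL that is not
    below the observed value, then take the difference."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(hops_travelled(57))   # likely started at 64 -> 7 hops
print(hops_travelled(120))  # likely started at 128 -> 8 hops
```

A probe packet can then be sent with its TTL set near this inferred hop count to localize the remote node.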

Patent
22 Dec 2000
TL;DR: In this paper, a preventive X.25 flow control mechanism is proposed for high speed packet switching networks where calls are multiplexed on network trunks with each connection using a reserved amount of the total bandwidth.
Abstract: A preventive X.25 flow control mechanism for use in a high speed packet switching network where calls are multiplexed on network trunks with each connection using a reserved amount of the total bandwidth. X.25 data terminal equipments access the network via access nodes. Each access node includes a Leaky Bucket component which maintains a refillable token pool. Each time an incoming packet is received by the leaky bucket component, the number of available tokens is compared to two predetermined threshold values. If the number of tokens is less than the low threshold, acknowledgments of received packets are stopped, inducing an interruption of packets transmitted by the emitting attached X.25 terminals. Interrupting packet transmission will lead to a regeneration of the number of tokens in the token pool. If the number of tokens reaches the high threshold, acknowledgments are again generated to restore packet transmissions. The first and second thresholds are greater than zero, reducing the chances that a packet will be marked as being discardable.
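The dual-threshold leaky bucket described above can be sketched as a small state machine. Pool size and threshold values below are illustrative assumptions, not figures from the patent.

```python
class LeakyBucketXonXoff:
    """Sketch of the described preventive flow control: incoming
    packets drain a token pool; acknowledgments are withheld when the
    pool falls below a low threshold (pausing the X.25 sender) and
    resume once refills bring it back above a high threshold.
    Capacity and thresholds are illustrative."""

    def __init__(self, capacity=100, low=20, high=60):
        assert 0 < low < high <= capacity
        self.capacity, self.low, self.high = capacity, low, high
        self.tokens = capacity
        self.acking = True           # acknowledgments currently flowing

    def on_packet(self):
        """Consume a token per incoming packet; stop acks below `low`."""
        if self.tokens > 0:
            self.tokens -= 1
        if self.tokens < self.low:
            self.acking = False      # sender pauses for lack of acks
        return self.acking

    def refill(self, n=1):
        """Tokens regenerate over time; resume acks at `high`."""
        self.tokens = min(self.tokens + n, self.capacity)
        if self.tokens >= self.high:
            self.acking = True

bucket = LeakyBucketXonXoff()
for _ in range(85):                  # a burst drains the pool to 15
    bucket.on_packet()
print(bucket.acking)                 # False: below the low threshold
bucket.refill(50)                    # pool regenerates to 65
print(bucket.acking)                 # True: above the high threshold
```

Keeping both thresholds above zero gives the mechanism its preventive character: transmission pauses before the pool empties, so packets need not be marked discardable.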

Proceedings ArticleDOI
26 Mar 2000
TL;DR: A core-stateless quality of service architecture that can support end-to-end delay differentiation between flows using only simple mechanisms at the core routers in the network is presented.
Abstract: We present a core-stateless quality of service architecture for achieving delay differentiation between flows. There are two key components in our approach: (1) per-class per-hop relative average delay-the average queueing delay perceived by the packets in a delay class at a link is inversely proportional to the delay weight of the class; (2) per-flow end-to-end delay class adaptation-the delay class of a flow is dynamically adjusted based on its perceived end-to-end delay in order to maintain the desired end-to-end average delay requirement of the flow. We show through simulations and analysis that these two components can, in concert, support end-to-end delay differentiation between flows using only simple mechanisms at the core routers in the network.
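The two components can be sketched numerically. The weights, base delay, and the demotion slack factor below are illustrative assumptions, not parameters from the paper.

```python
def class_delay_targets(base_delay_ms, weights):
    """Per-class per-hop relative average delay: class i's average
    queueing delay at a link is inversely proportional to its delay
    weight, i.e. target_i = base / w_i (weights are illustrative)."""
    return {cls: base_delay_ms / w for cls, w in weights.items()}

def adapt_class(current, measured_ms, target_ms, max_class):
    """Per-flow end-to-end adaptation: promote the flow to a
    higher-weight class when its measured end-to-end delay exceeds
    its requirement, demote it when there is ample slack.  The 0.5
    slack factor is an assumed parameter, not from the paper."""
    if measured_ms > target_ms and current < max_class:
        return current + 1
    if measured_ms < 0.5 * target_ms and current > 0:
        return current - 1
    return current

print(class_delay_targets(8.0, {"bronze": 1, "silver": 2, "gold": 4}))
# {'bronze': 8.0, 'silver': 4.0, 'gold': 2.0}
```

The core routers only enforce the relative per-hop delays; all end-to-end state lives at the edge in the adaptation loop, which is what makes the architecture core-stateless.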

Patent
Alireza Abaye1, Wing F. Lo1, Ralph R. Carsten1, Bruce Robertson1, Daniel Briere1 
24 Apr 2000
TL;DR: In this article, a user can create a representation of a target communication system by using icons representing different network components to build the representation in a user interface, and the expected or predicted performance of the target communications system may be determined by using an E-model in one arrangement.
Abstract: A modeling tool that is executable in a system includes a modeling application that is capable of deriving a predicted quality of communications in a represented communications system. A user may create a representation of a target communication system by using icons representing different network components to build the representation in a user interface. Each of the network components is associated with one or more performance parameters, such as packet delay, packet loss, and jitter for packet-based network components. An overall performance parameter, such as an overall packet delay, overall packet loss, and overall jitter may be derived based on the performance parameters for each of the network components. From the overall performance parameters, the expected or predicted performance of the target communications system may be determined by using an E-model in one arrangement.
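The derivation of overall performance parameters and an E-model quality estimate can be sketched as follows. The combination rules (additive delay, independent losses, RMS jitter) and the simplified R-factor formula (a Cole-Rosenbluth-style approximation of ITU-T G.107) are assumptions for illustration, not the patent's exact model:

```python
import math

def overall_path_metrics(components):
    """Combine per-component (delay_ms, loss_frac, jitter_ms) figures
    into overall path figures (combination rules are assumptions)."""
    delay = sum(c[0] for c in components)                    # delays add
    loss = 1.0 - math.prod(1.0 - c[1] for c in components)   # independent losses
    jitter = math.sqrt(sum(c[2] ** 2 for c in components))   # RMS combine
    return delay, loss, jitter

def r_factor(delay_ms, loss_frac):
    """Simplified E-model R-factor: base quality minus delay and loss
    impairments (coefficients are a common approximation, and the loss
    term's codec dependence is ignored here)."""
    i_d = 0.024 * delay_ms + 0.11 * max(0.0, delay_ms - 177.3)
    i_e = 30.0 * math.log(1.0 + 15.0 * loss_frac)
    return 93.2 - i_d - i_e
```

A tool like the one described would evaluate `overall_path_metrics` over the chain of component icons and feed the result into the E-model to predict call quality.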

Patent
26 Oct 2000
TL;DR: In this paper, the authors propose a method and apparatus for efficiently transporting and synchronizing data between the Media Access Control (MAC) and physical communication protocol layers in a wireless communication system.
Abstract: The present invention is a novel method and apparatus for efficiently transporting and synchronizing data between the Media Access Control (MAC) and physical communication protocol layers in a wireless communication system. Depending on the length of the MAC packet to be transported, the present invention either fragments or concatenates the MAC packet when mapping to the physical layer. When a MAC packet is too long to fit in one TC/PHY packet, the MAC packet is fragmented and the resultant multiple TC/PHY packets are preferably transmitted back-to-back within the same TDD frame. When a MAC packet is shorter than a TC/PHY packet, the next MAC packet is concatenated with the current MAC packet into a single TC/PHY packet unless an exception applies (e.g., a change in CPE on the uplink or a change in modulation on the downlink). When an exception applies, the next MAC packet is started on a new TC/PHY packet following either a CTG or MTG.
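The fragmentation/concatenation mapping can be sketched as follows; this simplified version treats TC/PHY packets as fixed-size byte buffers and ignores the exceptions (CPE or modulation changes) and the CTG/MTG gaps described in the abstract:

```python
def map_mac_to_phy(mac_packets, phy_size):
    """Map MAC packets onto fixed-size TC/PHY packets: fragment a MAC
    packet that is too long, concatenate short ones into a shared
    TC/PHY packet (a sketch, not the patent's exact framing)."""
    phy_packets, current = [], b""
    for mac in mac_packets:
        if len(mac) > phy_size:
            # Flush any pending short packets, then fragment across
            # back-to-back TC/PHY packets in the same TDD frame.
            if current:
                phy_packets.append(current)
                current = b""
            for i in range(0, len(mac), phy_size):
                phy_packets.append(mac[i:i + phy_size])
        elif len(current) + len(mac) <= phy_size:
            current += mac                 # concatenate with the current one
        else:
            phy_packets.append(current)    # no room: start a new TC/PHY packet
            current = mac
    if current:
        phy_packets.append(current)
    return phy_packets
```

A real implementation would also carry fragmentation headers so the receiver can reassemble the original MAC packets.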

Patent
12 Apr 2000
TL;DR: In this paper, an automatic repeat request (ARQ) mechanism, which includes combining of initially transmitted and retransmitted versions of a packet, is provided for retransmission of erroneous packets.
Abstract: An automatic repeat request (ARQ) mechanism (such as Type II/III hybrid ARQ), which includes (soft or hard) combining of initially transmitted and retransmitted versions of a packet, is provided for retransmission of erroneous packets. According to the present invention, each retransmission is accompanied by outband signaling (17) from a transmitter (RNC) to a receiver (UE) that unambiguously indicates when (e.g., the exact time or physical location at which) the first transmission of the packet was carried out, so that the retransmitted version can be combined with the previous version(s) of the packet. Soft combining requires that the initial packet and the retransmitted packet be identical. In an embodiment of the invention, to meet this requirement, the information that needs to change between the initial transmission and the retransmission(s) of a packet is sent outband with the other outband signaling information (17). Thus, the retransmitted packet can be kept unchanged and the requirement for identical packets in soft combining is met.
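The soft-combining step at the receiver can be sketched as follows. This is Chase-style combining of per-bit log-likelihood ratios (LLRs) from identical transmissions; the sign convention (positive LLR means bit 0) is an assumption, and a real receiver would first align the buffers using the outband timing signaling described above:

```python
def soft_combine(llr_buffers):
    """Sum per-bit LLRs across the initial transmission and its identical
    retransmissions, then make a hard decision per bit (a sketch)."""
    combined = [sum(bit_llrs) for bit_llrs in zip(*llr_buffers)]
    return [1 if llr < 0 else 0 for llr in combined]
```

Two individually unreliable copies can thus yield a reliable decision: a weakly received bit in one transmission is outvoted by a strongly received copy in another.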

Patent
07 Jul 2000
TL;DR: In this article, a real-time firewall/data protection system that filters data packets in real time and without packet buffering is described, where a data packet filtering hub performs rules-based filtering on several levels simultaneously with programmable logic or other hardware devices.
Abstract: Methods and systems for firewall/data protection that filter data packets in real time and without packet buffering are disclosed. A data packet filtering hub, which may be implemented as part of a switch or router, receives a packet on one link, reshapes the electrical signal, and transmits it to one or more other links. During this process, a number of filter checks are performed in parallel, yielding a decision about whether each packet should be invalidated by the time its last bit is transmitted. To execute this task, the filtering hub performs rules-based filtering on several levels simultaneously, preferably with a programmable logic or other hardware device. Various methods for packet filtering in real time, without buffering, using programmable logic are disclosed. The system may include constituent elements of a stateful packet filtering hub, such as microprocessors, controllers, and integrated circuits. The system may be reset, enabled, disabled, configured, and/or reconfigured with toggles or other physical switches. Audio and visual feedback may be provided regarding the operation and status of the system.
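The rules-based decision can be sketched as follows. In the patent the checks run in parallel in hardware while the bits are being repeated onto the other links; here they are modeled as predicates over a minimal packet record, and the example rules and field names are hypothetical:

```python
def filter_packet(packet, rules):
    """A packet passes only if every rule accepts it; any single failing
    rule invalidates the packet (sequential stand-in for the parallel
    hardware checks)."""
    return all(rule(packet) for rule in rules)

# Hypothetical rules over a minimal packet dict:
rules = [
    lambda p: p["dst_port"] not in {23, 2049},    # block telnet / NFS
    lambda p: not p["src_ip"].startswith("10."),  # drop spoofed internal source
]
```

Because each rule is independent, a hardware implementation can evaluate all of them concurrently and AND the results before the last bit leaves the hub.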

Patent
29 Jun 2000
TL;DR: In this article, an apparatus for distributing processing loads in a service aware network is provided, which contains a controller and a plurality of packet processors coupled to the controller, and a method performed by the apparatus and a software program for controlling the controller are also provided.
Abstract: An apparatus for distributing processing loads in a service aware network is provided. The apparatus contains a controller and a plurality of packet processors coupled to the controller. The controller receives a first data packet and determines whether or not any of the packet processors have been previously selected to process the first data packet based on a classification of the first data packet. When none of the packet processors has been previously designated to process the first data packet, the controller selects a first selected processor of the packet processors to process the first data packet. The first selected processor is selected based on processing load values respectively corresponding to processing loads of the packet processors. In addition, a method performed by the apparatus and a software program for controlling the controller are also provided.
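The controller's selection logic can be sketched as follows; the function name, the load representation, and the mutable assignment table are illustrative assumptions:

```python
def select_processor(classification, loads, assignments):
    """Pick the packet processor for a packet: reuse the processor already
    bound to this packet's classification, otherwise bind the currently
    least-loaded processor and remember the binding (a sketch)."""
    if classification in assignments:
        return assignments[classification]   # previously selected processor
    chosen = min(range(len(loads)), key=lambda i: loads[i])
    assignments[classification] = chosen     # sticky binding for this class
    return chosen
```

Keeping the classification-to-processor binding sticky ensures all packets of one flow reach the same processor, while new flows land on whichever processor is least loaded at that moment.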