
Showing papers on "Fast packet switching published in 1999"


Journal ArticleDOI
TL;DR: It is found that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected, and that large-scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
Abstract: It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to "route fluttering", router "pauses" or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected. Perhaps the most disturbing observation about TCP's behavior is that large-scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.

434 citations


Journal ArticleDOI
TL;DR: A photonic packet switching testbed is detailed which will allow the ideas developed within WASPNET to be tested in practice, permitting the practical problems of their implementation to be determined.
Abstract: WASPNET is an EPSRC-funded collaboration between three British Universities: the University of Strathclyde, Essex University, and Bristol University, supported by a number of industrial institutions. The project, which is investigating a novel packet-based optical WDM transport network, involves determining the management, systems, and devices ramifications of a new network control scheme, SCWP, which is flexible and simplifies optical hardware requirements. The principal objective of the project is to understand the advantages and potential of optical packet switching compared to the conventional electronic approach. Several schemes for packet header implementation are described, using subcarrier multiplexing, separate wavelengths, and serial transmission. A novel node design is introduced, based on wavelength router devices, which reduces loss, hence reducing booster amplifier gain and concomitant ASE noise. The fabrication of these devices, and also of wavelength converters, is described. A photonic packet switching testbed is detailed which will allow the ideas developed within WASPNET to be tested in practice, permitting the practical problems of their implementation to be determined.

294 citations


Patent
08 Feb 1999
TL;DR: In this paper, a high-speed rule processing method for packet filtering is presented, where the rules are divided into N orthogonal dimensions that comprise aspects of each packet that may be examined and tested, each of the dimensions are then divided into a set of dimension rule ranges.
Abstract: As Internet packet flow increases, the demand for high-speed packet filtering has grown. The present invention introduces a high-speed rule processing method that may be used for packet filtering. The method pre-processes a set of packet filtering rules such that the rules may be searched in parallel by a set of independent search units. Specifically, the rules are divided into N orthogonal dimensions that comprise aspects of each packet that may be examined and tested. Each of the N dimensions is then divided into a set of dimension rule ranges. Each rule range is assigned a value that specifies the rules that may apply in that range. The rule preprocessing is completed by creating a search structure to be used for classifying a packet into one of the rule ranges in each of the N dimensions. Each search structure may be used by an independent search unit such that all N dimensions may be searched concurrently. The packet processing method of the present invention activates the N independent search units to search the N pre-processor-created search structures. The output of each of the N search structures is then logically combined to select a rule to be applied.
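
The per-dimension range search and the logical combination step can be illustrated with a small sketch. The two example rules, the bitmap encoding of "rules that may apply in that range", and the `classify` helper below are illustrative assumptions for a two-dimensional case, not the patent's actual data structures.

```python
# Illustrative sketch of range-based parallel rule matching (assumed field
# names and bitmap encoding; not the patent's actual structures).
from bisect import bisect_right

# Two example rules over two dimensions (source port, destination port):
# rule 0: src in [0, 1023],     dst in [80, 80]
# rule 1: src in [1024, 65535], dst in [0, 65535]
RULES = [((0, 1023), (80, 80)), ((1024, 65535), (0, 65535))]

def build_dimension(ranges):
    """Split one dimension into elementary intervals, each tagged with a
    bitmap of the rules that can still match inside that interval."""
    points = sorted({p for lo, hi in ranges for p in (lo, hi + 1)})
    bitmaps = []
    for start in points:
        bm = 0
        for i, (lo, hi) in enumerate(ranges):
            if lo <= start <= hi:
                bm |= 1 << i
        bitmaps.append(bm)
    return points, bitmaps

DIMS = [build_dimension([r[d] for r in RULES]) for d in range(2)]

def classify(packet):
    """Search each dimension independently, then combine the bitmaps."""
    result = (1 << len(RULES)) - 1
    for value, (points, bitmaps) in zip(packet, DIMS):
        idx = bisect_right(points, value) - 1
        result &= bitmaps[idx] if idx >= 0 else 0
    # Lowest-numbered matching rule wins, if any.
    return (result & -result).bit_length() - 1 if result else None

print(classify((443, 80)))    # matches rule 0
print(classify((40000, 25)))  # matches rule 1
```

In a real implementation each dimension lookup would run in an independent search unit so the per-dimension searches happen concurrently; the sequential loop here only stands in for that parallel step.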

181 citations


Patent
Keijo Laiho
07 Jun 1999
TL;DR: In this paper, it was shown that an apparatus in a wireless telecommunications network is provided with Short Message Service via a circuit switched channel unless the apparatus is operating in a packet mode.
Abstract: An apparatus in a wireless telecommunications network is provided with Short Message Service via a circuit switched channel unless the apparatus is operating in a packet mode. If the apparatus is operating in the packet mode, the apparatus is provided with Short Message Service via a packet channel.

161 citations


Patent
29 Apr 1999
TL;DR: In this article, a credit bucket algorithm is used to ensure that packet flows are within specified bandwidth consumption limits, by stripping the layer 2 header information from the packet and storing a linked list of table entries that includes the fields necessary to implement the credit bucket.
Abstract: A method and apparatus for controlling the flow of variable-length packets to a multiport switch involve accessing forwarding information in a memory (214) based at least partially on layer 4 information from a packet and then forwarding (186) the packet only if the packet is within a bandwidth consumption limit that is specified in the forwarding information. In a preferred embodiment, a credit bucket algorithm is used to ensure that packet flows are within specified bandwidth consumption limits. The preferred method for implementing the credit bucket algorithm to control flows of packets involves first receiving a particular packet from a flow and then stripping the layer 2 header information from the packet. The layer 3 and layer 4 information from the packet is then used to look-up (170) flow-specific forwarding and flow control information in a memory that stores a linked list of table entries that includes the fields necessary to implement the credit bucket algorithm. The credit bucket algorithm is implemented in embedded devices within an application-specific integrated circuit, allowing the control of packet flows based on the application of the flow.
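
A minimal sketch of the look-up-then-police step described above appears below. The 5-tuple key, the flow-table entry fields, and the token-bucket-style replenishment are assumptions for illustration; the patent implements the credit bucket in embedded ASIC devices rather than in software like this.

```python
# Hypothetical sketch: look up per-flow forwarding info by layer 3/4 fields
# and forward only if the flow is within its configured bandwidth limit.
# Field names, the flow-table layout, and the replenishment rule are assumed.
import struct
import time

flow_table = {}   # (src_ip, dst_ip, proto, sport, dport) -> entry dict

def flow_key(ip_packet: bytes):
    """Extract a 5-tuple from a raw IPv4 packet (layer 2 already stripped)."""
    ihl = (ip_packet[0] & 0x0F) * 4
    proto = ip_packet[9]
    src, dst = ip_packet[12:16], ip_packet[16:20]
    sport, dport = struct.unpack("!HH", ip_packet[ihl:ihl + 4])
    return (src, dst, proto, sport, dport)

def forward_if_in_profile(ip_packet: bytes, out_port_send):
    entry = flow_table.get(flow_key(ip_packet))
    if entry is None:
        return False                       # unknown flow: drop (assumed policy)
    # Replenish credits proportionally to elapsed time (token-bucket style).
    now = time.monotonic()
    entry["credits"] = min(entry["burst"],
                           entry["credits"] + entry["rate"] * (now - entry["last"]))
    entry["last"] = now
    if len(ip_packet) <= entry["credits"]:
        entry["credits"] -= len(ip_packet)
        out_port_send(ip_packet)           # within limit: forward
        return True
    return False                           # over limit: drop
```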

157 citations


Patent
29 Sep 1999
TL;DR: In this paper, a programmable network element (400) operates on packet traffic flowing through the element in accordance with a gateway program (404, 405, 406) which is dynamically uploaded into the network element or unloaded from it via a mechanism separate from the actual packet traffic as the element operates.
Abstract: A programmable network element (400) operates on packet traffic flowing through the element in accordance with a gateway program (404, 405, 406) which is dynamically uploaded into the network element or unloaded from it via a mechanism separate from the actual packet traffic as the element operates. Such programmable network element can simultaneously operate on plural packet flows with different or the same programs being applied to each flow. A dispatcher (402) provides a packet filter (403) with a set of rules provided by one or more of the dynamically loaded and invoked programs. These rules define, for each program, the characteristics of those packets flowing through the network element that are to be operated upon in some manner. A packet that flows from the network through the filter and satisfies one or more of such rules is sent by the packet filter to the dispatcher. The dispatcher, in accordance with one of the programs, either sends the packet to the program for manipulation by the program itself, or manipulates the packet itself in a manner instructed by the program. The processed packet is sent back through the filter to the network for routing to its destination.

141 citations


Patent
26 Feb 1999
TL;DR: In this paper, a network interface is presented that receives packet data from a shared medium and accomplishes the signal processing required to convert the data packet to host computer formatted data separately from receiving the data packets.
Abstract: A network interface is presented that receives packet data from a shared medium and accomplishes the signal processing required to convert the data packet to host computer formatted data separately from receiving the data packet. The network interface receives the data packet, converts the analog signal to a digitized signal, and stores the resulting sample packet in a storage queue. An off-line processor, which may be the host computer itself, performs the signal processing required to interpret the sample packet. In transmission, the off-line processor converts host-formatted data to a digitized version of a transmission data packet and stores that in a transmission queue. A transmitter converts the transmission data packet format and transmits the data to the shared medium.

140 citations


Patent
13 Jul 1999
TL;DR: In this paper, a method and apparatus for minimizing overhead in packet re-transmission in a communication system is presented, where each packet is given a sequence number, based on a current transmission rate, the size of the packet, and a previously assigned sequence number.
Abstract: A method and apparatus are provided for minimizing overhead in packet re-transmission in a communication system. Each packet is given a sequence number, based on a current transmission rate, the size of the packet, and a previously assigned sequence number. The packet size can be adapted so that the entire packet fits into a single transmission block. The packet size may also be adapted based on throughput. The packet size may be adapted based on the transmission rate and/or throughput, depending on whether the packet is being transmitted for the first time or is being re-transmitted. Alternately, if the packet is being re-transmitted, the packet is transmitted at its original transmission rate, regardless of the current transmission rate.

128 citations


Patent
09 Mar 1999
TL;DR: In this article, a packetization method and packet structure is proposed to improve the robustness of a bitstream generated when a still image is decomposed with a wavelet transform.
Abstract: A packetization method and packet structure improve the robustness of a bitstream generated when a still image is decomposed with a wavelet transform. The wavelet coefficients of one “texture unit” are scanned and coded in accordance with a chosen scanning method to produce a bitstream. The bitstreams for an integral number of texture units are assembled into a packet, each of which includes a packet header. Each packet header includes a resynchronization marker to enable a decoder to resynchronize with the bitstream if synchronization is lost, and an index number which absolutely identifies one of the texture units in the packet to enable a decoder to associate following packets with their correct position in the wavelet transform domain. The header information enables a channel error to be localized to a particular packet, preventing the effects of the error from propagating beyond packet boundaries. The invention is applicable to the pending MPEG-4 and JPEG-2000 image compression standards.
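
A rough sketch of the packet layout described above follows. The marker value, the field widths, and the byte-aligned packing are assumptions for illustration only; the patent and the MPEG-4/JPEG-2000 drafts define their own bit-level syntax.

```python
# Illustrative packing of the packet header described above: a resynchronization
# marker plus an absolute texture-unit index, followed by the texture-unit
# bitstreams.  The marker value and field widths are assumed, not standardized.
import struct

RESYNC_MARKER = 0xB7B7          # assumed 16-bit marker value

def build_packet(first_tu_index: int, tu_bitstreams: list) -> bytes:
    header = struct.pack("!HHB", RESYNC_MARKER, first_tu_index, len(tu_bitstreams))
    return header + b"".join(tu_bitstreams)

def resynchronize(stream: bytes, pos: int) -> int:
    """After a channel error, scan forward for the next resync marker so the
    error stays localized to the damaged packet."""
    marker = struct.pack("!H", RESYNC_MARKER)
    found = stream.find(marker, pos)
    return found if found >= 0 else len(stream)
```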

115 citations


Patent
15 Jul 1999
TL;DR: In this paper, a method, apparatus and software program is provided for scheduling and admission controlling of real-time data packet traffic and a delivery deadline is determined for each payload data packet at the packet scheduler and packets are sorted into a time-stamp based queue.
Abstract: A method, apparatus and software program is provided for scheduling and admission controlling of real-time data packet traffic. Data packets are admitted or rejected for real-time processing according to throughput capabilities of a packet scheduler. A delivery deadline is determined for each payload data packet at the packet scheduler and packets are sorted into a time-stamp-based queue. Deadline violations are monitored and an adaptation of payload data packets can be triggered on demand in order to enter a stable state.

111 citations


Proceedings ArticleDOI
29 Mar 1999
TL;DR: An algorithm that assigns unequal amounts of forward error correction to progressive data so as to provide graceful degradation as packet losses increase is presented, finding that for an exponential packet loss model, good image quality can be obtained, even when 40% of transmitted packets are lost.
Abstract: We present an algorithm that assigns unequal amounts of forward error correction to progressive data so as to provide graceful degradation as packet losses increase. We use the SPIHT coder to compress images in this work, but our algorithm can protect any progressive compression scheme. The algorithm can also use almost any function as a model of packet loss conditions. We find that for an exponential packet loss model with a mean of 20% and a total rate of 0.2 bpp, good image quality can be obtained, even when 40% of transmitted packets are lost.
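
A toy sketch of the unequal-protection idea appears below: earlier (more important) portions of the progressive bitstream receive more redundancy than later ones. The linear importance profile and the rounding rule are assumptions; the paper derives its assignment from the packet-loss model and the coder's rate-distortion behavior.

```python
# Toy sketch of unequal error protection for a progressive bitstream:
# earlier stripes get more FEC packets than later ones.  The linear profile
# below is an assumption, not the paper's optimized assignment.
def fec_profile(num_stripes: int, mean_loss: float, max_fec: int) -> list:
    """Return the number of FEC packets assigned to each stripe."""
    profile = []
    for i in range(num_stripes):
        importance = 1.0 - i / num_stripes          # earlier stripes matter more
        profile.append(round(max_fec * mean_loss * (1 + importance)))
    return profile

print(fec_profile(num_stripes=8, mean_loss=0.2, max_fec=10))
```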

Patent
06 Jan 1999
TL;DR: In this article, the authors present a traffic circle architecture for routing packet data and converting between different packet formats, which includes a data bus configured in a ring or circle, and a plurality of port adapters or protocol processors coupled to the ring data bus or communication circle.
Abstract: A communication system which includes more efficient packet conversion and routing for improved performance and simplified operation. The communication system includes one or more inputs for receiving packet data and one or more outputs for providing packet data. In one embodiment, the present invention comprises a "traffic circle" architecture for routing packet data and converting between different packet formats. In this embodiment, the system includes a data bus configured in a ring or circle. A plurality of port adapters or protocol processors are coupled to the ring data bus or communication circle. Each of the port adapters is configurable for converting between different types of communication packet formats. In the preferred embodiment, each of the port adapters is operable to convert between one or more communication packet formats to/from a generic packet format. The common generic packet format is then provided on the circular bus for circulation on the communication traffic circle between respective ones of the port adapters. In a second embodiment, the present invention comprises a cross-bar switch communication channel. This system is designed to receive a plurality of communications channels comprising packet data. The communication system comprises a plurality of protocol converters or protocol processors for converting possibly differing communication protocols or differing packet formats to/from a common generic packet format. Each of the protocol converters is coupled to a single-sided cross-bar switch to transmit/receive data to/from other protocol converters. The single-sided cross-bar switch is operable for interconnecting the multiple communications paths between arbitrary pairs of communications ports. The system preferably includes arbitration and control logic for establishing and removing connection paths within the cross-bar switch. In the preferred embodiment, the single-sided cross-bar switch is configurable for different transmission paths for added flexibility.

Patent
09 Apr 1999
TL;DR: In this article, a process and system for switching connections of data packet flows between nodes of data processing system networks operating on diverse protocols according to the application layer information on the data packets is presented.
Abstract: A process and system for switching connections of data packet flows between nodes of data processing system networks operating on diverse protocols according to the application layer information on the data packets. The process retrieves and hashes the header information to form an index into memory where a flow tag pointer is stored. The flow tag points to flow switching information that directs the forwarding of the packet. The switching information is sent along with the packet data to direct the forwarding, and state information about the flow is updated in the flow switching information. The hash function includes a multiplication and division by polynomials forming a hash result and a signature result. Both hash and signature are used to ensure that the information retrieved is valid. If invalid, the prehashed header information is parsed to determine the forwarding information. This forwarding information is stored for later use and the appropriate flow tag pointer is stored in the hash result index.
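
The hash-plus-signature idea reads roughly as in the sketch below: two independent functions of the pre-hashed header, one selecting a table slot and the other validating the entry stored there. The use of zlib's CRC32 and Adler32 is only an illustrative stand-in for the patent's polynomial multiplication and division.

```python
# Sketch of the hash + signature validation described above.  Two independent
# functions of the header select a table slot and validate it; the zlib
# checksums here are stand-ins for the patent's polynomial arithmetic.
import zlib

TABLE_SIZE = 1 << 16
flow_table = [None] * TABLE_SIZE      # each slot: (signature, flow_tag) or None

def hash_and_signature(header: bytes):
    index = zlib.crc32(header) % TABLE_SIZE
    signature = zlib.adler32(header) & 0xFFFFFFFF
    return index, signature

def lookup_flow(header: bytes):
    index, signature = hash_and_signature(header)
    slot = flow_table[index]
    if slot is not None and slot[0] == signature:
        return slot[1]                # valid entry: flow tag with forwarding info
    return None                       # miss or collision: fall back to full parse

def install_flow(header: bytes, flow_tag):
    index, signature = hash_and_signature(header)
    flow_table[index] = (signature, flow_tag)
```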

Journal ArticleDOI
TL;DR: This article presents a new proposal for TCP-IP backbone implementation based on optical packet switching technology that merges the flexibility in resource management of packet switching with the high capacity offered by full optical technology.
Abstract: This article presents a new proposal for TCP-IP backbone implementation based on optical packet switching technology. The proposed network architecture merges the flexibility in resource management of packet switching with the high capacity offered by full optical technology.

Patent
30 Jul 1999
TL;DR: In this article, the intermediate point assigns an intermediate point sequence number to the packet and a copy of the packet is retained in a buffer until receiving an acknowledgment from the next delivery point.
Abstract: Method for transmitting data from a source node (3) to a destination node (4) via an intermediary point, using packet sequence numbers. A copy of the packet is stored in a buffer at the source node until receiving an acknowledgment that it was successfully received by the intermediary point. The intermediate point assigns an intermediate point sequence number to the packet and a copy of the packet is retained in a buffer until receiving an acknowledgment from the next delivery point. The packet in the buffer is de-allocated once it is successfully received. Upon receipt of an error indication, each packet is retransmitted. At the receiving end, all received packets following the packet associated with the error indication are dropped until successfully receiving a retransmitted version of the packet. A single negative acknowledgment indicates that the packet associated with the negative acknowledgment includes at least one error and simultaneously indicates that all previous packets received prior to the packet associated with the negative acknowledgment were received correctly. An independent link sequence number is assigned to each packet before transmitting.

Patent
07 May 1999
TL;DR: In this paper, the authors propose a method and apparatus to limit the throughput rate of non-adapting aggressive flows on a packet-by-packet basis, based on a subset of the packet's header data, giving an approximation of perflow management.
Abstract: A method and apparatus to limit the throughput rate of non-adapting aggressive flows on a packet-by-packet basis. Each packet of an input flow is mapped to an entry in a flow table for each output queue. The mapping is based on a subset of the packet's header data, giving an approximation of per-flow management. Each entry contains a credit value. On packet reception, the credit value is compared to zero; if there are no credits, the packet is dropped. Otherwise, the size of the packet is compared to the credit value. If sufficient credits exist (i.e., size is less than or equal to credits), the credit value is decremented by the size of the packet in cells and the processing proceeds according to conventional methods, including but not limited to those disclosed in the co-pending DBL Application, incorporated herewith by reference in its entirety. If, however, the size of the packet exceeds the available credits, the credit value is set to zero and the packet is dropped. A periodic task adds credits to each flow table entry up to a predetermined maximum. The processing rate of each approximated flow is thus maintained to the rate determined by the number of credits present at each enqueuing decision, up to the allowed maximum. The scheme operates independently of packet flow type, providing packet-specific means for rapidly discriminating well-behaved flows that adapt to congestion situations signaled by packet drop from aggressive, non-adapting flows and managing throughput bandwidth accordingly. Bandwidth is shared fairly among well-behaved flows, large and small, and time-critical (low latency) flows, thereby protecting all from non-adapting aggressive flows.
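
The drop and decrement rules above map almost directly to code. In the sketch below the table size, the header fields used for the approximate flow key, the cell size, and the replenishment amount are assumed values; the patent's hardware keys the table per output queue.

```python
# Sketch of the per-flow credit scheme described above: map the packet to a
# flow-table entry by hashing a subset of header fields, drop when no credits
# remain, and zero the entry when a packet exceeds the available credits.
CELL_BYTES = 64
TABLE_SIZE = 4096
MAX_CREDITS = 256                               # credits counted in cells

credits = [MAX_CREDITS] * TABLE_SIZE

def flow_index(src_ip: int, dst_ip: int, proto: int) -> int:
    return hash((src_ip, dst_ip, proto)) % TABLE_SIZE   # approximate per-flow key

def admit(packet_len: int, src_ip: int, dst_ip: int, proto: int) -> bool:
    i = flow_index(src_ip, dst_ip, proto)
    if credits[i] == 0:
        return False                            # no credits: drop
    size_in_cells = -(-packet_len // CELL_BYTES)          # ceiling division
    if size_in_cells <= credits[i]:
        credits[i] -= size_in_cells
        return True                             # enqueue normally
    credits[i] = 0                              # packet exceeds remaining credits:
    return False                                # zero the entry and drop

def replenish(amount: int = 16) -> None:
    """Periodic task: add credits to every entry up to the maximum."""
    for i in range(TABLE_SIZE):
        credits[i] = min(MAX_CREDITS, credits[i] + amount)
```

Because the key is a hash of header fields rather than a true per-connection identifier, the scheme gives only an approximation of per-flow management, which is exactly the trade-off the abstract describes.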

Patent
Sudhir Dixit, Nasir Ghani
12 Oct 1999
TL;DR: In this paper, a method and apparatus for coupling IP ECN with ATM congestion control is disclosed, which includes using AAL5 packet trailers in ATM cells to detect packet boundaries for identifying a first cell in an IP packet, determining whether an ATM cell is capable of using explicit congestion notification to indicate congestion, and setting an explicit congestion notification indicator in a capable ATM cell to indicate congestion to a source node.

Abstract: A method and apparatus for coupling IP ECN with ATM congestion control is disclosed. The invention extends IP-ECN to ATM devices with minimal implementation complexity. Thus, the performance of IP data traffic over ATM is enhanced without requiring packet reconstruction at the ATM layer. The method includes using AAL5 packet trailers in ATM cells to detect packet boundaries for identifying a first cell in an IP packet, determining whether an ATM cell is capable of using explicit congestion notification to indicate congestion, and setting an explicit congestion notification indicator in a capable ATM cell to indicate congestion to a source node. The use of the packet trailers further comprises monitoring a flag for indicating whether an ATM cell is an end of packet. The method further includes resetting the end-of-packet flag to an off state so the next cell is recognized as a first cell of a packet and transmitting the ATM cell (880). A determination is made as to whether the next ATM cell is a first ATM cell for a packet. The next ATM cell is transmitted when the ATM cell is not a first ATM cell for a packet. A determination is then made as to whether ATM congestion is associated with the next ATM cell. The next ATM cell is then transmitted when congestion is not associated with the next ATM cell.
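
The per-cell logic reads roughly as follows: the AAL5 end-of-packet flag in one cell means the next cell begins a new IP packet, and only the first cell of a packet is considered for ECN marking when congestion is present. The `Cell` class and the placeholder helpers are simplified stand-ins for real ATM cell and AAL5 trailer processing.

```python
# Rough sketch of the cell-level logic described above.  The Cell class is a
# simplified stand-in for real ATM cell parsing; the ECN-capability check and
# the output stage are placeholders.
from dataclasses import dataclass

@dataclass
class Cell:
    payload: bytes
    end_of_packet: bool = False     # AAL5 trailer present in this cell
    congestion_bit: bool = False    # explicit congestion indication to set

def supports_ecn(cell: Cell) -> bool:
    return True                     # placeholder: check ECN capability of the packet

def transmit(cell: Cell) -> None:
    pass                            # placeholder for the ATM output stage

def mark_cells(cells, congested: bool):
    first_cell_of_packet = True     # the very first cell begins a packet
    for cell in cells:
        if first_cell_of_packet and congested and supports_ecn(cell):
            cell.congestion_bit = True      # signal congestion via this packet
        # The end-of-packet flag means the *next* cell starts a new packet.
        first_cell_of_packet = cell.end_of_packet
        transmit(cell)
```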

Patent
06 May 1999
TL;DR: In this paper, conditions at the receiver which may be different to those at the source can now be taken into account in the flow control, which can reduce control loop delays caused by waiting at source for a number of acknowledgments to arrive before the congestion level can be calculated.
Abstract: In a packet network, on receiving a packet a receiving host determines if the packet has been marked by any of the nodes through which it passed, to indicate congestion at that node, e.g. by checking the CE bit in an IP header. A packet flow control parameter is generated at the receiving side, and sent to the source using an Internet Protocol, as part of the packet acknowledgment, to control the flow of packets from the source, according to the packet flow control parameter. This can reduce control loop delays caused by waiting at the source for a number of acknowledgments to arrive before the congestion level can be calculated. Conditions at the receiver which may be different to those at the source can now be taken into account in the flow control.
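
A schematic of the receiver-side calculation appears below: the receiver tallies CE-marked packets over a short interval and echoes a flow-control parameter back to the source in its acknowledgments. The interval length and the halve/probe rule are illustrative placeholders, not the patent's formula.

```python
# Schematic of receiver-side ECN feedback: count CE-marked packets and return
# a flow-control parameter to be carried in the acknowledgment.  The interval
# and the adjustment rule are illustrative assumptions.
class EcnReceiver:
    def __init__(self, initial_rate: float):
        self.allowed_rate = initial_rate
        self.marked = 0
        self.received = 0

    def on_packet(self, ce_marked: bool) -> float:
        """Process one received packet and return the parameter to echo."""
        self.received += 1
        if ce_marked:
            self.marked += 1
        if self.received >= 16:                  # end of measurement interval
            if self.marked:
                self.allowed_rate *= 0.5         # congestion seen: slow down
            else:
                self.allowed_rate *= 1.1         # no marks: probe upward
            self.marked = self.received = 0
        return self.allowed_rate                 # carried in the acknowledgment
```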

Journal ArticleDOI
TL;DR: The dependence of the efficiency of hybrid type-II ARQ schemes on the packet size in the context of a simple packet combining scheme is discussed and a very simple method of estimating the channel BER is provided.
Abstract: The dependence of the efficiency of hybrid type-II ARQ schemes on the packet size in the context of a simple packet combining scheme is discussed. A simple algorithm for adopting the optimum packet size according to the channel bit error rate (BER) is presented. Also, a very simple method of estimating the channel BER is provided.
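
The reason an optimum packet size exists can be seen from the standard ARQ efficiency expression: with an h-bit overhead and a k-bit payload, the probability that the whole packet arrives intact falls with k while the header overhead per payload bit falls as k grows, so their product peaks at an intermediate size. The small computation below uses that textbook expression, not the paper's type-II combining analysis, and the header size is an assumed value.

```python
# Why an optimum packet size exists for a given BER: with an h-bit overhead and
# a k-bit payload, plain ARQ delivers useful throughput
#     eta(k) = (k / (k + h)) * (1 - ber) ** (k + h),
# which is maximized at an intermediate k.  Textbook model, assumed header size.
def best_payload_bits(ber: float, header_bits: int = 48, max_bits: int = 20000) -> int:
    def eta(k: int) -> float:
        return (k / (k + header_bits)) * (1.0 - ber) ** (k + header_bits)
    return max(range(8, max_bits, 8), key=eta)

for ber in (1e-5, 1e-4, 1e-3):
    print(ber, best_payload_bits(ber))   # optimum shrinks as the BER worsens
```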

Patent
Amr Gaber Sabaa, Ibrahim El-Etr
25 Jun 1999
TL;DR: In this paper, the authors propose a scalable high capacity switch architecture with a plurality of switching controllers in a switching system, where each switching controller has provisions for receiving a data packet, determining a destination of the data packet and determining whether any other switching controller in the system intends to transmit to the determined destination.
Abstract: A method, apparatus and system for a scalable high capacity switch architecture involves a switching controller for use with a plurality of switching controllers in a switching system. Each switching controller has provisions for receiving a data packet, provisions for determining a destination of the data packet, provisions for determining whether any other switching controller in the system intends to transmit to the determined destination of the data packet and provisions for transmitting the received data packet to the determined destination when no other switching controllers intend to transmit to the determined destination.

Patent
14 Oct 1999
TL;DR: In this paper, the authors describe a method for transmitting data packets over a packet switching network with widely varying link speeds, where each switch along a route from a source to a destination forwards data packets in periodic time frames (TFs) of a plurality of durations that are predefined using the CTR.
Abstract: The invention describes a method for transmitting data packets over a packet switching network with widely varying link speeds. The switches of the network maintain a common time reference (CTR). Each switch along a route from a source to a destination forwards data packets in periodic time frames (TFs) of a plurality of durations that are predefined using the CTR. The time frame duration can be longer than the time duration required for transmitting a data packet, in which case the exact position of a packet in the time frame is not predetermined. In accordance with the present invention, different time frame durations (TF1, TF2, and so on) are used for forwarding over links with different capacities. This invention further describes a method for transmitting and forwarding data packets over packet switching and shared media networks. The shared media network can be of various types, including but not limited to: IEEE P1394 and Ethernet for desktop computers and room area networks, cable modem head-end (e.g., DOCSIS, IEEE 802.14), wireless base-station (e.g., IEEE 802.11), and Storage Area Network (SAN) (e.g., FC-AL, SSA). The invention further describes a method for interfacing a packet-switched network with real-time streams from various sources, such as circuit-switched telephony network sources. A data packet that is packetized at the gateway is scheduled to be forwarded to the network in a predefined time that is responsive to the common time reference. The invention relates, in particular, to timely forwarding and delivery of data packets between voice over IP (VoIP) gateways. Consequently, the invention provides a routing service between any two VoIP gateways where the end-to-end performance parameters, such as loss, delay and jitter, have deterministic guarantees. Furthermore, the invention enables gateway functions with minimum delay.
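
A small sketch of mapping a packet onto its forwarding time frame from a common time reference follows. The per-link frame durations and the two-frames-ahead schedule are assumed example values; the patent predefines its own set of durations and per-route schedules.

```python
# Sketch of forwarding in periodic time frames derived from a common time
# reference (CTR).  Frame durations per link speed are assumed example values.
import time

TF_DURATION = {"10M": 0.01, "100M": 0.001, "1G": 0.000125}   # seconds per frame

def current_time_frame(ctr_seconds: float, link: str) -> int:
    """Index of the time frame that contains the given CTR instant."""
    return int(ctr_seconds / TF_DURATION[link])

def wait_for_frame(target_frame: int, link: str) -> None:
    """Block until the start of the scheduled frame; the packet may then be sent."""
    start = target_frame * TF_DURATION[link]
    delay = start - time.time()
    if delay > 0:
        time.sleep(delay)

# Example: a packet scheduled two frames ahead on an outgoing 100 Mb/s link.
frame_now = current_time_frame(time.time(), "100M")
wait_for_frame(frame_now + 2, "100M")
```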

Journal ArticleDOI
TL;DR: The application of a type-II hybrid ARQ protocol in a slotted direct-sequence spread-spectrum multiple-access (DS-SSMA) packet radio system is investigated and it is shown that for each fixed input load, there is an optimal retransmission probability under the finite user population assumption.
Abstract: The application of a type-II hybrid ARQ protocol in a slotted direct-sequence spread-spectrum multiple-access (DS-SSMA) packet radio system is investigated. Both the static performance and the dynamic performance of such a system are analyzed. In the physical layer, packet error and packet success probabilities are computed using the improved Gaussian approximation technique, which accounts for the bit-to-bit error dependence within a packet. In the data-link layer, two-dimensional Markov chains are employed to model the system dynamics. Based on this model, the performance of the type-II hybrid ARQ protocol is upper and lower bounded by considering, respectively, a superior scheme and an inferior scheme. Steady state throughput and delay performances of the two bounding schemes are obtained. Moreover, it is shown that for each fixed input load, there is an optimal retransmission probability under the finite user population assumption. Bounds on this optimal retransmission probability are also given.

Patent
Takeshi Saito, Keiji Tsunoda, Eiji Kamagata, Noriyasu Kato, Ichiro Tomoda, Hirokazu Tanaka
03 Sep 1999
TL;DR: In this article, a packet to be transmitted is divided into segments to form a plurality of packet segments, each processed packet segment is transmitted to the network, and an original packet is formed at the receiving side from the plurality of processed packet segments.
Abstract: A communication node which can reliably transfer a packet with a payload including data having error resistance in radio environments is disclosed. A packet to be transmitted is divided into segments to form a plurality of packet segments. From among a plurality of error correction schemes that have been prepared in advance, the scheme to be employed is selected for each of the packet segments in accordance with predetermined criteria, and the selected error correction scheme is applied to each packet segment. Subsequently, the processed packet segment is transmitted to the network. Packet segments are received from the network. From among a plurality of error correction schemes prepared in advance, the scheme to be employed is selected for each of the received packet segments based on predetermined information contained in each received packet segment, and the selected error correction scheme is applied to the received packet segment. An original packet is formed from the plurality of processed packet segments.
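
A schematic of the per-segment selection step follows: segments carrying error-tolerant payload get a lighter scheme, while the segment carrying the header gets a stronger one. The scheme names, segment length, and selection criterion are placeholders, not the patent's predetermined criteria.

```python
# Schematic of per-segment error-correction selection: error-tolerant payload
# segments get a lighter scheme, the header-bearing segment a stronger one.
# Scheme names, segment length, and the criterion are placeholders.
def segment(packet: bytes, seg_len: int = 64):
    return [packet[i:i + seg_len] for i in range(0, len(packet), seg_len)]

def choose_scheme(index: int, error_tolerant_payload: bool) -> str:
    if index == 0:
        return "strong-FEC"         # first segment carries the header
    return "light-FEC" if error_tolerant_payload else "strong-FEC"

def encode_for_radio(packet: bytes, error_tolerant_payload: bool):
    out = []
    for i, seg in enumerate(segment(packet)):
        scheme = choose_scheme(i, error_tolerant_payload)
        # Each transmitted segment is tagged with its scheme so the receiver
        # can select the matching decoder from the segment's information.
        out.append((scheme, seg))
    return out
```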

Patent
24 Jun 1999
TL;DR: In this paper, a remote access server and a method for using it in a packet network are presented, in which the packet switch fabric transfers packet-based signals between the packet network server and the dial access server, and the dial access server includes a second digital signal processor for performing signal processing on the packet-based signals.

Abstract: A remote access server and method for using the remote access server in a packet network. In one embodiment, the remote access server includes a packet switch fabric, a packet network server and a dial access server. The packet network server has a first port for sending and receiving packet-based signals with the packet switch fabric and a second port for sending and receiving packet-based signals with the packet network. The dial access server has a port for sending and receiving packet-based signals with the packet switch fabric and the dial access server has a first digital signal processor for performing signal processing on the packet-based signals. The packet switch fabric transfers packet-based signals between the packet network server and the dial access server. In a further embodiment, the dial access server includes a second digital signal processor for performing signal processing on the packet-based signals. The first digital signal processor may be a channel signal processor and the second digital signal processor may be a packet protocol processor. The signal processors perform remote access signal processing. The packet protocol processor may perform dial-up Internet protocol support. The channel signal processor may perform modulation and demodulation of packet-based signals, transcoding of packet-based signals, and automatic modem adaptation.

Patent
15 Dec 1999
TL;DR: In this article, a packet switch and a packet switching method capable of taking full advantage of the transfer capability of the packet switch by avoiding the influence due to the congestion are disclosed.
Abstract: A packet switch and a packet switching method capable of taking full advantage of the transfer capability of the packet switch by avoiding the influence due to congestion are disclosed. In the packet switch, a priority level according to the congestion status of the transfer target is attached to a packet, and the processing at a time of packet collision is carried out by accounting for this priority level, so that it becomes possible to carry out the packet transfer control according to the congestion status of the transfer target of each packet.

Proceedings ArticleDOI
C. Benecke
06 Dec 1999
TL;DR: The paper demonstrates why security issues related to the continually increasing bandwidth of high speed networks (HSN) cannot be addressed with conventional firewall mechanisms and shows how hardware may be utilized to distribute the network load among such parallel packet screens.
Abstract: The paper demonstrates why security issues related to the continually increasing bandwidth of high speed networks (HSN) cannot be addressed with conventional firewall mechanisms. A single packet screen running on a fast computer is not capable of filtering all packets traversing a Fast/Gigabit Ethernet. This problem can be addressed by using parallel processing methods to implement a fast, scalable packet screen for Ethernets. The paper shows how hardware may be utilized to distribute the network load among such parallel packet screens. Empirical results using 'off-the-shelf' equipment indicate that this approach is usable.
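
The load-distribution step can be as simple as a flow hash over the connection identifiers, which keeps both directions of a connection on the same screen so stateful filtering still works. The paper's splitter is hardware; the software hash below is only an illustration of that distribution idea, with an assumed screen count.

```python
# Sketch of splitting traffic across N parallel packet screens.  Hashing the
# sorted connection endpoints keeps both directions of a flow on the same
# screen; the paper distributes load in hardware, so this is illustrative only.
N_SCREENS = 4

def screen_for(src_ip: str, dst_ip: str, proto: int, sport: int, dport: int) -> int:
    # Sort the endpoints so both directions of a connection map identically.
    a, b = sorted([(src_ip, sport), (dst_ip, dport)])
    return hash((a, b, proto)) % N_SCREENS

print(screen_for("10.0.0.1", "192.168.1.9", 6, 40000, 80))
print(screen_for("192.168.1.9", "10.0.0.1", 6, 80, 40000))   # same screen
```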

Patent
Prabhas Kejriwal, Chi Fai Ho
14 Oct 1999
TL;DR: In this article, a method is described that involves presenting packet header information from a packet and packet size information for the packet to a pipeline that comprises multiple stages, where one of the stages identifies, from the packet header information, where input flow information for the packet is located, and the input flow information is then fetched.
Abstract: A method is described that involves presenting packet header information from a packet and packet size information for the packet to a pipeline that comprises multiple stages. One of the stages identifies, with the packet header information, where input flow information for the packet is located. The input flow information is then fetched. The input flow information identifies where input capacity information for the packet is located and the input capacity information is then fetched. Another of the stages compares an input capacity for the packet with the packet's size and indicates whether the packet is conforming or non-conforming based upon the comparison. The input capacity is calculated from the input capacity information.

Patent
07 Sep 1999
TL;DR: In this article, a plurality of input bit streams are connected to a telecommunications switching network fabric element comprising a microprocessor system including memory, which performs switching and protocol conversion functions on the input streams in order to generate output streams.
Abstract: Apparatus and a method for flexibly switching telecommunication signals are disclosed. A plurality of input bit streams are connected to a telecommunications switching network fabric element comprising a microprocessor system including memory. The microprocessor system, under program control, performs switching and protocol conversion functions on the input streams in order to generate output streams. Advantageously, a single element is able to concurrently switch input signals in a variety of protocols, including circuit switching protocols, such as Pulse Code Modulation (PCM), and packet switching protocols, such as Asynchronous Transfer Mode (ATM) protocols and Internet Protocol (IP). Where desirable, the microprocessor system can also control the protocol conversion of input signals in one protocol to output signals in another protocol.

Journal ArticleDOI
TL;DR: This paper exploits existing techniques to design CNNs with a prescribed set of stable binary equilibrium points as a basic tool to suppress spurious responses and, hence, to optimize the neural switching fabric performance.
Abstract: In this paper we discuss the design of a cellular neural network (CNN) to solve a class of optimization problems of importance for communication networks. The CNN optimization capabilities are exploited to implement an efficient cell scheduling algorithm in a fast packet switching fabric. The neural-based switching fabric maximizes the cell throughput and, at the same time, it is able to meet a variety of quality of service (QoS) requirements by optimizing a suitable function of the switching delay and priority of the cells. We also show that the CNN approach has advantages with respect to that based on Hopfield neural networks (HNNs) to solve the considered class of optimization problems. In particular, we exploit existing techniques to design CNNs with a prescribed set of stable binary equilibrium points as a basic tool to suppress spurious responses and, hence, to optimize the neural switching fabric performance.

Patent
30 Jun 1999
TL;DR: In this paper, a method and apparatus for transmitting image data over a packet-based bus was proposed, which essentially supports multiple pipes within a single packet, thus utilizing a single set of pipe interface hardware.
Abstract: A method and apparatus for transmitting image data over a packet-based bus. The invention essentially supports multiple pipes within a single packet, thus utilizing a single set of pipe interface hardware. Each of the pipes within a packet has its own header, and any particular packet can have 0, 1, 2, or more headers.