
Showing papers on "Chunking (computing)" published in 2017


Posted Content
TL;DR: This article proposes a neural sequence chunking model that treats each chunk as a complete unit for labeling and achieves state-of-the-art performance on both text chunking and slot filling tasks.
Abstract: Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence. Most of the current deep neural network (DNN) based methods consider these tasks as a sequence labeling problem, in which a word, rather than a chunk, is treated as the basic unit for labeling. These chunks are then inferred by the standard IOB (Inside-Outside-Beginning) labels. In this paper, we propose an alternative approach by investigating the use of DNN for sequence chunking, and propose three neural models so that each chunk can be treated as a complete unit for labeling. Experimental results show that the proposed neural sequence chunking models can achieve state-of-the-art performance on both the text chunking and slot filling tasks.
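
For readers unfamiliar with the IOB convention mentioned above, the following minimal Python sketch (an illustration of the labeling scheme, not the paper's model) shows how word-level IOB tags are decoded back into labeled chunks:

```python
def iob_to_chunks(words, tags):
    """Group IOB-labeled words into (chunk_type, text) spans.

    B-X starts a chunk of type X, I-X continues it, and O marks
    words outside any chunk.
    """
    chunks, current = [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            if current:
                chunks.append(current)
            current = (tag[2:], [word])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(word)
        else:  # "O" or an inconsistent I- tag closes any open chunk
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return [(label, " ".join(ws)) for label, ws in chunks]

print(iob_to_chunks(["But", "the", "cat", "sat", "down"],
                    ["O", "B-NP", "I-NP", "B-VP", "I-VP"]))
# [('NP', 'the cat'), ('VP', 'sat down')]
```

The paper's point is that this word-level detour can be avoided by treating each chunk as the unit of labeling directly.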

57 citations


Proceedings Article
12 Feb 2017
TL;DR: This paper investigates the use of DNN for sequence chunking, and proposes three neural models so that each chunk can be treated as a complete unit for labeling, which can achieve state-of-the-art performance on both the text chunking and slot filling tasks.
Abstract: Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence. Most of the current deep neural network (DNN) based methods consider these tasks as a sequence labeling problem, in which a word, rather than a chunk, is treated as the basic unit for labeling. These chunks are then inferred by the standard IOB (Inside-Outside-Beginning) labels. In this paper, we propose an alternative approach by investigating the use of DNN for sequence chunking, and propose three neural models so that each chunk can be treated as a complete unit for labeling. Experimental results show that the proposed neural sequence chunking models can achieve state-of-the-art performance on both the text chunking and slot filling tasks.

50 citations


Journal ArticleDOI
TL;DR: This paper proposes a high throughput hash-less chunking method called Rapid Asymmetric Maximum (RAM). Instead of using hashes, RAM uses byte values to declare the cut points, which allows RAM to do fewer comparisons while retaining the CDC property.
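
The essential idea is that cut points are declared by comparing raw byte values rather than computing a rolling hash. A hedged sketch of that idea (the window size and fallback behavior are illustrative assumptions, not the published algorithm's exact parameters):

```python
import os

def byte_max_boundaries(data: bytes, fixed_window: int = 256):
    """Hash-less chunking in the spirit of RAM: take the maximum byte
    value over an initial fixed-size window, then cut at the first
    later byte that is >= that maximum (byte comparisons only)."""
    boundaries, start, n = [], 0, len(data)
    while start < n:
        window_end = min(start + fixed_window, n)
        local_max = max(data[start:window_end], default=0)
        cut = n  # fall back to end of data if no qualifying byte appears
        for i in range(window_end, n):
            if data[i] >= local_max:
                cut = i + 1  # the maximal byte closes the chunk
                break
        boundaries.append((start, cut))
        start = cut
    return boundaries

blob = os.urandom(1 << 16)
print(byte_max_boundaries(blob)[:5])  # first few (start, end) cut points
```

Because the cut condition depends only on content, boundaries resynchronize after insertions, which is the CDC property the TL;DR refers to.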

41 citations


Journal ArticleDOI
TL;DR: A connectionist autoencoder model, TRACX2, is presented that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights and recognizing these chunks when they re-occur in the input stream.
Abstract: Even newborn infants are able to extract structure from a stream of sensory inputs; yet how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that le...

36 citations


Proceedings ArticleDOI
11 May 2017
TL;DR: Different chunking techniques and algorithms used in the data deduplication process are discussed, a taxonomy and comparison of different chunking algorithms is presented, and the research done on the design of chunking algorithms is summarized.
Abstract: Utilization of cloud storage for data backup has become a leading technology, as it provides scalability and availability. The current requirements of big-data backups lead to performance and storage-space problems in the cloud environment. A variety of solutions have been proposed to improve cloud storage performance and storage-space efficiency. Data deduplication is one of the promising techniques to resolve these storage issues. This paper covers the background of the data deduplication technique, including types of deduplication and their performance metrics, and then discusses the different chunking techniques and algorithms used in the data deduplication process. A taxonomy and comparison of different chunking algorithms is presented. Finally, the paper summarizes the research done on the design of chunking algorithms and their advantages and limitations, pointing out future research directions.
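
Among the performance metrics such surveys cover, the deduplication ratio is the most common; a small sketch makes it concrete (SHA-256 chunk fingerprints are a typical choice here, and exact metric definitions vary across the surveyed papers):

```python
import hashlib

def dedup_ratio(chunks):
    """Deduplication ratio: logical bytes stored / unique bytes kept."""
    logical = sum(len(c) for c in chunks)
    unique = {}
    for c in chunks:
        unique.setdefault(hashlib.sha256(c).digest(), len(c))
    return logical / sum(unique.values())

chunks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]  # one duplicate chunk
print(dedup_ratio(chunks))  # 1.5
```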

19 citations


Posted ContentDOI
06 Jan 2017-bioRxiv
TL;DR: The results reveal a normative rationale for center-surround connectivity in working memory circuitry, call for re-evaluation of memory performance differences that have previously been attributed to differences in capacity, and support a more nuanced view of visual working memory capacity limitations.
Abstract: The nature of capacity limits for visual working memory has been the subject of an intense debate that has relied on models that assume items are encoded independently. Here we propose that instead, similar features are jointly encoded through a chunking process to optimize performance on visual working memory tasks. We show that such chunking can: 1) facilitate performance improvements for abstract capacity-limited systems, 2) be optimized through reinforcement, 3) be implemented by center-surround dynamics, and 4) increase effective storage capacity at the expense of recall precision. Human performance on a variant of a canonical working memory task demonstrated performance advantages, precision detriments, inter-item dependencies, and trial-to-trial behavioral adjustments diagnostic of performance optimization through center-surround chunking. Models incorporating center-surround chunking provided a better quantitative description of human performance in our study as well as in a meta-analytic dataset, and apparent differences in working memory capacity across individuals were attributable to individual differences in the implementation of chunking. Our results reveal a normative rationale for center-surround connectivity in working memory circuitry, call for re-evaluation of memory performance differences that have previously been attributed to differences in capacity, and support a more nuanced view of visual working memory capacity limitations: a strategic tradeoff between storage capacity and memory precision through chunking contributes to flexible capacity limitations that include both discrete and continuous aspects.

19 citations


Journal ArticleDOI
TL;DR: The XHAIL algorithm for ILP is extended with a pruning mechanism within the hypothesis generalisation algorithm which enables learning from larger datasets, a better usage of modern solver technology using recently developed optimisation methods, and a time budget that permits the usage of suboptimal results.
Abstract: Inductive Logic Programming (ILP) combines rule-based and statistical artificial intelligence methods by learning a hypothesis comprising a set of rules, given background knowledge and constraints on the search space. We focus on extending the XHAIL algorithm for ILP, which is based on Answer Set Programming, and we evaluate our extensions using the natural language processing application of sentence chunking. With respect to processing natural language, ILP can cater for the constant change in how we use language on a daily basis. At the same time, unlike other statistical methods, ILP does not require huge numbers of training examples, and it produces interpretable results, that is, a set of rules which can be analysed and tweaked if necessary. As contributions, we extend XHAIL with (i) a pruning mechanism within the hypothesis generalisation algorithm which enables learning from larger datasets, (ii) better usage of modern solver technology using recently developed optimisation methods, and (iii) a time budget that permits the usage of suboptimal results. We evaluate these improvements on the task of sentence chunking using three datasets from a recent SemEval competition. Results show that our improvements allow for learning on bigger datasets with results of similar quality to state-of-the-art systems on the same task. Moreover, we compare the hypotheses obtained on the datasets to gain insights into the structure of each dataset.

17 citations


Book ChapterDOI
01 Jul 2017
TL;DR: I suspect that you perceived more letters from later strings than from earlier ones; but given that each stimulus was the same eight letters long, why should that be?
Abstract: I suspect that you perceived more letters from later strings than from earlier ones. But given that each stimulus was the same eight letters long, why should that be? Miller et al. (1954) showed Harvard undergraduates pseudoword letter strings like those above for very brief presentations (a tenth of a second) using a tachistoscope. The average number of letters correctly reported for the four types of stimuli were, in order, 53, 69, 77 and 87 percent. The pseudowords differed in their ‘order of approximation to English’ (AtoE). CVGJCDHM exemplifies zero-order AtoE strings – they are made up of letters of English, but these are sampled with equal probability of occurrence (1 in 26). RPITCQET exemplifies first-order AtoE strings – made up of letters of English, but sampled according to their frequency of occurrence in the written language (as in opening a book at random, sticking a pin in the page, and choosing the pinned letter [e.g. ‘r’]; repeat). UMATSORE exemplifies second-order AtoE – these reflect the

15 citations


Journal ArticleDOI
TL;DR: A smart classroom storage management system (SCSMS) which consists of new adaptive chunking and XOR reference matrix based erasure coding techniques for multimedia devices with higher input/output performance and low energy consumption is proposed.
Abstract: With big-data processing in multimedia devices becoming a popular application, a fast and energy-efficient storage area network system for the smart classroom is required. Traditional storage management systems for smart classrooms show low performance when small read and write operations are executed. This paper proposes a smart classroom storage management system (SCSMS) which consists of new adaptive chunking and XOR-reference-matrix-based erasure coding techniques for multimedia devices, with higher input/output performance and low energy consumption. The SCSI initiator is installed in multimedia devices such as smart TVs, smartphones, and personal computers. The proposed adaptive chunking and exclusive-or (XOR) reference matrix redundant array of inexpensive disks (XRM-RAID) techniques are provided at a target server based on flash-array storage. Adaptive chunking differs from traditional chunking in that it reduces the number of read and write operations by merging small files into a united chunk. XRM-RAID differs from existing RAID in that it reduces the number of XOR operations needed to generate parity data in the RAID system. This paper also provides a web-based monitoring application for the proposed SCSMS. Experimental results show that the energy consumption of the proposed SCSMS is improved by 32%, 42%, and 58% compared to Huang et al., Kim et al., and Scott et al., respectively, with respect to file size and buffer size. In terms of average write throughput, the proposed SCSMS achieves higher performance by 22%, 32%, and 56% compared to Huang et al., Kim et al., and Scott et al., with respect to file size and buffer size.
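
The core adaptive-chunking idea, merging small files into a united chunk so they can be read and written in fewer operations, can be sketched as follows (the size thresholds and grouping policy here are illustrative assumptions, not the SCSMS implementation):

```python
def merge_small_files(files, chunk_capacity=4 * 1024 * 1024, small=64 * 1024):
    """Pack files below a size threshold into united chunks; larger
    files become chunks of their own."""
    chunks, united, fill = [], [], 0
    for name, size in files:
        if size >= small:
            chunks.append([(name, size)])  # large file: its own chunk
            continue
        if fill + size > chunk_capacity and united:
            chunks.append(united)          # united chunk is full
            united, fill = [], 0
        united.append((name, size))
        fill += size
    if united:
        chunks.append(united)
    return chunks

files = [("a.log", 4096), ("b.cfg", 1024),
         ("video.mp4", 300 << 20), ("c.txt", 512)]
for group in merge_small_files(files):
    print(group)  # the three small files end up in one united chunk
```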

14 citations


Journal Article
TL;DR: Evidence is provided that chunking shapes sentence processing at multiple levels of linguistic abstraction, consistent with a recent theoretical proposal by Christiansen and Chater (2016).

13 citations


Journal ArticleDOI
TL;DR: This study compares the chunking abilities of children with language delay and typically developing children, and asks whether the difficulties that children with language delay show in sentence repetition are related to deficits in their chunking ability.
Abstract: Background and Objectives: Chunking is known to be a mechanism that increases the efficiency of short-term memory capacity by drawing on long-term memory. This study compared the chunking abilities of children with language delay and typically developing children through word-list recall in sentence order and in random order, and through performance on visual tasks under symmetric and asymmetric conditions, and examined whether the difficulties that children with language delay show in sentence repetition are related to deficits in their chunking ability...

Proceedings Article
01 Aug 2017
TL;DR: In this paper, a neural network-based toolkit named NNVLP for Vietnamese language processing tasks including POS tagging, chunking, and named entity recognition is presented, which is a combination of bidirectional Long Short-Term Memory (Bi-LSTM), CNN, and CRF.
Abstract: This paper demonstrates a neural network-based toolkit, NNVLP, for essential Vietnamese language processing tasks including part-of-speech (POS) tagging, chunking, and named entity recognition (NER). Our toolkit is a combination of bidirectional Long Short-Term Memory (Bi-LSTM), Convolutional Neural Network (CNN), and Conditional Random Field (CRF), using pre-trained word embeddings as input, and outperforms previously published toolkits on these three tasks. We provide both an API and a web demo for this toolkit.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: A new chunking algorithm, Elastic Chunking, is proposed that achieves higher deduplication ratio and throughput; by leveraging a dynamic adjustment policy, it can quickly find boundaries that remove consecutive maximum chunk sequences.
Abstract: Data chunking is one of the most important issues in a deduplication system: it not only determines the effectiveness of deduplication, such as the deduplication ratio, but also impacts the modification overhead. It breaks the file into chunks to find redundancy through fingerprint comparisons. Content-defined chunking algorithms such as TTTD, BSW CDC, and RC can resist the boundary shift problem caused by small modifications. However, we observe that there exist many consecutive maximum chunk sequences in various benchmarks. These consecutive maximum chunk sequences lead to a local boundary shift problem when facing small modifications. Based on this observation, we propose a new chunking algorithm, Elastic Chunking. By leveraging a dynamic adjustment policy, Elastic Chunking can quickly find the boundary to remove the consecutive maximum chunk sequences. To evaluate the performance, we implement a prototype and conduct extensive experiments based on synthetic and realistic datasets. Compared with the TTTD, BSW CDC, and RC algorithms, the proposed chunking algorithm achieves higher deduplication ratio and throughput.
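
The local boundary-shift problem the authors observe is easy to reproduce with a toy content-defined chunker: when a region of data contains no natural cut point, every chunk is forced at the maximum size, and a one-byte edit then shifts all following boundaries so nothing deduplicates. A hedged simulation (the cut condition and sizes are illustrative, not the paper's setup):

```python
import hashlib
import os

def chunks(data, min_size=64, max_size=256):
    out, buf = [], bytearray()
    for byte in data:
        buf.append(byte)
        natural = byte == 0 and len(buf) >= min_size  # toy cut condition
        if natural or len(buf) >= max_size:           # forced cut at max size
            out.append(bytes(buf))
            buf.clear()
    if buf:
        out.append(bytes(buf))
    return out

region = bytes(b or 1 for b in os.urandom(64 * 1024))  # no zero bytes: no natural cuts
v1 = chunks(region)
v2 = chunks(b"!" + region)  # a one-byte insertion at the front
shared = len({hashlib.sha1(c).digest() for c in v1} &
             {hashlib.sha1(c).digest() for c in v2})
print(f"{shared} of {len(v1)} chunks still deduplicate after the edit")
```

Elastic Chunking's dynamic adjustment is aimed precisely at breaking up such runs of maximum-size chunks.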

Proceedings ArticleDOI
23 Jun 2017
TL;DR: This paper proposes a parsing model for Kannada sentences using the Natural Language Toolkit (NLTK), and presents part-of-speech tagging and chunking using Conditional Random Fields.
Abstract: Part-of-speech tagging is considered the second step in natural language processing. In this paper we present part-of-speech tagging and chunking using Conditional Random Fields. We used a Kannada corpus of 3,000 sentences collected from newspapers. We trained on 2,500 sentences and tested on 500 sentences. The comparison between machine output and human tagging yields an accuracy of 96.86% in tagging and chunking. We propose a parsing model for Kannada sentences using the Natural Language Toolkit (NLTK).
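
The paper trains a CRF for Kannada; since that corpus and the trained models are not publicly bundled, the sketch below only illustrates NLTK's chunking interface, with a rule-based chunker over a pre-tagged English sentence:

```python
import nltk

# A pre-tagged sentence stands in for the tagger's output.
tagged = [("the", "DT"), ("old", "JJ"), ("newspaper", "NN"),
          ("reported", "VBD"), ("the", "DT"), ("results", "NNS")]

grammar = "NP: {<DT>?<JJ>*<NN.*>+}"  # one noun-phrase chunk rule
parser = nltk.RegexpParser(grammar)
print(parser.parse(tagged))
# (S (NP the/DT old/JJ newspaper/NN) reported/VBD (NP the/DT results/NNS))
```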

Journal ArticleDOI
01 Jan 2017
TL;DR: This paper investigates the different meanings and chunking patterns two words have in Mandarin written and conversational discourses, and finds that in both writing and conversation, zhihou favors past and yihou favors future.
Abstract: Although much has been written about the differences between written and conversational discourses, less work has been done on how these two discourse types differ in terms of chunking patterns. This study investigates the different meanings and chunking patterns two words have in Mandarin written and conversational discourses. To overcome the problem of comparability between written and conversational corpora, instead of using a single word, I use two near-synonymous Mandarin words, zhihou and yihou, both of which mean roughly 'after' or 'later,' and compare their meanings and chunking patterns in written and spoken corpora. The investigation of semantic distinctions revealed that in both writing and conversation, zhihou favors past and yihou favors future, and that in writing but not in conversation zhihou is more often used with immediate high-transitivity actions and causal relations, whereas yihou is more often used with low-transitivity states. Regarding chunking patterns, whereas conversation preserves different stages of chunking, written discourse mainly has the final clear-cut stage. This study demonstrates the importance of grounding grammatical investigations in discourse types and the possible usefulness of using near-synonymous words or grammatical constructions as a way of getting around the problem of comparability.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: A new optimization DECAF is presented that optimizes recursive task parallel (RTP) programs by reducing the task creation and termination overheads and extends the traditional loop chunking technique to perform load-balanced chunking, at runtime, based on the number of available worker threads.
Abstract: We present a new optimization, DECAF, that optimizes recursive task parallel (RTP) programs by reducing the task creation and termination overheads. DECAF reduces the task termination (join) operations by aggressively increasing the scope of join operations (in a semantics-preserving way) and eliminating the redundant join operations discovered on the way. Further, DECAF extends the traditional loop chunking technique to perform load-balanced chunking, at runtime, based on the number of available worker threads. This helps reduce the redundant parallel tasks at different levels of recursion. We also discuss the impact of exceptions on our techniques and extend them to handle RTP programs that may throw exceptions. We implemented DECAF in the X10 v2.3 compiler and tested it over a set of benchmark kernels on two different hardware platforms (a 16-core Intel system and a 64-core AMD system). With respect to the base X10 compiler extended with the loop-chunking of Nandivada et al. [26] (LC), DECAF achieved a geometric mean speedup of 2.14× and 2.53× on the Intel and AMD systems, respectively. We also present an evaluation with respect to energy consumption on the Intel system and show that on average, compared to the LC versions, the DECAF versions consume 7.12% less energy.
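
DECAF itself works inside the X10 compiler, but the load-balanced chunking idea, one contiguous chunk of the iteration space per available worker rather than one task per iteration, can be sketched in plain Python (an illustration of the concept only, not DECAF):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def chunked_parallel_sum(items, workers=None):
    """Split the iteration space into one contiguous chunk per worker,
    avoiding the overhead of creating one task per iteration."""
    workers = workers or os.cpu_count() or 1
    size = max(1, -(-len(items) // workers))  # ceil division
    parts = [items[i:i + size] for i in range(0, len(items), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))

print(chunked_parallel_sum(list(range(1_000_000))))  # 499999500000
```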

Posted Content
TL;DR: In this article, a GPU implementation of the Viterbi and forward-backward algorithm is introduced, achieving decoding speedups of up to 5.2x over their serial implementation running on different computer architectures and 6093x over OpenFST.
Abstract: Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, speech recognition, and others. Parallelizing finite state algorithms on graphics processing units (GPUs) would benefit many areas of NLP. Although researchers have implemented GPU versions of basic graph algorithms, limited previous work, to our knowledge, has been done on GPU algorithms for weighted finite automata. We introduce a GPU implementation of the Viterbi and forward-backward algorithm, achieving decoding speedups of up to 5.2x over our serial implementation running on different computer architectures and 6093x over OpenFST.

Posted Content
TL;DR: This paper demonstrates a neural network-based toolkit, NNVLP, for essential Vietnamese language processing tasks including part-of-speech tagging, chunking, and named entity recognition (NER), which outperforms previously published toolkits on these three tasks.
Abstract: This paper demonstrates a neural network-based toolkit, NNVLP, for essential Vietnamese language processing tasks including part-of-speech (POS) tagging, chunking, and named entity recognition (NER). Our toolkit is a combination of bidirectional Long Short-Term Memory (Bi-LSTM), Convolutional Neural Network (CNN), and Conditional Random Field (CRF), using pre-trained word embeddings as input, and achieves state-of-the-art results on these three tasks. We provide both an API and a web demo for this toolkit.

Patent
13 Jul 2017
TL;DR: In this article, a storage system receives a number of input/output (IO) request transactions at the storage system having multiple storage devices, and the system tags the IO request transaction and/or the associated child IO requests with a tag identifier.
Abstract: In one embodiment, a storage system receives a number of input/output (IO) request transactions at the storage system having multiple storage devices. For each of the plurality of IO request transactions, the system determines a number of child IO requests required to complete the IO request transaction. The system tags the IO request transaction and/or the associated child IO requests with a tag identifier. For each of the child requests that is a write IO request, the system determines an optimal write IO request size, segments the write IO request into a number of sub-IO write requests, each having the optimal request size, and interleaves sub-IO write requests with read IO requests for servicing, to avoid impacting the performance of read IO requests in a mixed IO workload.
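
As a rough illustration of the claimed mechanism (the 128 KiB "optimal" sub-IO size and the strict read/write alternation are assumptions made for this sketch; the patent determines sizing dynamically):

```python
from itertools import zip_longest

def segment_write(offset, length, sub_io=128 * 1024):
    """Split one large write into sub-IO requests of the chosen size."""
    return [(offset + o, min(sub_io, length - o))
            for o in range(0, length, sub_io)]

def interleave(reads, sub_writes):
    """Alternate reads with write segments so reads are not starved."""
    ops = []
    for r, w in zip_longest(reads, sub_writes):
        if r is not None:
            ops.append(("read",) + r)
        if w is not None:
            ops.append(("write",) + w)
    return ops

sub_writes = segment_write(0, 1 << 20)  # 1 MiB write -> 8 sub-IOs
reads = [(4096, 4096), (8192, 4096)]    # pending (offset, length) reads
for op in interleave(reads, sub_writes)[:6]:
    print(op)
```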

Proceedings ArticleDOI
01 Apr 2017
TL;DR: A GPU implementation of the Viterbi and forward-backward algorithm is introduced, achieving speedups of up to 4x over the serial implementations running on different computer architectures and 3335x over widely used tools such as OpenFST.
Abstract: Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, speech recognition, and others. Parallelizing finite state algorithms on graphics processing units (GPUs) would benefit many areas of NLP. Although researchers have implemented GPU versions of basic graph algorithms, no work, to our knowledge, has been done on GPU algorithms for weighted finite automata. We introduce a GPU implementation of the Viterbi and forward-backward algorithm, achieving speedups of up to 4x over our serial implementations running on different computer architectures and 3335x over widely used tools such as OpenFST.
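
For reference, the serial baseline that such GPU implementations are measured against is the textbook dynamic program below (a log-space NumPy sketch, not the authors' code):

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, obs):
    """Most likely HMM state path, computed in log space.

    log_init:  (S,)    log initial state probabilities
    log_trans: (S, S)  log_trans[i, j] = log P(state j | state i)
    log_emit:  (S, V)  log_emit[s, o]  = log P(obs o | state s)
    obs:       sequence of observation indices
    """
    S, T = log_init.shape[0], len(obs)
    delta = log_init + log_emit[:, obs[0]]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (prev, cur) transition scores
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# A toy 2-state, 3-symbol HMM.
li = np.log([0.6, 0.4])
lt = np.log([[0.7, 0.3], [0.4, 0.6]])
le = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(li, lt, le, [0, 1, 2]))
```

The GPU versions parallelize the per-timestep max/argmax reductions across states; the recurrence itself stays sequential in t.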

Posted Content
TL;DR: This work designs a new model called "high order LSTM" to predict multiple tags for the current token, containing not only the current tag but also the previous several tags, and proposes a new method called Multi-Order BiLSTM (MO-BiLSTM) which combines low order and high order LSTMs together.
Abstract: Existing neural models usually predict the tag of the current token independent of the neighboring tags. The popular LSTM-CRF model considers the tag dependencies between every two consecutive tags. However, it is hard for existing neural models to take longer-distance dependencies of tags into consideration. The scalability is mainly limited by the complex model structures and the cost of dynamic programming during training. In our work, we first design a new model called "high order LSTM" to predict multiple tags for the current token, which contain not only the current tag but also the previous several tags. We call the number of tags in one prediction the "order". Then we propose a new method called Multi-Order BiLSTM (MO-BiLSTM) which combines low order and high order LSTMs together. MO-BiLSTM keeps the scalability to high order models with a pruning technique. We evaluate MO-BiLSTM on all-phrase chunking and NER datasets. Experiment results show that MO-BiLSTM achieves the state-of-the-art result in chunking and highly competitive results in two NER datasets.
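
The "order" notion is easy to make concrete: an order-k model predicts, at each position, the tuple of the current tag plus the previous k-1 tags. A small sketch of that label transformation (the <s> padding symbol is an assumption):

```python
def to_high_order_tags(tags, order=2):
    """Each position predicts the tuple of the current tag and the
    previous (order - 1) tags, padded with <s> at the start."""
    padded = ["<s>"] * (order - 1) + list(tags)
    return [tuple(padded[i:i + order]) for i in range(len(tags))]

print(to_high_order_tags(["B-NP", "I-NP", "O"], order=2))
# [('<s>', 'B-NP'), ('B-NP', 'I-NP'), ('I-NP', 'O')]
```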

Patent
28 Sep 2017
TL;DR: In this paper, a data transmission method is proposed in which a receiver simultaneously receives data transmitted by multiple transmitters, generates a chunking acknowledgment message frame containing a group information identifier and data receiving state information, and transmits the frame back to the multiple transmitters belonging to the same pre-set group.
Abstract: Disclosed in the embodiments of the present invention is a data transmission method. The method comprises: simultaneously receiving data simultaneously transmitted by multiple transmitters, where the data comprises multiple data frames simultaneously transmitted by at least one transmitter; generating a chunking acknowledgment message frame according to the receiving state of the data, where the chunking acknowledgment message frame contains a group information identifier and data receiving state information, the group information identifier is used to indicate the multiple transmitters which belong to the same pre-set group corresponding to the chunking acknowledgment message frame, and the data receiving state information is used to indicate the data receiving states of the various transmitters belonging to the same group, including the receiving states of the multiple data frames simultaneously transmitted by at least one transmitter; and transmitting the chunking acknowledgment message frame to the multiple transmitters. The method solves the technical problems of inefficient spectrum utilization and poor power saving in user equipment caused by replying an ACK frame to each user equipment in order, thereby improving effective spectrum utilization.

Journal Article
TL;DR: This paper evaluates the most widely used chunking algorithm, Two Threshold Two Divisor (TTTD), using three different hashing functions that can be used with it, implementing each as a fingerprinting and hashing algorithm and then comparing execution time and deduplication elimination ratio.
Abstract: Data deduplication is a data reduction technology that works by detecting and eliminating data redundancy, keeping only one copy of the data; it is often used to reduce storage space and network bandwidth. While our main motivation has been low-bandwidth synchronization applications such as the Low Bandwidth Network File System (LBNFS), deduplication is also useful in archival file systems, and a number of researchers have advocated schemes for archival. Data deduplication is now one of the hottest research topics in the backup storage area. In this paper, different chunking algorithms for data deduplication are surveyed, and the most widely used chunking algorithm, Two Threshold Two Divisor (TTTD), is studied and evaluated using three different hashing functions that can be used with it (Rabin fingerprint, Adler, and SHA-1). Each was implemented as a fingerprinting and hashing algorithm, and the execution time and deduplication elimination ratio were compared, which to our knowledge is the first time this comparison has been performed; the results are reported.
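
A compact sketch of the TTTD scheme being evaluated (window size, thresholds, and divisors are illustrative; the paper swaps Rabin fingerprint, Adler, and SHA-1 in as the fingerprint function, and real implementations use a rolling hash rather than recomputing the window fingerprint at every position):

```python
import os
import zlib

def tttd_chunks(data, w=48, t_min=1024, t_max=4096, d=2048, d_back=1024):
    """Two Threshold Two Divisor sketch: after t_min bytes, a window
    fingerprint f marks a cut when f % d == d - 1; the weaker test
    f % d_back == d_back - 1 records a backup cut, used if t_max is
    reached first. Adler-32 stands in for the fingerprint here."""
    chunks, start, n = [], 0, len(data)
    while start < n:
        backup, cut = -1, min(start + t_max, n)
        for i in range(start + t_min, min(start + t_max, n)):
            f = zlib.adler32(data[max(i - w, start):i])  # not rolling: O(n*w)
            if f % d == d - 1:           # main divisor fires: cut here
                cut = i
                break
            if f % d_back == d_back - 1:
                backup = i               # remember a backup breakpoint
        else:
            if backup != -1:             # t_max reached: use the backup cut
                cut = backup
        chunks.append(data[start:cut])
        start = cut
    return chunks

sizes = [len(c) for c in tttd_chunks(os.urandom(1 << 16))]
print(len(sizes), min(sizes), max(sizes))  # sizes stay within the two
# thresholds, except possibly the final chunk
```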

01 Jan 2017
TL;DR: A longitudinal cross-sectional analysis of child language data collected from four Navajo-speaking children finds that children use morphologically and phonologically reduced units and exhibit fusion with units outside the verb, such as postpositions.
Abstract: This dissertation presents an analysis of child acquisition and production of the Navajo verb construction. My data shows that Navajo children extract meaningful verb units that do not adhere to the linguistic boundaries normally ascribed to the Navajo verb. Through my data, I have observed that children use morphologically and phonologically reduced units. They produce verb constructions that exhibit fusion with units outside the verb such as postpositions. As the children acquire larger units, or chunks, morphophonological interactions are preserved. This dissertation is a longitudinal cross-sectional analysis of child language data collected from four Navajo speaking children. The children, all male, range in age from 4 years 11 months through 11 years 2 months. My fieldwork included obtaining permission, recruiting participants, recording, transcription, coding, corpus building, and analysis. This study uses insights from usage-based linguistic theories and approaches to the study of child language acquisition in a polysynthetic and highly fusional language. Emphasis on

Proceedings ArticleDOI
01 Sep 2017
TL;DR: The paper presents test results of the program's execution on a collection of 100 clauses and shows that the use of bi-grams can considerably improve the performance characteristics of the spelling corrector.
Abstract: This work presents the task of spelling correction realized in batch mode with the support of syntactic context. It uses a model of incomplete syntactic analysis, or chunking, described in terms of Tesnière dependencies. In order to improve the efficiency of chunking, the authors use a PoS-tagged dictionary of bi-grams. The program is written in Java; it uses the UIMA framework and the NLP@Cloud library. The paper presents test results of the program's execution on a collection of 100 clauses. It shows that the use of bi-grams can considerably improve the performance characteristics of the spelling corrector.

Patent
18 May 2017
TL;DR: In this paper, the authors provide techniques for dynamically creating index files for streaming media based on a determined chunking strategy, which can be determined using historical data of any of a variety of factors, such as Quality of Service (QoS) information.
Abstract: Techniques are provided for dynamically creating index files for streaming media based on a determined chunking strategy. The chunking strategy can be determined using historical data of any of a variety of factors, such as Quality of Service (QoS) information. By using historical data in this manner, index files can be generated using chunking strategies that can improve these factors over time.

Journal ArticleDOI
TL;DR: This paper proposes new packet-chunking schemes aimed at both meeting application requirements and improving achievable router throughput, and determines that these schemes provide excellent performance in reducing the number of outgoing packets from the router while meeting various delay requirements.
Abstract: With the recent advances in machine-to-machine communications, huge numbers of devices have become connected and massive amounts of traffic are exchanged. Machine-to-machine applications typically generate small packets, which can profoundly affect network performance. Namely, even if the packet arrival rate at the router is lower than the link bandwidth in bits per second, it can exceed the router's forwarding capacity, i.e., the maximum number of forwarded packets per second. This causes a decrease in network throughput. Therefore, eliminating the packets-per-second limitation by chunking small packets will enable machine-to-machine cloud services to spread further. This paper proposes new packet-chunking schemes aimed at both meeting application requirements and improving achievable router throughput. In our schemes, multiple buffers, each of which accommodates packets classified by their delay requirement, are installed in parallel. Herein, we report a theoretical performance analysis of these schemes, which enabled us to derive some important features. We also propose a scheme in which a single chunking buffer and parallel multiple buffers are arranged in tandem. Through our simulation and numerical results, we determined that these schemes provide excellent performance in reducing the number of outgoing packets from the router while meeting various delay requirements.
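
The forwarding-capacity bottleneck described here is simple arithmetic: achievable throughput is the minimum of the link rate and the packets-per-second capacity times packet size, so chunking small packets into larger ones moves the router off the packets-per-second limit. A worked example with illustrative numbers (not taken from the paper):

```python
def throughput_gbps(pkt_bytes, link_gbps=10.0, fwd_mpps=15.0):
    """Throughput when limited by both link bandwidth (bits/s) and
    router forwarding capacity (packets/s)."""
    pps_limited = fwd_mpps * 1e6 * pkt_bytes * 8 / 1e9
    return min(link_gbps, pps_limited)

print(throughput_gbps(64))       # 7.68 Gb/s: pps-limited for small packets
print(throughput_gbps(64 * 16))  # 10.0 Gb/s: chunking 16 packets removes the limit
```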