Showing papers on "Queue management system published in 2020"


Proceedings ArticleDOI
03 May 2020
TL;DR: The power and flexibility of Corundum is demonstrated by the implementation of a microsecond-precision time-division multiple access (TDMA) hardware scheduler to enforce a TDMA schedule at 100 Gbps line rate with no CPU overhead.
Abstract: Corundum is an open-source, FPGA-based prototyping platform for network interface development at up to 100 Gbps and beyond. The Corundum platform includes several core features to enable real-time, high-line-rate operations, including: a high-performance datapath, 10G/25G/100G Ethernet MACs, PCI Express gen 3, a custom PCIe DMA engine, and native high-precision IEEE 1588 PTP timestamping. A key feature is extensible queue management that can support over 10,000 queues coupled with extensible transmit schedulers, enabling fine-grained hardware control of packet transmission. In conjunction with multiple network interfaces, multiple ports per interface, and per-port event-driven transmit scheduling, these features enable the development of advanced network interfaces, architectures, and protocols. The software interface to these hardware features is a high-performance driver for the Linux networking stack. The platform also supports scatter/gather DMA, checksum offloading, receive flow hashing, and receive-side scaling. Development and debugging are facilitated by a comprehensive open-source, Python-based simulation framework that includes the entire system, from a simulation model of the driver and PCI Express interface to the Ethernet interfaces. The power and flexibility of Corundum are demonstrated by the implementation of a microsecond-precision time-division multiple access (TDMA) hardware scheduler that enforces a TDMA schedule at 100 Gbps line rate with no CPU overhead.
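To make the scheduling idea concrete, the following is a minimal sketch (not Corundum's actual RTL or driver code) of how a PTP-synchronized clock can gate per-queue transmission against a TDMA slot layout; the period, slot count, and slot assignment below are illustrative assumptions.

```python
# Minimal sketch: deciding whether a queue may transmit in the current TDMA
# slot, given a PTP-synchronized clock. Slot layout is assumed for illustration.

TDMA_PERIOD_NS = 100_000        # 100 us schedule period (assumed)
NUM_SLOTS = 10                  # ten 10 us slots per period (assumed)
SLOT_NS = TDMA_PERIOD_NS // NUM_SLOTS

def active_slot(ptp_time_ns: int) -> int:
    """Return the index of the TDMA slot active at the given PTP time."""
    return (ptp_time_ns % TDMA_PERIOD_NS) // SLOT_NS

def may_transmit(queue_slot: int, ptp_time_ns: int) -> bool:
    """A queue assigned to `queue_slot` may send only while its slot is active."""
    return active_slot(ptp_time_ns) == queue_slot

# Example: a queue bound to slot 3 checks the hardware PTP clock before sending.
print(may_transmit(3, ptp_time_ns=1_234_567_890))
```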

55 citations


Journal ArticleDOI
19 Apr 2020-Sensors
TL;DR: This paper proposes a modified architecture of the Long-Term Evolution mobile network to provide services for the Internet of Things by allocating a narrow bandwidth and transferring the scheduling functions from the eNodeB base station to an NB-IoT controller, and develops “smart queue” management algorithms for the IoT traffic prioritization.
Abstract: This paper proposes a modified architecture of the Long-Term Evolution (LTE) mobile network to provide services for the Internet of Things (IoT). This is achieved by allocating a narrow bandwidth and transferring the scheduling functions from the eNodeB base station to an NB-IoT controller. A method for allocating uplink and downlink resources of the LTE/NB-IoT hybrid technology is applied to ensure the Quality of Service (QoS) from end-to-end. This method considers scheduling traffic/resources on the NB-IoT controller, which allows eNodeB planning to remain unchanged. This paper also proposes a prioritization approach within the IoT traffic to provide End-to-End (E2E) QoS in the integrated LTE/NB-IoT network. Further, we develop “smart queue” management algorithms for the IoT traffic prioritization. To demonstrate the feasibility of our approach, we performed a number of experiments using simulations. We concluded that our proposed approach ensures high end-to-end QoS of the real-time traffic by reducing the average end-to-end transmission delay.
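As an illustration of the kind of prioritization a "smart queue" performs, here is a small Python sketch that dequeues real-time IoT traffic ahead of delay-tolerant traffic; the class names and priority values are assumptions, not the exact algorithms defined in the paper.

```python
import heapq
import itertools

# Illustrative priority values for IoT traffic classes (assumed, not the paper's).
PRIORITY = {"real_time": 0, "normal": 1, "delay_tolerant": 2}

class SmartQueue:
    """Serve higher-priority IoT traffic first, FIFO within a priority class."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # preserves FIFO order inside a class

    def enqueue(self, packet, traffic_class):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = SmartQueue()
q.enqueue("sensor reading", "delay_tolerant")
q.enqueue("alarm", "real_time")
print(q.dequeue())   # the real-time "alarm" is served first
```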

39 citations


Journal ArticleDOI
TL;DR: A distributed content access and delivery algorithm where the node assignments are made by every cluster head independently is developed and it is shown that the algorithm converges to the optimal policy with the trade-off in total queue backlog and achieves a superior performance compared with some other D2D content sharing policies.
Abstract: The paper proposes a novel framework based on the contract theory and Lyapunov optimization for content sharing in a wireless content delivery network (CDN) with edge caching and device-to-device (D2D) communications. The network is partitioned into a set of clusters. In a cluster, users can share contents via D2D links in coordination with the cluster head. Upon receiving the content request from any user in its cluster, the cluster head either delivers the content itself or forwards the request to another node, i.e., a base station (BS) or another user in the cluster. The content access at the BS and in each cluster is modeled as a queuing system, where arrivals represent the content requests directed to respective nodes. The objective is to assign content delivery nodes to stabilize all queues while minimizing the time-averaged network cost given incomplete information about content sharing costs of the users and unknown distribution of the network state defined by users’ locations and their cached/requested content. The proposed framework allows the users to truthfully reveal their content sharing expenditures, minimize the time-averaged network cost and stabilize the queuing system representing the CDN. Based on this framework, a distributed content access and delivery algorithm where the node assignments are made by every cluster head independently is developed. It is shown that the algorithm converges to the optimal policy with the trade-off in total queue backlog and achieves a superior performance compared with some other D2D content sharing policies.
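The Lyapunov part of the framework can be read against the standard drift-plus-penalty recipe, sketched below in generic notation; the paper's exact symbols and constraints may differ.

```latex
% Generic drift-plus-penalty form used in Lyapunov optimization (sketch).
\begin{align*}
L(\mathbf{Q}(t)) &= \tfrac{1}{2}\sum_{i} Q_i(t)^2
  && \text{quadratic Lyapunov function of the queue backlogs} \\
\Delta(t) &= \mathbb{E}\bigl[L(\mathbf{Q}(t+1)) - L(\mathbf{Q}(t)) \mid \mathbf{Q}(t)\bigr]
  && \text{conditional Lyapunov drift} \\
\min_{\text{node assignment}} &\;\; \Delta(t) + V\,\mathbb{E}\bigl[c(t) \mid \mathbf{Q}(t)\bigr]
  && \text{drift-plus-penalty; } V \text{ trades backlog for cost}
\end{align*}
```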

31 citations


Proceedings ArticleDOI
13 Jul 2020
TL;DR: This paper is the first to study the effect of selfish learning in a queuing system, where the learners compete for resources, but rounds are not all independent: the number of packets to be routed at each round depends on the success of the routers in the previous rounds.
Abstract: Bounding the price of anarchy, which quantifies the damage to social welfare due to selfish behavior of the participants, has been an important area of research in algorithmic game theory. In this paper, we study this phenomenon in the context of a game modeling queuing systems: routers compete for servers, where packets that do not get service will be resent at future rounds, resulting in a system where the number of packets at each round depends on the success of the routers in the previous rounds. We model this as an (infinitely) repeated game, where the system holds a state (number of packets held by each queue) that arises from the results of the previous rounds. We assume that routers satisfy the no-regret condition, e.g. they use learning strategies to identify the server where their packets get the best service. Classical work on repeated games makes the strong assumption that the subsequent rounds of the repeated games are independent (beyond the influence on learning from past history). The carryover effect caused by packets remaining in this system makes learning in our context result in a highly dependent random process. We analyze this random process and find that if the capacity of the servers is high enough to allow a centralized and knowledgeable scheduler to get all packets served even when service rates are halved, and queues use no-regret learning algorithms, then the expected number of packets in the queues will remain bounded throughout time, assuming older packets have priority. This paper is the first to study the effect of selfish learning in a queuing system, where the learners compete for resources, but rounds are not all independent: the number of packets to be routed at each round depends on the success of the routers in the previous rounds.
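One informal way to write the "served even when service rates are halved" capacity condition is the following sketch, with arrival and service rates sorted in decreasing order; the paper's precise statement may differ.

```latex
% Sketch of the doubled-capacity slack condition (assumed formalization).
% With arrival rates \lambda_1 \ge \dots \ge \lambda_n and server rates
% \mu_1 \ge \dots \ge \mu_m, both sorted in decreasing order:
\[
  \sum_{i=1}^{k} \lambda_i \;<\; \frac{1}{2} \sum_{j=1}^{k} \mu_j
  \qquad \text{for every } k = 1, \dots, n ,
\]
% i.e. every group of the k most loaded queues fits within half the capacity of
% the k fastest servers; under such slack, no-regret learning keeps the expected
% queue sizes bounded when older packets have priority.
```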

20 citations


Journal ArticleDOI
19 May 2020
TL;DR: This work considered a single-server queuing system with a finite buffer, where two types of customers arrive according to a batch marked Markov arrival process, and describes the behavior of the system by a multi-dimensional continuous-time Markov chain and calculates a number of the stationary performance measures including the various loss probabilities as well as the distribution function of the waiting time of priority customers.
Abstract: The use of priorities allows us to improve the quality of service of inhomogeneous customers in telecommunication networks, inventory and health-care systems. An important modern direction of research is to analyze systems in which the priority of a customer can be changed during his/her stay in the system. We consider a single-server queuing system with a finite buffer, where two types of customers arrive according to a batch marked Markov arrival process. Type 1 customers have non-preemptive priority over type 2 customers. Low-priority customers are able to receive high priority after a random amount of time. For each non-priority customer accepted into the buffer, a timer, which counts a random time having a phase-type distribution, is switched on. When the timer expires, the customer leaves the system unserved with some probability and gains high priority with the complementary probability. Such types of queues are typical in many health-care systems, contact centers, perishable inventory, etc. We describe the behavior of the system by a multi-dimensional continuous-time Markov chain and calculate a number of stationary performance measures of the system, including the various loss probabilities as well as the distribution function of the waiting time of priority customers. Illustrative numerical examples giving insights into the system behavior are presented.
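The timer-driven promotion can be illustrated with a toy Python sketch: each waiting low-priority customer carries a timer, and on expiry it either abandons or is upgraded. The exponential timer and the numeric rates below are illustrative simplifications of the paper's phase-type timer and batch marked Markovian arrivals.

```python
import random

# Toy sketch of the priority-upgrade idea (simplified relative to the paper):
# when a low-priority customer's timer expires, it leaves unserved with
# probability `leave_prob` and otherwise is promoted to high priority.
def timer_outcome(leave_prob=0.3, mean_timer=5.0, rng=random.Random(1)):
    wait = rng.expovariate(1.0 / mean_timer)     # time until the timer expires
    leaves = rng.random() < leave_prob           # leave unserved with prob. p
    return wait, ("leaves unserved" if leaves else "promoted to high priority")

for _ in range(3):
    t, outcome = timer_outcome()
    print(f"timer expired after {t:.2f}, customer {outcome}")
```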

19 citations


Journal ArticleDOI
TL;DR: A study of a service system that cannot monitor or disclose its real-time congestion level, but whose customers can observe the congestion and post their observations online.
Abstract: We study a service system that does not have the capability of monitoring and disclosing its real-time congestion level. However, the customers can observe and post their observations online, and f...

17 citations


Journal ArticleDOI
TL;DR: This study evaluates the impact of capacity reallocation and queue management on customer waiting time at an all-you-can-eat campus dining service and finds that reallocating underutilized workers to high-demand MTO stations can reduce the average waiting time.

16 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method is more capable of improving routing parameters than recent algorithms.
Abstract: Classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area due to the higher number of utilized nodes. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collisions during simultaneous transmissions. Medium access control protocols are usually more practical in networks with low traffic that are not subjected to external noise from adjacent frequencies. There are preventive, detection and control solutions to congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next hop of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management is designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, the back-pressure method is used, based on the quality of the queue, to decrease the probability of the path passing through the pre-congested node. The main goals of this study are to balance energy consumption across network nodes, reduce the rate of lost packets and increase the quality of service in routing. Simulation results show that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method is more capable of improving routing parameters than recent algorithms.

16 citations


Proceedings ArticleDOI
06 Jul 2020
TL;DR: It is shown that, in both cases, the problem of jointly determining (a) content placements and (b) service rates admits a poly-time, 1 − 1/e approximation algorithm, and that both approximations yield good solutions in practice, significantly outperforming competitors.
Abstract: We consider a cache network in which intermediate nodes equipped with caches can serve content requests. We model this network as a universally stable queuing system, in which packets carrying identical responses are consolidated before being forwarded downstream. We refer to resulting queues as M/M/1c or counting queues, as consolidated packets carry a counter indicating the packet’s multiplicity. Cache networks comprising such queues are hard to analyze; we propose two approximations: one via M/M/∞ queues, and one based on M/M/1c queues under the assumption of Poisson arrivals. We show that, in both cases, the problem of jointly determining (a) content placements and (b) service rates admits a poly-time, 1 − 1/e approximation algorithm. Numerical evaluations indicate that both approximations yield good solutions in practice, significantly outperforming competitors.
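The 1 − 1/e factor is the classic guarantee for maximizing a monotone submodular objective with greedy-type algorithms; the generic bound is sketched below, while the paper's exact objective and constraint structure may differ.

```latex
% Classic guarantee for monotone submodular maximization (sketch):
% the greedy / continuous-greedy solution S is within a 1 - 1/e factor
% of the optimal solution S*.
\[
  F(S) \;\ge\; \left(1 - \tfrac{1}{e}\right) F(S^{\ast}).
\]
```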

11 citations


Proceedings ArticleDOI
28 Jul 2020
TL;DR: This paper presents a mobile-augmented smart queue management system that can be easily configured with an operational Hospital Management Information System (HMIS) and provides multiple interfaces for token generation and consumption on mobile devices integrated with hospital service counters, while using smart algorithms for token generation and allocation.
Abstract: Management of high patient loads at tertiary hospitals presents a significant challenge in streamlining healthcare service delivery. Patients often need to queue up at various service areas in hospitals, such as registration, laboratory test and bill payment counters. Queue Management Systems (QMS) present a viable solution for patient management in such scenarios. However, conventional QMS are not generic by design, depend on hardware components or do not provide a comprehensive end-to-end solution that caters to the complete patient workflow. This paper presents a mobile-augmented smart queue management system that can be easily configured with an operational Hospital Management Information System (HMIS). It provides multiple interfaces for token generation and consumption on mobile devices integrated with hospital service counters, while using smart algorithms for token generation and allocation. The solution is comprehensive in that it caters to streamlined queue management across multiple hospital service areas using a single token per patient, which improves patient experience and also helps the hospital administration track and optimize key performance metrics. We present the architectural and operational design of this system, along with an illustration of its use in tracking the productivity of service counter operators in a pilot implementation.

11 citations


Book ChapterDOI
14 Sep 2020
TL;DR: In this paper, a Flexible Random Early Detection (FXRED) scheme is proposed to improve the performance of RED by auto-tuning its drop pattern according to the observed load situation.
Abstract: With recent advancements in communication networks, congestion control remains a research focus. Active Queue Management (AQM) schemes are normally used to manage congestion in routers. Random Early Detection (RED) is the most popular AQM scheme. However, RED lacks a self-adaptation mechanism and is sensitive to parameter settings. Many enhancements of RED have been proposed, yet they fail to provide stable performance under different traffic load situations. In this paper, an AQM scheme called Flexible Random Early Detection (FXRED) is proposed. Unlike other RED enhancements with static drop patterns, FXRED recognizes the state of the current network traffic load and auto-tunes its drop pattern to suit the observed load situation in order to maintain stable and better performance. Results of the experiments conducted show that, regardless of traffic load fluctuations, FXRED provides optimal performance and efficiently manages the queue.
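For reference, the baseline RED drop probability that FXRED adapts can be written as the familiar three-regime rule; the Python sketch below shows that textbook form only and does not reproduce FXRED's load-aware tuning.

```python
# Classic RED drop-probability calculation (textbook form), shown here only to
# make the baseline that FXRED adapts concrete.
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Return the probability of dropping/marking an incoming packet."""
    if avg_q < min_th:
        return 0.0                      # no congestion: accept
    if avg_q >= max_th:
        return 1.0                      # severe congestion: drop
    # linear ramp between the two thresholds
    return max_p * (avg_q - min_th) / (max_th - min_th)

print(red_drop_probability(avg_q=35, min_th=20, max_th=60, max_p=0.1))
```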

Journal ArticleDOI
TL;DR: This study proposes a large-scale cost-saving and load-balancing scheduling model, called HDCBS, for the optimization of system throughput, in which queuing theory is used to model each computing node as an independent queuing system and to obtain the average system wait time and average task response time.
Abstract: Cloud-based scientific workflow systems can play an important role in the development of cost-effective bioinformatics analysis applications. There are differences in the cost control and performance of the many kinds of servers in heterogeneous cloud data centers running bioinformatics workflows, which can lead to an imbalance between operational/maintenance management costs and the quality of service of server clusters. A task scheduling model that responds to the peaks and valleys of task sequencing (the number of tasks that arrive in a given unit of time) is related to indicators such as cost saving, load balancing and system performance (average task wait time, average response time and throughput). This study proposes a large-scale cost-saving and load-balancing scheduling model, called HDCBS, for the optimization of system throughput. First, queuing theory is used to model each computing node as an independent queuing system and to obtain the average system wait time and average task response time. Then, using convex optimization theory, a task assignment solution is proposed with a load-balancing mechanism. The validity of the task scheduling model is verified by simulation experiments, and the model performance is further validated through a comparison with other frequently used scheduling methods. The simulation results show that the credibility of HDCBS is greater than 95% in task scheduling.
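If each node is treated as a plain M/M/1 queue, the wait and response times that the scheduler consumes have standard closed forms, recalled below as a sketch (HDCBS itself may use a refinement of these expressions).

```latex
% Standard M/M/1 formulas for a node with arrival rate \lambda and service
% rate \mu (\lambda < \mu); W_q is the average wait in queue and W the
% average response time.
\begin{align*}
  \rho &= \frac{\lambda}{\mu}, &
  L_q &= \frac{\rho^2}{1-\rho}, &
  W_q &= \frac{\rho}{\mu - \lambda}, &
  W &= \frac{1}{\mu - \lambda}.
\end{align*}
```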

Journal ArticleDOI
Hongfang Gong, Renfa Li, Jiyao An, Yang Bai, Keqin Li
TL;DR: This paper investigates resource provisioning in cyber-physical systems (CPSs) by developing a new definition of anelasticity and shows that the system can maintain elastic invariance in adaptive adjustment parameters.
Abstract: This paper investigates resource provisioning in cyber-physical systems (CPSs) by developing a new definition of anelasticity. A flat semi-dormant multicontroller (FSDMC) model is established on a special type of CPS platform named arbitrated networked control system with dual communication channels. A novel, quantitative, and formal definition of anelasticity for the FSDMC is proposed. A new finite capacity M/M/c queuing system with N-policy and asynchronous multiple working vacations of partial servers is established, and the FSDMC is modeled as a quasi-birth-and-death process to obtain the stationary probability distribution of the system. Based on the queueing model, we quantify various performance indices of the system to build a nonlinear cost-performance ratio (CPR) function. An optimization model is presented to minimize the CPR. A particle swarm optimization (PSO) algorithm is used to find the optimum solution of the optimization model and obtain the optimal configuration values of the system parameters under stability condition. By changing the system parameters, the sensitivity of the system performance indices and the CPR are analyzed, respectively. The unexpected workload varies randomly over time. Thus, an M/M/1/K queue is constructed in a Markovian environment by employing a three-state, irreducible Markov process. In this queue, the conditional average queue length and the probabilities of the three-state process are calculated. Then, the anelasticity value of the system is precisely determined. When the average arrival rate exceeds the average service rate in the queueing system, an optimal CPR unchanged adaptive algorithm based on PSO is designed to dynamically adjust the controller service rate. Extensive numerical results show the usefulness and effectiveness of the proposed techniques and exhibit that the system can maintain elastic invariance in adaptive adjustment parameters.

Journal ArticleDOI
29 Feb 2020
TL;DR: A mathematical model is proposed to process customers using the generalized spectral expansion method; for accurate assessment of performance, numerical results are depicted in graphical form.
Abstract: Computing and logistics management systems have a wide range of applications for a compound Poisson process Markov system with a batch servicing facility, where customers arrive either individually or in batches for service at multi-server queues. Customers are served either individually or batch-wise, depending on the required batch sizes. The order of service follows First Come First Served, while customers arrive according to an exponential inter-arrival time distribution. A mathematical model is proposed to process customers using the generalized spectral expansion method. The explicit capacity required to service the system is measured as the buffer size. For an accurate assessment of performance, numerical results are depicted in graphical form.

Journal ArticleDOI
14 May 2020
TL;DR: A wireless multiservice network scheme model described as a queuing system with unreliable servers and a finite buffer within the LSA framework is proposed, to analyze main system performance measures: blocking probability, average number of requests in queue, and average queue length depending on LSA frequencies’ availability.
Abstract: Given the limited frequency band resources and increasing volume of data traffic in modern multiservice networks, finding new and more efficient radio resource management (RRM) mechanisms is becoming indispensable. One of the implemented technologies to solve this problem is the licensed shared access (LSA) technology. LSA allows the spectrum that has been licensed to an owner, who has absolute priority on its utilization, to be used by other participants (i.e., tenants). Owner priority impacts negatively on the quality of service (QoS) by reducing the data bit rate and interrupting user services. In this paper, we propose a wireless multiservice network scheme model described as a queuing system with unreliable servers and a finite buffer within the LSA framework. The aim of this work is to analyze main system performance measures: blocking probability, average number of requests in queue, and average queue length depending on LSA frequencies’ availability.
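As a baseline for the kind of measure analyzed here, the blocking probability of a plain M/M/c/K queue (finite buffer, fully reliable servers) can be computed directly; the sketch below deliberately omits the LSA-specific unreliable-server behavior.

```python
from math import factorial

# Blocking probability of a plain M/M/c/K queue (finite buffer). The paper's
# model additionally includes unreliable servers, which this sketch omits.
def mmck_blocking(lam, mu, c, K):
    a = lam / mu                      # offered load in Erlangs
    rho = a / c
    # unnormalized state probabilities q_n for n = 0..K
    q = [a**n / factorial(n) for n in range(c + 1)]
    q += [q[c] * rho**(n - c) for n in range(c + 1, K + 1)]
    norm = sum(q)
    return q[K] / norm                # probability an arriving request is blocked

print(mmck_blocking(lam=8.0, mu=1.0, c=10, K=15))
```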

Journal ArticleDOI
TL;DR: This study addresses a novel MLCFLP that includes a classical queuing system with jockeying, which allows the applicants/customers to receive service from the other layers of the network.

Patent
02 Jun 2020
TL;DR: In this article, queue management logic tracks how long certain packets, such as a designated marker packet, remain in a queue, and produces a measure of delay for the queue, referred to herein as the queue delay.
Abstract: A network device organizes packets into various queues, in which the packets await processing. Queue management logic tracks how long certain packet(s), such as a designated marker packet, remain in a queue. Based thereon, the logic produces a measure of delay for the queue, referred to herein as the “queue delay.” Based on a comparison of the current queue delay to one or more thresholds, various associated delay-based actions may be performed, such as tagging and/or dropping packets departing from the queue, or preventing additional enqueues to the queue. In an embodiment, a queue may be expired based on the queue delay, and all packets dropped. In other embodiments, when a packet is dropped prior to enqueue into an assigned queue, copies of some or all of the packets already within the queue at the time the packet was dropped may be forwarded to a visibility component for analysis.
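A software analogue of the marker-packet idea is sketched below: timestamp a packet on admission and derive the queue delay when it reaches the head of the queue. The class, threshold, and tagging rule are illustrative assumptions, not the patent's implementation.

```python
import time
from collections import deque

# Illustrative sketch: measure per-queue delay from enqueue to dequeue and
# flag packets whose queue delay exceeds an assumed threshold.
class DelayTrackedQueue:
    def __init__(self, delay_threshold_s=0.010):
        self._q = deque()
        self._threshold = delay_threshold_s

    def enqueue(self, pkt):
        self._q.append((pkt, time.monotonic()))   # timestamp on admission

    def dequeue(self):
        pkt, enq_time = self._q.popleft()
        queue_delay = time.monotonic() - enq_time
        tag = queue_delay > self._threshold       # e.g. tag/mark late packets
        return pkt, queue_delay, tag

q = DelayTrackedQueue()
q.enqueue("pkt-1")
print(q.dequeue())
```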

Journal ArticleDOI
TL;DR: A queue assessment model is developed to evaluate the inflow of walk-in outpatients in a busy public hospital of an emerging economy, in the absence of appointment systems, and a dynamic framework dedicated towards the practical implementation of the proposed model is constructed, for continuous monitoring of the queue system.
Abstract: PURPOSE: The aim of this research study is to develop a queue assessment model to evaluate the inflow of walk-in outpatients in a busy public hospital of an emerging economy, in the absence of appointment systems, and construct a dynamic framework dedicated towards the practical implementation of the proposed model, for continuous monitoring of the queue system. DESIGN/METHODOLOGY/APPROACH: The current study utilizes data envelopment analysis (DEA) to develop a combined queuing-DEA model as applied to evaluate the wait times of patients, within different stages of the outpatients' department at the Combined Military Hospital (CMH) in Lahore, Pakistan, over a period of seven weeks (23rd April to 28th May 2014). The number of doctors/personnel and consultation time were considered as outputs, where consultation time was the non-discretionary output. The two inputs were wait time and length of queue. Additionally, VBA programming in Excel has been utilized to develop the dynamic framework for continuous queue monitoring. FINDINGS: The inadequate availability of personnel was observed as the critical issue for long wait times, along with overcrowding and variable arrival pattern of walk-in patients. The DEA model displayed the "required" number of personnel, corresponding to different wait times, indicating queue build-up. ORIGINALITY/VALUE: The current study develops a queue evaluation model for a busy outpatients' department in a public hospital, where "all" patients are walk-in and no appointment systems. This model provides vital information in the form of "required" number of personnel which allows the administrators to control the queue pre-emptively minimizing wait times, with optimal yet dynamic staff allocation. Additionally, the dynamic framework specifically targets practical implementation in resource-poor public hospitals of emerging economies for continuous queue monitoring.

Journal ArticleDOI
TL;DR: This study analyzes the proficiency of a single-server Markovian queuing system M/M/1/N with unpleasant services and encouraged arrivals to compare the policies for service delivery in a context of social planning to improve the social welfare.
Abstract: In this study, we analyze the proficiency of a single-server Markovian queuing system M/M/1/N with unpleasant services and encouraged arrivals to compare the policies for service delivery, which is...

Book ChapterDOI
14 Sep 2020
TL;DR: In this article, a controllable queuing system in which the number of switching service channels monitor and modify at control time points spaced apart by a fixed time step is considered.
Abstract: This paper deals with a controllable queuing system in which the number of switching service channels monitor and modify at control time points spaced apart by a fixed time step. At transition from step to step, the intensity of the simplest incoming flow changes in accordance with a Markov’s chain. The system is in a stationary mode between the steps. A cost function is the minimization of the total average cost of the system over a multi-step planning period. The problem is to find a channel switching strategy. The parametric structure of an optimal strategy significantly simplifies its construction.

Proceedings ArticleDOI
01 Aug 2020
TL;DR: Two causal decompositions of the services in a queuing system are considered which use the elements of Service Systems Theory and a Generalized Net (GN) representation is proposed.
Abstract: Queuing systems are an important part of virtually all service networks. The problem of determining how the Quality of Service (QoS) of a queuing system depends on the qualities of the buffer and the server is not well studied. Two causal decompositions of the services in a queuing system are considered which use the elements of Service Systems Theory. For each of them, a Generalized Net (GN) representation is proposed. The GN models can be used in the study of the QoS composition in overall telecommunication networks.

Journal ArticleDOI
TL;DR: The aim of this paper is to evaluate the performance measures of a single-server queuing system using a mathematical model developed to study the probable live time of the server via algebraic eigenproperties.
Abstract: Classical queuing theory plays a vital role in studying and analysing the performance of real-time servicing systems, production inventory and manufacturing systems, telecommunication systems, modern information and communication technology systems and the computing sector. In recent decades, bounded and immeasurable queues have been intensively studied, due to their attractive mathematical features and widespread applicability. Such a system describes units of work, e.g., particles or customers, arriving at a resource and staying present for some random duration that is independent of other customers. The aim of this paper is to evaluate the performance measures of a single-server queuing system. A mathematical model has been developed to study the probable live time of the server using algebraic eigenproperties. These models are indispensable in real-time systems, manufacturing and communication queuing systems, including wireless networks, mobility, and randomly arriving traffic.

Journal ArticleDOI
01 Sep 2020
TL;DR: In this article, the authors explore the theory of bounded-delay networks and provide the necessary and the sufficient conditions required to have deterministic bounded-delays in the network and present SharpEdge, a scheme that can meet all the above requirements.
Abstract: What are the key properties that a network should have to provide bounded-delay guarantees for the packets? In this paper, we attempt to answer this question. To that end, we explore the theory of bounded-delay networks and provide the necessary and the sufficient conditions required to have deterministic bounded-delays in the network. We prove that as long as a network is work-conserving, independent of the packet scheduling and queue management algorithms used in the switches, it is sufficient to shape the traffic properly at the edge of the network to meet hard bounded-delays in the network. Using the derived theorems, we present SharpEdge, a novel design to meet deterministic bounded-delays in the network. To the best of our knowledge, SharpEdge is the first scheme that can meet all following key properties: (1) it supports coexistence of different classes of traffic, while it can guarantee their different required bounded-delays, (2) it does not require any changes in the core of the network, (3) it supports both periodic and bursty traffic patterns, and (4) it does not require any time synchronization between network devices.
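The edge shaping the theorems rely on can be pictured as a token-bucket regulator; the sketch below is a generic shaper, not SharpEdge's actual mechanism or parameters.

```python
# Minimal token-bucket shaper, included only to illustrate the kind of edge
# shaping a work-conserving core can rely on for bounded delays.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # long-term shaping rate
        self.burst = burst_bits       # maximum burst size
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, pkt_bits, now):
        # replenish tokens, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bits <= self.tokens:
            self.tokens -= pkt_bits
            return True               # packet conforms to the shaping envelope
        return False                  # hold (or drop) the packet at the edge

tb = TokenBucket(rate_bps=10e6, burst_bits=1500 * 8)
print(tb.allow(1500 * 8, now=0.001))   # first full-size packet conforms
```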

Journal ArticleDOI
TL;DR: In this paper, the authors address a cost optimization problem faced by a user who runs instances of applications in a remote cloud configuration constructed of multiple virtual machines (VMs), where each VM runs a single application instance that can execute tasks specific to that application.
Abstract: We address a cost optimization problem faced by a user who runs instances of applications in a remote cloud configuration constructed of multiple virtual machines (VMs). Each VM runs a single application instance which can execute tasks specific to that application. Managing the VMs involves a sophisticated trade-off between cloud-related demands, which are expressed by the provisional costs of leased cloud resources, and exogenous cost demands expressed by service revenues that are typically bound to SLAs. The internal costs may include VM deployment/termination cost and VM lease cost. The exogenous costs refer to rewards accumulated due to the successfully accomplished tasks being run by each application instance. In the case where the SLA restricts performance to a certain load level at each VM, tasks incoming at VMs that have reached that level are rejected. Rejections cause fines deducted against the rewards. The performance level is also quantified, namely by means of a delay cost, according to the average delay experienced by tasks. Typical examples of specific applications which fall within this class of problems include the handling of scientific workflows and network function virtualization (NFV). We model this problem by cost-optimal load balancing to a queuing system with a flexible number of queues, where a queue (VM) can be deployed, can have a task directed to it and can be terminated. We analyze the system by a Markov decision process (MDP) and numerically solve it to find the optimal policy, which captures the aforementioned costs and performance constraints. Within this constrained framework, we also investigate the impact of the average VM deployment time. We show that the optimal policy possesses decision thresholds which depend on several parameters. We validate the policies found by the MDP by directing an exogenous flow of computational tasks to a set-up implemented on AWS.

Posted Content
TL;DR: This paper incorporates a queuing system into the p-median hub location problem by considering multiple server options and different service rates, and shows the significant impact of considering congestion on hub location network design.
Abstract: Hub location problems have multiple applications in logistic systems, the airline industry, supply chain network design, and telecommunication. In the hub location problem, a number of nodes should be selected as hub nodes to act as the main distributors, and the other nodes are connected together through these hubs. The input flow to the hub nodes is very large, so congestion often arises at the hub nodes and causes disturbances in the whole system. Different service rates are another cause of disturbance and should be addressed by the models. This paper addresses these issues by providing a model that prevents congestion in the system. We incorporate a queuing system into the p-median hub location problem by considering multiple server options and different service rates. The CAB dataset (containing 25 US cities) was used in the implementation, and our findings show the significant impact of considering congestion on hub location network design.

Proceedings ArticleDOI
01 Aug 2020
TL;DR: Proactive Queue Management (PQM) takes advantage of the knowledge of the future FN topologies and the offered traffic to define in advance the queue size of the communications nodes over time, in order to maximize the throughput with stochastic delay guarantees.
Abstract: Besides the large amount of traffic that radio access and backhaul networks need to accommodate, the interest in low-latency communications is emerging. The goal is to reliably transmit high bitrates through the network under controlled delays, thus enabling human and machine-oriented communications. A critical aspect that must be addressed is the latency introduced by network queues. The literature has been focused on studying the queue size in wired networks, but wireless networks bring up additional challenges due to their dynamic characteristics. The problem is exacerbated in high dynamic networks, such as Flying Networks (FNs) composed of Unmanned Aerial Vehicles (UAVs), which have emerged to provide communications anywhere, anytime. The main contribution of this paper is a Proactive Queue Management (PQM) solution for FNs with controlled topology. PQM takes advantage of the knowledge of the future FN topologies and the offered traffic to define in advance the queue size of the communications nodes over time, in order to maximize the throughput with stochastic delay guarantees. The FN performance achieved using PQM is evaluated by means of ns-3 simulations, showing gains regarding aggregate throughput and average delay.
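The core intuition of sizing queues from known future capacity can be reduced to a back-of-the-envelope rule: bound the backlog so that it never implies more than the target delay at the predicted link rate. The sketch below illustrates that rule only, not the paper's optimization.

```python
# Back-of-the-envelope queue sizing in the spirit of proactive queue management:
# if the future link capacity is known, cap the queue so the queued backlog
# never implies more than the target delay. This sketches the intuition only.
def queue_size_bytes(link_capacity_bps, target_delay_s):
    return int(link_capacity_bps * target_delay_s / 8)

# Example: a predicted 50 Mbit/s UAV backhaul link with a 20 ms delay budget.
print(queue_size_bytes(50e6, 0.020), "bytes")
```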

Proceedings ArticleDOI
17 Jan 2020
TL;DR: By comparing the two kinds of queuing system, this paper determines the conditions under which the supermarket should merge the cashier and weighing stations and the conditions under which they should remain separate.
Abstract: As large supermarkets combine the cashier and weighing stations into one, or use self-service all-in-one machines, to alleviate the problem of customer queuing, this paper studies two queuing models. The first is the queuing system with separate cashier and weighing stations, which can be represented by a mixed queuing system consisting of a two-stage series queue and a single-stage queue. The other is a queuing system combining the cashier and the weighing station, which can be expressed as M/M/1. This paper first analyzes the behavior of the customers and obtains the equilibrium customer arrival rates in these two queuing systems. Further, this paper analyzes the supermarket's optimal pricing problem and derives the optimal average price the supermarket should set. Finally, by comparing the two kinds of queuing system, this paper determines when the supermarket should merge the cashier and weighing stations and when it should keep them separate.

Proceedings ArticleDOI
01 Jan 2020
TL;DR: The queuing model from a previous report and analysis is extended into a ticket machine queuing system design application with a graphical user interface, in which the number of machines, the machine positions and the designated queue lines can be configured by drag and drop for analysis and visualization through animation and graphs.
Abstract: This paper proposes a ticket machine queuing system design application focused on service efficiency measures such as the average delay in queue and customer waiting time. The proposed application applies a queuing theory model to observed data on the main problem in the Bangkok rapid transit train system, and can compare the service efficiency of the existing system with that of a desired or new system that the operator would like to introduce to renew or improve the ticket machine system. The application also allows adjustment of parameters such as the service rate, the number of ticket machines, the arrival rate of the system and the designated positions of queues or machines. We extend the queuing model from a previous report and analysis into a graphical user interface in which the number of machines, machine positions and designated queue lines can be configured by drag and drop, with analysis and visualization through animation and graphs.

Journal ArticleDOI
15 Dec 2020
TL;DR: This article is devoted to some aspects of using the renovation mechanism with one or several thresholds as mathematical models of active queue management mechanisms, focusing on queuing systems in which a threshold mechanism with renovation is implemented.
Abstract: This article is devoted to some aspects of using the renovation mechanism (different types of renovation are considered; definitions and a brief overview are also given) with one or several thresholds as mathematical models of active queue management mechanisms. Attention is paid to queuing systems in which a threshold mechanism with renovation is implemented. This mechanism allows the number of packets in the system to be adjusted by dropping (resetting) them from the queue, depending on how a certain control parameter compares with the specified thresholds at the moment service ends on the device (server), in contrast to standard RED-like algorithms, where a packet may be dropped at the moment the next packets arrive in the system. Models with one, two and three thresholds and with different types of renovation are considered. It is worth noting that the thresholds determine not only from which position in the buffer packets are dropped, but also up to which position the reset of packets occurs. For some of the models, analytical and numerical results have been obtained (references are given); others are still under investigation, so only the mathematical model and current results can be considered. Some results comparing the classic RED algorithm with the renovation mechanism are presented.
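A toy version of a single-threshold renovation step might look like the sketch below: at a service completion, any packets beyond the threshold position are dropped. The tail-drop choice and deterministic reset are simplifications relative to the probabilistic, multi-threshold models discussed in the article.

```python
from collections import deque

# Toy sketch of a single-threshold renovation step (simplified): at the moment
# a packet completes service, packets beyond the threshold position are dropped.
def renovate(queue: deque, threshold: int) -> int:
    """Drop everything beyond `threshold` positions; return the number dropped."""
    dropped = 0
    while len(queue) > threshold:
        queue.pop()          # drop from the tail of the buffer (assumed policy)
        dropped += 1
    return dropped

q = deque(range(12))
print(renovate(q, threshold=8), "packets dropped; remaining:", len(q))
```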

Journal ArticleDOI
TL;DR: An analysis of the environment (a supermarket) was carried out virtually through discrete simulation linked to the Arena software, and it was observed that the average time spent in the queue would be reduced if 2 to 3 additional employees were included to perform the service.
Abstract: This research presents a case study related to the management of queues in a supermarket and the problems found in the organization of this process. In order to reduce the time that customers remain in the checkout lines, an analysis of the environment (the supermarket) was carried out virtually through the discrete simulation technique, linked to the Arena software. This technique is classified as quantitative because it makes it possible to measure entities and predict the behavior of the environment in a way that mimics the reality of the queues at the site. Through the simulated scenario, it was possible to identify the flaws in the process and the cause of the queuing. The simulation results showed that the average length of stay in the queue would be reduced by 88.23% if 2 to 3 additional employees were included to perform the service. The application of this technique is favorable for problem solving and decision-making, as it reduces the time that customers spend in the queue and optimizes the financial investments allocated to this area.