scispace - formally typeset
Author

Peerapon Siripongwutikorn

Other affiliations: University of Pittsburgh
Bio: Peerapon Siripongwutikorn is an academic researcher from King Mongkut's University of Technology Thonburi. The author has contributed to research in topics: Vehicular ad hoc network & Bandwidth allocation. The author has an h-index of 6 and has co-authored 23 publications receiving 539 citations. Previous affiliations of Peerapon Siripongwutikorn include University of Pittsburgh.

Papers
Journal Article•DOI•
TL;DR: Numerical results comparing several SCA algorithms show that SSR has the best trade-off between solution optimality and computation speed.
Abstract: The design of survivable mesh-based communication networks has received considerable attention in recent years. One task is to route backup paths and allocate spare capacity in the network to guarantee seamless communications services survivable to a set of failure scenarios. This is a complex multi-constraint optimization problem, called the spare capacity allocation (SCA) problem. This paper unravels the SCA problem structure using a matrix-based model and develops a fast and efficient approximation algorithm, termed successive survivable routing (SSR). First, per-flow spare capacity sharing is captured by a spare provision matrix (SPM) method. The SPM has dimensions equal to the number of failure scenarios by the number of links. It is used by each demand to route its backup path and share spare capacity with other backup paths. Next, based on a special link metric calculated from the SPM, SSR iteratively routes and updates backup paths in order to minimize the total cost of spare capacity. A backup path can be further updated as long as it is not carrying any traffic. Furthermore, the SPM method and the SSR algorithm are generalized from protecting all single-link failures to arbitrary link failures, such as those generated by Shared Risk Link Groups or all single-node failures. Numerical results comparing several SCA algorithms show that SSR has the best trade-off between solution optimality and computation speed.
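The spare-sharing idea in the abstract above can be sketched in a few lines. The topology, flows, and function names below are illustrative toy data, not from the paper: the key point is that spare capacity on a link is the worst case over failure scenarios, not the sum.

```python
# Illustrative sketch of the spare provision matrix (SPM) idea: rows are
# failure scenarios, columns are links. Entry G[f][l] is the backup traffic
# that appears on link l when failure f occurs; the spare capacity reserved
# on link l is the column-wise maximum over all failures.

links = ["ab", "bc", "cd", "ad"]
# Each flow: demand, working path, and a link-disjoint backup path (toy data).
flows = [
    {"demand": 2, "working": ["ab", "bc"], "backup": ["ad", "cd"]},
    {"demand": 3, "working": ["ad"],       "backup": ["ab", "bc", "cd"]},
]
failures = links  # protect against all single-link failures

def spare_provision_matrix(flows, links, failures):
    G = {f: {l: 0 for l in links} for f in failures}
    for f in failures:
        for flow in flows:
            if f in flow["working"]:          # failure hits the working path
                for l in flow["backup"]:      # backup path carries the demand
                    G[f][l] += flow["demand"]
    return G

def spare_capacity(G, links, failures):
    # Sharing: spare on a link covers the worst single failure, not all at once.
    return {l: max(G[f][l] for f in failures) for l in links}

G = spare_provision_matrix(flows, links, failures)
s = spare_capacity(G, links, failures)
print(s)
```

Because the two working paths are disjoint, their backups never activate under the same single-link failure, so each link reserves only the larger of the two demands rather than their sum.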

237 citations

Proceedings Article•DOI•
22 Apr 2001
TL;DR: Numerical results show that SSR achieves near-optimal spare capacity allocation with substantial advantages in computation speed.
Abstract: Spare capacity allocation (SCA) is an important part of fault-tolerant network design. In the SCA problem, one seeks to determine where to place spare capacity in the network and how much spare capacity must be allocated to guarantee seamless communications services survivable to a set of failure scenarios (e.g., any single link failure). Formulated as a multi-commodity flow integer programming problem, SCA is known to be NP-hard. We provide a two-pronged attack to approximate the optimal SCA solution: unravel the SCA structure and find an effective algorithm. First, a literature review of the SCA problem and its algorithms is provided. Second, an integer programming (INP) model for SCA is given. Third, a simulated annealing algorithm using the INP model is introduced. Next, the structure of SCA is modeled by a matrix method. The per-flow backup path information is aggregated into a square matrix, called the spare provision matrix (SPM), whose size is the number of links. Using the SPM as the state information, a new adaptive algorithm, termed successive survivable routing (SSR), is then developed to approximate the optimal SCA solution. SSR routes link-disjoint backup paths for each traffic flow one at a time. Each flow keeps updating its backup path according to the current network state, as long as the backup path is not carrying any traffic. In this way, SSR can be implemented with shortest-path algorithms using advertised state information, with complexity O(Link²). The analysis also shows that SSR exploits a necessary condition of the optimal solution. The numerical results show that SSR yields near-optimal spare capacity allocation with substantial advantages in computation speed.
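The link metric that drives SSR's backup rerouting can be illustrated as follows. This is a minimal reconstruction of the idea, not the paper's code: the cost of a candidate backup path is the additional spare capacity it forces, given what is already reserved for other flows. All state and names below are invented toy data.

```python
# Sketch of an SSR-style link metric: a candidate backup path is priced by
# the *incremental* spare capacity it requires on each link, so paths that
# can share already-reserved spare capacity cost nothing extra.

def incremental_cost(path, demand, working, G, spare):
    # G[f][l]: backup load on link l under failure f (from the other flows);
    # spare[l]: spare already reserved; extra capacity is needed only where
    # the new worst-case backup load exceeds the current reservation.
    cost = 0
    for l in path:
        worst = max(G[f][l] + demand for f in working)
        cost += max(0, worst - spare[l])
    return cost

# Toy state: two failure scenarios ("ab", "cd" = the new flow's working
# links), two candidate backup links ("xy", "yz"), spare reserved earlier.
G = {"ab": {"xy": 0, "yz": 2}, "cd": {"xy": 1, "yz": 0}}
spare = {"xy": 2, "yz": 2}

candidates = [["xy"], ["yz"]]
costs = {tuple(p): incremental_cost(p, 1, ["ab", "cd"], G, spare)
         for p in candidates}
best = min(candidates, key=lambda p: costs[tuple(p)])
print(costs, best)
```

Path ["xy"] fits entirely inside spare capacity reserved by other flows, so its incremental cost is zero and SSR-style routing prefers it; in the algorithm proper, such costs would feed a shortest-path computation over the whole network.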

177 citations

Journal Article•DOI•
TL;DR: Different ABC algorithms that guarantee aggregate traffic packet-level QoS metrics, such as the average queue length, packet loss, and packet delay are described, and some comparative performance evaluation results are provided.
Abstract: In packet-switched network traffic management and control, efficiently allocating bandwidth to provide quantitative packet-level QoS to aggregate traffic has been difficult due to unpredictable, unknown statistical characteristics of the aggregate traffic. With inaccurate traffic information, using static bandwidth allocation results in the network being underutilized, or the QoS requirement not being satisfied. An alternative is to use adaptive bandwidth control (ABC), whereby the allocated bandwidth is regularly adjusted over the packet-level time scale to attain a given QoS requirement. This paper provides a literature review of ABC algorithms that guarantee aggregate traffic packet-level QoS metrics, such as the average queue length, packet loss, and packet delay. We describe different ABC algorithms, identify their advantages and shortcomings, and provide some comparative performance evaluation results. Open issues in ABC for future research directions are also discussed.
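A generic ABC loop of the kind the survey above covers can be sketched in a few lines. This is not any specific algorithm from the paper: the gains, the arrival trace, and the rate estimator are illustrative assumptions, chosen only to show bandwidth being adjusted per control interval toward a queue-length target with no a priori traffic model.

```python
# Minimal adaptive bandwidth control (ABC) sketch: each control interval,
# re-allocate bandwidth as an online arrival-rate estimate plus a correction
# that drives the measured queue length toward a target. No prior knowledge
# of the traffic is assumed; all parameters here are illustrative.

def abc_run(arrivals, target_q, gain=0.5, alpha=0.3):
    bw, q, rate_est = 1.0, 0.0, 0.0
    history = []
    for a in arrivals:
        q = max(0.0, q + a - bw)                       # fluid queue, one interval
        rate_est = alpha * a + (1 - alpha) * rate_est  # online rate estimate
        bw = rate_est + gain * (q - target_q)          # correct toward target
        history.append((q, bw))
    return history

history = abc_run(arrivals=[2.0] * 60, target_q=1.0)
final_q, final_bw = history[-1]
print(round(final_q, 2), round(final_bw, 2))
```

With a constant arrival rate of 2, the loop settles with the queue at its target and the allocated bandwidth matching the load, which is the static-allocation outcome reached here without knowing the rate in advance.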

49 citations

Proceedings Article•DOI•
17 Nov 2002
TL;DR: An adaptive bandwidth control algorithm that efficiently provides an aggregate loss guarantee to resolve the problem of inefficient bandwidth allocation due to incomplete, inaccurate traffic descriptors supplied by users is proposed.
Abstract: The paper proposes an adaptive bandwidth control algorithm that efficiently provides an aggregate loss guarantee, resolving the problem of inefficient bandwidth allocation due to incomplete, inaccurate traffic descriptors supplied by users. Because the control attempts to allocate just enough bandwidth to meet the QoS requirement, the bandwidth saving compared to static allocation can be substantial. Another distinct advantage of our control algorithm is that no a priori information on the traffic characteristics of the aggregate is required. From the simulation study, the proposed control can maintain the packet loss QoS while attaining very high utilization, and is robust against different system configurations and controller parameters.

23 citations

Proceedings Article•DOI•
17 Nov 2002
TL;DR: It is found that per-flow delay statistics can be very different from the corresponding class delay statistics, depending on flow burstiness, overall traffic load, as well as the queue discipline.
Abstract: Class-based traffic treatment frameworks such as differentiated services (DiffServ) have been proposed to resolve the poor scalability of the flow-based approach. Although performance is differentiated on a per-class basis, the performance seen by individual flows in the same class may differ from that seen by the class as a whole and has not been well understood. We investigate this issue by simulation in a single node under FIFO, static priority, waiting time priority, and weighted fair queueing scheduling schemes. Our results indicate that such performance discrepancy occurs especially when flows joining the same class are heterogeneous, which is not uncommon considering that the same type of application can generate traffic with very different statistical behaviors, such as video traffic with different activity levels or voice traffic with different compression schemes. We find that per-flow delay statistics, including the average and the 99th percentile delay, can be very different from the corresponding class delay statistics, depending on flow burstiness, overall traffic load, and the queue discipline. We also propose a solution to reduce the mean delay variance experienced by flows in the same class.
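The paper's central observation is easy to reproduce in a toy FIFO simulation. The arrival patterns below are invented for illustration: a smooth flow and a bursty flow share one class, and their per-flow mean delays straddle the class mean.

```python
# Toy single-queue FIFO illustration: two heterogeneous flows in the same
# class see different delays. Flow A sends one packet every 4 time units;
# flow B sends 4 back-to-back packets every 16. Service time is 1 unit.

def fifo_delays(packets):
    # packets: list of (arrival_time, flow_id); served FIFO, one at a time.
    free_at, delays = 0.0, {}
    for t, flow in sorted(packets):
        start = max(t, free_at)
        free_at = start + 1.0
        delays.setdefault(flow, []).append(free_at - t)  # sojourn time
    return delays

smooth = [(4 * i, "A") for i in range(16)]
bursty = [(16 * i + j, "B") for i in range(4) for j in range(4)]
d = fifo_delays(smooth + bursty)

mean = lambda xs: sum(xs) / len(xs)
per_flow = {f: round(mean(v), 2) for f, v in d.items()}
class_mean = round(mean(d["A"] + d["B"]), 2)
print(per_flow, class_mean)
```

Here the bursty flow's packets queue behind their own burst, so flow B's mean delay sits well above flow A's, and the class-level average reported to both flows describes neither.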

14 citations


Cited by
Proceedings Article•DOI•
06 Jul 2009
TL;DR: In this paper, the authors present an approach for studying computer service performance in cloud computing, in an effort to deliver QoS guaranteed services in such a computing environment, they find the relationship among the maximal number of customers, the minimal service resources and the highest level of services.
Abstract: Cloud computing is a new cost-efficient computing paradigm in which information and computing power can be accessed from a Web browser by customers. Understanding the characteristics of computer service performance has become critical for service applications in cloud computing. For the commercial success of this new computing paradigm, the ability to deliver Quality of Service (QoS) guaranteed services is crucial. In this paper, we present an approach for studying computer service performance in cloud computing. Specifically, in an effort to deliver QoS-guaranteed services in such a computing environment, we find the relationship among the maximal number of customers, the minimal service resources, and the highest level of services. The obtained results provide guidelines on computer service performance in cloud computing that should be useful in the design of this new computing paradigm.
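The abstract does not state its model, but the relationship it describes (customer load vs. minimal resources vs. service level) is the kind of sizing question a standard M/M/m queue answers. As a generic illustration under that assumption, not the paper's actual analysis, the Erlang C formula gives the minimal number of servers that keeps the probability of waiting below a target:

```python
from math import factorial

# Generic sizing sketch (assumed M/M/m model, not the paper's): find the
# minimal number of servers m so that the Erlang C probability of waiting
# stays below a QoS target, for offered load a = lambda/mu Erlangs.

def erlang_c(m, a):
    # Probability an arriving customer must wait; requires a < m for stability.
    top = a**m / factorial(m) * (m / (m - a))
    bottom = sum(a**k / factorial(k) for k in range(m)) + top
    return top / bottom

def min_servers(a, p_wait_target):
    m = int(a) + 1                      # smallest stable server count
    while erlang_c(m, a) > p_wait_target:
        m += 1
    return m

print(min_servers(a=20.0, p_wait_target=0.05))
```

For an offered load of 20 Erlangs and a 5% waiting-probability target, the search lands on 29 servers, i.e., a 45% resource margin over the mean load is the price of that service level in this toy model.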

275 citations

Journal Article•DOI•
TL;DR: It is proved that the problem of finding an eligible pair of working and backup paths for a new lightpath request requiring shared-path protection under the current network state is NP-complete and a heuristic is developed to compute a feasible solution with high probability.
Abstract: This paper investigates the problem of dynamic survivable lightpath provisioning in optical mesh networks employing wavelength-division multiplexing (WDM). In particular, we focus on shared-path protection because it is resource efficient, owing to the fact that backup paths can share wavelength links when their corresponding working paths are mutually diverse. Our main contributions are as follows. 1) First, we prove that the problem of finding an eligible pair of working and backup paths for a new lightpath request requiring shared-path protection under the current network state is NP-complete. 2) Then, we develop a heuristic, called CAFES, to compute a feasible solution with high probability. 3) Finally, we design another heuristic, called OPT, to optimize resource consumption for a given solution. The merits of our approaches are that they capture the essence of shared-path protection and approach optimal solutions without enumerating paths. We evaluate the effectiveness of our heuristics and the results are found to be promising.
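The sharing rule stated in the abstract can be checked mechanically: a backup may reuse a wavelength link already claimed by another backup only if the two working paths are link-disjoint, since otherwise one failure could activate both backups at once. The sketch below is an illustration of that rule only (the link names and provisioned paths are invented, and this is not the CAFES heuristic itself):

```python
# Sharing-eligibility check for shared-path protection: a new backup path
# may share a link with an existing backup only if the corresponding
# working paths are link-disjoint. Toy data; not the paper's heuristic.

def can_share(working1, working2):
    return not (set(working1) & set(working2))  # link-disjoint working paths

def shareable_backup_links(new_working, new_backup, existing):
    # existing: list of (working_path, backup_path) already provisioned.
    shareable = set()
    for l in new_backup:
        used = any(l in b for w, b in existing)
        ok = all(can_share(new_working, w) for w, b in existing if l in b)
        if used and ok:
            shareable.add(l)
    return shareable

existing = [(["e1", "e2"], ["e5", "e6"]), (["e3"], ["e6", "e7"])]
share = shareable_backup_links(["e1", "e4"], ["e5", "e6", "e7"], existing)
print(share)
```

Here the new working path overlaps the first connection's working path on e1, so the new backup cannot share e5 or e6 with that connection's backup; only e7 remains shareable.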

247 citations

Journal Article•DOI•
TL;DR: The paper shows that random variation in network delay can be handled more efficiently with fuzzy-logic-based PID controllers than with conventional PID controllers.
Abstract: An optimal PID and an optimal fuzzy PID controller have been tuned by minimizing the Integral of Time-multiplied Absolute Error (ITAE) and the squared controller output for a networked control system (NCS). The tuning is attempted for a higher-order, time-delay system using stochastic algorithms, viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO), and the closed-loop performances are compared. The paper shows that random variation in network delay can be handled more efficiently with fuzzy-logic-based PID controllers than with conventional PID controllers.
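The ITAE criterion that GA/PSO minimize in this paper can be illustrated with a small simulation. The plant, delay, and gains below are invented for illustration (a first-order lag with transport delay standing in for the NCS); only the cost definition, the integral of time multiplied by absolute error, follows the abstract:

```python
# Illustration of the ITAE cost being minimized by the tuning algorithms:
# simulate a discrete PID loop on an assumed first-order plant y' = -y + u
# with a 5-sample transport delay (a stand-in for network delay), and
# accumulate integral(t * |error|) over a unit step response.

def itae(kp, ki, kd, steps=400, dt=0.05, delay=5):
    y, integ, prev_e = 0.0, 0.0, 1.0
    pipe = [0.0] * delay               # models the network/transport delay
    cost = 0.0
    for n in range(steps):
        e = 1.0 - y                    # unit step setpoint
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        pipe.append(u)
        u_delayed = pipe.pop(0)        # controller output arrives late
        y += dt * (-y + u_delayed)     # Euler step of the assumed plant
        cost += (n * dt) * abs(e) * dt # ITAE: integral of t * |e| dt
    return cost

# A well-damped controller scores lower ITAE than an aggressive, oscillatory one.
print(itae(1.2, 0.8, 0.1), itae(5.0, 5.0, 0.0))
```

ITAE weights late errors heavily, which is why it favors tunings that settle quickly without sustained oscillation; an optimizer such as GA or PSO would simply search the (kp, ki, kd) space for the minimum of this cost.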

240 citations

Journal Article•DOI•
TL;DR: Numerical results comparing several SCA algorithms show that SSR has the best trade-off between solution optimality and computation speed.
Abstract: The design of survivable mesh-based communication networks has received considerable attention in recent years. One task is to route backup paths and allocate spare capacity in the network to guarantee seamless communications services survivable to a set of failure scenarios. This is a complex multi-constraint optimization problem, called the spare capacity allocation (SCA) problem. This paper unravels the SCA problem structure using a matrix-based model and develops a fast and efficient approximation algorithm, termed successive survivable routing (SSR). First, per-flow spare capacity sharing is captured by a spare provision matrix (SPM) method. The SPM has dimensions equal to the number of failure scenarios by the number of links. It is used by each demand to route its backup path and share spare capacity with other backup paths. Next, based on a special link metric calculated from the SPM, SSR iteratively routes and updates backup paths in order to minimize the total cost of spare capacity. A backup path can be further updated as long as it is not carrying any traffic. Furthermore, the SPM method and the SSR algorithm are generalized from protecting all single-link failures to arbitrary link failures, such as those generated by Shared Risk Link Groups or all single-node failures. Numerical results comparing several SCA algorithms show that SSR has the best trade-off between solution optimality and computation speed.

237 citations

Proceedings Article•DOI•
07 Nov 2002
TL;DR: This work proposes an efficient path-selection algorithm for restoration of connections over shared bandwidth in a fully distributed GMPLS architecture and describes how to extend GMPLS signaling protocols to collect the necessary information efficiently.
Abstract: In MPLS/GMPLS networks, a range of restoration schemes are required to support different tradeoffs between service interruption time and network resource utilization. In light of these tradeoffs, path-based, end-to-end shared restoration provides a very attractive solution. However, efficient use of capacity for shared restoration strongly relies on the selection procedure of restoration paths. We propose an efficient path-selection algorithm for restoration of connections over shared bandwidth in a fully distributed GMPLS architecture. We also describe how to extend GMPLS signaling protocols to collect the necessary information efficiently. To evaluate the algorithm's performance, we compare it via simulation with two other well-known algorithms on a typical intercity backbone network. The key figure-of-merit for restoration capacity efficiency is restoration overbuild, i.e., the extra capacity required to meet the network restoration objective as a percentage of the capacity of the network with no restoration. Our simulation results show that our algorithm uses significantly less restoration overbuild (63-68%) compared to the other two algorithms (83-90%).
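The restoration-overbuild metric defined in the abstract is a simple ratio, shown below on invented per-link capacities (the paper's reported figures are network-wide simulation results, not these numbers):

```python
# Restoration overbuild, as defined in the abstract: extra capacity required
# to meet the restoration objective, expressed as a percentage of the
# no-restoration network capacity. Link capacities below are made up.

def overbuild(no_restoration_cap, with_restoration_cap):
    base = sum(no_restoration_cap.values())
    extra = sum(with_restoration_cap.values()) - base
    return 100.0 * extra / base

no_restoration = {"l1": 10, "l2": 20, "l3": 10}    # capacity, no restoration
with_restoration = {"l1": 16, "l2": 32, "l3": 18}  # capacity meeting objective
print(round(overbuild(no_restoration, with_restoration), 1))
```

In this toy example the network needs 65% extra capacity for restoration, which happens to fall in the 63-68% range the proposed algorithm achieves; the two comparison algorithms in the paper required 83-90%.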

184 citations