
Showing papers presented at "International Workshop on Quality of Service in 2010"


Proceedings ArticleDOI
16 Jun 2010
TL;DR: DutyCon is proposed, a control theory-based dynamic duty cycle control approach that decomposes the end-to-end delay guarantee problem into a set of single-hop delay guarantee problems along each data flow in the network and designs the controller rigorously, based on feedback control theory, for analytic assurance of control accuracy and system stability.
Abstract: It is well known that periodically putting nodes into sleep can effectively save energy in wireless sensor networks, at the cost of increased communication delays. However, most existing work mainly focuses on static sleep scheduling, which cannot guarantee the desired delay when the network conditions change dynamically. In many applications with user-specified end-to-end delay requirements, the duty cycle of every node should be tuned individually at runtime based on the network conditions to achieve the desired end-to-end delay guarantees and energy efficiency. In this paper, we propose DutyCon, a control theory-based dynamic duty cycle control approach. DutyCon decomposes the end-to-end delay guarantee problem into a set of single-hop delay guarantee problems along each data flow in the network. We then formulate the single-hop delay guarantee problem as a dynamic feedback control problem and design the controller rigorously, based on feedback control theory, for analytic assurance of control accuracy and system stability. DutyCon also features a queuing delay adaptation scheme that adapts the duty cycle of each node to unpredictable packet rates, as well as a novel energy balancing approach that extends the network lifetime by dynamically adjusting the delay requirement allocated to each hop. Our empirical results on a hardware testbed demonstrate that DutyCon can effectively achieve the desired tradeoff between end-to-end delay and energy conservation. Extensive simulation results also show that DutyCon outperforms two baseline sleep scheduling protocols by having more energy savings while meeting the end-to-end delay requirements.
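As a rough illustration only (not DutyCon's actual control law, which the paper derives rigorously from feedback control theory), a single-hop delay controller of this kind might adjust a node's sleep interval as sketched below; the function name, gain, and bounds are hypothetical.

```python
# Hypothetical sketch of a single-hop delay feedback controller; DutyCon's
# actual controller is derived analytically in the paper and may differ.
def update_sleep_interval(sleep_interval, measured_delay, delay_setpoint,
                          gain=0.5, min_interval=0.01, max_interval=2.0):
    """Return the next sleep interval (seconds) from the per-hop delay error."""
    error = delay_setpoint - measured_delay        # positive error => delay slack
    next_interval = sleep_interval + gain * error  # lengthen sleep when there is slack
    return max(min_interval, min(max_interval, next_interval))

# Example: 100 ms per-hop setpoint, 140 ms measured delay -> shorter sleep interval.
print(update_sleep_interval(sleep_interval=0.5, measured_delay=0.14, delay_setpoint=0.10))
```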

80 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: An energy saving algorithm is proposed by dynamically adjusting the working modes of BSs according to the traffic variation with respect to certain blocking probability requirement, and the performance is insensitive to the mode holding time within certain range.
Abstract: The energy consumption of information and communication technology (ICT) industry has become a serious problem, which mostly comes from the network infrastructure, rather than the mobile terminals. In this paper, we consider densely deployed cellular networks where the coverage of base stations (BSs) overlaps and the traffic intensity varies over time and space. An energy saving algorithm is proposed by dynamically adjusting the working modes (active or sleeping) of BSs according to the traffic variation with respect to certain blocking probability requirement. In addition, to prevent frequent mode switching, BSs are set to hold their current working modes for at least a given interval. Simulations demonstrate that the proposed strategy can greatly reduce energy consumption with blocking probability guarantee, and the performance is insensitive to the mode holding time within certain range.
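To make the idea concrete, here is a hedged sketch of choosing how many base stations to keep active against a blocking-probability requirement; the Erlang-B recursion is standard, but the decision rule, parameter values, and function names are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative only: keep just enough BSs active so that an Erlang-B blocking
# estimate stays below the requirement. Not the paper's actual algorithm.
def erlang_b(offered_load_erlangs, channels):
    """Erlang-B blocking probability via the standard recursion."""
    blocking = 1.0
    for m in range(1, channels + 1):
        blocking = offered_load_erlangs * blocking / (m + offered_load_erlangs * blocking)
    return blocking

def active_bs_needed(offered_load_erlangs, channels_per_bs, total_bs, max_blocking=0.02):
    """Smallest number of active BSs that keeps blocking below max_blocking."""
    for n in range(1, total_bs + 1):
        if erlang_b(offered_load_erlangs, n * channels_per_bs) <= max_blocking:
            return n
    return total_bs  # even with all BSs active the requirement may be violated

# Example: 30 Erlangs of offered traffic, 8 channels per BS, up to 10 BSs deployed.
print(active_bs_needed(30.0, 8, 10))
```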

79 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: This paper presents a comprehensive evaluation of the performance of four rate control mechanisms used by the MadWifi driver in Linux: Onoe, AMRR, SampleRate and minstrel.
Abstract: The 802.11 standards specify several transmission rates that can be used at the MAC layer protocol to adapt the transmission rate to channel conditions. Such dynamic adaptations can improve per-hop performance in wireless networks and therefore can have an impact on the Quality of Service provided to communicating applications. In this paper, we present a comprehensive evaluation of the performance of four rate control mechanisms used by the MadWifi driver in Linux: Onoe, AMRR, SampleRate and minstrel. The evaluation of these four rate control mechanisms was carried out on our platform for controllable and repeatable experiments.

36 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: This work considers a more realistic model where the moving speed and path for mobile sinks are constrained and proposes a number of motion strategies for the mobile sink to gather real-time data from a static sensor network, with the objective of maximizing the network lifetime.
Abstract: The benefits of using a mobile sink to prolong sensor network lifetime have been well recognized. However, few provable theoretical results have been developed, due to the complexity caused by the time-dependent network topology. In this work, we investigate the optimum routing strategy for the static sensor network. We further propose a number of motion strategies for the mobile sink(s) to gather real-time data from the static sensor network, with the objective of maximizing the network lifetime. Specifically, we consider a more realistic model where the moving speed and path of mobile sinks are constrained. Our extensive experiments show that our scheme can significantly prolong the entire network lifetime and reduce delivery delay.

36 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: Experimental results using TPC-W benchmark show vPnP can achieve different levels of tradeoff in a more flexible way than an existing two-layer feedback control approach, and shows its robustness over a variety of workloads.
Abstract: Both power and performance are important issues in today's datacenters. It is hard to optimize both aspects on shared infrastructures due to system dynamics. Previous work mostly emphasized one aspect alone or relied on models trained off-line for a specific workload. In this paper, we present vPnP, a feedback control-based coordination system that provides guarantees on a service level agreement with respect to performance and a power budget in virtualized environments. The system adapts gracefully to workload changes. It consists of two self-tuning model predictors and a utility function optimizer. The predictors correlate system resource allocation to power and performance, respectively. The optimizer finds the optimal solution for a tradeoff between power and performance. Experimental results using the TPC-W benchmark show that vPnP can achieve different levels of tradeoff in a more flexible way than an existing two-layer feedback control approach. More importantly, vPnP shows its robustness over a variety of workloads. It reduces the relative deviation of performance by 17% compared with the two-layer feedback controller.

35 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: A novel receiver-centric data transmission paradigm, which takes advantage of the tree structure that is naturally formed in data collection of a sensor network to assist scheduling of channel access, to improve communication throughput and fairness.
Abstract: Wireless sensor networks usually operate under light traffic loads. However, when an event is detected, a large volume of data may be generated and delivered to the sink. The demand for simultaneous data transmission may cause severe channel collision and thus decrease communication throughput in contention-based medium access control (MAC) protocols. In this paper, we introduce a novel receiver-centric data transmission paradigm, which takes advantage of the tree structure that is naturally formed in data collection of a sensor network to assist scheduling of channel access. On the tree structure, a receiver is able to coordinate its multiple senders' channel access so as to reduce channel contention and consequently improve communication throughput. The protocol seamlessly integrates scheduling with contention-based medium access control. In addition, to ensure reliable data transmission, we propose a sequence-based lost packet recovery scheme in a hop-by-hop recovery pattern, which could further improve communication throughput by reducing control overhead. We present the performance of our receiver-centric MAC protocol through measurements of an implementation in TinyOS on TelosB motes and extend the evaluation through ns-2 simulations. Compared with B-MAC and RI-MAC, we show the benefits of improving throughput and fairness through receiver-centric scheduling under heavy traffic loads.

31 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: This paper proposes an O(n) approximation algorithm of polynomial time complexity for the problem of wireless sensor network design by deploying a minimum number of additional relay nodes at a subset of given potential relay locations to minimize network design cost.
Abstract: In this paper, we study the problem of wireless sensor network design by deploying a minimum number of additional relay nodes (to minimize network design cost) at a subset of given potential relay locations in order to convey the data from already existing sensor nodes (hereafter called source nodes) to a Base Station within a certain specified mean delay bound. We formulate this problem in two different ways, and show that the problem is NP-Hard. For a problem in which the number of existing sensor nodes and potential relay locations is n, we propose an O(n) approximation algorithm of polynomial time complexity. Results show that the algorithm performs efficiently (in over 90% of the tested scenarios, it gave solutions that were either optimal or exceeding optimal just by one relay) in various randomly generated network scenarios.

26 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: An experimental evaluation of FiConn and FatTree, as representatives of hierarchical and flat architectures respectively, in a three-tier transaction system using a virtual machine (VM) based implementation casts new light on the implications of virtualization for DCN architectures.
Abstract: In recent years, data center network (DCN) architectures (e.g., DCell [5], FiConn [6], BCube [4], FatTree [1], and VL2 [2]) have received a surge of interest from both industry and academia. However, none of the existing efforts provides an in-depth understanding of the impact of these architectures on application performance in practical multi-tier systems under realistic workloads. Moreover, it is also unclear how these architectures are affected in virtualized environments. In this paper, we fill this void by conducting an experimental evaluation of FiConn and FatTree, as representatives of hierarchical and flat architectures respectively, in a three-tier transaction system using a virtual machine (VM) based implementation. We observe several fundamental characteristics that are embedded in both classes of network topologies and cast new light on the implications of virtualization for DCN architectures. The issues observed in this paper are generic and should be properly addressed by any DCN architecture before it is considered for actual deployment, especially in mission-critical real-time transaction systems.

25 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: A distributed host-based traffic collecting platform (DHTCP) is designed, and Flexible Neural Trees (FNT), a special kind of artificial neural network that has been successfully applied in many areas, are applied for traffic identification.
Abstract: Traditional traffic classification techniques, such as port-based and payload-based techniques, are becoming ineffective owing to more and more Internet applications using dynamic port numbers and encryption. Therefore, in the past few years, much research has addressed machine learning-based techniques. Most machine learning-based traffic identification studies use traffic samples collected at key nodes of networks for learning. These samples do not have accurate application information, i.e., the ground truth, which is crucial for machine learning algorithms. In this paper, we first designed a distributed host-based traffic collecting platform (DHTCP) to gather traffic samples with accurate application information on user hosts. We then built a data set using DHTCP and applied Flexible Neural Trees (FNT), a special kind of artificial neural network that has been successfully applied in many areas, for traffic identification. Web and P2P traffic were studied in our work. Although the proposed technique is at an early stage of development, experimental results show that it is a promising solution for Internet traffic identification.

21 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: This paper develops PPVA, a working platform for universal and transparent peer-to-peer acceleration, highlights the unique challenges in implementing such a platform, and discusses the PPVA solutions.
Abstract: Recent years have witnessed an explosion of online video sharing as a new killer Internet application. Yet, given limited network and server resources, user experience with existing video sharing sites is far from satisfactory. To alleviate the bottleneck, peer-to-peer delivery has been suggested as an effective tool, with success already seen in accelerating individual sites. The numerous existing video sharing sites, however, call for a universal solution that provides transparent peer-to-peer acceleration beyond ad hoc solutions. More importantly, only a universal platform can fully explore the aggregated video and client resources across sites, particularly for identical videos replicated on diverse sites. To this end, we develop PPVA, a working platform for universal and transparent peer-to-peer acceleration. PPVA was first released in May 2008 and has since been constantly updated. As of January 2010, it has attracted over 50 million distinct clients, with 48 million daily transactions. In this paper, we highlight the unique challenges in implementing such a platform and discuss the PPVA solutions. We have also constantly monitored the service of PPVA since its deployment. The massive amount of traces collected enables us to thoroughly investigate its effectiveness and potential drawbacks, and to provide valuable guidelines for its future development.

21 citations


Proceedings ArticleDOI
16 Jun 2010
TL;DR: This paper studies a novel approach for achieving fair sharing of the network resources among TCP variants, using Rate-Delay (RD) Network Services, and shows that it is effective in providing fairness between loss-based NewReno and delay-based Vegas flows.
Abstract: While Transmission Control Protocol (TCP) variants with delay-based congestion control (e.g., TCP Vegas) provide low queueing delay and low packet loss, the key problem with their deployment on the Internet is their relative performance when competing with traditional TCP variants with loss-based congestion control (e.g., TCP NewReno). In particular, the more aggressive loss-based flows tend to dominate link buffer usage and degrade the throughput of delay-based flows. In this paper, we study a novel approach for achieving fair sharing of the network resources among TCP variants, using Rate-Delay (RD) Network Services. In particular, loss-based and delay-based flows are isolated from each other and served via different queues. Using extensive ns-2 network simulation experiments, we show that our approach is effective in providing fairness between loss-based NewReno and delay-based Vegas flows.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: Empirical results based on a hardware testbed and real trace files show that Virtual Batching can achieve the desired performance with more energy conservation than several well-designed baselines, e.g., 63% more, on average, than a solution based on DVFS only.
Abstract: Many power management strategies have been proposed for enterprise servers based on dynamic voltage and frequency scaling (DVFS), but those solutions cannot further reduce the energy consumption of a server when the server processor is already at the lowest DVFS level and the server utilization is still low (e.g., 5% or lower). To achieve improved energy efficiency, request batching can be conducted to group received requests into batches and put the processor into sleep between the batches. However, it is challenging to perform request batching on a virtualized server because different virtual machines on the same server may have different workload intensities. Hence, putting the shared processor into sleep may severely impact the performance of all the virtual machines. This paper proposes Virtual Batching, a novel request batching solution for virtualized servers with primarily light workloads. Our solution dynamically allocates CPU resources such that all the virtual machines can have approximately the same performance level relative to their allowed peak values. Based on this uniform level, our solution determines the time length for periodically batching incoming requests and putting the processor into sleep. When the workload intensity changes from light to moderate, request batching is automatically switched to DVFS to increase processor frequency for performance guarantees. Empirical results based on a hardware testbed and real trace files show that Virtual Batching can achieve the desired performance with more energy conservation than several well-designed baselines, e.g., 63% more, on average, than a solution based on DVFS only.
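The switch between batching and DVFS described above might be sketched, very loosely and under our own simplifying assumptions (thresholds, gains, and names are all illustrative, not the paper's controller), as follows.

```python
# Our own simplified sketch of the batching/DVFS mode switch and the adaptive
# batching period; all thresholds, gains, and names are illustrative assumptions.
def choose_mode(cpu_utilization, light_threshold=0.10):
    """Batch requests under very light load; fall back to DVFS otherwise."""
    return "batching" if cpu_utilization < light_threshold else "dvfs"

def adapt_batch_period(period, observed_relative_perf, target_relative_perf,
                       gain=0.2, min_period=0.01, max_period=1.0):
    """Shrink the batching (sleep) period when VMs fall below the common
    relative performance level; grow it when there is headroom."""
    error = observed_relative_perf - target_relative_perf  # positive => headroom
    period *= (1.0 + gain * error)
    return max(min_period, min(max_period, period))

print(choose_mode(0.04), adapt_batch_period(0.2, 0.9, 0.8))
```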

Proceedings ArticleDOI
16 Jun 2010
TL;DR: The impact of the consumer's price elasticity on the ISP's optimal revenue is quantified, showing that the ISP should carry out a differentiated QoS protection strategy based on the consumer's price elasticity in order to mitigate the revenue loss.
Abstract: Usage-based pricing has been recognized as a network congestion management tool. Internet Service Providers (ISPs), however, have limited ability to set time-adaptive usage prices to manage congestion arising from time-varying consumer utility for data. To achieve the maximum revenue, an ISP can set its time-invariant usage price low enough to aggressively encourage consumers' traffic demand. The downside is that the ISP has to drop consumers' excessive traffic demand through congestion management (i.e., packet dropping), which may degrade the Quality of Service (QoS) of consumers' traffic. Alternatively, to protect consumers' QoS, the ISP can set its time-invariant usage price high enough to reduce consumers' traffic demand, thus minimizing the need for congestion management through packet dropping. The downside is that the ISP suffers a revenue loss due to the inefficient usage of its network. The tradeoff between the ISP's revenue maximization and consumers' QoS protection motivates us to study the ISP's revenue maximization subject to a QoS constraint in terms of the number of packets dropped. We investigate two different QoS measures: a short-term per-slot packet dropping constraint and a long-term packet dropping constraint. The short-term constraint can be interpreted as a more transparent congestion management practice compared to the long-term constraint. We analyze the ISP's optimal time-invariant pricing for both constraints, and develop an upper bound for the optimal revenue by considering the specified packet dropping threshold. We quantify the impact of consumers' price elasticity on the ISP's optimal revenue and show that the ISP should carry out a differentiated QoS protection strategy based on consumers' price elasticity in order to mitigate the revenue loss.
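To illustrate the structure of the problem (our notation, not necessarily the paper's model), the long-term variant can be written roughly as a revenue maximization over a time-invariant price p subject to a packet-dropping budget; the short-term variant would instead impose the dropping constraint in every slot.

```latex
% Illustrative formulation only; d_t(p) is consumer demand at price p in slot t,
% c is link capacity, (x)^+ = \max(x, 0), and \epsilon is the dropping threshold.
\max_{p \ge 0} \; \mathbb{E}_t\!\left[\, p \cdot \min\{d_t(p),\, c\} \,\right]
\quad \text{s.t.} \quad
\mathbb{E}_t\!\left[\, \big(d_t(p) - c\big)^{+} \,\right] \le \epsilon .
```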

Proceedings ArticleDOI
16 Jun 2010
TL;DR: A novel collaborative sensing paradigm is designed which integrates and supports wireless sensors and mobile phones with different communication standards, and a seamlessly integrated framework is proposed which minimizes the number of wireless sensors deployed while providing high sensing quality and availability to satisfy the application requirements.
Abstract: Wireless sensor networks have been widely deployed to perform sensing constantly at specific locations, but their energy consumption and deployment cost are of great concern. With the popularity and advanced technologies of mobile phones, participatory urban sensing is a rising and promising field which utilizes mobile phones as mobile sensors to collect data, though it is hard to guarantee the sensing quality and availability under the dynamic behaviors and mobility of human beings. Based on the above observations, we suggest that wireless sensors and mobile phones can complement each other to perform collaborative sensing efficiently with satisfactory quality and availability. In this paper, a novel collaborative sensing paradigm which integrates and supports wireless sensors and mobile phones with different communication standards is designed. We propose a seamlessly integrated framework which minimizes the number of wireless sensors deployed, while providing high sensing quality and availability to satisfy the application requirements. The dynamic sensing behaviors and mobility of mobile phone participants make it extremely challenging to estimate their sensing quality and availability, so as to deploy the wireless sensors at the optimal locations to guarantee the sensing performance at a minimum cost. We introduce two mathematical models, a sensing quality evaluation model and a mobility prediction model, to predict the sensing quality and mobility of the mobile phone participants. We further propose a cost-effective sensor deployment algorithm to guarantee the required coverage probability and sensing quality for the system. Extensive simulations with real mobile traces demonstrate that the proposed paradigm can integrate wireless sensors and mobile phones seamlessly for satisfactory sensing quality and availability with a minimized number of sensors.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: A prevalent technology, encryption file systems (e.g., EFS and eCryptfs), has been proposed for solving the storage security problem; with such systems, data are encrypted and decrypted transparently.
Abstract: Current applications often require frequent collaboration among related users, which results in more and more confidential data being shared on application servers. Accordingly, assuring the security of shared data is one of the hot issues. A prevalent technology, referred to as encryption file systems, has been proposed to solve the storage security problem, e.g., EFS [1] and eCryptfs [2]. Encryption file systems are often integrated into the corresponding operating systems and run in kernel mode. With encryption file systems, data are encrypted and decrypted transparently. However, most existing encryption file systems support either no sharing or only file-level sharing, which is impractical and cannot meet operational demands.

Proceedings ArticleDOI
04 Oct 2010
TL;DR: This paper describes how Software Performance Curves can be derived by a service provider that hosts a multi-tenant system and illustrates how it can be used to derive feasible performance guarantees, develop pricing functions, and minimize hardware resources.
Abstract: The upcoming business model of providing software as a service (SaaS) bears a lot of challenges to a service provider. On the one hand, service providers have to guarantee a certain quality of service (QoS) and ensure that they adhere to these guarantees at runtime. On the other hand, they have to minimize the total cost of ownership (TCO) of their IT landscape in order to offer competitive prices. The performance of a system is a critical attribute that affects QoS as well as TCO. However, the evaluation of performance characteristics is a complex task. Many existing solutions do not provide the accuracy required for offering dependable guarantees. One major reason for this is that the dependencies between the usage profile (provided by the service consumers) and the performance of the actual system is barely described sufficiently. Software Performance Curves are performance models that are derived by goal-oriented systematic measurements of the actual software service. In this paper, we describe how Software Performance Curves can be derived by a service provider that hosts a multi-tenant system. Moreover, we illustrate how Software Performance Curves can be used to derive feasible performance guarantees, develop pricing functions, and minimize hardware resources.
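As a small, hedged illustration of the general idea of deriving a performance curve from measurements (the paper's actual curve model and measurement procedure may well differ), one could fit a simple open queueing model to load-test samples; the data points and the model choice below are our own assumptions.

```python
# Illustrative sketch: derive a "performance curve" (mean response time vs.
# arrival rate) from load-test samples by fitting an M/M/1-style model
# R(lambda) = s / (1 - lambda * s). Sample data and model are assumptions.
import numpy as np

rates = np.array([10.0, 20.0, 40.0, 60.0, 80.0])           # requests/s (made-up samples)
measured = np.array([0.012, 0.014, 0.019, 0.028, 0.055])   # mean response time (s)

def response_time(arrival_rate, service_time):
    return service_time / (1.0 - arrival_rate * service_time)

# Simple grid search for the service time minimizing squared error,
# restricted to the stable region s < 1 / max(rates).
candidates = np.linspace(1e-4, 0.99 / rates.max(), 2000)
errors = [np.sum((response_time(rates, s) - measured) ** 2) for s in candidates]
s_hat = candidates[int(np.argmin(errors))]

print("estimated service time:", s_hat)
print("predicted response time at 90 req/s:", response_time(90.0, s_hat))
```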

Proceedings ArticleDOI
16 Jun 2010
TL;DR: The fact that waking up more sensor nodes cannot always help to improve the exploration results of TPGF in a duty-cycle based WSN is revealed, providing meaningful direction for improving the application-requirement based QoS of stream data transmission in duty-cycle based wireless multimedia sensor networks.
Abstract: This paper studies the impact of a duty-cycle based CKN sleep scheduling algorithm on our previously designed TPGF geographical multipath routing algorithm in wireless sensor networks (WSNs). It reveals the fact that waking up more sensor nodes cannot always help to improve the exploration results of TPGF in a duty-cycle based WSN. Furthermore, this study provides meaningful direction for improving the application-requirement based QoS of stream data transmission in duty-cycle based wireless multimedia sensor networks.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: This work proposes to use Stochastic Games and a Markov Decision Process (MDP) to model and analyze optimal peer strategies, in a selfish and a cooperative setting respectively, for a BitTorrent-like system with multiple files, and proposes an enhanced piece selection mechanism for BitTorrent-like systems with dynamic download decision making.
Abstract: BitTorrent-like swarming technologies are very effective for popular content, but less so for the ‘long tail’ of files with disparate popularities, which do not have sufficiently many peers to enable efficient collaboration. Performance degradations are especially pronounced in swarms with reduced file availability. Static bundling groups multiple files into a single content item. It requires no modification to the BitTorrent client, and has been shown to improve the availability of unpopular files in BitTorrent swarms. However, as peers are forced to download undesired file pieces, download times increase, especially for peers downloading popular files. We propose to use Stochastic Games and Markov Decision Processes (MDPs) to model and analyze optimal peer strategies, in a selfish and a cooperative setting respectively, for a BitTorrent-like system with multiple files. Each peer wishes to download a subset of the files, and we allow peers to dynamically decide, given the current system state, whether or not to collaborate with peers targeting a different set of files. The Stochastic Game and MDP models take into account both piece availability and average download times, and allow us to study if and when downloading unwanted content can be beneficial. We use dynamic programming to solve the two models, contrast the level of collaboration observed in the selfish and the cooperative settings, and propose an enhanced piece selection mechanism for BitTorrent-like systems with dynamic download decision making. We demonstrate the effectiveness of dynamic file piece selection through both simulations and experiments using a modified BitTorrent client.
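Since the models are solved with dynamic programming, a generic value iteration sketch may help fix ideas; the toy state/action space and rewards below are placeholders, not the paper's piece-availability state model.

```python
# Generic value iteration for a finite MDP; the paper's states (piece
# availability, download progress) and rewards are not reproduced here.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P[a]: |S|x|S| transition matrix for action a; R[a]: reward vector per state."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * (P[a] @ V) for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # value function and greedy policy
        V = V_new

# Toy 2-state, 2-action example (action 1 = "also fetch unwanted pieces").
P = [np.array([[0.9, 0.1], [0.0, 1.0]]), np.array([[0.6, 0.4], [0.0, 1.0]])]
R = [np.array([0.0, 1.0]), np.array([-0.1, 1.0])]
print(value_iteration(P, R))
```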

Proceedings ArticleDOI
16 Jun 2010
TL;DR: An intelligent price-based congestion control algorithm named IPC is proposed, which acts as an adaptive controller which is able to detect both incipient and current congestion proactively and adaptively under dynamic network conditions.
Abstract: Numerous active queue management (AQM) schemes have been proposed to stabilize the queue length in routers, but most of them lack adequate adaptability to TCP dynamics, due to the nonlinear and time-varying nature of communication networks. To deal with the above problems, we propose an intelligent price-based congestion control algorithm named IPC. IPC measures congestion by using an intelligent price derived from a neural network. To meet the purpose of AQM, we design learning algorithms to optimize the weights of the neural network and the key parameter of IPC automatically. IPC acts as an adaptive controller that is able to detect both incipient and current congestion proactively and adaptively under dynamic network conditions. The simulation results demonstrate that IPC significantly outperforms well-known AQM algorithms in terms of stability, responsiveness and robustness over a wide range of network scenarios.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: The experimental results showed that different streaming approaches are more vulnerable in one network configuration than in others, and that the impact and effectiveness of the attack do not depend on the network size, but do highly depend on the network stability and the bandwidth availability of the polluters and the source.
Abstract: Peer-to-Peer (P2P) live streaming traffic has been growing at a phenomenal rate over the past few years. When the original streaming content is mixed with bogus data, the corresponding P2P streaming network is being subjected to a “pollution attack.” As the content is shared by peers, the bogus data can spread widely in minutes. In this paper, we study the impact of a pollution attack on popular streaming models, under various network settings and configurations. The study was conducted in SPoIM, our emulation of real-world P2P streaming systems under pollution attacks, through which we observed that the feasibility of the attack is sensitive to the speed at which an attacker can modify content. Our experimental results showed that different streaming approaches are more vulnerable in one network configuration than in others, and that the impact and effectiveness of the attack do not depend on the network size, but do highly depend on the network stability and the bandwidth availability of the polluters and the source. Based on the experimental results, we suggested possible improvements in streaming models to defend against the pollution attack. Finally, we examined possible defense mechanisms and demonstrated the effectiveness of a reputation-based defense mechanism against a typical pollution attack.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: In this paper, the effect of model transforms on performance bounds in stochastic network calculus is investigated, and practical guidance is provided for selecting the proper model for a given problem.
Abstract: Stochastic network calculus requires special care in the search for proper stochastic traffic arrival models and stochastic service models. Tradeoffs must be considered between the feasibility of analyzing performance bounds, the usefulness of the performance bounds, and the ease of their numerical calculation. In theory, transforms between different traffic arrival models and between different service models are possible. Nevertheless, the impact of such model transforms on performance bounds has not been thoroughly investigated. This paper investigates the effect of model transforms and provides practical guidance on model selection in stochastic network calculus.

Proceedings ArticleDOI
04 Oct 2010
TL;DR: This work enhances an automated improvement approach to take into account bounds for quality of service in order to focus the search on interesting regions of the objective space, while still allowing trade-offs after the search.
Abstract: Quantitative prediction of non-functional properties, such as performance, reliability, and cost, of software architectures supports systematic software engineering. Even though there usually is a rough idea of bounds for quality of service, the exact required values may be unclear and subject to tradeoffs. Designing architectures that exhibit good tradeoffs among multiple quality attributes is hard. Even with a given functional design, many degrees of freedom in the software architecture (e.g. component deployment or server configuration) span a large design space. Automated approaches search the design space with multi-objective meta-heuristics such as evolutionary algorithms. However, as quality prediction for a single architecture is computationally expensive, these approaches are time consuming. In this work, we enhance an automated improvement approach to take into account bounds for quality of service in order to focus the search on interesting regions of the objective space, while still allowing trade-offs after the search. To validate our approach, we applied it to an architecture model of a component-based business information system. We compared our search to an unbounded search by running the optimization 8 times, each investigating around 800 candidates. The approach decreases the time needed to find good solutions in the interesting regions of the objective space by more than 35% on average.
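A minimal sketch of the bounding idea (restrict the candidate set to those meeting the QoS bounds, then keep only Pareto-optimal ones) is shown below; the objective names and the plain filtering step are illustrative, not the paper's actual search operators.

```python
# Illustrative sketch of bounding a multi-objective architecture search:
# discard candidates violating QoS bounds, then keep the Pareto-optimal ones.
def meets_bounds(candidate, bounds):
    """bounds maps objective name to an upper limit, e.g. {'response_time': 2.0}."""
    return all(candidate[name] <= limit for name, limit in bounds.items())

def dominates(a, b, objectives):
    """a dominates b if it is no worse in all objectives and strictly better in one."""
    return (all(a[o] <= b[o] for o in objectives)
            and any(a[o] < b[o] for o in objectives))

def bounded_pareto_front(candidates, bounds, objectives=("response_time", "cost")):
    feasible = [c for c in candidates if meets_bounds(c, bounds)]
    return [c for c in feasible
            if not any(dominates(other, c, objectives)
                       for other in feasible if other is not c)]

candidates = [{"response_time": 1.2, "cost": 900},
              {"response_time": 0.8, "cost": 1400},
              {"response_time": 2.5, "cost": 500}]
print(bounded_pareto_front(candidates, {"response_time": 2.0, "cost": 1500}))
```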

Proceedings ArticleDOI
16 Jun 2010
TL;DR: It is shown that for any constant ρ > 0, a sufficiently large number of channels ensure that the optimization problems are not ρ-approximable, unless P is equal to NP.
Abstract: We consider wireless telecommunications systems with orthogonal frequency bands, where each band is referred to as a channel, e.g., orthogonal frequency-division multiple access (OFDMA). For a given snap-shot in time, two joint channel assignment and power allocation optimization problems are presented, one in downlink and one in uplink. The objective is to maximize the minimum total Shannon capacity of any mobile user in the system, subject to system constraints. The corresponding decision problems are proved to be NP-hard. We also show that for any constant ρ > 0, a sufficiently large number of channels ensure that the optimization problems are not ρ-approximable, unless P is equal to NP.
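For reference, the max-min objective has roughly the following form; the notation is ours and the exact system constraints in the paper may differ.

```latex
% Illustrative max-min capacity objective; x_{u,c} \in \{0,1\} assigns channel c
% to user u, p_{u,c} is transmit power, g_{u,c} the channel gain, W the channel
% bandwidth, N_0 the noise power spectral density, and P a power budget.
\max_{x,\,p} \; \min_{u} \; \sum_{c} x_{u,c}\, W \log_2\!\left(1 + \frac{p_{u,c}\, g_{u,c}}{N_0 W}\right)
\quad \text{s.t.} \quad \sum_{u} x_{u,c} \le 1 \;\; \forall c, \qquad \sum_{c} p_{u,c} \le P .
```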

Proceedings ArticleDOI
Takashi Isobe, Satoshi Tsutsumi, Koichiro Seto, Kenji Aoshima, Kazutoshi Kariya
16 Jun 2010
TL;DR: To reduce the circuit area in the one-chip architecture, high-efficient processing design was used and a switch downsized by sharing a port to exchange data with multiple blocks decreased the number of wires, and 166 MHz operating frequency required to realize 10 Gbps throughput at 64-bit pipeline was achieved.
Abstract: This paper proposes a one-chip architecture that mounts all processes for TLS/SSL ciphered communication into one FPGA or ASIC, and shows a 10 Gbps implementation of a low-power (23 W) TLS/SSL accelerator on a 65 nm FPGA. The usage of an FPGA/ASIC enables highly efficient processing and low power consumption through parallel, optimized and pipelined processing. The one-chip architecture achieves high throughput by using a switch to avoid congestion when exchanging data between multiple processing blocks. In this research, to reduce the circuit area of the one-chip architecture, a highly efficient processing design (a parallel processing circuit shared by multiple data streams, and a circuit shared between transmitting and receiving) was used. In addition, to enhance the operating frequency, a switch downsized by sharing a port to exchange data with multiple blocks decreased the number of wires. By means of these designs, the circuit area needed to implement all TLS/SSL processes was reduced to less than that of the 65 nm FPGA used in this research, and the 166 MHz operating frequency required to realize 10 Gbps throughput with a 64-bit pipeline was achieved. In an experimental evaluation using a prototype, 23 W power consumption and 10 Gbps encryption throughput were achieved.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: This paper presents VMRPC, a light-weight RPC framework specifically designed for VMs that leverages heap and stack sharing to circumvent unnecessary data copying and serialization/deserialization, and achieve high performance.
Abstract: Despite advances in high performance inter-domain communication for virtual machines (VMs), data intensive applications developed for VMs based on the traditional remote procedure call (RPC) mechanism still suffer from performance degradation due to the inherent inefficiency of data serialization/deserialization. This paper presents VMRPC, a light-weight RPC framework specifically designed for VMs that leverages heap and stack sharing to circumvent unnecessary data copying and serialization/deserialization, and achieve high performance. Our evaluation shows that the performance of VMRPC is an order of magnitude better than that of traditional RPC systems and existing alternative inter-domain communication mechanisms. We adopt VMRPC in a real system, and the experiment results show that the performance of VMRPC is even competitive with the native environment.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: This paper presents a Feedback QoS based Model, called FQM, to successfully achieve power reduction without performance degradation, and demonstrates its effectiveness with different applications in terms of power consumption, QoS, and performance.
Abstract: Energy efficiency is essential to battery-powered (BP) mobile systems. However, existing energy efficiency techniques suffer from imbalance between system performance and power consumption. This paper presents a Feedback QoS based Model, called FQM, to successfully achieve power reduction without performance degradation. By observing system behavior via control variables, FQM applies pre-estimated policies to monitor and schedule I/O activities. We implement a prototype of FQM under Linux kernel and evaluate its effectiveness with different applications in terms of power consumption, QoS, and performance. Our experimental results show that FQM can effectively save energy while maintaining high QoS stability.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: Simulations and experiments on a real multi-core computing system show that the power potential of the system can be deeply explored while still providing QoS guarantees, that the performance degradation is acceptable, and that fine-grained job-level power-aware scheduling can achieve better power/performance balancing between multiple processors or cores than coarse-grained methods.
Abstract: As the scale of computing systems increases, power consumption has become a major challenge to system performance, reliability and IT management costs. Specifically, system performance and reliability, described by various Quality of Service (QoS) metrics, cannot be guaranteed if the objective is solely to minimize total power consumption regardless of QoS violations. Various methods have been developed to control power consumption and avoid system failures and thermal emergencies through coarse-grained designs. However, the existing methods can be improved, and more power can be saved, if fine-grained job-level adaptation is integrated into them. In this paper a feedback control based power-aware job scheduling algorithm is proposed to minimize power consumption in computing systems and to provide QoS guarantees. In the proposed algorithm, jobs are scheduled according to the real-time and historical power consumption as well as the QoS requirements. Simulations and experiments on a real multi-core computing system show that the power potential of the system can be deeply explored while still providing QoS guarantees, and that the performance degradation is acceptable. The experiment results also show that fine-grained job-level power-aware scheduling can achieve better power/performance balancing between multiple processors or cores than coarse-grained methods.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: This paper presents results from an extensive measurement study of various hardware and (virtualized) software routers using several queueing strategies, i.e., First-Come-First-Served and Fair Queueing, and proposes an interpretation of router performance based on estimated rate and error-term parameters, taking packet queueing and scheduling into account.
Abstract: In this paper we present results from an extensive measurement study of various hardware and (virtualized) software routers using several queueing strategies, i.e. First-Come-First-Served and Fair Queueing. In addition to well-known metrics such as packet forwarding performance, per packet processing time, and jitter, we apply network calculus models for performance analysis. This includes the Guaranteed Rate model for Integrated Services as well as the Packet Scale Rate Guarantee model for Differentiated Services. Using a measurement approach that provides a means to estimate rate and error term of a real node, we propose an interpretation of router performance based on these parameters taking packet queueing and scheduling into account. Such estimated parameters should be used to make the analysis of real networks more accurate. We underpin the applicability of this approach by comparing analytical results of concatenated routers to real world measurements.
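For context, one standard statement of the Guaranteed Rate model with rate r and error term E (the two parameters whose estimation is mentioned above) is recalled below; the notation follows the common GR literature rather than this paper's measurement setup.

```latex
% Guaranteed Rate (GR) model: virtual finish times and a per-packet departure bound.
% A(p_i): arrival time of packet p_i, \ell_i: its length, D(p_i): its departure time.
GRC(p_1) = A(p_1) + \frac{\ell_1}{r}, \qquad
GRC(p_i) = \max\{A(p_i),\, GRC(p_{i-1})\} + \frac{\ell_i}{r} \;\; (i > 1), \qquad
D(p_i) \le GRC(p_i) + E .
```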

Proceedings ArticleDOI
16 Jun 2010
TL;DR: This work proposes a novel cross-layer design with a smart traffic split scheme, namely, Path Diversified Retransmission (PDR), which differentiates the original data packets and the retransmitted packets, and works with a novel QoS-aware multi-path routing protocol, QAOMDV, to distribute them separately.
Abstract: Path diversity exploits multiple routes simultaneously, achieving higher aggregated bandwidth and potentially decreasing delay and packet loss. Unfortunately, for TCP, naive load splitting often results in inaccurate estimation of round trip time (RTT) and packet reordering. As a result, it can suffer from significant instability or even throughput reduction. This is particularly severe in Wireless Mesh Networks (WMNs), as validated by our analysis and simulation. To make multi-path TCP viable over WMNs, we propose a novel cross-layer design with a smart traffic split scheme, namely, Path Diversified Retransmission (PDR). PDR differentiates the original data packets and the retransmitted packets, and works with a novel QoS-aware multi-path routing protocol, QAOMDV, to distribute them separately. PDR does not suffer from RTT underestimation or extra packet reordering, which ensures stable throughput improvement over single-path routing. Through extensive simulations, we further demonstrate that, compared to state-of-the-art multi-path protocols, our PDR with QAOMDV noticeably enhances TCP throughput and reduces bandwidth fluctuation, with no obvious impact on fairness.

Proceedings ArticleDOI
16 Jun 2010
TL;DR: A lightweight hash-based algorithm called HCF (Hashed Credits Fair) is introduced to solve problems at the switch level while being transparent to the end users and it is shown that it can be readily implemented in data center switches with O(1) complexity and negligible overhead.
Abstract: Data center switches need to satisfy stringent low-delay and high-capacity requirements. To do so, they rely on small switch buffers. However, in case of congestion, data center switches can incur throughput collapse for short TCP flows as well as temporary starvation for long TCP flows. In this paper, we introduce a lightweight hash-based algorithm called HCF (Hashed Credits Fair) to solve these problems at the switch level while being transparent to the end users. We show that it can be readily implemented in data center switches with O(1) complexity and negligible overhead. We illustrate using simulations how HCF mitigates the throughput collapse of short flows. We also show how HCF reduces unfairness and starvation for long-lived TCP flows as well as for short TCP flows, yet maximizes the utilization of the congested link. Last, even though HCF can store packets of the same flow in different queues, we also prove that it prevents packet reordering.
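As a loose illustration of the hashing idea only (the credit policy below is a plain deficit-round-robin stand-in, not HCF's actual mechanism), flows can be mapped onto a fixed, small set of credit-managed queues in O(1) as sketched here; all names and constants are assumptions.

```python
# Loose illustration of hashing flows onto a small set of credit-managed queues;
# the credit policy is a deficit-round-robin stand-in, not HCF's mechanism.
import zlib
from collections import deque

NUM_QUEUES = 8
QUANTUM = 1500  # credits (bytes) granted to each backlogged queue per round

queues = [deque() for _ in range(NUM_QUEUES)]
credits = [0] * NUM_QUEUES

def enqueue(flow_id: str, packet_size: int):
    """O(1) mapping of a flow to a queue via a hash of its identifier."""
    idx = zlib.crc32(flow_id.encode()) % NUM_QUEUES
    queues[idx].append((flow_id, packet_size))

def service_round():
    """One scheduling round: grant each backlogged queue a quantum of credits
    and transmit the head packets that fit, keeping per-queue state O(1)."""
    sent = []
    for idx, q in enumerate(queues):
        if not q:
            credits[idx] = 0
            continue
        credits[idx] += QUANTUM
        while q and q[0][1] <= credits[idx]:
            flow_id, size = q.popleft()
            credits[idx] -= size
            sent.append((flow_id, size))
    return sent

enqueue("10.0.0.1:5001->10.0.0.2:80", 1460)
enqueue("10.0.0.3:5002->10.0.0.2:80", 512)
print(service_round())
```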