
Showing papers on "Network management published in 2010"


Journal ArticleDOI
TL;DR: In this paper, the authors address the question of whether managerial strategies matter for outcomes and also explore which types of strategies have an effect on outcomes and find that the number of employed network management strategies has a strong effect on perceived outcomes.
Abstract: There is a large amount of literature and research on network management strategies. However, only a limited portion of this literature examines the relationship between network management strategies and outcomes (for an exception, see Meier and O’Toole 2001). Most of the research focuses on managerial activity or networking rather than on the question of which types of strategies matter most for the outcomes of complex processes in networks. This paper addresses the question of whether managerial strategies matter for outcomes and also explores which types of strategies have an effect on outcomes. The research is based on a survey sent to respondents involved in environmental projects in The Netherlands. The findings show that the number of employed network management strategies has a strong effect on perceived outcomes. Only a few differences were found in the effects of the four constructed types of network management strategies: exploring content, connecting, arranging, and process agreements.

384 citations


Journal ArticleDOI
TL;DR: In this article, a Web-based survey of respondents involved in environmental projects showed that trust does matter for perceived outcomes and that active network management strategies enhance the level of trust in networks.
Abstract: Governance networks are characterized by complex interaction and decision making, and much uncertainty. Surprisingly, there is very little research on the impact of trust in achieving results in governance networks. This article asks two questions: (a) Does trust influence the outcomes of environmental projects? and (b) Does active network management improve the level of trust in networks? The study is based on a Web-based survey of respondents involved in environmental projects. The results indicate that trust does matter for perceived outcomes and that network management strategies enhance the level of trust.

304 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: This work is the first to accurately infer, for any UMTS network, the state machine that guides the radio resource allocation policy through a light-weight probing scheme, and explores the optimal state machine settings in terms of several critical timer values evaluated using real network traces.
Abstract: 3G cellular data networks have recently witnessed explosive growth. In this work, we focus on UMTS, one of the most popular 3G mobile communication technologies. Our work is the first to accurately infer, for any UMTS network, the state machine (both transitions and timer values) that guides the radio resource allocation policy through a light-weight probing scheme. We systematically characterize the impact of operational state machine settings by analyzing traces collected from a commercial UMTS network, and pinpoint the inefficiencies caused by the interplay between smartphone applications and the state machine behavior. Besides basic characterizations, we explore the optimal state machine settings in terms of several critical timer values evaluated using real network traces. Our findings suggest that the fundamental limitation of the current state machine design is its static nature of treating all traffic according to the same inactivity timers, making it difficult to balance tradeoffs among radio resource usage efficiency, network management overhead, device radio energy consumption, and performance. To the best of our knowledge, our work is the first empirical study that employs real cellular traces to investigate the optimality of UMTS state machine configurations. Our analysis also demonstrates that traffic patterns have a significant impact on radio resource and energy consumption. In particular, we propose a simple improvement that reduces YouTube streaming energy by 80% by leveraging an existing feature called fast dormancy supported by the 3GPP specifications.
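The tail-timer inefficiency this abstract describes can be illustrated with a toy energy model. The sketch below is not taken from the paper's traces; the power levels, timer values, and traffic trace are invented for illustration. It shows why a long inactivity timer wastes radio energy on bursty traffic, and why releasing the channel early (as fast dormancy does) helps.

```python
# Toy model: after each packet the radio stays in the high-power state for a
# fixed "tail" (the inactivity timer), then drops to idle. All numbers are
# illustrative assumptions, not values measured in the paper.

def radio_energy(packet_times, tail_timer, high_power=800, idle_power=10):
    """Total energy (mW*s) for a trace of packet timestamps under one tail timer."""
    if not packet_times:
        return 0.0
    energy = 0.0
    last = packet_times[0]
    for t in packet_times[1:]:
        gap = t - last
        # High-power state for min(gap, tail_timer), idle for the remainder.
        energy += min(gap, tail_timer) * high_power
        energy += max(gap - tail_timer, 0) * idle_power
        last = t
    energy += tail_timer * high_power      # tail after the final packet
    return energy

trace = [0, 1, 2, 30]                      # a burst followed by a long gap
long_tail = radio_energy(trace, tail_timer=10)
fast_dormancy = radio_energy(trace, tail_timer=1)
print(long_tail > fast_dormancy)           # shorter tail saves energy here
```

The same model also shows the tradeoff the paper emphasizes: shrinking the timer saves energy but forces more state promotions (signaling overhead) when the next packet arrives, which is why a single static timer cannot suit all traffic.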

299 citations


Proceedings ArticleDOI
30 Nov 2010
TL;DR: Extensive simulations based on both random topologies and real network topologies of a physical testbed demonstrate that C-LLF is highly effective in meeting end-to-end deadlines in WirelessHART networks, and significantly outperforms common real-time scheduling policies.
Abstract: WirelessHART is an open wireless sensor-actuator network standard for industrial process monitoring and control that requires real-time data communication between sensor and actuator devices. Salient features of a WirelessHART network include a centralized network management architecture, multi-channel TDMA transmission, redundant routes, and avoidance of spatial reuse of channels for enhanced reliability and real-time performance. This paper makes several key contributions to real-time transmission scheduling in WirelessHART networks: (1) formulation of the end-to-end real-time transmission scheduling problem based on the characteristics of WirelessHART, (2) proof of NP-hardness of the problem, (3) an optimal branch-and-bound scheduling algorithm based on a necessary condition for schedulability, and (4) an efficient and practical heuristic-based scheduling algorithm called Conflict-aware Least Laxity First (C-LLF). Extensive simulations based on both random topologies and real network topologies of a physical testbed demonstrate that C-LLF is highly effective in meeting end-to-end deadlines in WirelessHART networks, and significantly outperforms common real-time scheduling policies.
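The "least laxity first" idea at the core of C-LLF can be sketched in a few lines. This is an illustrative fragment, not the paper's algorithm: the real C-LLF heuristic also resolves transmission conflicts on shared WirelessHART links and channels, which this sketch omits, and the flow fields are assumptions.

```python
# Laxity = slack before a flow's deadline; the flow with the least laxity is
# the most urgent. A conflict-unaware least-laxity-first picker, for
# illustration only.

def laxity(flow, now):
    """Slack remaining: deadline minus current time minus remaining slots."""
    return flow["deadline"] - now - flow["remaining"]

def pick_next(flows, now):
    """Schedule the ready flow with the least laxity."""
    ready = [f for f in flows if f["remaining"] > 0]
    return min(ready, key=lambda f: laxity(f, now)) if ready else None

flows = [
    {"id": "A", "deadline": 10, "remaining": 2},   # laxity 8 at t=0
    {"id": "B", "deadline": 6,  "remaining": 3},   # laxity 3 at t=0
    {"id": "C", "deadline": 12, "remaining": 1},   # laxity 11 at t=0
]
print(pick_next(flows, now=0)["id"])  # B: least laxity, most urgent
```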

276 citations


04 Dec 2010
TL;DR: This paper proposes Maestro which keeps the simple programming model for programmers, and exploits parallelism in every corner together with additional throughput optimization techniques, and experimentally shows that the throughput of Maestro can achieve near linear scalability on an eight core server machine.
Abstract: The fundamental feature of an OpenFlow network is that the controller is responsible for the initial establishment of every flow by contacting the related switches. Thus the performance of the controller can become a bottleneck. This paper shows how this fundamental problem is addressed by parallelism. The state-of-the-art OpenFlow controller, called NOX, achieves a simple programming model for control function development by using a single-threaded event loop, but it has not considered exploiting parallelism. We propose Maestro, which keeps the simple programming model for programmers while exploiting parallelism throughout, together with additional throughput optimization techniques. We experimentally show that the throughput of Maestro achieves near-linear scalability on an eight-core server machine.
Keywords: OpenFlow, network management, multithreading, performance optimization

263 citations


Journal ArticleDOI
TL;DR: A flexible wireless smart sensor framework for full-scale, autonomous SHM that integrates the necessary software and hardware while addressing key implementation requirements is developed and validated on a full-scale cable-stayed bridge in South Korea.
Abstract: Wireless smart sensors enable new approaches to improve structural health monitoring (SHM) practices through the use of distributed data processing. Such an approach is scalable to the large number of sensor nodes required for high-fidelity modal analysis and damage detection. While much of the technology associated with smart sensors has been available for nearly a decade, there have been few full-scale implementations due to the lack of critical hardware and software elements. This research develops a flexible wireless smart sensor framework for full-scale, autonomous SHM that integrates the necessary software and hardware while addressing key implementation requirements. The Imote2 smart sensor platform is employed, providing the computation and communication resources that support demanding sensor network applications such as SHM of civil infrastructure. A multi-metric Imote2 sensor board with onboard signal processing specifically designed for SHM applications has been designed and validated. The framework software is based on a service-oriented architecture that is modular, reusable and extensible, thus allowing engineers to more readily realize the potential of smart sensor technology. Flexible network management software combines a sleep/wake cycle for enhanced power efficiency with threshold detection for triggering network-wide operations such as synchronized sensing or decentralized modal analysis. The framework developed in this research has been validated on a full-scale cable-stayed bridge in South Korea.

235 citations


Journal ArticleDOI
TL;DR: The dependency of traffic classification performance on the amount and composition of training data is investigated, followed by experiments showing that ML algorithms such as Bayesian Networks and Decision Trees are suitable for high-speed Internet traffic flow classification and prove robust with respect to applications that dynamically change their source ports.
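As a toy illustration of port-independent classification, the fragment below labels a flow from behavioural features rather than its source port. The thresholds and class labels are invented for the sketch; the work summarized above learns such rules with Bayesian Network and Decision Tree algorithms rather than hand-writing them.

```python
# Classify a flow from behaviour (mean packet size, duration) instead of
# ports, which applications can change dynamically. The rule below is a
# hand-written stand-in for one path through a learned decision tree.

def flow_features(packet_sizes, duration):
    """Port-free features computed from a flow's packets."""
    return {"mean_size": sum(packet_sizes) / len(packet_sizes),
            "duration": duration}

def classify(features):
    # Illustrative thresholds only; a real classifier learns these from data.
    if features["mean_size"] > 800:        # large packets suggest bulk transfer
        return "bulk"
    if features["duration"] < 1.0:         # short flows of small packets
        return "interactive"
    return "streaming"

f = flow_features([1400, 1400, 1200], duration=30.0)
print(classify(f))  # bulk
```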

162 citations


Proceedings ArticleDOI
06 Jun 2010
TL;DR: The design and implementation of ExSPAN is presented, a generic and extensible framework that achieves efficient network provenance in a distributed environment and demonstrates that the system supports a wide range of distributed provenance computations efficiently, resulting in significant reductions in bandwidth costs compared to traditional approaches.
Abstract: Network accountability, forensic analysis, and failure diagnosis are becoming increasingly important for network management and security. Such capabilities often utilize network provenance - the ability to issue queries over network meta-data. For example, network provenance may be used to trace the path a message traverses on the network as well as to determine how message data were derived and which parties were involved in its derivation. This paper presents the design and implementation of ExSPAN, a generic and extensible framework that achieves efficient network provenance in a distributed environment. We utilize the database notion of data provenance to "explain" the existence of any network state, providing a versatile mechanism for network provenance. To achieve such flexibility at Internet-scale, ExSPAN uses declarative networking in which network protocols can be modeled as continuous queries over distributed streams and specified concisely in a declarative query language. We extend existing data models for provenance developed in database literature to enable distribution at Internet-scale, and investigate numerous optimization techniques to maintain and query distributed network provenance efficiently. The ExSPAN prototype is developed using RapidNet, a declarative networking platform based on the emerging ns-3 toolkit. Experiments over a simulated network and an actual deployment in a testbed environment demonstrate that our system supports a wide range of distributed provenance computations efficiently, resulting in significant reductions in bandwidth costs compared to traditional approaches.
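The underlying notion of data provenance, recording for each derived network state which base facts produced it, can be sketched minimally. The relation names and the two-hop-route rule below are invented for illustration and are not ExSPAN's actual rules or its declarative language.

```python
# Base facts (observed links) carry their own provenance; derived facts
# (two-hop routes) record the union of the base facts that produced them,
# so "why does route (a, c) exist?" has a concrete answer.

links = {("a", "b"), ("b", "c")}
provenance = {("link", l): {l} for l in links}

routes = {}
for (x, y1) in links:
    for (y2, z) in links:
        if y1 == y2 and x != z:
            # Provenance of the derived route = contributing link facts.
            routes[(x, z)] = (provenance[("link", (x, y1))]
                              | provenance[("link", (y2, z))])

print(sorted(routes[("a", "c")]))  # [('a', 'b'), ('b', 'c')]
```

In a declarative-networking setting this bookkeeping piggybacks on the rule evaluation itself, which is what lets ExSPAN maintain and query provenance distributedly rather than in one central log.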

150 citations


Journal ArticleDOI
TL;DR: In this paper, the authors use qualitative evidence from action networks to study how leaders of successful networks manage collaboration challenges to make things happen.
Abstract: Qualitative evidence from action networks is used to answer the research question, How do leaders of successful networks manage collaboration challenges to make things happen? This study of two urb...

131 citations


Proceedings ArticleDOI
26 Feb 2010
TL;DR: This paper introduces a novel approach that uses the Auto-Regressive Integrated Moving Average (ARIMA) technique to detect potential attacks that may occur in the network; with sufficient development, an automated defensive solution can be achieved.
Abstract: An early warning system on potential attacks from networks would enable network administrators, or even automated network management software, to take preventive measures. This is needed as we move towards maximizing the utilization of the network with new paradigms such as Web Services and Software as a Service. This paper introduces a novel approach that uses the Auto-Regressive Integrated Moving Average (ARIMA) technique to detect potential attacks that may occur in the network. The solution provides feedback through its predictive capabilities and hence serves as an early warning system. Given the affirmative results, this technique can serve beyond the detection of Denial of Service (DoS) attacks and, with sufficient development, an automated defensive solution can be achieved.
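The forecasting-based detection idea can be sketched with a stand-in predictor. The fragment below substitutes a simple moving-average forecast for the paper's full ARIMA model, and the window and threshold values are illustrative: a sample that deviates far from its forecast is flagged as a potential attack.

```python
# Flag samples whose deviation from a moving-average forecast exceeds a
# multiple of the recent mean absolute deviation. A stand-in for ARIMA
# forecasting, used here only to illustrate residual-based detection.

def detect_anomalies(series, window=5, threshold=3.0):
    """Return indices where the residual exceeds threshold * recent MAD."""
    alerts = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        forecast = sum(hist) / window
        mad = sum(abs(x - forecast) for x in hist) / window or 1.0
        if abs(series[i] - forecast) > threshold * mad:
            alerts.append(i)
    return alerts

# Steady traffic with a sudden flood at index 10 (e.g. a DoS burst).
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 101, 100, 900]
print(detect_anomalies(traffic))  # [10]
```

The predictive element is what makes this an early-warning scheme rather than a post-hoc one: the model forecasts the next sample before it arrives, so an alert can fire on the first anomalous observation.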

126 citations


Journal ArticleDOI
TL;DR: An energy-efficient distributed clustering protocol for wireless sensor networks is proposed, based on a metric characterizing the significance of a node with respect to its contribution in relaying messages; experimental results attest that the protocol improves network longevity.

Patent
10 Jun 2010
TL;DR: In this article, the authors propose network management systems and methods that provide substantially real-time management and control of multimedia streaming traffic in wireless networks and clusters of independent networks, built on Heterogeneous Sensor Entities (HSEs) and a Sensor Service Management (SSM) system.
Abstract: The invention is directed to network management systems and methods that provide substantially real-time network management and control capabilities for multimedia streaming traffic in telecommunications networks. The invention provides pre-emptive and autonomous network management and control capabilities, and may include the shared intelligence of embedded systems: Heterogeneous Sensor Entities (HSEs) and the Sensor Service Management (SSM) system. HSEs are distributed real-time embedded systems provisioned in various network elements. HSEs perform fault, configuration, accounting, performance, and security network management functions in real time, as well as real-time activations and removals of network management controls. SSM facilitates automated decision making, rapid deployment of HSEs, and real-time provisioning of network management and control services. The service communication framework among the various HSEs and the SSM is provided by the Heterogeneous Service Creation system. The proposed procedure provides real-time network management and control capabilities for multimedia traffic in wireless networks and clusters of independent networks.

Book
07 Dec 2010
Abstract: Optical WDM networking technology is spearheading a bandwidth revolution in the networking infrastructure being developed for the next generation Internet. Rapid advances in optical components have enabled the transition from point-to-point WDM links to all-optical networking. Optical WDM Networks: Principles and Practice presents some of the most important challenges facing the optical networking community, along with some suggested solutions. Earlier textbooks in optical networking have a narrower perspective, and rapidly advancing research has created the need for fresh and current information on problems and issues in the field. The volume editors and contributing authors have endeavoured to capture a substantial subset of the key problems and known solutions to these problems. All of the chapters are original contributions from leading international researchers. The chapters address a wide variety of topics, including the state of the art in WDM technology, physical components that make up WDM fiber-optic networks, medium access protocols, wavelength routed networks, optical access networks, network management, and performance evaluation of wavelength routing networks. The chapters also survey critical points in past research and tackle more recent problems. Practitioners and network product engineers interested in current state-of-the-art information beyond textbook-type coverage, and graduate students commencing research in this area, will appreciate the concise - and pertinent - information presented herein.

Posted Content
01 Jan 2010
TL;DR: In this book, Barbara van Schewick explores the economic consequences of Internet architecture, offering a detailed analysis of how it affects the economic environment for innovation, and concludes that the original architecture of the Internet fostered application innovation.
Abstract: The Internet's remarkable growth has been fueled by innovation. New applications continually enable new ways of using the Internet, and new physical networking technologies increase the range of networks over which the Internet can run. Questions about the relationship between innovation and the Internet's architecture have shaped the debates over open access to broadband networks, network neutrality, nondiscriminatory network management, and future Internet architecture. In Internet Architecture and Innovation, Barbara van Schewick explores the economic consequences of Internet architecture, offering a detailed analysis of how it affects the economic environment for innovation. Van Schewick describes the design principles on which the Internet's original architecture was based—modularity, layering, and the end-to-end arguments—and shows how they shaped the original architecture of the Internet. She analyzes in detail how the original Internet architecture affected innovation—in particular, the development of new applications—and how changing the architecture would affect this kind of innovation. Van Schewick concludes that the original architecture of the Internet fostered application innovation. Current changes that deviate from the Internet's original design principles reduce the amount and quality of application innovation, limit users' ability to use the Internet as they see fit, and threaten the Internet's ability to realize its economic, social, cultural, and political potential. If left to themselves, network providers will continue to change the internal structure of the Internet in ways that are good for them but not necessarily for the rest of us. Government intervention may be needed to save the social benefits associated with the Internet's original design principles.

Patent
19 Jan 2010
TL;DR: In this article, a power-saving network management server is connected to a network system provided with a network device and manages the state of power to the network device, wherein work servers are connected in the network system, and the power saving server updates network configuration information and work allocation information according to received neighboring device information and information relating to allocation of work to the work servers.
Abstract: A power-saving network management server connected to a network system provided with a network device and managing the state of power to the network device, wherein work servers are connected in the network system, and the power-saving network management server updates network configuration information and work allocation information according to received neighboring device information and information relating to allocation of work to the work servers, determines whether to start or halt supply of power to ports on the network device according to the updated network configuration information and work allocation information, and controls the supply of power to those ports on the network device according to the port determination results.

Book ChapterDOI
01 Jan 2010
TL;DR: A layered peer-to-peer Cloud provisioning architecture is presented, along with the design and implementation of a novel, extensible software fabric (Cloud peer) that combines public/private clouds, overlay networking, and structured peer-to-peer indexing techniques; an experimental evaluation demonstrates the feasibility of building next-generation Cloud provisioning systems based on peer-to-peer network management and information dissemination models.
Abstract: Clouds have evolved as the next-generation platform that facilitates creation of wide-area on-demand renting of computing or storage services for hosting application services that experience highly variable workloads and require high availability and performance. Interconnecting Cloud computing system components (servers, virtual machines (VMs), application services) through a peer-to-peer routing and information dissemination structure is essential to avoid the problems of provisioning efficiency bottleneck and single point of failure that are predominantly associated with traditional centralized or hierarchical approaches. These limitations can be overcome by connecting Cloud system components using a structured peer-to-peer network model (such as distributed hash tables (DHTs)). DHTs offer deterministic information/query routing and discovery with close to logarithmic bounds as regards network message complexity. By maintaining a small routing state of O(log n) per VM, a DHT structure can guarantee deterministic look-ups in a completely decentralized and distributed manner. This chapter presents: (i) a layered peer-to-peer Cloud provisioning architecture; (ii) a summary of the current state-of-the-art in Cloud provisioning with particular emphasis on service discovery and load-balancing; (iii) a classification of the existing peer-to-peer network management model with focus on extending the DHTs for indexing and managing complex provisioning information; and (iv) the design and implementation of novel, extensible software fabric (Cloud peer) that combines public/private clouds, overlay networking, and structured peer-to-peer indexing techniques for supporting scalable and self-managing service discovery and load-balancing in Cloud computing environments.
Finally, an experimental evaluation is presented that demonstrates the feasibility of building next-generation Cloud provisioning systems based on peer-to-peer network management and information dissemination models. The experimental test-bed has been deployed on a public cloud computing platform, Amazon EC2, which demonstrates the effectiveness of the proposed peer-to-peer Cloud provisioning software fabric.
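The O(log n) look-up bound mentioned above comes from the way a Chord-style DHT halves the remaining identifier distance at each hop. The sketch below counts greedy finger-table hops on an idealized, fully populated ring; the ring size and identifiers are illustrative and the fragment omits real DHT concerns such as node joins, failures, and successor lists.

```python
# Greedy Chord-style routing on a fully populated identifier ring: each hop
# follows the longest "finger" (power-of-two jump) that does not overshoot
# the key, so each hop clears one bit of the remaining distance.

import math

def chord_hops(src, key, ring_bits=16):
    """Count greedy finger-table hops from src to key on a 2**ring_bits ring."""
    size = 2 ** ring_bits
    hops, cur = 0, src
    while cur != key:
        dist = (key - cur) % size
        # Largest power of two not exceeding the remaining distance.
        cur = (cur + 2 ** int(math.log2(dist))) % size
        hops += 1
    return hops

# The worst case touches every bit of the identifier: ring_bits hops.
print(chord_hops(0, 2 ** 16 - 1) <= 16)  # at most log2(ring size) hops
```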

Journal ArticleDOI
TL;DR: A performance evaluation of Pathload, Pathchirp, Spruce, IGI, and Abing in a low cost and flexible test bed demonstrates that ABETTs are far from being ready to be applied in all these applications and scenarios.

Patent
13 Aug 2010
TL;DR: In this paper, the authors described a system and methods for an apparatus comprising a modem component, a wireless communications component, at least one processor, and at least a tangible electronic memory storing data and numerous computer-executable modules to enable wireless hotspots with multiple network identifiers.
Abstract: In accordance with various aspects of the disclosure, systems and methods are illustrated for an apparatus comprising a modem component, a wireless communications component, at least one processor, and at least one tangible electronic memory storing data and numerous computer-executable modules to enable wireless hotspots with multiple network identifiers. Examples of the computer-executable modules include, but are not limited to, an input module, network identifier module, session management module, network management module, automatic location management module, authentication module, bandwidth negotiation module, billing interface module, and activity-based location module.

Journal ArticleDOI
TL;DR: This work introduces BIANA (Biologic Interactions and Network Analysis), a tool for biological information integration and network management that solves many of the nomenclature issues common to systems dealing with biological data.
Abstract: Background: The analysis and usage of biological data is hindered by the spread of information across multiple repositories and the difficulties posed by different nomenclature systems and storage formats. In particular, there is an important need for data unification in the study and use of protein-protein interactions. Without good integration strategies, it is difficult to analyze the whole set of available data and its properties. Results: We introduce BIANA (Biologic Interactions and Network Analysis), a tool for biological information integration and network management. BIANA is a Python framework designed to achieve two major goals: i) the integration of multiple sources of biological information, including biological entities and their relationships, and ii) the management of biological information as a network where entities are nodes and relationships are edges. Moreover, BIANA uses properties of proteins and genes to infer latent biomolecular relationships by transferring edges to entities sharing similar properties. BIANA is also provided as a plugin for Cytoscape, which allows users to visualize and interactively manage the data. A web interface to BIANA providing basic functionalities is also available. The software can be downloaded under GNU GPL license from http://sbi.imim.es/web/BIANA.php. Conclusions: BIANA’s approach to data unification solves many of the nomenclature issues common to systems dealing with biological data. BIANA can easily be extended to handle new specific data repositories and new specific data types. The unification protocol allows BIANA to be a flexible tool suitable for different user requirements: non-expert users can use a suggested unification protocol while expert users can define their own specific unification rules.

Book
30 Apr 2010
TL;DR: This book provides practical solutions that integrate network management and QoS strategies for real-world application and offers fundamental knowledge on the subject.
Abstract: Intelligent Quality of Service Technologies and Network Management: Models for Enhancing Communication explores the interrelated natures of network control mechanisms and QoS and offers fundamental knowledge on the subject, describing the significance of network management and the integration of knowledge to demonstrate how network management is related to QoS in real applications. Engaging and innovative, this book provides practical solutions that integrate network management and QoS strategies for real-world application.

Journal ArticleDOI
TL;DR: In this article, a logistic regression model is constructed to explain the probability of network formation; five major groups of explanatory variables are included: institutional, programmatic, managerial, political, and socioeconomic.
Abstract: There is a small, although well-established body of literature examining network performance and accountability. In addition, there are relatively few studies which examine potential factors for determining network formation. The current study provides a systematic analysis of network formation determinants. A logistic regression model is constructed to explain the probability of network formation; five major groups of explanatory variables are included: institutional, programmatic, managerial, political, and socioeconomic. Data for this study were collected between 2003 and 2005 from 411 programs at the subnational governance level of Thailand as part of a larger study on the management of local governments. The analysis shows that the most significant variables in determining network formation include the nature of the programs and management capacity. Local political climate also has a significant effect on network formation, but only indirectly. The study also reveals that collaborations in educational and cultural promotion programs are still restricted, which differs from the experiences in many developed countries. This study illustrates the importance of programmatic, managerial, and political contexts that administrators may consider when forming networks.

Proceedings ArticleDOI
30 Nov 2010
TL;DR: Coolaid is a system under which the domain knowledge of device vendors and service providers is formally captured by a declarative language; through efficient and powerful rule-based reasoning on top of a database-like abstraction over a network of devices, it enables new network-wide management primitives.
Abstract: Network management and operations are complicated, tedious, and error-prone, requiring significant human involvement and domain knowledge. As the complexity involved inevitably grows due to larger scale networks and more complex protocol features, human operators are increasingly short-handed, despite the best effort from existing support systems to make it otherwise. This paper presents coolaid, a system under which the domain knowledge of device vendors and service providers is formally captured by a declarative language. Through efficient and powerful rule-based reasoning on top of a database-like abstraction over a network of devices, coolaid enables new management primitives to perform network-wide reasoning, prevent misconfiguration, and automate network configuration, while requiring minimum operator effort. We describe the design and prototype implementation of coolaid, and demonstrate its effectiveness and scalability through various realistic network management tasks.

Patent
09 Feb 2010
TL;DR: In this article, a network management node may manage a network of base stations and wireless transmit/receive units (WTRUs) that operate using diverse radio access technologies, such as VHF or UHF spectrum.
Abstract: A network management node may manage a network of base stations and wireless transmit/receive units (WTRUs) that operate using diverse radio access technologies. The network management node may communicate with other network management nodes to manage spectrum usage across their respective managed networks. The network management node may act as a proxy for cellular-capable WTRUs that operate within the managed networks. The network management node may perform handovers of Peer-to-Peer (P2P) groups that operate within the managed networks. The WTRUs may include WTRUs that operate at Very High Frequency (VHF) or Ultra High Frequency (UHF) spectrum ("white space") frequencies.

01 Jan 2010
TL;DR: A comparative study of four different professional and clinical network types in health care is proposed to derive empirical and theoretical findings from this comparative work, along with policy recommendations for more effective network management.
Abstract: This project proposes a comparative study of four different professional and clinical network types in health care - cancer care (a clinical service), elderly care (a client group), public health (a functional activity) and the development of new genetics technologies (a basic science) - and to derive empirical and theoretical findings from this comparative work, along with policy recommendations for more effective network management. We seek to identify the characteristics (ie. structures, systems and processes) of the selected network types that are likely to lead to 'success' within their given context and inform wider network development. This proposal builds on previous work by each of the members of the research team (Ferlie & Addicott, 2004, Exworthy et al, 2003, Fitzgerald et al, 2002) on different networks in health care. This study has six objectives: 1. To identify key network characteristics (eg. organisational, managerial or membership), with a view to developing a typology of professional and clinical networks 2. To investigate the differences between more and less managed forms of networks 3. To describe the origin and evolution of different types of network structure and process over time and to examine the context, content and processes of network policies and practices 4. To explore the extent to which new ICTs are contributing to the development of more network based forms of working in health care, 5. To ascertain the factors which contribute to network performance, success factors and high impact within each network type 6. To identify promising lessons for policy and practice around networks in health care, and identify appropriate management styles and skills This qualitative study will use a comparative case study design, selecting cases from each network type across London and the midlands - with a total of eight network cases. 
Data collection methods thus include: (i) analysis of key local policy documents; (ii) a range of semi-structured interviews (that is, loosely guided and topic-based) across the various stakeholders identified; (iii) observation at key meetings. Further, we propose to include two consultancy projects, focusing on clinical and user perspectives, to complement the skills and experience of the core research team in organisational and managerial themes. The study will provide an analysis of the performance and impact of the network types within their particular contexts. This analysis will be based partly on previous literature, particularly work by the research team, and also on consultation with the project advisors. The six objectives identified above, in conjunction with our intended methodology, are clearly relevant to the SDO call for proposals. Networks have become an increasingly important mode of organising in health care, yet we know little about them. As identified elsewhere (Ferlie & Addicott, 2004; Goodwin et al, 2004), there is very little independent and rigorous research on the recently established managed clinical networks set up in the UK. Following previous research on managed clinical networks for cancer by members of our own research team, we would seek to extend these findings through comparison with other network types.

Patent
Jung-Min Seo1, Jinkyung Hwang1, Eun-ho Choi1, Sun-Jong Kwon1, Eun-Kyoung Paik1 
05 Aug 2010
TL;DR: This patent presents a network management method in which a network manager monitors events, containing state information, that are published by a managed element in the network, and generates commands for actions to be performed on the managed element according to those events.
Abstract: Provided is a network management method for a network manager. The method includes monitoring events containing state information of a managed element, where the events are published by the managed element included in the network and subscribed to by the network manager managing the network, and generating commands for actions to be performed on the managed element according to those events.
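The publish/subscribe pattern the patent describes can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the class names, the event dictionary shape, and the `link_down`/`reroute` example are all assumptions made for the sketch.

```python
class ManagedElement:
    """A managed network element that publishes state-change events
    to any manager that has subscribed to it."""

    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self, manager):
        self._subscribers.append(manager)

    def publish(self, state):
        # Push an event carrying this element's state to all subscribers.
        for manager in self._subscribers:
            manager.on_event({"element": self.name, "state": state})


class NetworkManager:
    """Monitors published events and generates commands in response."""

    def __init__(self):
        self.commands = []

    def on_event(self, event):
        # Hypothetical policy: react to a link failure with a reroute command.
        if event["state"] == "link_down":
            self.commands.append(("reroute", event["element"]))
```

In use, the manager subscribes once and then reacts only to events it cares about, which is the decoupling the event-driven method relies on.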

Journal ArticleDOI
TL;DR: A cryptographic protocol that ensures secure and timely availability of a peer's reputation data to other peers at extremely low cost, coupled with self-certification and cryptographic mechanisms for identity management and countering Sybil attacks.
Abstract: Peer-to-peer (P2P) networks are vulnerable to peers who cheat, propagate malicious code, leech on the network, or simply do not cooperate. The traditional security techniques developed for centralized distributed systems such as client-server networks are insufficient for P2P networks, which lack the central authority those techniques assume. The absence of a central authority in a P2P network poses unique challenges for reputation management in the network. These challenges include identity management of the peers, secure reputation data management, Sybil attacks, and above all, availability of reputation data. In this paper, we present a cryptographic protocol for ensuring secure and timely availability of the reputation data of a peer to other peers at extremely low cost. The past behavior of the peer is encapsulated in its digital reputation, which is subsequently used to predict its future actions. As a result, a peer's reputation motivates it to cooperate and desist from malicious activities. The cryptographic protocol is coupled with self-certification and cryptographic mechanisms for identity management and countering Sybil attacks. We illustrate the security and the efficiency of the system analytically and by means of simulations in a completely decentralized Gnutella-like P2P network.
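Two of the ingredients named in the abstract - self-certified identities and a reputation score built from past behaviour - can be sketched as follows. This is a toy illustration, not the paper's protocol: the ID truncation, the score deltas, and the trust threshold are all assumptions, and a real system would sign reputation updates rather than trust a local ledger.

```python
import hashlib


def self_certified_id(public_key_bytes):
    """A self-certified identity: the peer's ID is derived from its
    own public key, so no central authority is needed to bind the
    identity to the key (truncation to 16 hex chars is arbitrary)."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]


class ReputationStore:
    """Toy reputation ledger: each peer's past behaviour is folded
    into a single score that others can consult before transacting."""

    def __init__(self):
        self.scores = {}

    def record(self, peer_id, cooperative):
        # Penalise defection more heavily than cooperation is rewarded,
        # so a peer cannot cheaply rebuild trust after misbehaving.
        delta = 1 if cooperative else -2
        self.scores[peer_id] = self.scores.get(peer_id, 0) + delta

    def trustworthy(self, peer_id, threshold=0):
        return self.scores.get(peer_id, 0) > threshold
```

Because the identity is a hash of the key the peer itself generated, anyone can verify the binding, which is the essence of self-certification; the asymmetric reward/penalty is one common way to make reputation predictive of future behaviour.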

Proceedings ArticleDOI
14 Mar 2010
TL;DR: This work proves that RAMP is an NP-hard problem, presents a pseudo-polynomial time solution for a special case of RAMP, and proposes a (1 + ε)-approximation for the optimization version of the RAMP problem.
Abstract: Robustness and reliability are critical issues in network management. A popular protection scheme against network failures is the simultaneous routing of traffic along multiple disjoint paths. Most previous protection and restoration schemes were designed for all-or-nothing protection and are thus overkill for data traffic. In this work, we study the Reliable Adaptive Multipath Provisioning (RAMP) problem with reliability and differential delay constraints. We aim to route connections in a manner such that a link failure does not shut down the entire stream but allows a continuing flow for a significant portion of the traffic along multiple (not necessarily disjoint) paths, allowing the whole network to carry sufficient traffic even when link/node failures occur. The flexibility enabled by a multipath scheme has the tradeoff of differential delay among the diversely routed paths. This requires increased memory in the destination node in order to buffer the traffic until the data arrives on all the paths. Increased buffer size raises the network element cost and could cause buffer overflow and data corruption. Therefore, differential delay between the multiple paths should be bounded by containing the delay of each path in a range from dmin to dmax. We first prove that RAMP is an NP-hard problem. Then we present a pseudo-polynomial time solution to a special case of RAMP, representing edge delays as integers. Next, a (1 + ε)-approximation is proposed to solve the optimization version of the RAMP problem. We also present numerical results confirming the advantage of our scheme over the current state of the art.
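The differential-delay constraint in the abstract has a simple shape: if every selected path's delay lies in [dmin, dmax], the spread between the fastest and slowest path (and hence the destination's buffering need) is bounded by dmax - dmin. A minimal feasibility check, not the paper's algorithm (which must also route and split traffic optimally), might look like this:

```python
def feasible_paths(path_delays, d_min, d_max):
    """Return indices of candidate paths whose end-to-end delay lies
    in [d_min, d_max]; any subset of these has differential delay
    at most d_max - d_min."""
    return [i for i, d in enumerate(path_delays) if d_min <= d <= d_max]


def differential_delay(path_delays, selected):
    """Worst-case buffering requirement for the chosen multipath:
    the spread between the slowest and fastest selected path."""
    delays = [path_delays[i] for i in selected]
    return max(delays) - min(delays)
```

This also shows why the constraint matters for buffer sizing: the destination must hold data from the fast paths for `differential_delay(...)` time units before it can reassemble the stream.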

Journal ArticleDOI
TL;DR: A real-world pipeline network is studied and an optimization model is proposed to address the network scheduling activities, using a hierarchical approach based on the integration of a mixed integer linear programming model and a set of heuristic modules.
Abstract: Pipelines have proven to be an efficient and economical way to transport oil products. However, determining the schedule of operational activities in pipeline networks is a difficult task, and efficient methods to solve such a complex problem are required. In this contribution, a real-world pipeline network is studied, and an optimization model is proposed in order to address the network scheduling activities. A hierarchical approach is proposed on the basis of the integration of a mixed integer linear programming (MILP) model and a set of heuristic modules. This article exploits the MILP model, the main goal of which is to determine the exact time instants at which products should be pumped into the pipelines and received in the operational areas. These time instants must satisfy the pipeline network management and operational constraints for a predefined planning period. Such operational constraints include pipeline stoppages, movement of batches through many areas/pipelines, use of preferential...
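One of the constraints the abstract names, pipeline stoppages, can be illustrated with a small greedy heuristic in the spirit of the paper's heuristic modules (the actual work uses a MILP model; the batch/stoppage representation here is an assumption made for the sketch):

```python
def schedule_pumping(batches, stoppages):
    """Greedy heuristic: assign each batch the earliest pump start
    time such that pump windows follow one another and never overlap
    a planned pipeline stoppage interval [start, end)."""
    schedule = []
    t = 0
    for name, duration in batches:
        # Push the start past any stoppage the pump window would overlap.
        moved = True
        while moved:
            moved = False
            for s, e in stoppages:
                if t < e and t + duration > s:  # window overlaps stoppage
                    t = e
                    moved = True
        schedule.append((name, t, t + duration))
        t += duration
    return schedule
```

A MILP formulation would instead encode these no-overlap conditions as linear constraints over start-time variables and let the solver find an optimal, rather than merely feasible, set of pumping instants.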

01 Jan 2010
TL;DR: Four network management modules are proposed that significantly improve the accuracy and reduce the time required for network configuration, and can benefit many different types of networks, especially large installations such as service provider networks, enterprise networks, data-center networks, and power grids.
Abstract: The configuration of large-scale networks is known to be difficult and error-prone. It is a low-level device-specific task and has to deal with subtle dependencies between multiple devices across a network. Network misconfiguration is a key cause of network disruptions and may also lead to security problems in networks. The complexity of network configuration is rapidly increasing as configurations change over time; as a result, there are more human errors that greatly degrade the connectivity of networks and increase management costs. To reduce the complexity of network configuration, we propose four network management modules: Verification, Simplification, Correlation/Visualization, and Classification. The Verification module consists of a complete configuration model and an automatic policy inference system. Using the model and the policy inference system, the Verification module evaluates a variety of network-wide policies, both within a single technology and across multiple technologies (e.g., packet filtering and routing policies). The Simplification module streamlines policies in a configuration so that the configurations are easier to understand and update; in this manner, it demonstrates the potential for improving comprehensibility of network configurations. The Correlation/Visualization module visualizes high-level, intended policies by correlating low-level configurations. This module helps operators to understand distributed low-level configurations more quickly and accurately. Finally, the Classification module identifies critical elements in a network. This identification allows operators to focus their time on higher-priority problems, thus reducing the complexity of network management. We implement the four network management modules and evaluate their effectiveness with configurations from four production networks. The Verification module discovers more than a hundred errors that are confirmed and corrected by the network administrators. 
Some of these misconfigurations can result in loss of connectivity, unintended access to protected networks, and financial losses from providing free transit services. The Simplification module reduces up to 70% of commands related to routing policies. We also go over a few reduction types and show that such simplification does improve the manageability of the configuration. The Correlation/Visualization module decreases operation and service deployment time from hours to minutes and increases accuracy from 70% to nearly 100%. The Classification module identifies configurations that impact route advertisements to more than 100 peers. This module also finds routing sessions that, if not properly protected, can result in more than 100 GB of loss within a few seconds. We believe that our systems significantly improve accuracy and reduce the time required for network configuration. The proposed ideas can benefit many different types of networks, especially in large installations, such as service provider networks, enterprise networks, data-center networks, and power grids.
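The flavour of the Verification module's cross-technology checks (e.g. routing policy versus packet filtering) can be conveyed with a toy consistency check. This is purely illustrative and not the dissertation's model: the string-prefix matching and rule representation are simplifying assumptions, and a real verifier would reason over parsed device configurations.

```python
def verify_filters(advertised_prefixes, permit_rules):
    """Toy cross-technology check: flag advertised route prefixes
    that no packet-filter permit rule covers, i.e. routes the
    filtering policy would silently black-hole."""

    def permitted(prefix):
        # Crude textual prefix match standing in for real longest-prefix
        # matching over parsed ACL entries.
        return any(prefix.startswith(rule) for rule in permit_rules)

    return [p for p in advertised_prefixes if not permitted(p)]
```

Even this crude check captures the module's key idea: misconfigurations often only become visible when policies from two different subsystems are evaluated together, network-wide.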

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper revisits the case for a "minimalist" approach in which a small number of simple yet generic router primitives collect flow-level data from which different traffic metrics can be estimated and demonstrates the feasibility and promise of such a minimalist approach.
Abstract: Network management applications require accurate estimates of a wide range of flow-level traffic metrics. Given the inadequacy of current packet-sampling-based solutions, several application-specific monitoring algorithms have emerged. While these provide better accuracy for the specific applications they target, they increase router complexity and require vendors to commit to hardware primitives without knowing how useful they will be to meet the needs of future applications. In this paper, we show using trace-driven evaluations that such complexity and early commitment may not be necessary. We revisit the case for a "minimalist" approach in which a small number of simple yet generic router primitives collect flow-level data from which different traffic metrics can be estimated. We demonstrate the feasibility and promise of such a minimalist approach using flow sampling and sample-and-hold as sampling primitives and configuring these in a network-wide coordinated fashion using cSamp. We show that this proposal yields better accuracy across a collection of application-level metrics than dividing the same memory resources across metric-specific algorithms. Moreover, because a minimalist approach enables late binding to what application level metrics are important, it better insulates router implementations and deployments from changing monitoring needs.
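Sample-and-hold, one of the two generic primitives the paper builds on, is easy to state: sample each packet of an unseen flow with probability p, and once a flow is sampled, count every subsequent packet of that flow exactly. A minimal sketch (byte counting and the tuple format are assumptions of this illustration):

```python
import random


def sample_and_hold(packets, p, seed=0):
    """Estimate per-flow byte counts with the sample-and-hold primitive.

    packets: iterable of (flow_id, size) pairs in arrival order.
    p: per-packet sampling probability for flows not yet held.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    counts = {}
    for flow, size in packets:
        if flow in counts:
            counts[flow] += size   # flow already held: count exactly
        elif rng.random() < p:
            counts[flow] = size    # sample this packet, start holding
    return counts
```

Heavy flows are very likely to be sampled early and then counted exactly, which is why the primitive favours the large flows that dominate many management metrics; at p = 1 every flow is held from its first packet and the estimates are exact.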