
Showing papers on "Network planning and design" published in 2006


Journal ArticleDOI
TL;DR: This tutorial paper first reviews the basics of convexity, Lagrange duality, the distributed subgradient method, Jacobi and Gauss-Seidel iterations, and the implications of different time scales of variable updates, and then introduces primal, dual, indirect, partial, and hierarchical decompositions, focusing on network utility maximization problem formulations.
Abstract: A systematic understanding of the decomposability structures in network utility maximization is key to both resource allocation and functionality allocation. It helps us obtain the most appropriate distributed algorithm for a given network resource allocation problem, and quantifies the comparison across architectural alternatives of modularized network design. Decomposition theory naturally provides the mathematical language to build an analytic foundation for the design of modularized and distributed control of networks. In this tutorial paper, we first review the basics of convexity, Lagrange duality, the distributed subgradient method, Jacobi and Gauss-Seidel iterations, and the implications of different time scales of variable updates. Then, we introduce primal, dual, indirect, partial, and hierarchical decompositions, focusing on network utility maximization problem formulations and the meanings of primal and dual decompositions in terms of network architectures. Finally, we present recent examples on: systematic search for alternative decompositions; decoupling techniques for coupled objective functions; and decoupling techniques for coupled constraint sets that are not readily decomposable.
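A minimal sketch of the dual-decomposition idea the tutorial reviews, for log-utility network utility maximization with a fixed routing matrix. The step size, iteration count, and choice of log utilities are illustrative assumptions for this sketch, not the paper's prescriptions:

```python
import numpy as np

def num_dual_subgradient(routes, capacity, steps=5000, alpha=0.01):
    """Dual decomposition for log-utility NUM: maximize sum(log x_s)
    subject to R x <= c, where R (links x sources) is the 0/1 routing matrix.
    Each source solves a local problem given link prices; links update
    prices by a projected subgradient step."""
    R = np.asarray(routes, dtype=float)
    c = np.asarray(capacity, dtype=float)
    lam = np.ones(R.shape[0])                 # link prices (dual variables)
    for _ in range(steps):
        price = R.T @ lam                     # path price seen by each source
        x = 1.0 / np.maximum(price, 1e-9)     # local maximizer of log(x) - price*x
        lam = np.maximum(lam - alpha * (c - R @ x), 0.0)  # subgradient step on the dual
    return x, lam
```

For two sources sharing a single unit-capacity link, the fair allocation x = (0.5, 0.5) and price λ = 2 should emerge from the iteration.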

1,725 citations


Journal ArticleDOI
TL;DR: The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.
Abstract: The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.

1,720 citations


Journal ArticleDOI
TL;DR: In this paper, a mathematical modeling framework is proposed that simultaneously captures many practical aspects of network design problems which have not received adequate attention in the literature, including a dynamic planning horizon, generic supply chain network structure, external supply of materials, inventory opportunities for goods, distribution of commodities, facility configuration, availability of capital for investments, and storage limitations.

430 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a genetic algorithm to systematically examine the underlying characteristics of the optimal bus transit route network design problem (BTRNDP) with variable transit demand, and proposed a solution framework consisting of three main components: an initial candidate route set generation procedure (ICRSGP) that generates all feasible routes incorporating practical bus transit industry guidelines; a network analysis procedure (NAP) that determines the transit demand matrix, assigns transit trips, determines service frequencies, and computes performance measures; and a genetic algorithm procedure (GAP) that combines these two parts and guides the candidate solution generation process.
Abstract: This paper uses a genetic algorithm to systematically examine the underlying characteristics of the optimal bus transit route network design problem (BTRNDP) with variable transit demand. A multiobjective nonlinear mixed integer model is formulated for the BTRNDP. The proposed solution framework consists of three main components: an initial candidate route set generation procedure (ICRSGP) that generates all feasible routes incorporating practical bus transit industry guidelines; a network analysis procedure (NAP) that determines the transit demand matrix, assigns transit trips, determines service frequencies, and computes performance measures; and a genetic algorithm procedure (GAP) that combines these two parts, guides the candidate solution generation process, and selects an optimal set of routes from the huge solution space. A C++ program is developed to implement the proposed solution methodology for the BTRNDP with variable transit demand. An example network is successfully tested as a pilot study. Sensitivity analyses are performed. Comprehensive characteristics underlying the BTRNDP, including the effect of route set size, the effect of demand aggregation, and the redesign of the existing transit network, are also presented.
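As a rough illustration of the GA component, the sketch below selects a fixed-size route set from candidates. The coverage-minus-cost fitness is a stand-in invented for this sketch; the paper's actual model assigns trips and frequencies through its network analysis procedure:

```python
import random

def ga_select_routes(candidates, demand, cost, k, gens=200, pop=30, seed=1):
    """Toy GA for route-set selection: choose k candidate routes (each a set
    of stops) maximizing demand covered minus route cost.  Elitist selection,
    merge-and-sample crossover, and single-gene mutation."""
    rng = random.Random(seed)
    n = len(candidates)

    def fitness(sol):
        covered = set().union(*(candidates[i] for i in sol))
        return sum(demand.get(s, 0.0) for s in covered) - sum(cost[i] for i in sol)

    popn = [rng.sample(range(n), k) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 2]                    # keep the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            genes = list(dict.fromkeys(a + b))      # crossover: merge, dedupe
            child = rng.sample(genes, k) if len(genes) >= k else genes
            if rng.random() < 0.3:                  # mutation: swap in a new route
                child[rng.randrange(len(child))] = rng.randrange(n)
            children.append(child)
        popn = elite + children
    return max(popn, key=fitness)
```

On a tiny instance with four high-demand stops, the GA should pick the pair of routes that together cover all of them.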

264 citations


Journal ArticleDOI
TL;DR: In this paper, a new computer model called Genetic Algorithm Pipe Network Optimization Model (GENOME) has been developed with the aim of optimizing the design of new looped irrigation water distribution networks.
Abstract: A new computer model called Genetic Algorithm Pipe Network Optimization Model (GENOME) has been developed with the aim of optimizing the design of new looped irrigation water distribution networks. The model is based on a genetic algorithm method, although relevant modifications and improvements have been implemented to adapt the model to this specific problem. It makes use of the robust network solver EPANET. The model has been tested and validated by applying it to the least cost optimization of several benchmark networks reported in the literature. The results obtained with GENOME have been compared with those found in previous works, obtaining the same results as the best published in the literature to date. Once the model was validated, the optimization of a real complex irrigation network has been carried out to evaluate the potential of the genetic algorithm for the optimal design of large-scale networks. Although satisfactory results have been obtained, some adjustments would be desirable to improve the performance of genetic algorithms when the complexity of the network requires it.

209 citations


Journal ArticleDOI
TL;DR: An analytical model to study the performance of wireless local area networks supporting asymmetric nonpersistent traffic using the IEEE 802.11 distributed coordination function mode for medium access control is developed and the voice capacity of an infrastructure-based WLAN, in terms of the maximum number of voice connections that can be supported with satisfactory user-perceived quality is obtained.
Abstract: An analytical model to study the performance of wireless local area networks (WLANs) supporting asymmetric nonpersistent traffic using the IEEE 802.11 distributed coordination function mode for medium access control (MAC) is developed. Given the parameters of the MAC protocol and voice codecs, the voice capacity of an infrastructure-based WLAN, in terms of the maximum number of voice connections that can be supported with satisfactory user-perceived quality, is obtained. In addition, voice capacity analysis reveals how the overheads from different layers, codec rate, and voice packetization interval affect voice traffic performance in WLANs, which provides an important guideline for network planning and management. The analytical results can be used for effective call admission control to guarantee the quality of voice connections. Extensive simulations have been performed to validate the analytical results.
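A back-of-envelope version of the capacity question this model answers: count how many bidirectional calls fit in the channel airtime. The default codec, header, and MAC-overhead numbers below are illustrative assumptions (G.711 over 802.11b), not the paper's analytical model:

```python
def wlan_voice_capacity(codec_rate_bps=64000, interval_s=0.02,
                        overhead_bytes=40 + 34, phy_rate_bps=11e6,
                        mac_overhead_s=0.000765):
    """Back-of-envelope VoIP capacity over 802.11b DCF.  Assumed defaults:
    G.711 (64 kb/s), 20 ms packetization, RTP/UDP/IP (40 B) + 802.11 MAC
    (34 B) headers, and ~765 us of DIFS/SIFS/ACK/backoff per packet."""
    payload = codec_rate_bps * interval_s / 8            # voice bytes per packet
    tx_time = (payload + overhead_bytes) * 8 / phy_rate_bps + mac_overhead_s
    pkts_per_call = 2 / interval_s                       # uplink + downlink
    return int(1 / (pkts_per_call * tx_time))            # calls fitting in airtime
```

This also shows the packetization-interval effect the paper analyzes: halving the interval doubles the per-call packet rate, so fixed per-packet overheads dominate and capacity drops.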

207 citations


Proceedings ArticleDOI
29 Sep 2006
TL;DR: A detailed performance study of a set of long-distance 802.11b links at various layers of the network stack finds that the error rate as a function of the received signal strength behaves close to theory.
Abstract: The use of 802.11 long-distance links is a cost-effective means of providing wireless connectivity to rural areas. Although deployments in this setting are increasing, a systematic study of the performance of 802.11 in these settings is lacking. The contributions of this paper are two-fold: (a) we present a detailed performance study of a set of long-distance 802.11b links at various layers of the network stack, and (b) we document the various non-obvious experiences during our study. Our study includes eight long-distance links, ranging from 1 km to 37 km in length. Unlike prior studies of outdoor 802.11 links, we find that the error rate as a function of the received signal strength behaves close to theory. Time correlation of any packet errors is negligible across a range of time-scales. We have observed at least one of the links to be robust to rain and fog, but any interference on the long-distance links can be detrimental to performance. Apart from this, however, such long-distance links can be planned to work well with predictable performance. During our measurements, we have observed a few hardware/driver quirks as well as system bottlenecks apart from the wireless link itself. We believe that our measurements and the documentation of our experience will help future network planning as well as protocol design for these networks.

196 citations


Journal ArticleDOI
01 Aug 2006
TL;DR: The improvement in performance that can be effected by removing edges can be arbitrarily large in large networks, and it is shown that Braess's Paradox--even in its worst-possible manifestations--is impossible to detect efficiently.
Abstract: We consider a directed network in which every edge possesses a latency function that specifies the time needed to traverse the edge given its congestion. Selfish, noncooperative agents constitute the network traffic and wish to travel from a source vertex s to a destination t as quickly as possible. Since the route chosen by one network user affects the congestion experienced by others, we model the problem as a noncooperative game. Assuming that each agent controls only a negligible portion of the overall traffic, Nash equilibria in this noncooperative game correspond to s-t flows in which all flow paths have equal latency. A natural measure for the performance of a network used by selfish agents is the common latency experienced by users in a Nash equilibrium. Braess's Paradox is the counterintuitive but well-known fact that removing edges from a network can improve its performance. Braess's Paradox motivates the following network design problem: given a network, which edges should be removed to obtain the best flow at Nash equilibrium? Equivalently, given a network of edges that can be built, which subnetwork will exhibit the best performance when used selfishly? We give optimal inapproximability results and approximation algorithms for this network design problem. For example, we prove that there is no approximation algorithm for this problem with approximation ratio less than n/2, where n is the number of network vertices, unless P = NP. We further show that this hardness result is the best possible, by exhibiting an (n/2)-approximation algorithm. We also prove tight inapproximability results when additional structure, such as linearity, is imposed on the network latency functions. Moreover, we prove that an optimal approximation algorithm for these problems is the trivial algorithm: given a network of candidate edges, build the entire network.
As a consequence, we show that Braess's Paradox--even in its worst-possible manifestations--is impossible to detect efficiently. En route to these results, we give a fundamental generalization of Braess's Paradox: the improvement in performance that can be effected by removing edges can be arbitrarily large in large networks. Even though Braess's Paradox has enjoyed 35 years as a textbook example, our result is the first to extend its severity beyond that in Braess's original four-node network.
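The four-node example behind the paradox can be checked by hand; this small function encodes the standard Braess network with one unit of selfish flow, linear latencies x on two edges and constant latency 1 on the other two:

```python
def braess_nash_latency(with_shortcut):
    """Worked Braess example: one unit of selfish flow from s to t.
    Edge latencies: s->v costs x (x = flow on the edge), v->t costs 1,
    s->w costs 1, w->t costs x; the optional shortcut v->w costs 0."""
    if with_shortcut:
        # All flow takes s->v->w->t: each traveler pays x + 0 + x = 2, and
        # switching unilaterally to a two-hop path also costs 1 + 1 = 2,
        # so this is a Nash equilibrium.
        flow = 1.0
        return flow + 0.0 + flow
    # Without the shortcut, flow splits evenly over the two two-hop paths,
    # so each traveler pays x + 1 with x = 0.5.
    flow = 0.5
    return flow + 1.0
```

Removing the zero-latency edge improves the equilibrium latency from 2 to 1.5, the 4/3 gap of the classic example.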

187 citations


Journal ArticleDOI
TL;DR: Criteria for network design that emphasize the utility of the network for prediction (kriging) of unobserved responses, assuming known spatial covariance parameters, are contrasted with criteria that emphasize the estimation of the covariance parameters themselves.
Abstract: Inferences for spatial data are affected substantially by the spatial configuration of the network of sites where measurements are taken. In this article, criteria for network design that emphasize the utility of the network for prediction (kriging) of unobserved responses assuming known spatial covariance parameters are contrasted with criteria that emphasize the estimation of the covariance parameters themselves. It is shown, via a series of related examples, that these two main design objectives are largely antithetical and thus lead to quite different "optimal" designs. Furthermore, a hybrid design criterion that accounts for the effect that the sampling variation of spatial covariance parameter estimates has on prediction is described and illustrated. Situations in which the hybrid optimal design resembles designs that are optimal with respect to each of the other two criteria are identified. An application to the optimal augmentation of an acid deposition monitoring network in the eastern US is presented.
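A sketch of a prediction-oriented design criterion of the kind contrasted here: the maximum simple-kriging variance over a prediction grid under a known covariance. The exponential covariance form and its parameters are illustrative assumptions for this sketch:

```python
import numpy as np

def max_kriging_variance(sites, grid, sill=1.0, rng_param=2.0):
    """Prediction-oriented design criterion: maximum simple-kriging variance
    over a prediction grid, under a known exponential covariance
    C(h) = sill * exp(-h / rng_param).  Lower values mean the network of
    sites predicts the field better."""
    sites = np.atleast_2d(sites)
    grid = np.atleast_2d(grid)

    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / rng_param)

    K = cov(sites, sites)                      # site-to-site covariance
    k = cov(sites, grid)                       # site-to-grid covariance
    w = np.linalg.solve(K, k)                  # simple-kriging weights
    var = sill - np.sum(k * w, axis=0)         # prediction variance at grid points
    return float(var.max())
```

A quick check of the article's intuition: a design with sites spread along a transect predicts better (lower worst-case variance) than one with all sites clustered at one end.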

187 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a simulated annealing algorithm to solve the optimal bus transit route network design problem (BTRNDP) at the distribution node level, where a multi-objective nonlinear mixed integer model is formulated for the BTR NDP.
Abstract: This paper uses a simulated annealing algorithm to solve the optimal bus transit route network design problem (BTRNDP) at the distribution node level. A multiobjective nonlinear mixed integer model is formulated for the BTRNDP. The proposed solution framework consists of three main components: an initial candidate route set generation procedure that generates all feasible routes incorporating practical bus transit industry guidelines; a network analysis procedure that assigns transit trips, determines service frequencies, and computes performance measures; and a simulated annealing procedure that combines these two parts, guides the candidate solution generation process, and selects an optimal set of routes from the huge solution space. Three experimental networks are successfully tested as a pilot study. A genetic algorithm is also used as a benchmark to measure the quality of the simulated annealing algorithm. The presented numerical results clearly indicate that the simulated annealing algorithm outperforms the genetic algorithm in most cases on the example networks. Sensitivity analyses are performed, and related characteristics and tradeoffs underlying the BTRNDP are also discussed.
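The simulated-annealing loop itself is generic and can be sketched in a few lines; the geometric cooling schedule and the toy quadratic objective in the usage note are assumptions of this sketch, not the paper's BTRNDP setup:

```python
import math
import random

def anneal(init, neighbor, cost, t0=1.0, cooling=0.995, steps=3000, seed=0):
    """Generic simulated-annealing loop of the kind used for the BTRNDP:
    accept uphill moves with probability exp(-delta / T) and cool T
    geometrically, tracking the best solution seen."""
    rng = random.Random(seed)
    cur, cur_cost = init, cost(init)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta     # accept the move
        if cur_cost < best_cost:
            best, best_cost = cur, cur_cost
        t *= cooling                                   # geometric cooling
    return best, best_cost
```

For example, minimizing the quadratic x² from a start of 10 with Gaussian neighbor moves should land very close to 0.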

173 citations


Proceedings ArticleDOI
11 Sep 2006
TL;DR: A comprehensive network-on-chip traffic model for homogeneous NoCs is proposed that captures the spatio-temporal characteristics of NoC traffic accurately, with less than 5% error, and can be used to generate synthetic traffic traces that drive NoC design-space exploration.
Abstract: Network traffic modeling is a critical first step towards understanding and unraveling network power/performance-related issues. Extensive prior research in the area of classic networks such as the Internet, Ethernet, and wireless LANs transporting TCP/IP, HTTP, and FTP traffic among others, has demonstrated how traffic models and model-based synthetic traffic generators can facilitate understanding of traffic characteristics and drive early-stage simulation to explore a large network design space. Though on-chip networks (a.k.a. networks-on-chip (NoCs)) are becoming the de-facto scalable communication fabric in many-core systems-on-a-chip (SoCs) and chip multiprocessors (CMPs), no on-chip network traffic model that captures both spatial and temporal variations of traffic has been demonstrated yet. As available on-chip resources increase with technology scaling, enabling a myriad of new network architectures, NoCs need to be designed from the application's perspective. In this paper we propose such an empirically-derived network-on-chip traffic model for homogeneous NoCs. Our comprehensive model is based on three statistical parameters described with a 3-tuple, and captures the spatio-temporal characteristics of NoC traffic accurately with less than 5% error when compared to actual NoC application traces gathered from full-system simulations of three different chip platforms. We illustrate two potential uses of our traffic model: how it allows us to characterize and gain insights on NoC traffic patterns, and how it can be used to generate synthetic traffic traces that can drive NoC design-space exploration.
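A heavily hedged sketch of what a parametric, model-driven synthetic NoC traffic generator can look like. The three knobs here (injection rate, burstiness, hop-distance decay) are stand-ins in the spirit of the paper's 3-tuple; the actual parameters there are empirically fitted, not these:

```python
import random

def synth_noc_trace(n_nodes=16, packets=1000, inj_rate=0.1,
                    burst_p=0.3, hop_decay=0.5, seed=0):
    """Hypothetical model-driven synthetic NoC traffic: emits (cycle, src,
    dst) tuples for a 1-D ring of n_nodes routers.  Temporal variation comes
    from bursty on/off injection; spatial variation from a geometric
    hop-distance distribution."""
    rng = random.Random(seed)
    trace, cycle = [], 0
    while len(trace) < packets:
        cycle += 1
        # temporal: in a burst, injection is 4x the base rate; otherwise 0.5x
        p = inj_rate * (4.0 if rng.random() < burst_p else 0.5)
        for src in range(n_nodes):
            if rng.random() < p:
                hops = 1
                while rng.random() < hop_decay and hops < n_nodes // 2:
                    hops += 1                  # spatial: geometric hop distance
                dst = (src + hops) % n_nodes
                trace.append((cycle, src, dst))
    return trace[:packets]
```

Such a trace can feed a cycle-level NoC simulator in place of full-system application traces during early design-space exploration.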

Journal ArticleDOI
TL;DR: The tolerance-based DUO principle is introduced; its solution existence and uniqueness are discussed, a solution heuristic is developed, and its properties are demonstrated through numerical examples.
Abstract: Dynamic Traffic Assignment (DTA) is long recognized as a key component for network planning and transport policy evaluations as well as for real-time traffic operation and management. How traffic is encapsulated in a DTA model has important implications on the accuracy and fidelity of the model results. This study compares and contrasts the properties of DTA modelled with point queues versus those with physical queues, and discusses their implications. One important finding is that with the more accurate physical queue paradigm, under certain congested conditions, solutions for the commonly adopted dynamic user optimal (DUO) route choice principle just do not exist. To provide some initial thinking to accommodate this finding, this study introduces the tolerance-based DUO principle. This paper also discusses its solution existence and uniqueness, develops a solution heuristic, and demonstrates its properties through numerical examples. Finally, we conclude by presenting some prospective future research di...

Proceedings Article
01 Jan 2006
TL;DR: The unsupervised clustering technique has an accuracy of up to 91% and outperforms the supervised technique by up to 9%, and has the potential to become an excellent tool for exploring Internet traffic.
Abstract: We apply an unsupervised machine learning approach for Internet traffic identification and compare the results with that of a previously applied supervised machine learning approach. Our unsupervised approach uses an Expectation Maximization (EM) based clustering algorithm and the supervised approach uses the Naïve Bayes classifier. We find the unsupervised clustering technique has an accuracy of up to 91% and outperforms the supervised technique by up to 9%. We also find that the unsupervised technique can be used to discover traffic from previously unknown applications and has the potential to become an excellent tool for exploring Internet traffic. I. INTRODUCTION Accurate classification of Internet traffic is important in many areas such as network design, network management, and network security. One key challenge in this area is to adapt to the dynamic nature of Internet traffic. Increasingly, new applications are being deployed on the Internet; some new applications such as peer-to-peer (P2P) file sharing and online gaming are becoming popular. With the evolution of Internet traffic, both in terms of number and type of applications, traditional classification techniques such as those based on well-known port numbers or packet payload analysis are either no longer effective for all types of network traffic or cannot be deployed because of privacy or security concerns for the data. A promising approach that has recently received some attention is traffic classification using machine learning techniques (1)-(4). These approaches assume that applications typically send data in some sort of pattern; these patterns can be used as a means of identification, which allows the connections to be classified by traffic class. To find these patterns, flow statistics (such as mean packet size, flow length, and total number of packets) available using only TCP/IP headers are needed.
This allows the classification technique to avoid the use of port numbers and packet payload information in the classification process. In this paper, we apply an unsupervised learning technique (EM clustering) for the Internet traffic classification problem and compare the results with that of a previously applied supervised machine learning approach. The unsupervised clustering approach uses an Expectation Maximization (EM) algorithm (5) that is different in that it classifies unlabeled training data into groups called "clusters" based on similarity. The Naïve Bayes classifier has been previously shown to have high accuracy for Internet traffic classification (2). In parallel work, Zander et al. focus on using the EM clustering approach to build the classification model (4). We complement their work by using the EM clustering approach to build a classifier and show that this classifier outperforms the Naïve Bayes classifier in terms of classification accuracy. We also analyze the time required to build the classification models for both approaches as a function of the size of the training data set. We also explore the clusters found by the EM approach and find that the majority of the connections are in a subset of the total clusters. The rest of this paper is organized as follows. Section II presents related work. In Section III, the background on the algorithms used in the Naïve Bayes and EM clustering approaches is covered. In Section IV, we introduce the data sets used in our work and present our experimental results. Section V discusses the advantages and disadvantages of the approaches. Section VI presents our conclusions and describes future work avenues.
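The EM clustering step can be sketched in a few lines for a single flow feature. This minimal 1-D Gaussian-mixture EM is illustrative only; the paper's approach clusters on several flow statistics at once:

```python
import numpy as np

def em_gmm(x, k=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture: the kind of unsupervised
    clustering used to group flows by a statistic such as mean packet size,
    without labels.  Returns fitted means and a hard cluster assignment."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, np.linspace(0.0, 1.0, k))    # spread initial means
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each cluster for each flow
        dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2.0 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, resp.argmax(axis=1)
```

On synthetic "interactive" (small-packet) and "bulk" (large-packet) flows, the two recovered means should land near the two underlying packet sizes.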

Journal ArticleDOI
TL;DR: A framework is presented for a stochastic network model with Poisson-distributed demand and uncertain route choice, and the analytical derivative of the TTR is derived with the sensitivity analysis of the equilibrated path choice probability to solve the RNDP.
Abstract: In the reliable network design problem (RNDP), the main sources of uncertainty are variable demand and route choice. The objective is to maximize network total travel time reliability (TTR), which is defined as the probability that the network total travel time will be less than a threshold. A framework is presented for a stochastic network model with Poisson-distributed demand and uncertain route choice. The travelers are assumed to choose their routes to minimize their perceived expected travel cost following the probit stochastic user equilibrium condition. An analytical method is presented for approximation of the first and second moments of the total travel time. These moments are then fitted with a log-normal distribution. Then the design problem is tackled, in which the analytical derivative of the TTR is derived with the sensitivity analysis of the equilibrated path choice probability. This derivative is then supplied to a gradient-based optimization algorithm to solve the RNDP. The algorithm is tes...
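The moment-matching step is easy to reproduce: given an approximated mean and variance of total travel time, fit a log-normal by matching the first two moments and evaluate TTR at a threshold. This is a sketch of the standard moment-matching formulas, not the paper's full derivation:

```python
import math

def travel_time_reliability(mean, var, threshold):
    """Fit a log-normal to the first two moments of total travel time and
    return TTR = P(T < threshold).  Moment matching: if T ~ LogNormal(mu,
    sigma^2), then E[T] = exp(mu + sigma^2/2) and
    Var[T] = (exp(sigma^2) - 1) * E[T]^2."""
    sigma2 = math.log(1.0 + var / mean ** 2)      # log-scale variance
    mu = math.log(mean) - 0.5 * sigma2            # log-scale mean
    z = (math.log(threshold) - mu) / math.sqrt(sigma2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

Note that because the log-normal is right-skewed, its median lies below its mean, so the probability of finishing under the mean travel time exceeds one half.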

Journal ArticleDOI
TL;DR: The three classes of hybrid optical network architectures are classified based on the degree of interaction and integration of the network technologies and their main representatives regarding key characteristics, performance benefits, and realization complexity.
Abstract: In recent years, hybrid optical network architectures, which employ two or more network technologies simultaneously, have been proposed. They aim at improving the overall network design by combining the advantages of different technologies while avoiding their disadvantages. In order to structure this developing research field, we classify such hybrid architectures based on the degree of interaction and integration of the network technologies. We also discuss the three classes and their main representatives regarding key characteristics, performance benefits, and realization complexity. Finally, we highlight two hybrid architectures and show their key benefits compared to the respective non-hybrid architectures through a dimensioning case study.

Journal ArticleDOI
TL;DR: Both network layout and link capacity (link layout and traffic lights) are optimised, and demand is considered elastic with respect to mode choice; both morning and afternoon peak periods are taken into account.
Abstract: In this paper, urban network design is analysed through a heuristic multi-criteria technique based on genetic algorithms. Both network layout and link capacity (link layout and traffic lights) are optimised. Different optimisation criteria are included for users, non-users and public system managers. Demand is considered elastic with respect to mode choice; both morning and afternoon peak periods are taken into account. In addition, choice of parking location is simulated. The procedure is applied to a test and to a real transportation system.

Journal ArticleDOI
TL;DR: This paper evaluates the performance obtained by the different algorithms proposed for the topology design stage, comparing them through applications to real networks, and draws some conclusions about their efficiency.

Journal ArticleDOI
TL;DR: Empirical analysis results demonstrate that the proposed approach can outperform the SGA in partner selection and production/distribution planning for network design.
Abstract: In this paper, a novel multi-phase mathematical approach is presented for the design of a complex supply chain network. From the point of network design, customer demands, and for maximum overall utility, the important issues are to find suitable and quality companies, and to decide upon an appropriate production/distribution strategy. The proposed approach is based on the genetic algorithm (GA), the analytical hierarchy process (AHP), and the multi-attribute utility theory (MAUT) to satisfy simultaneously the preferences of the suppliers and the customers at each level in the network. A case study with a good quality solution is provided to confirm the efficiency and effectiveness of the proposed approach. Finally, to demonstrate the performance of the proposed approach, a comparative numerical experiment is performed by using the proposed approach and the common single-phase genetic algorithm (SGA). Empirical analysis results demonstrate that the proposed approach can outperform the SGA in partner selection and production/distribution planning for network design.

Journal ArticleDOI
11 Aug 2006
TL;DR: Swing is the first to reproduce burstiness in traffic across a range of timescales using a model applicable to a variety of network settings, and an initial sensitivity analysis reveals the importance of capturing and recreating user, application, and network characteristics to accurately reproduce such burstiness.
Abstract: This paper presents Swing, a closed-loop, network-responsive traffic generator that accurately captures the packet interactions of a range of applications using a simple structural model. Starting from observed traffic at a single point in the network, Swing automatically extracts distributions for user, application, and network behavior. It then generates live traffic corresponding to the underlying models in a network emulation environment running commodity network protocol stacks. We find that the generated traces are statistically similar to the original traces. Further, to the best of our knowledge, we are the first to reproduce burstiness in traffic across a range of timescales using a model applicable to a variety of network settings. An initial sensitivity analysis reveals the importance of capturing and recreating user, application, and network characteristics to accurately reproduce such burstiness. Finally, we explore Swing's ability to vary user characteristics, application properties, and wide-area network conditions to project traffic characteristics into alternate scenarios.

Journal ArticleDOI
TL;DR: In this paper, a new approach for network planning in unbundled power systems is presented, which takes into account the desires of demand customers, power producers, system operator, network owner(s), and regulator in network planning.
Abstract: In this paper, a new approach for network planning in unbundled power systems is presented. The approach takes into account the desires of demand customers, power producers, the system operator, network owner(s), and the regulator in network planning. Competition, reliability, flexibility of operation, transmission expansion cost, and environmental impacts are used as planning criteria. To account for the relative importance of stakeholders and planning criteria in network planning, their importance degrees are first determined by a newly presented method. Then, the importance degrees of stakeholders and planning criteria are aggregated with the appropriateness degrees of expansion plans to compute a fuzzy index measuring the goodness of expansion plans. The final plan is selected using the presented fuzzy risk assessment method. The approach is applied to an eight-bus test system.

Journal ArticleDOI
TL;DR: From this study, it was concluded that the proposed methodology could be a useful decision support tool for the optimized design of water quality monitoring networks.

Proceedings ArticleDOI
22 Jan 2006
TL;DR: A framework to model oblivious network design problems is developed, and algorithms with poly-logarithmic competitive ratio are given for problems in this framework (and hence for this problem).
Abstract: Consider the following network design problem: given a network G = (V, E), source-sink pairs {si, ti} arrive and desire to send a unit of flow between themselves. The cost of the routing is this: if edge e carries a total of fe flow (from all the terminal pairs), the cost is given by Σ_e l(f_e), where l is some concave cost function; the goal is to minimize the total cost incurred. However, we want the routing to be oblivious: when terminal pair {si, ti} makes its routing decisions, it does not know the current flow on the edges of the network, nor the identity of the other pairs in the system. Moreover, it does not even know the identity of the function l, merely knowing that l is a concave function of the total flow on the edge. How should it (obliviously) route its one unit of flow? Can we get competitive algorithms for this problem? In this paper, we develop a framework to model oblivious network design problems (of which the above problem is a special case), and give algorithms with poly-logarithmic competitive ratio for problems in this framework (and hence for this problem). Abstractly, given a problem like the one above, the solution is a multicommodity flow producing a "load" on each edge of Le = l(f1(e), f2(e), ..., fk(e)), and the total cost is given by an "aggregation function" agg(Le1, ..., Lem) of the loads of all edges. Our goal is to develop oblivious algorithms that approximately minimize the total cost of the routing, knowing the aggregation function agg, but merely knowing that l lies in some class C, and having no other information about the current state of the network.
Hence we want algorithms that are simultaneously "function-oblivious" as well as "traffic-oblivious". The aggregation functions we consider are the max and Σ objective functions, which correspond to the well-known measures of congestion and total cost of a network; in this paper, we prove the following:
• If the aggregation function is Σ, we give an oblivious algorithm with O(log^2 n) competitive ratio whenever the load function l is in the class of monotone sub-additive functions. (Recall that our algorithm is also "function-oblivious"; it works whenever each edge has a load function l in the class.)
• For the case when the aggregation function is max, we give an oblivious algorithm with O(log^2 n log log n) competitive ratio, when the load function l is a norm; we also show that such a competitive ratio is not possible for general sub-additive functions.
These are the first such general results about oblivious algorithms for network design problems, and we hope the ideas and techniques will lead to more and improved results in this area.

Journal ArticleDOI
TL;DR: It is proposed that problem-based learning is an ideal pedagogical tool for the teaching of computer network design.
Abstract: This paper addresses the challenge of developing techniques for the effective teaching of computer network design. It reports on the experience of using the technique of problem-based learning as a key pedagogical method for teaching practical network design within the context of a Master's program module in data telecommunications and networks at the University of Salford, Salford, Greater Manchester, U.K. A two-threaded approach was adopted that comprised a problem-based learning thread and a conventional lecture thread. The problem-based learning thread within the module comprised sessions designed to place the students in the position of network design consultants who are introduced to scenarios that have a high degree of realism in which a client has specific business requirements that can be met through the adoption of a network solution. In this way, the problem-based learning thread allows the students to develop their design skills, while the lecture thread uses traditional teaching methods to allow students to develop their understanding of key network components and architectures. A formal evaluation of this approach has been carried out and demonstrated a very effective and realistic learning experience for the students. Therefore, the authors propose that problem-based learning is an ideal pedagogical tool for the teaching of computer network design.

Journal ArticleDOI
TL;DR: This paper estimates the variations of network capacity under different routing strategies for three different topologies and finds that the capacity depends on the underlying network structure and the capacity increases as the network becomes more homogeneous.
Abstract: The capacity, i.e., the maximum end-to-end traffic flow a network can handle without overloading, is an important index of network performance in real communication systems. In this paper, we estimate the variations of network capacity under different routing strategies for three different topologies. Simulation results reveal that the capacity depends on the underlying network structure and increases as the network becomes more homogeneous. It is also observed that the network capacity is greatly enhanced when a new traffic-awareness routing strategy is adopted in each network structure.
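A common back-of-the-envelope estimate in this literature: under shortest-path routing with unit node processing rate, capacity is limited by the most loaded node, roughly R_c ≈ (N−1)/B_max, where B_max is the largest shortest-path load (betweenness). The sketch below (a hypothetical star graph, one shortest path per pair) illustrates why a heterogeneous hub-dominated topology has low capacity.

```python
from collections import deque

def bfs_path(adj, s, t):
    """Return one shortest path from s to t via BFS predecessor tracking."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, u = [], t
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def max_load(adj):
    """Largest number of ordered-pair shortest paths through any node."""
    load = {u: 0 for u in adj}
    for s in adj:
        for t in adj:
            if s != t:
                for u in bfs_path(adj, s, t)[1:-1]:  # intermediate nodes only
                    load[u] += 1
    return max(load.values())

# Star: hub 0 joined to leaves 1..4 -- the hub mediates every leaf-to-leaf path.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
N = len(star)
B_max = max_load(star)    # (N-1)(N-2) = 12 ordered leaf pairs cross the hub
R_c = (N - 1) / B_max     # capacity estimate with unit processing rate
```

A more homogeneous topology spreads load across nodes, lowering B_max and raising this estimate — consistent with the abstract's finding.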

Journal ArticleDOI
01 Nov 2006
TL;DR: A pattern-based approach to knowledge flow design is proposed for more effective and efficient planning, which starts from basic concepts, uses a knowledge spiral to model knowledge flow patterns and operations, and lays down principles for knowledge flow network composition and evolution.
Abstract: Organizations and communities are held together by knowledge flow networks whether people are aware of them or not. To plan such a network is to describe a formal and optimal flow of knowledge as the basis for effective teamwork. The difficulty is that the result of such planning depends greatly on the planners' experience. This paper proposes a pattern-based approach to knowledge flow design for more effective and efficient planning. The approach starts from basic concepts, uses a knowledge spiral to model knowledge flow patterns and operations, and lays down principles for knowledge flow network composition and evolution. Tools for planning, simulation and management of resource-mediated knowledge flow have been developed and experimentally applied to the work of research teams. The planning tool can help users to define, modify and verify a knowledge flow network and to integrate its components. The simulation tool enables users to study knowledge flow in a visualized network and to develop strategies for adapting networks to changing conditions. The basic idea is to adapt and control logistical processes for knowledge flow within teams.
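One of the checks a planning tool like the one described might perform is verification that a defined knowledge flow network actually delivers knowledge everywhere. The sketch below is hypothetical (role names and the reachability criterion are my own illustration, not the paper's tool): it confirms every role is reachable from the designated knowledge source.

```python
def verify_flow(edges, source):
    """Check every node is reachable from `source` in the directed network."""
    adj = {}
    nodes = set()
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        nodes.update((u, v))
    seen, stack = {source}, [source]
    while stack:
        for v in adj.get(stack.pop(), []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

# Hypothetical team: knowledge flows outward from a mentor role.
flow = [("mentor", "analyst"), ("analyst", "developer"), ("mentor", "tester")]
ok = verify_flow(flow, "mentor")                         # every role reached
broken = verify_flow(flow + [("writer", "editor")], "mentor")  # isolated pair
```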

Journal ArticleDOI
TL;DR: In this article, the authors propose a time-dependent network design problem, where the goal is to find the optimal infrastructure improvement timetable, the associated financial arrangement, and tolling scheme over the planning horizon.
Abstract: Existing transportation network design studies focus on optimizing the network for a certain future time but without explicitly defining the time dimension within the formulation. This study extends the consideration by formulating the time-dependent network design problem. With this extension, one can plan for the optimal infrastructure improvement timetable, the associated financial arrangement, and tolling scheme over the planning horizon. In addition, this extension enables the pursuit of important considerations that are otherwise difficult, if at all possible, with the traditional timeless approach. Through the time-dependent framework, this study examines the issue of intergeneration equity according to the user and social perspectives. Basically, should the present generation build the full-blown network, or should users at the time pay for future incremental upgrades? Using a gap function to measure the degree of intergeneration equity achieved, this study illustrates that there are tradeoffs between societal and individual perspectives. Nevertheless, this study suggests ways whereby the planner can trade the level of equity to be attained with the overall network performance. In this way, some gradual measures can be introduced to the network design to compromise between these two perspectives.
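The intergeneration trade-off can be caricatured numerically. The sketch below is purely illustrative (the spending figures, two-generation horizon, and the absolute-difference gap measure are my assumptions, not the paper's formulation): it compares a build-now plan with an incremental plan of equal discounted total cost and scores each by how evenly the burden falls across generations.

```python
# Hypothetical two-generation comparison: generation 1 either pays for the
# full-blown network up front, or both generations pay for incremental
# upgrades as capacity is needed.
discount = 0.9  # present-value factor applied to generation-2 spending

def burdens(plan):
    """plan: (gen1_spend, gen2_spend) -> per-generation discounted burden."""
    g1, g2 = plan
    return g1, discount * g2

def equity_gap(plan):
    """A toy gap measure: 0 means a perfectly even burden across generations."""
    b1, b2 = burdens(plan)
    return abs(b1 - b2)

build_now = (100, 0)     # gen 1 shoulders everything
incremental = (55, 50)   # upgrades paid for as they are used
```

Here both plans cost 100 in present value, but the incremental plan has a far smaller equity gap — the kind of trade-off between overall performance and intergeneration equity the study quantifies with its gap function.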

Journal ArticleDOI
TL;DR: In this paper, Monte Carlo simple genetic algorithm (MCSGA) and noisy GA (NGA) were compared for cost-effective sampling network design in the presence of uncertainties in the hydraulic conductivity (K) field.
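The distinguishing idea of a noisy GA is that each candidate design is scored against a small sample of random realizations of the uncertain field and the scores are averaged, rather than trusting a single draw. The sketch below is a generic illustration of that evaluation step (the score function and candidate values are hypothetical, not the paper's K-field model).

```python
import random

random.seed(1)

def noisy_fitness(design, score, realizations=8):
    """Average a design's score over several random field realizations."""
    draws = [score(design, random.gauss(0.0, 1.0)) for _ in range(realizations)]
    return sum(draws) / len(draws)

# Hypothetical score: design quality q is best near 1.0, and every design
# pays a small penalty that depends on the sampled uncertainty k.
score = lambda q, k: -(q - 1.0) ** 2 - 0.1 * k * k

candidates = [0.2, 0.9, 1.7]
best = max(candidates, key=lambda q: noisy_fitness(q, score))
```

Averaging keeps a single unlucky realization from eliminating a genuinely good design — the rationale for preferring an NGA over a deterministic GA under uncertainty.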

Journal ArticleDOI
TL;DR: A systems analysis of a prototypic quorum sensing network whose operation allows bacterial populations to activate certain patterns of gene expression cooperatively and demonstrates the importance of the dimerization of the transcription factor and the presence of the auxiliary positive feedback loop on the switch-like behavior of the network and the stability of its "on" and "off" states under the influence of molecular noise.
Abstract: Understanding the relationship between the structural organization of intracellular decision networks and the observable phenotypes they control is one of the exigent problems of modern systems biology. Here we perform a systems analysis of a prototypic quorum sensing network whose operation allows bacterial populations to activate certain patterns of gene expression cooperatively. We apply structural perturbations to the model and analyze the resulting changes in the network behavior with the aim to identify the contribution of individual network elements to the functional fitness of the whole network. Specifically, we demonstrate the importance of the dimerization of the transcription factor and the presence of the auxiliary positive feedback loop on the switch-like behavior of the network and the stability of its “on” and “off” states under the influence of molecular noise.
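The switch-like behavior attributed to dimerization plus positive feedback can be caricatured in one dimension: dimerization gives an effective Hill coefficient of 2, and the feedback loop then supports two stable steady states. The minimal model below is my own illustration (parameter values are arbitrary), not the paper's quorum sensing network.

```python
# Caricature: basal production + cooperative positive feedback - decay.
# dx/dt = 0.02 + x^2 / (0.25 + x^2) - x  has two stable fixed points.

def steady_state(x, dt=0.01, steps=20000):
    """Forward-Euler integration until the trajectory settles."""
    for _ in range(steps):
        dx = 0.02 + x * x / (0.25 + x * x) - x
        x += dt * dx
    return x

off = steady_state(0.1)   # starts below the threshold -> stays "off" (low)
on = steady_state(0.5)    # starts above the threshold -> switches "on" (high)
```

With a Hill coefficient of 1 (no dimerization) the same system loses its bistability — a one-line experiment on this toy that mirrors the structural perturbations the paper applies to its full model.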

Journal ArticleDOI
TL;DR: This paper presents the development of a methodology for identifying the critical sampling locations within a watershed by designating the critical stream locations that should ideally be sampled and utilizes a geographic information system, hydrologic simulation model, and fuzzy logic.
Abstract: The principal instrument for temporally and spatially managing water resources is a water quality monitoring network. To date, however, most cases show a clear absence of a concise strategy or methodology for designing monitoring networks, especially when deciding upon the placement of sampling stations. Since water quality monitoring networks can be quite costly, it is very important to design the monitoring network properly so that maximum information extraction can be accomplished, which in turn is vital when informing decision-makers. This paper presents the development of a methodology for identifying the critical sampling locations within a watershed. Hence, it embodies the spatial component in the design of a water quality monitoring network by designating the critical stream locations that should ideally be sampled. For illustration purposes, the methodology focuses on a single contaminant, namely total phosphorus, and is applicable to small, upland, predominantly agricultural-forested watersheds. It takes a number of hydrologic, topographic, soils, vegetative, and land use factors into account. In addition, it includes an economic component, to approximate the number of sampling points affordable under a given budget, and a logistical component, to restrict the analysis to logistically accessible stream reaches. The methodology utilizes a geographic information system (GIS), a hydrologic simulation model, and fuzzy logic.
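To make the fuzzy-logic ingredient concrete, here is a hypothetical sketch of how such a methodology might rank candidate reaches: each factor is mapped to a fuzzy membership in "high phosphorus loading risk", the memberships are combined with a fuzzy AND (minimum), and only accessible reaches within the budget are kept. The factor names, membership shapes, and numbers below are all invented for illustration.

```python
def tri(x, lo, peak, hi):
    """Triangular membership function on [lo, hi] peaking at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def risk(reach):
    slope_m = tri(reach["slope"], 0, 15, 30)   # moderate slopes export most P
    return min(slope_m, reach["agric_frac"])   # fuzzy AND = minimum

reaches = [
    {"id": "R1", "slope": 14, "agric_frac": 0.8, "accessible": True},
    {"id": "R2", "slope": 25, "agric_frac": 0.9, "accessible": True},
    {"id": "R3", "slope": 16, "agric_frac": 0.9, "accessible": False},
]
budget_sites = 1  # economic component: how many stations we can afford
ranked = sorted((r for r in reaches if r["accessible"]),  # logistical filter
                key=lambda r: -risk(r))
chosen = [r["id"] for r in ranked[:budget_sites]]
```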

Journal ArticleDOI
TL;DR: Numerical results show that the SMOGA procedure is robust in generating ‘good’ non-dominated solutions with respect to a number of parameters used in the GA, and performs better than the weighted-sum method in terms of the quality of non- dominated solutions.
Abstract: Solving optimization problems with multiple objectives under uncertainty is generally a very difficult task. Evolutionary algorithms, particularly genetic algorithms, have been shown to be effective in solving this type of complex problem. In this paper, we develop a simulation-based multi-objective genetic algorithm (SMOGA) procedure to solve the build-operate-transfer (BOT) network design problem with multiple objectives under demand uncertainty. The SMOGA procedure integrates stochastic simulation, a traffic assignment algorithm, a distance-based method, and a genetic algorithm (GA) to solve a multi-objective BOT network design problem formulated as a stochastic bi-level mathematical program. To demonstrate the feasibility of the SMOGA procedure, we solve two mean-variance models for determining the optimal toll and capacity in a BOT roadway project subject to demand uncertainty. Using the inter-city expressway in the Pearl River Delta Region of South China as a case study, numerical results show that the SMOGA procedure is robust in generating ‘good’ non-dominated solutions with respect to a number of parameters used in the GA, and performs better than the weighted-sum method in terms of the quality of non-dominated solutions.
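One ingredient any multi-objective GA of this kind needs is a non-dominated (Pareto) filter over simulated objective values. The sketch below is generic, not the SMOGA code: candidates are hypothetical (expected profit, profit variance) pairs from a mean-variance model, with profit maximized and variance minimized.

```python
def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one.
    Objectives: a[0] = expected profit (maximize), a[1] = variance (minimize)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (expected_profit, variance) values for toll/capacity candidates.
population = [(10.0, 4.0), (12.0, 6.0), (9.0, 9.0), (12.0, 3.0), (8.0, 2.0)]
front = pareto_front(population)
```

A weighted-sum method collapses these two objectives into one scalar and can miss parts of a non-convex front, which is one reason the abstract reports better non-dominated solutions from the GA-based procedure.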