scispace - formally typeset

Showing papers on "Complex network published in 2000"


Journal ArticleDOI
27 Jul 2000-Nature
TL;DR: It is found that scale-free networks, which include the World-Wide Web, the Internet, social networks and cells, display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates.
Abstract: Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network1. Complex communication networks2 display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web3,4,5, the Internet6, social networks7 and cells8. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.

7,697 citations
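
The error-tolerance/attack-vulnerability contrast described above can be reproduced with a small simulation. The sketch below (pure Python; the graph size, attachment parameter, and 5% removal fraction are arbitrary choices, not the paper's experimental setup) grows a preferential-attachment network and compares the giant component after random node failures versus targeted removal of the highest-degree hubs.

```python
import random
from collections import defaultdict, deque

def ba_graph(n, m, seed=0):
    """Grow a Barabási–Albert-style graph: each new node attaches m links
    to existing nodes chosen proportionally to their degree."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    targets = list(range(m))   # first new node links to the seed nodes
    repeated = []              # node ids repeated once per unit of degree
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def giant_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 1, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    size += 1
                    queue.append(v)
        best = max(best, size)
    return best

n = 1000
g = ba_graph(n, 2)
hubs = sorted(g, key=lambda u: len(g[u]), reverse=True)[:50]  # attack: top 5%
rand = random.Random(1).sample(list(g), 50)                   # errors: random 5%
print(giant_component(g, rand))  # random failures barely fragment the network
print(giant_component(g, hubs))  # removing a few hubs is far more damaging
```

With the hubs removed, the largest component shrinks much more than under random failures of the same size, which is the asymmetry the paper reports.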


Journal ArticleDOI
05 Oct 2000-Nature
TL;DR: In this paper, the authors present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life, and show that despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems.
Abstract: In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.

4,497 citations


Journal ArticleDOI
TL;DR: Evidence is presented for the occurrence of three classes of small-world networks, each characterized by the form of its vertex connectivity distribution, and it is suggested that the nature of the constraints limiting the addition of new links may be the controlling factor for the emergence of the different classes of networks.
Abstract: We study the statistical properties of a variety of diverse real-world networks. We present evidence of the occurrence of three classes of small-world networks: (a) scale-free networks, characterized by a vertex connectivity distribution that decays as a power law; (b) broad-scale networks, characterized by a connectivity distribution that has a power law regime followed by a sharp cutoff; and (c) single-scale networks, characterized by a connectivity distribution with a fast decaying tail. Moreover, we note for the classes of broad-scale and single-scale networks that there are constraints limiting the addition of new links. Our results suggest that the nature of such constraints may be the controlling factor for the emergence of different classes of networks.

3,074 citations
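
The three classes can be illustrated with the functional forms named in the abstract. The snippet below is a sketch only; the exponent 2.5 and cutoff scale 20 are invented values for illustration, not fits reported by the authors.

```python
import math

# Illustrative degree-distribution shapes for the three classes:
scale_free   = lambda k: k ** -2.5                      # (a) pure power law
broad_scale  = lambda k: k ** -2.5 * math.exp(-k / 20)  # (b) power law + sharp cutoff
single_scale = lambda k: math.exp(-k / 20)              # (c) fast-decaying tail

# Compare how fast each tail falls between k = 10 and k = 100;
# the exponential cutoff suppresses the power-law tail by a factor e^-4.5.
for name, f in [("scale-free", scale_free),
                ("broad-scale", broad_scale),
                ("single-scale", single_scale)]:
    print(name, f(100) / f(10))
```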


Journal ArticleDOI
TL;DR: This paper studies percolation on graphs with completely general degree distribution, giving exact solutions for a variety of cases, including site percolation, bond percolation, and models in which occupation probabilities depend on vertex degree.
Abstract: Recent work on the Internet, social networks, and the power grid has addressed the resilience of these networks to either random or targeted deletion of network nodes or links. Such deletions include, for example, the failure of Internet routers or power transmission lines. Percolation models on random graphs provide a simple representation of this process but have typically been limited to graphs with Poisson degree distribution at their vertices. Such graphs are quite unlike real-world networks, which often possess power-law or other highly skewed degree distributions. In this paper we study percolation on graphs with completely general degree distribution, giving exact solutions for a variety of cases, including site percolation, bond percolation, and models in which occupation probabilities depend on vertex degree. We discuss the application of our theory to the understanding of network resilience.

2,298 citations
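
For uniform (degree-independent) percolation on a configuration-model random graph, the generating-function treatment reduces to the Molloy–Reed criterion, giving a critical occupation probability q_c = ⟨k⟩/(⟨k²⟩ − ⟨k⟩). The sketch below uses assumed parameter values, with truncated sums standing in for the infinite distributions: it checks the classic q_c = 1/z result for a Poisson random graph, and shows how a power-law distribution with exponent ≤ 3 drives q_c toward zero as the degree cutoff grows, which is the resilience effect the paper analyses.

```python
import math

def qc(pk):
    """Critical occupation probability q_c = <k> / (<k^2> - <k>) for
    uniform percolation on a configuration-model random graph."""
    mean = sum(k * p for k, p in pk.items())
    mean2 = sum(k * k * p for k, p in pk.items())
    return mean / (mean2 - mean)

# Poisson degree distribution with mean z (truncated at k = 60):
z = 3.0
poisson = {k: math.exp(-z) * z ** k / math.factorial(k) for k in range(60)}
print(qc(poisson))   # ~1/z for a Poisson random graph

def power_law(gamma, kmax):
    """Normalized power-law degree distribution truncated at kmax."""
    w = {k: k ** -gamma for k in range(1, kmax + 1)}
    s = sum(w.values())
    return {k: v / s for k, v in w.items()}

# For gamma <= 3, <k^2> diverges as kmax grows, driving q_c toward 0:
for kmax in (100, 1000, 10000):
    print(kmax, qc(power_law(2.5, kmax)))
```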


Journal ArticleDOI
TL;DR: In this paper, the authors show that depending on the frequency of local events, two topologically different networks can emerge, the connectivity distribution following either a generalized power law or an exponential.
Abstract: Networks grow and evolve by local events, such as the addition of new nodes and links, or rewiring of links from one node to another. We show that depending on the frequency of these processes two topologically different networks can emerge, the connectivity distribution following either a generalized power law or an exponential. We propose a continuum theory that predicts these two regimes as well as the scaling function and the exponents, in good agreement with numerical results. Finally, we use the obtained predictions to fit the connectivity distribution of the network describing the professional links between movie actors.

1,150 citations
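
The two regimes can be caricatured by the two limiting attachment rules: degree-proportional attachment produces a power-law (hub-dominated) connectivity distribution, while degree-independent attachment produces an exponential one. The sketch below (one link per new node, arbitrary network size; not the authors' full model with link addition and rewiring) contrasts the resulting hub sizes.

```python
import random

def grow(n, preferential, seed=0):
    """Grow a network one node at a time; each new node makes one link.
    Degree-proportional choice -> power-law tail; uniform choice ->
    exponential tail (the two limiting regimes)."""
    rng = random.Random(seed)
    deg = [1, 1]       # start from a single edge
    stubs = [0, 1]     # endpoints repeated once per unit of degree
    for new in range(2, n):
        target = rng.choice(stubs) if preferential else rng.randrange(new)
        deg.append(1)
        deg[target] += 1
        stubs.extend([new, target])
    return deg

pref = grow(20000, True)   # preferential attachment
unif = grow(20000, False)  # uniform attachment
print(max(pref), max(unif))  # hubs are far larger under preferential growth
```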


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that error tolerance is not shared by all redundant systems, but it is displayed only by a class of inhomogeneously wired networks, called scale-free networks.
Abstract: Many complex systems, such as communication networks, display a surprising degree of robustness: while key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. In this paper we demonstrate that error tolerance is not shared by all redundant systems, but it is displayed only by a class of inhomogeneously wired networks, called scale-free networks. We find that scale-free networks, describing a number of systems, such as the World Wide Web, Internet, social networks or a cell, display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected by even unrealistically high failure rates. However, error tolerance comes at a high price: these networks are extremely vulnerable to attacks, i.e. to the selection and removal of a few nodes that play the most important role in assuring the network's connectivity.

483 citations


Journal ArticleDOI
TL;DR: The neural system of the nematode C. elegans, the collaboration graph of film actors, and the oldest US subway system, can now be studied also as metrical networks and are shown to be small-worlds.
Abstract: The small-world phenomenon, popularly known as six degrees of separation, has been mathematically formalized by Watts and Strogatz in a study of the topological properties of a network. Small-world networks are defined in terms of two quantities: they have a high clustering coefficient C like regular lattices and a short characteristic path length L typical of random networks. Physical distances are of fundamental importance in applications to real cases; nevertheless, this basic ingredient is missing in the original formulation. Here, we introduce a new concept, the connectivity length D, that gives harmony to the whole theory. D can be evaluated on a global and on a local scale and plays in turn the role of L and 1/C. Moreover, it can be computed for any metrical network and not only for the topological cases. D has a precise meaning in terms of information propagation and describes in a unified way, both the structural and the dynamical aspects of a network: small-worlds are defined by a small global and local D, i.e., by a high efficiency in propagating information both on a local and global scale. The neural system of the nematode C. elegans, the collaboration graph of film actors, and the oldest US subway system, can now be studied also as metrical networks and are shown to be small-worlds.

300 citations
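
The connectivity length D is the harmonic mean of the pairwise distances, i.e. the reciprocal of the average of 1/d_ij (the global efficiency). A minimal sketch, using a 6-node ring as a toy network:

```python
from collections import deque

def global_efficiency(adj):
    """E = average of 1/d_ij over all ordered node pairs; the
    connectivity length D is its reciprocal, the harmonic mean of the
    pairwise distances."""
    nodes = list(adj)
    total = 0.0
    for s in nodes:
        # BFS shortest path lengths from s (unweighted topological case)
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for t, d in dist.items() if t != s)
    n = len(nodes)
    return total / (n * (n - 1))

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
E = global_efficiency(ring)
print(E, 1 / E)   # efficiency and connectivity length D of a 6-ring
```

For the 6-ring the distances from each node are 1, 1, 2, 2, 3, so E = 2/3 and D = 1.5; on a metrical network the same formula applies with physical distances in place of hop counts.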


Journal ArticleDOI
TL;DR: This paper begins the study of small-world networks as communication networks using graph-theoretic methods to obtain exact results and constructs networks with strong local clustering and small diameter (instead of average distance).

145 citations


Journal ArticleDOI
TL;DR: Smart Packets improves the management of large complex networks by moving management decision points closer to the node being managed, targeting specific aspects of the node for information rather than exhaustive collection via polling, and abstracting the management concepts to language constructs, allowing nimble network control.
Abstract: This article introduces Smart Packets and describes the Smart Packets architecture, the packet formats, the language and its design goals, and security considerations. Smart Packets is an Active Networks project focusing on applying active networks technology to network management and monitoring. Messages in active networks are programs that are executed at nodes on the path to one or more target hosts. Smart Packets programs are written in a tightly encoded, safe language specifically designed to support network management and avoid dangerous constructs and accesses. Smart Packets improves the management of large complex networks by (1) moving management decision points closer to the node being managed, (2) targeting specific aspects of the node for information rather than exhaustive collection via polling, and (3) abstracting the management concepts to language constructs, allowing nimble network control.

139 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the first systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life and show that, despite significant variances in their individual constituents and pathways, these metabolic networks display the same topologic scaling properties demonstrating striking similarities to the inherent organization of complex non-biological systems.
Abstract: In a cell or microorganism the processes that generate mass, energy, information transfer, and cell fate specification are seamlessly integrated through a complex network of various cellular constituents and reactions. However, despite the key role these networks play in sustaining various cellular functions, their large-scale structure is essentially unknown. Here we present the first systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variances in their individual constituents and pathways, these metabolic networks display the same topologic scaling properties demonstrating striking similarities to the inherent organization of complex non-biological systems. This suggests that the metabolic organization is not only identical for all living organisms, but complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.

138 citations


Journal ArticleDOI
TL;DR: In this paper, a model of decentralized growth and development for artificial neural networks (ANNs), inspired by developmental biology and the physiology of nervous systems, is presented, where each individual artificial neuron is an autonomous unit whose behavior is determined only by the genetic information it harbors and local concentrations of substrates.
Abstract: We present a model of decentralized growth and development for artificial neural networks (ANNs), inspired by developmental biology and the physiology of nervous systems. In this model, each individual artificial neuron is an autonomous unit whose behavior is determined only by the genetic information it harbors and local concentrations of substrates. The chemicals and substrates, in turn, are modeled by a simple artificial chemistry. While the system is designed to allow for the evolution of complex networks, we demonstrate the power of the artificial chemistry by analyzing engineered (handwritten) genomes that lead to the growth of simple networks with behaviors known from physiology. To evolve more complex structures, a Java-based, platform-independent, asynchronous, distributed genetic algorithm (GA) has been implemented that allows users to participate in evolutionary experiments via the World Wide Web.

Proceedings ArticleDOI
15 Feb 2000
TL;DR: This paper proposes the design of a scalable, high performance active router that is used as a vehicle for studying the key design issues that must be resolved to allow active networking to become a mainstream technology.
Abstract: Active networking is a general approach to incorporating general-purpose computational capabilities within the communications infrastructure of data networks. This paper proposes the design of a scalable, high performance active router. This is used as a vehicle for studying the key design issues that must be resolved to allow active networking to become a mainstream technology.

01 Jan 2000
TL;DR: In this paper, the authors discuss why companies should consider collaboration with customers and suppliers for innovation, identify a set of activities that appear to be critical to managing collaborative innovation, and conceptualise how these activities may be affected when performed in complex networks, which complicates the task of trying to manage them effectively.
Abstract: This paper discusses why companies should consider collaboration with customers and suppliers for innovation and identifies a set of activities that appear to be critical to managing collaborative innovation. It conceptualises how these activities may be affected when performed in complex networks, thus complicating the task of trying to manage them effectively. The paper reports on findings from a small set of exploratory interviews and discusses some possible explanations for apparent cross-case differences. A note on methodological and theoretical lessons completes the paper.

Introduction

Concepts such as ‘supply chain management’, ‘partnerships’, and ‘networking’ are becoming established as best practice across a variety of sectors. Whereas these primarily concern how companies should manage their operations in some form of partnership along the supply chain they also have profound effects on the way in which companies innovate; concepts such as ‘early supplier involvement in product development’ and ‘innovation networks’ are the latest buzz words. The majority of these concepts, however, adopt a rather isolated view of partnerships, largely ignoring the embeddedness of such dyadic relationships in complex networks. As discussed by IMP (e.g. Håkansson 1987; Håkansson and Snehota 1995) any relationship, and thus innovation performed within relationships, is heavily dependent on developments in a large range of both direct and indirect relationships. On the one hand this dependency means that innovation performed in individual partnerships is constrained by what happens elsewhere in the network. On the other hand the very same network may also permit companies to gain access to and deploy technologies located in the network. 
Whereas IMP have traditionally described why this may be the case, little attention has been paid to developing proactive ways of better coping with the problems of networks whilst at the same time exploring and exploiting the pool of technologies potentially available in the network.

Collaborative Innovation

Innovation is increasingly recognised as being the result of the combination of different knowledge and expertise that exist within different organisations i.e. relationships may have interactive and complementary effects on technological innovation. Hence, it is not surprising that there has been a strong upsurge of various forms of inter-organisational collaborative ventures for innovation (Freeman 1991; Hagedoorn 1995; Hagedoorn and Schakenraad 1990). Our primary focus is on vertical collaboration that takes place within buyer-supplier relationships, but which in reality is affected by a myriad of what can only simplistically be conceived as ‘vertical’ and ‘horizontal’ relationships. The following section briefly examines the literature on customer and supplier collaboration, explaining the reasons why companies should collaborate and identifying the key activities of managing these two forms of collaboration.

Manufacturer-Customer Collaboration

Marketing focuses on the needs and demands of the customer. Thus analysis of customer requirements has been the natural starting point of new product development in marketing. Such analysis traditionally includes initial identification of customer needs, evaluation of product potential, and eventual testing of products (using for example Quality Function Deployment techniques). However, when companies seek to develop novel or complex products and technologies and to market these into markets that are either not well defined or do not exist, traditional marketing tools are of limited use (Tidd et al 1997). 
Moreover, in business markets companies are less likely to conduct large scale surveys of customer needs; collaboration with individual customers or users is often a way to increase the chances of developing successful new products and technologies. Von Hippel’s seminal research on user-initiated (novel) innovation from the 1970s pointed out the dominant role of users in idea generation (1988), and his studies are now supported by a large number of empirical studies (e.g. Foxall and Tierney 1984; Shaw 1985; Biemans 1989; Voss 1985; Parkinson 1982; Gemünden et al 1996). These studies also contributed to an extension and refinement of Von Hippel’s early concept of CAP, extending the role of users to include not only idea generation, but in some cases all stages of product innovation. Håkansson’s research on supplier-customer interaction during technological development (1987; 1989) has also conceptualised and provided further empirical evidence for this stream of research. Von Hippel’s research has led to one particularly influential framework: the concept of ‘lead users’ (1986). These were defined as those users who a) face needs that will be general in a marketplace, but do so months or years before the bulk of the market and b) are positioned to benefit significantly by obtaining a solution to those needs. The research on manufacturer-customer interaction in innovation by Von Hippel and his peers, which started in the mid 1970s and continued into the 1980s, was seminal in that it blurred the picture of the initiator of innovation. It explored how the source of innovation can vary between industries and specific cases, but that one important source in many industries was the customer, or user, who took an active part in the whole innovation process. In the 1990s various attempts have been made to try to provide some form of guidelines for industry on how to manage interaction with customers. 
Biemans (1995) has identified a series of potential disadvantages of collaborative product development that are often ignored. These include increased dependency, increased cost of co-ordination, requirements of new management skills (most notably the ‘boundary spanner’), changed management of personnel (need to ensure co-operative behaviours), access to confidential information and proprietary skills, dominance by the partner, lack of commitment and loss of critical knowledge and skills. Biemans, however, asserted that a successful co-operation strategy can minimise most of these disadvantages, consisting of four key activities: partner selection, identifying and motivating the right person(s), formulating clear-cut agreements, and managing the on-going relationship. Biemans’ focus within these activities is on similarity of the parties involved balanced by complementarity to ensure compatibility. He also promoted the explicit clarification of the basis of the collaboration, including division of tasks, link with responsibilities, reasons for entering the partnership, goals, project life, contributions, divisions of costs and benefits etc. Clear communication appeared to be Biemans’ main success factor. Apart from Von Hippel’s very specific concept and Biemans’ work, which tended to generalise on industry and situation specific findings, the research on manufacturer-user innovation to date has been largely descriptive. The presumed advantage of collaborating with customers for innovation relates to the generation of product ideas, information about user requirements, comments on new product concepts, assistance on development and testing of prototypes, and assistance in diffusion (Biemans 1989). However, it is not well-established whether and when those advantages pay off. 
Manufacturer-Supplier Collaboration

Manufacturers are increasingly seeking to involve their suppliers in product and process development in an attempt to reduce development cost and time and increase product quality and value (Wynstra 1998). Development costs may be reduced by manufacturers pushing cost and responsibility towards the suppliers and, perhaps more importantly, by suppliers having superior knowledge of the components they supply i.e. specialised product and process technologies (Birou and Fawcett 1993). A range of studies have also shown how this (along with ‘concurrent engineering’ (O’Neal 1993)) may explain shorter development times and also improved quality (see for example Womack et al 1990; Clark et al 1987; Clark and Fujimoto 1991). Whereas the potential benefits are therefore plentiful it has been suggested and shown that involvement of suppliers in innovation may not always be advantageous (Birou 1994; Wynstra 1998). This indicates that whereas there are potential benefits of involving suppliers in innovation, companies are also likely to encounter problems. Håkansson and Eriksson (1993) (partly based on Håkansson 1989) presented four key issues in “getting innovations out of supplier networks”, relating to combining and integrating different supplier relationships: Prioritising, Synchronising, Timing, and Mobilising. (It is not entirely clear whether these four activities relate to customer relationships only or whether they are also relevant for managing relationships with other parties such as suppliers and universities.) Wynstra (1998) later examined the same set of issues, translated them into ‘purchasing activities’, and added another key process: Informing. The problem of timing has been the subject of Bonaccorsi and Lipparini’s work on strategic partnerships in new product development (1994). Their work indicated that different activities may be performed differently at different stages of the innovation project. 
However, the assumption that ‘the earlier the better’ is questionable; ‘timely’ involvement may be more appropriate (Wynstra 1998). Takeuchi and Nonaka (1986) and Imai et al (1985) elaborated on the challenge of learning but also indicated that some (mainly Japanese) companies co-ordinate and manage a large group of both first and second tier suppliers during the development process; they argue that suppliers need to be run like a rugby team, maintaining cohesiveness and balance (like the internal development team). Their findings also highlighted the importance of information sharing (similar to Wynstra’s findings 1998), but seemed to place

01 Jan 2000
TL;DR: A new approach called Net of Irreversible Events in Discrete Time (NIEDT) is proposed for temporal reasoning in domains involving irreversible events: time is discretized, nodes are associated with events, and each value of a node represents the occurrence of an event at a particular instant.
Abstract: The usual way of applying Bayesian networks to the modelling of temporal processes consists in discretizing time and creating an instance of each random variable for each point in time. This method leads to large and complex networks. We present a new approach called Net of Irreversible Events in Discrete Time (NIEDT), for temporal reasoning in domains involving irreversible events. Under this approach, time is discretized, nodes are associated to events, and each value of a node represents the occurrence of an event at a particular instant; this leads to simpler networks. We also define several types of Temporal Noisy Gates, which facilitate the acquisition and representation of uncertain temporal knowledge.
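
The node-count saving can be sketched with simple arithmetic (E = 5 events and T = 10 instants are invented figures, not taken from the paper): a per-slice encoding needs one boolean variable per event per instant, while an event-per-node encoding in the NIEDT spirit needs one multi-valued node per event, with one value per instant plus one for "did not occur", since an irreversible event happens at most once.

```python
# Hypothetical sizing comparison for modelling E irreversible events
# over T discrete instants:
E, T = 5, 10
per_slice_nodes = E * T          # one boolean node per event per instant
niedt_nodes = E                  # one multi-valued node per event
niedt_values_per_node = T + 1    # instants 1..T plus "did not occur"
print(per_slice_nodes, niedt_nodes, niedt_values_per_node)
```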

Proceedings ArticleDOI
26 Mar 2000
TL;DR: This work presents a "policy server" which is being used to provide centralized administration of packet voice gateways and "soft switches" in next generation circuit and packet telephony networks.
Abstract: Policies are increasingly being used to manage complex communication networks. In this paper we present our work on a "policy server" which is being used to provide centralized administration of packet voice gateways and "soft switches" in next generation circuit and packet telephony networks. The policies running in the policy server are specified using a domain independent policy description language (PDL). This paper is motivated by the problem of evaluating policies specified in PDL. We present an algorithm for evaluating policies and study both its theoretical and empirical behavior. We show that the problem of evaluating policies is quite intractable. However we note that the hard instances of the policy evaluation problem are quite rare in real world networks. Under some very realistic assumptions we are able to show that our policy evaluation algorithm is quite efficient and is well suited for enforcing policies in complex networks. These results constitute the first attempt to develop a formal framework to study the informal concepts of policy based network management.

Journal ArticleDOI
TL;DR: Two algorithms based on Freeman's clique–lattice analysis are described: the first implements his technique exactly; the second is a modification that exploits the clique structure, thereby enabling the analysis of large complex networks.

Journal ArticleDOI
TL;DR: A feature-based object-oriented data model for transportation networks is proposed, which breaks the limitations of the arc-node planar graph model in feature description and topology expression of complicated networks.
Abstract: Networks can be used as a model for the representation and analysis of the physical world. Traditional GIS data models need to be improved if GIS is to be more suitable for network modeling. This paper proposes a feature-based object-oriented data model for transportation networks. It breaks the limitations of the arc-node planar graph model in feature description and topology expression of complicated networks. The basic classes in the data model are explicitly defined. The difference and relationship between physical networks and virtual networks, and the topology representation in virtual networks, are discussed. An object-oriented network analysis method is used to construct virtual networks over physical networks in order to make abstract networks more suitable for analysis; the paper includes a detailed example of a multi-modal traffic network.
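
A minimal illustrative sketch of the idea of virtual networks layered over a physical network; the class and attribute names below are our own, not the schema defined in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str

@dataclass
class Link:
    id: str
    frm: Node              # 'from' is a Python keyword, hence 'frm'
    to: Node
    mode: str = "road"     # physical transport mode

@dataclass
class Route:
    """A virtual-network feature (e.g. a bus line) built over physical links."""
    id: str
    links: list = field(default_factory=list)

# A bus line riding on two physical road links:
a, b, c = Node("A"), Node("B"), Node("C")
ab, bc = Link("AB", a, b), Link("BC", b, c)
bus7 = Route("bus-7", [ab, bc])
print([l.id for l in bus7.links])
```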

Journal ArticleDOI
TL;DR: A typed λ-calculus in which computer networks can be formalized is introduced, directed at situations where the services available on the network are stationary, while the information can flow freely.
Abstract: In this paper we introduce a typed λ-calculus in which computer networks can be formalized, directed at situations where the services available on the network are stationary, while the information can flow freely. For this calculus, an analogue of the ‘propositions-as-types’ interpretation of constructive type theory holds with respect to information push and pull in computer networks: in the calculus a type represents a task that the user wants to carry out, while a term inhabiting this type represents a procedure that will yield the desired result. Under this interpretation, techniques for theorem proving can be used for finding a procedure to achieve a certain task on the network. Techniques for type checking can be used for checking a complex network program before running it. Reductions on terms can be used for finding alternative procedures to achieve a certain task. Terms constructed in this abstract calculus can be ‘compiled’ to procedures which are executable on the actual network. We show this for a simple Unix network.

Proceedings ArticleDOI
29 Oct 2000
TL;DR: Attention is focused on convergence aspects of an improved version of the LEGO algorithm: a technique is presented that is guaranteed to converge to a local optimum of a newly formulated network objective function, which minimizes the total network transmit power subject to arbitrary channel capacity constraints.
Abstract: Locally enabled, globally optimized (LEGO) wireless networks offer paradigm shifting performance enhancements for wireless networks equipped with multiple antennas. In this paper attention is focused on some convergence aspects of an improved version of the LEGO algorithm. A technique is presented which is guaranteed to converge to a local optimum of a newly formulated network objective function, that minimizes the total network transmit power subject to arbitrary channel capacity constraints. Networks that possess channel reciprocity can efficiently implement the LEGO algorithm using highly localized information, obviating the need for complex network controllers. Moreover the LEGO algorithm can efficiently exploit MIMO channel and network topology diversity to multiply the capacity of the network. A numerical experiment is presented which suggests several orders of magnitude performance improvement over more conventional networks.

Journal ArticleDOI
TL;DR: In this article, the authors define complex technologies as those products or processes that cannot be understood in full detail by an individual expert sufficiently to communicate the details across time and space and identify three distinct patterns within which these networks operate: transformational, normal and transitional.
Abstract: Technology leaders need to view uncertainty and instability as the expected condition, failure as essential to learning, and rapid adaptability as the new bottom line.

OVERVIEW: Participation in self-organizing networks of firms that exist to carry out the repeated innovation of complex technologies is becoming increasingly important. To achieve and maintain leadership, these networks must achieve a mutually reinforcing fit among organizational core capabilities, complementary assets, learning, and linkages to the external environment. Three distinct patterns have been identified within which these networks operate: transformational, normal and transitional. Each presents its own management opportunities and requirements, but all call for a premium to be placed on exploratory, experimental approaches to decision making.

The economic success of companies today increasingly depends on participation in complex self organizing networks that innovate complex technologies. We define complex technologies as those products or processes that cannot be understood in full detail by an individual expert sufficiently to communicate the details across time and space. Aircraft and telecommunications equipment are common examples of complex product technologies; "lean" or "agile" production systems are examples of complex process technologies. Complex networks are those linked organizations (e.g., firms, universities and government agencies) that create, acquire and integrate the diverse knowledge and skills, both tacit and explicit, required to innovate complex technologies. Strategic alliances, joint ventures and other types of more informal collaboration are examples of complex networks. Self organization refers to the capacity these networks have for reordering themselves into more complex structures and for using more complex processes without centralized, detailed management guidance. 
The trend toward complexity is suggested by the fact that in 1970 complex technologies comprised 43 percent of the 30 most valuable world exports of goods, but by 1996 complex technologies represented 84 percent of those goods. The innovation of complex technologies is often characterized by rapid, highly disruptive, discontinuous change. We described a number of indicators of these disruptions, and provided some preliminary "rules of thumb" for managers confronting them, in a recent issue of Research Technology Management (1). Here we want to go a step further by outlining three distinct innovation patterns and characterizing the opportunities and challenges each poses for managers. We begin with two insights from the study of complex technological innovation. First, managers must be deeply immersed in any complex innovation process, but must avoid seeking to control it. The reason for this is that managers operate without sufficient understanding of the diverse knowledge needed to direct successful innovation in detail. Moreover, complex network organizations often react to direction in nonlinear ways: they may generate new adaptations in the form of novel organizational properties or characteristics (e.g., new ways of interaction among work groups) that are "emergent" and thus difficult to predict or guide. Rather, managers must try to affect the context of innovation: to establish and modify the direction and the boundaries within which effective, improvised and self-organized solutions can evolve. Instead of trying to directly shape a strategy or decision, managers must shape the organizational environment within which these choices emerge (2). Second, the modification of the context of organizational choices is heavily dependent on the language that managers use. We believe a new terminology is one of the major contributions that the study of complexity makes to the management of complex organizations and their technologies.
Terms like "self-organization" or "emergence" give voice and substance to the creative, dynamic, non-linear, and evolutionary work that is already typical of much management and is likely to become the focus of many more managers in the future (3). …

Book ChapterDOI
01 Jan 2000
TL;DR: In modern research in this field a new class of models, based on bio-computing and artificial intelligence, has recently come to the fore and demonstrated a high potential in modelling high-dimensional spatial networks.
Abstract: The analysis of complex networks has in recent years become an important research issue in spatial economics and regional science. An important methodological step forward in this context has been offered by synergetic theory and the relative dynamics concept of network evolution (see, for a review, Nijkamp and Reggiani 1998). These concepts have intensified the search for universal principles driving non-linear dynamic systems, with a particular interest in methodological underpinnings and instruments. In modern research in this field, a new class of models, based on bio-computing and artificial intelligence, has recently come to the fore. These new approaches have demonstrated a high potential in modelling high-dimensional spatial networks.

Journal ArticleDOI
TL;DR: A web-based framework for the design and analysis of computer networks was developed that provides a flexible and robust environment for selecting and verifying the optimal solution from a large and complex solution space.
Abstract: The gradual acceptance of high-performance networks as a fundamental component of today's computing environment has allowed applications to evolve from static entities located on specific hosts to dynamic, distributed entities that are resident on one or more hosts. In addition, vital components of software and data used by an application may be distributed across the local/wide area network. Given such a fluid and dynamic environment, the design and analysis of high-performance communication networks (using off-the-shelf components offered by third party manufacturers) has been further complicated by the diversity of the available components. To alleviate these problems and to address the verification and validation issues involved in engineering such complex networks, a web-based framework for the design and analysis of computer networks was developed. Using the framework, a designer can explore design alternatives by constructing and analyzing configurations of the design using components offered by different researchers and manufacturers. The framework provides a flexible and robust environment for selecting and verifying the optimal solution from a large and complex solution space. This paper presents issues involved in the design and development of the framework.
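The exhaustive design-space search such a framework performs over off-the-shelf components can be sketched minimally. The component catalogue, costs, and throughput figures below are invented purely for illustration; they are not from the paper's framework:

```python
from itertools import product

# Hypothetical catalogue: each design slot offers off-the-shelf alternatives
# as (name, cost in dollars, throughput in Mb/s). All figures are illustrative.
catalogue = {
    "switch": [("SwitchA", 400, 100), ("SwitchB", 900, 1000)],
    "router": [("RouterX", 1200, 155), ("RouterY", 2500, 622)],
    "uplink": [("T1", 300, 1.5), ("OC3", 1500, 155)],
}

def explore(budget, min_throughput):
    """Enumerate every configuration; return the cheapest feasible one."""
    best = None
    for combo in product(*catalogue.values()):
        cost = sum(c for _, c, _ in combo)
        throughput = min(t for _, _, t in combo)  # bottleneck component
        if cost <= budget and throughput >= min_throughput:
            if best is None or cost < best[0]:
                best = (cost, throughput, [name for name, _, _ in combo])
    return best

# Cheapest design meeting 100 Mb/s under a $5000 budget
print(explore(budget=5000, min_throughput=100))
# → (3100, 100, ['SwitchA', 'RouterX', 'OC3'])
```

Real frameworks of this kind replace the brute-force loop with pruning or heuristic search, since the configuration space grows as the product of the alternatives per slot.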

Journal ArticleDOI
TL;DR: In this paper, a new subnetwork analysis is developed to determine multiple steady states in family members of complex reaction networks; it can determine the capacity for multiple steady states in more general families of networks than the previous version.
Abstract: A new version of subnetwork analysis is developed to determine multiple steady states in family members of complex reaction networks. In the analysis, one of the subnetworks admits a zero-eigenvalue steady state whose eigenvector lies in the stoichiometric subspace. The new version can be used to determine the capacity for multiple steady states in more general family networks than the old one. Because it does not require simplifying the complex network through quasi-steady-state manipulation, the method allows a large family of networks to be studied at once instead of case by case. These advantages are demonstrated by an enzyme kinetics scheme involving two substrates operating in an isothermal CSTR. The hysteresis and bifurcation diagrams of the studied reaction network are presented. The effects of rate constants on the steady-state multiplicity are discussed.
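The zero-eigenvalue condition the abstract refers to can be illustrated numerically. The toy network, rate constants, and steady state below are hypothetical (a simple reversible association, not the paper's two-substrate enzyme system): we build the stoichiometric matrix N, verify a steady state of dc/dt = N v(c), and test whether the Jacobian has a zero eigenvalue whose eigenvector lies in the stoichiometric subspace, i.e. the column space of N.

```python
import numpy as np

# Hypothetical toy network:  A + B -> C  (rate k1*a*b),  C -> A + B  (rate k2*c)
# Stoichiometric matrix N: rows = species (A, B, C), columns = reactions.
N = np.array([[-1.0,  1.0],
              [-1.0,  1.0],
              [ 1.0, -1.0]])

k1, k2 = 2.0, 1.0  # illustrative rate constants

def rates(c):
    a, b, cc = c
    return np.array([k1 * a * b, k2 * cc])

def jacobian(c):
    # J = N @ dv/dc, the Jacobian of the species ODEs dc/dt = N v(c)
    a, b, cc = c
    dv = np.array([[k1 * b, k1 * a, 0.0],
                   [0.0,    0.0,   k2 ]])
    return N @ dv

c_ss = np.array([1.0, 1.0, 2.0])        # k1*1*1 = k2*2, so N v(c_ss) = 0
assert np.allclose(N @ rates(c_ss), 0.0)

eigvals, eigvecs = np.linalg.eig(jacobian(c_ss))
P = N @ np.linalg.pinv(N)               # orthogonal projector onto col(N)
for lam, v in zip(eigvals, eigvecs.T):
    # A zero eigenvalue with eigenvector inside the stoichiometric subspace
    # is the degeneracy the subnetwork analysis looks for; zero eigenvalues
    # from conservation laws (as here) have eigenvectors transverse to it.
    in_S = np.allclose(P @ v.real, v.real, atol=1e-8)
    print(f"eigenvalue {lam.real:+.3f}, eigenvector in stoichiometric subspace: {in_S}")
```

For this toy network the only eigenvector in the stoichiometric subspace carries a strictly negative eigenvalue, so the degeneracy condition fails, which is the non-degenerate (unique steady state) case.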

Journal ArticleDOI
TL;DR: This paper investigates the design of very large IP networks and establishes that there are many issues that need to be considered if an efficient, future-proof design is to be produced.
Abstract: BT has a well-established managed network solutions business. Increasingly, corporate customers are looking for end-to-end solutions that deliver complete business processes. However, the managed network will remain a key component of business transformation. This paper investigates the design of very large IP networks. It establishes that there are many issues that need to be considered if an efficient, future-proof design is to be produced. The design of medium-sized networks is well understood and is a candidate for automation. However, larger networks require a good understanding of the customer needs for the network and any existing systems that need to be included, and a comprehensive understanding of current and emerging IP and data technologies that are available.


Book
01 Jan 2000
TL;DR: A new model for network management wherein various data communications protocols and mechanisms are easily handled as basic management tasks, thereby reducing expenditures of time, effort, and resources is found.
Abstract: From the Publisher: Taking an innovative approach to network management, communications protocols, and data networks, this new book examines a unique method in which protocols and mechanisms are broken down into simple management tasks. It helps you steer your way through the history of network management and communications protocols, and guides you through the design of new protocols and the construction of complex networks operating under a simpler paradigm. Using a new vocabulary to discuss tasks and mechanisms of data communications, the book eliminates the distinction between protocols and management, as well as the distinctions made in phases throughout a network's lifecycle. Here you will find a new model for network management wherein various data communications protocols and mechanisms are easily handled as basic management tasks, thereby reducing expenditures of time, effort, and resources. Using a bottom-up approach and avoiding rigorous mathematical devices, this book applies a new network control theory to all the layers you encounter in network planning, design, and management.

Book ChapterDOI
27 Sep 2000
TL;DR: This article discusses different architectural approaches for information systems supporting complex supply networks, and presents a first approach to a completely decentralised architecture with self co-ordinating units for complex networks of independent companies.
Abstract: As most of today’s products are manufactured in complex networks of independent companies, better collaboration and co-ordination across the network are important levers for improved customer service and efficiency. In this article, we discuss different architectural approaches for information systems supporting complex supply networks. The business requirements are the starting point for the evaluation. Three basic types of architectures can be identified: completely centralised co-ordination; a hybrid architecture of local planning and control modules with central co-ordination; and a completely decentralised architecture with self-co-ordinating units. The last architecture seems most promising for complex networks of independent companies. The article presents a first approach to this rather new architecture. The 5th-Framework IST-project Co-OPERATE will further advance the discussed concepts.