
Showing papers published by Bell Labs in 2000


Proceedings Article
01 Jan 2000
TL;DR: Two different multiplicative algorithms for non-negative matrix factorization are analyzed and one algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence.
Abstract: Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
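
A minimal NumPy sketch of the least-squares multiplicative updates summarized above (the function name, iteration count, and the small epsilon guarding against division by zero are ours, not the paper's); the divergence-minimizing variant differs only in the multiplicative factor used in the updates:

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9, seed=0):
    """NMF with the least-squares multiplicative updates: V (n x m, non-negative)
    is approximated by W @ H with W (n x rank) and H (rank x m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        # Each factor is rescaled by the ratio of the negative and positive
        # parts of the gradient, which preserves non-negativity and, per the
        # paper, never increases the squared error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```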

7,345 citations


Journal ArticleDOI
16 May 2000
TL;DR: A novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor is proposed and the top n points in this ranking are declared to be outliers.
Abstract: In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor. We rank each point on the basis of its distance to its kth nearest neighbor and declare the top n points in this ranking to be outliers. In addition to developing relatively straightforward solutions to finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient partition-based algorithm for mining outliers. This algorithm first partitions the input data set into disjoint subsets, and then prunes entire partitions as soon as it is determined that they cannot contain outliers. This results in substantial savings in computation. We present the results of an extensive experimental study on real-life and synthetic data sets. The results from a real-life NBA database highlight and reveal several expected and unexpected aspects of the database. The results from a study on synthetic data sets demonstrate that the partition-based algorithm scales well with respect to both data set size and data set dimensionality.
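
A brute-force sketch of the ranking criterion itself (not the paper's partition-based pruning algorithm); function and parameter names are illustrative:

```python
import numpy as np

def top_n_outliers(X, k, n):
    """Rank each point by the distance to its k-th nearest neighbor and return
    the indices of the top-n points.  Brute-force O(N^2) version; the paper's
    partition-based algorithm prunes most of this work."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    np.fill_diagonal(d, np.inf)                 # a point is not its own neighbor
    kth_dist = np.sort(d, axis=1)[:, k - 1]     # distance to the k-th nearest neighbor
    return np.argsort(-kth_dist)[:n]            # largest k-NN distances first
```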

1,871 citations


Journal ArticleDOI
S. W. Roberts
TL;DR: In this article, a graphical procedure for generating geometric moving averages is described in which the most recent observation is assigned a weight r, and all previous observations are given weights decreasing in geometric progression from the most recent back to the first.
Abstract: A geometrical moving average gives the most recent observation the greatest weight, and all previous observations weights decreasing in geometric progression from the most recent back to the first. A graphical procedure for generating geometric moving averages is described in which the most recent observation is assigned a weight r. The properties of control chart tests based on geometric moving averages are compared to tests based on ordinary moving averages.
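
A tiny sketch of the recursion behind the geometric moving average, assuming the starting value z0 and the weight r given to the most recent observation are supplied by the user:

```python
def geometric_moving_average(observations, r, z0=0.0):
    """Sketch of the geometric (exponentially weighted) moving average:
    z_t = r * x_t + (1 - r) * z_{t-1}, so the most recent observation gets
    weight r and earlier observations get geometrically decreasing weights."""
    z, out = z0, []
    for x in observations:
        z = r * x + (1 - r) * z
        out.append(z)
    return out
```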

1,490 citations


Journal ArticleDOI
TL;DR: This paper develops ROCK, a robust hierarchical clustering algorithm that employs links rather than distances when merging clusters; experiments indicate that ROCK not only generates better-quality clusters than traditional algorithms but also exhibits good scalability properties.
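
A minimal sketch of the link computation the TL;DR refers to, under the common presentation of ROCK in which two points are neighbors when a user-supplied similarity exceeds a threshold theta (both `similar` and `theta` are assumptions here, not taken from the text above):

```python
def rock_links(points, similar, theta):
    """Neighbors of a point are the other points whose similarity to it is at
    least theta; link(p, q) is the number of neighbors p and q share.  ROCK
    merges clusters based on such link counts rather than distances."""
    n = len(points)
    nbrs = [{j for j in range(n)
             if j != i and similar(points[i], points[j]) >= theta}
            for i in range(n)]
    return {(i, j): len(nbrs[i] & nbrs[j])
            for i in range(n) for j in range(i + 1, n)}
```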

1,383 citations


Journal ArticleDOI
22 Jun 2000-Nature
TL;DR: The model of cortical processing is presented as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex.
Abstract: Digital circuits such as the flip-flop use feedback to achieve multistability and nonlinearity to restore signals to logical levels, for example 0 and 1. Analogue feedback circuits are generally designed to operate linearly, so that signals are over a range, and the response is unique. By contrast, the response of cortical circuits to sensory stimulation can be both multistable and graded. We propose that the neocortex combines digital selection of an active set of neurons with analogue response by dynamically varying the positive feedback inherent in its recurrent connections. Strong positive feedback causes differential instabilities that drive the selection of a set of active neurons under the constraints embedded in the synaptic weights. Once selected, the active neurons generate weaker, stable feedback that provides analogue amplification of the input. Here we present our model of cortical processing as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex.

1,212 citations


Journal ArticleDOI
TL;DR: This work designs some multiple-antenna signal constellations and simulates their effectiveness as measured by bit-error probability with maximum-likelihood decoding and demonstrates that two antennas have a 6-dB diversity gain over one antenna at 15-dB SNR.
Abstract: Motivated by information-theoretic considerations, we propose a signaling scheme, unitary space-time modulation, for multiple-antenna communication links. This modulation is ideally suited for Rayleigh fast-fading environments, since it does not require the receiver to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T×M space-time signals {Φ_l, l = 1, ..., L}, where T represents the coherence interval during which the fading is approximately constant, and M < T is the number of transmit antennas.

1,116 citations


Journal ArticleDOI
TL;DR: A framework for differential modulation with multiple antennas across a continuously fading channel, where neither the transmitter nor the receiver knows the fading coefficients is presented, and a class of diagonal signals where only one antenna is active at any time is introduced.
Abstract: We present a framework for differential modulation with multiple antennas across a continuously fading channel, where neither the transmitter nor the receiver knows the fading coefficients. The framework can be seen as a natural extension of standard differential phase-shift keying commonly used in single-antenna unknown-channel systems. We show how our differential framework links the unknown-channel system with a known-channel system, and we develop performance design criteria. As a special case, we introduce a class of diagonal signals where only one antenna is active at any time, and demonstrate how these signals may be used to achieve full transmitter diversity and low probability of error.
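
A small NumPy sketch of the differential transmit/detect idea described above, assuming V is a constellation of M×M unitary matrices (for the diagonal signals, each V[z] is diagonal); the helper names are ours:

```python
import numpy as np

def differential_transmit(V, data_symbols, M):
    """S_0 = I_M is sent first as a reference; thereafter S_tau = V[z_tau] @ S_{tau-1}."""
    S = np.eye(M, dtype=complex)
    blocks = [S]
    for z in data_symbols:
        S = V[z] @ S
        blocks.append(S)
    return blocks

def differential_detect(V, X_prev, X_curr):
    """Noncoherent detection: choose the signal that best links two consecutive
    received blocks; no knowledge of the fading coefficients is needed."""
    return int(np.argmin([np.linalg.norm(X_curr - Vl @ X_prev) for Vl in V]))
```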

956 citations


Journal ArticleDOI
TL;DR: Suboptimal strategies for combining partial transmit sequences that achieve similar performance but with reduced complexity are presented.
Abstract: Orthogonal frequency-division multiplexing (OFDM) is an attractive technique for achieving high-bit-rate wireless data transmission. However, the potentially large peak-to-average power ratio (PAP) has limited its application. Recently, two promising techniques for improving the PAP statistics of an OFDM signal have been proposed: the selective mapping and partial transmit sequence approaches. Here, we present suboptimal strategies for combining partial transmit sequences that achieve similar performance but with reduced complexity.
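
A hedged sketch of one suboptimal combining strategy in the spirit described above: split the symbol into sub-blocks, form partial transmit sequences, and greedily flip binary phase factors. The interleaved partition, the ±1 phase set, and all names are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_iterative_flip(X, num_blocks=4):
    """Split the frequency-domain OFDM symbol X into disjoint sub-blocks,
    take the IFFT of each (the partial transmit sequences), then greedily
    flip each block's phase factor between +1 and -1 if that lowers the
    PAPR of the combined signal."""
    N = len(X)
    blocks = [np.zeros(N, dtype=complex) for _ in range(num_blocks)]
    for i in range(N):
        blocks[i % num_blocks][i] = X[i]              # interleaved partition (illustrative)
    parts = [np.fft.ifft(b) for b in blocks]          # partial transmit sequences
    phases = np.ones(num_blocks)
    best = papr_db(sum(p * ph for p, ph in zip(parts, phases)))
    for m in range(1, num_blocks):                    # keep the first phase fixed
        phases[m] = -1.0
        cand = papr_db(sum(p * ph for p, ph in zip(parts, phases)))
        if cand < best:
            best = cand
        else:
            phases[m] = 1.0                           # revert if no improvement
    return phases, best
```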

896 citations


Journal ArticleDOI
D. L. Duttweiler
TL;DR: On typical echo paths, the proportionate normalized least-mean-squares (PNLMS) adaptation algorithm converges significantly faster than the normalized least-mean-squares (NLMS) algorithm generally used in echo cancelers to date.
Abstract: On typical echo paths, the proportionate normalized least-mean-squares (PNLMS) adaptation algorithm converges significantly faster than the normalized least-mean-squares (NLMS) algorithm generally used in echo cancelers to date. In PNLMS adaptation, the adaptation gain at each tap position varies from position to position and is roughly proportional at each tap position to the absolute value of the current tap weight estimate. The total adaptation gain being distributed over the taps is carefully monitored and controlled so as to hold the adaptation quality (misadjustment noise) constant. PNLMS adaptation only entails a modest increase in computational complexity.
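
A minimal sketch of one PNLMS tap update as commonly formulated; the step size mu and the small constants rho and delta are illustrative regularization parameters, not values from the paper:

```python
import numpy as np

def pnlms_step(w, x, d, mu=0.2, rho=0.01, delta=1e-4):
    """One PNLMS update.  w: current tap estimates, x: the most recent input
    samples aligned with w, d: desired (echo path) sample.  The per-tap gains
    g are roughly proportional to |w|, so large taps adapt quickly, while the
    normalization keeps the total adaptation gain (misadjustment) controlled."""
    e = d - np.dot(w, x)                                  # a priori error
    g = np.maximum(rho * max(delta, np.abs(w).max()), np.abs(w))
    g = g / g.sum()                                       # distribute a fixed total gain
    w_new = w + mu * g * x * e / (np.dot(g * x, x) + delta)
    return w_new, e
```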

862 citations


Journal ArticleDOI
TL;DR: This paper proposes a systematic method for creating constellations of unitary space-time signals for multiple-antenna communication links; starting from a first signal, the remaining signals are produced by successively rotating it in a high-dimensional complex space.
Abstract: We propose a systematic method for creating constellations of unitary space-time signals for multiple-antenna communication links. Unitary space-time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transmitter nor the receiver knows the fading coefficients. The signals can achieve low probability of error by exploiting multiple-antenna diversity. Because the fading coefficients are not known, the criterion for creating and evaluating the constellation is nonstandard and differs markedly from the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation-an oblong complex-valued matrix whose columns are orthonormal-and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space. This construction easily produces large constellations of high-dimensional signals. We demonstrate its efficacy through examples involving one, two, and three transmitter antennas.
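
A small sketch of the rotation idea, assuming the common form in which the rotation is a diagonal matrix of L-th roots of unity with integer frequencies u (Phi0, u, and L are design inputs; the paper's optimization of such parameters is not shown here):

```python
import numpy as np

def rotated_constellation(Phi0, u, L):
    """The l-th signal is Theta**l @ Phi0, where Phi0 is a T x M matrix with
    orthonormal columns and Theta is a diagonal matrix of L-th roots of unity
    with integer frequencies u = (u_1, ..., u_T)."""
    Theta = np.diag(np.exp(2j * np.pi * np.asarray(u) / L))
    return [np.linalg.matrix_power(Theta, l) @ Phi0 for l in range(L)]
```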

761 citations


Proceedings ArticleDOI
01 Jul 2000
TL;DR: A new progressive compression scheme is proposed for arbitrary-topology, highly detailed and densely sampled meshes arising from geometry scanning; coupled with semi-regular wavelet transforms, zerotree coding, and subdivision-based reconstruction, it improves error by a factor of four compared to other progressive coding schemes.
Abstract: We propose a new progressive compression scheme for arbitrary topology, highly detailed and densely sampled meshes arising from geometry scanning. We observe that meshes consist of three distinct components: geometry, parameter, and connectivity information. The latter two do not contribute to the reduction of error in a compression setting. Using semi-regular meshes, parameter and connectivity information can be virtually eliminated. Coupled with semi-regular wavelet transforms, zerotree coding, and subdivision-based reconstruction, we see improvements in error by a factor of four (12 dB) compared to other progressive coding schemes.

Book ChapterDOI
Arthur Ashkin
TL;DR: The history of optical trapping and manipulation of small neutral particles is reviewed in this paper, from the time of its origin in 1970 up to the present; the unique characteristics of this technique are having a major impact on the many subfields of physics, chemistry, and biology where small particles play a role.
Abstract: Reviews the history of optical trapping and manipulation of small-neutral particles, from the time of its origin in 1970 up to the present. As we shall see, the unique characteristics of this technique are having a major impact on the many subfields of physics, chemistry, and biology where small particles play a role.

Proceedings ArticleDOI
01 Jun 2000
TL;DR: This analysis of the development process of the Apache web server reveals a unique process, which performs well on important measures, and concludes that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.
Abstract: According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.

Journal ArticleDOI
TL;DR: The experimental results show that it is indeed possible to find a simple criterion, a state space representation, and a simulated user parameterization in order to automatically learn a relatively complex dialog behavior, similar to one that was heuristically designed by several research groups.
Abstract: We propose a quantitative model for dialog systems that can be used for learning the dialog strategy. We claim that the problem of dialog design can be formalized as an optimization problem with an objective function reflecting different dialog dimensions relevant for a given application. We also show that any dialog system can be formally described as a sequential decision process in terms of its state space, action set, and strategy. With additional assumptions about the state transition probabilities and cost assignment, a dialog system can be mapped to a stochastic model known as Markov decision process (MDP). A variety of data driven algorithms for finding the optimal strategy (i.e., the one that optimizes the criterion) is available within the MDP framework, based on reinforcement learning. For an effective use of the available training data we propose a combination of supervised and reinforcement learning: the supervised learning is used to estimate a model of the user, i.e., the MDP parameters that quantify the user's behavior. Then a reinforcement learning algorithm is used to estimate the optimal strategy while the system interacts with the simulated user. This approach is tested for learning the strategy in an air travel information system (ATIS) task. The experimental results we present in this paper show that it is indeed possible to find a simple criterion, a state space representation, and a simulated user parameterization in order to automatically learn a relatively complex dialog behavior, similar to one that was heuristically designed by several research groups.
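
A toy sketch of the reinforcement-learning half of the approach: tabular Q-learning against a simulated user. The states, actions, cost signal, and the `simulated_user` interface are placeholders, not the ATIS system or the paper's exact algorithm:

```python
import random
from collections import defaultdict

def learn_dialog_strategy(simulated_user, states, actions, episodes=5000,
                          alpha=0.1, gamma=0.95, epsilon=0.1):
    """Toy tabular Q-learning against a simulated user.  `simulated_user(s, a)`
    must return (next_state, cost, done); it stands in for the user model that
    the paper estimates with supervised learning.  Costs are minimized."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = random.choice(states), False
        while not done:
            a = (random.choice(actions) if random.random() < epsilon
                 else min(actions, key=lambda b: Q[(s, b)]))
            s2, cost, done = simulated_user(s, a)
            target = cost + (0.0 if done else gamma * min(Q[(s2, b)] for b in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return {s: min(actions, key=lambda b: Q[(s, b)]) for s in states}
```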

Proceedings ArticleDOI
Babak Hassibi
05 Jun 2000
TL;DR: An efficient square-root algorithm for the nulling and cancellation step of BLAST is developed, which reduces the computational cost of the scheme by a factor of 0.7M and makes it attractive for implementation in fixed-point (rather than floating-point) architectures.
Abstract: Bell Labs Layered Space-Time (BLAST) is a scheme for transmitting information over a rich-scattering wireless environment using multiple receive and transmit antennas. The main computational bottleneck in the BLAST algorithm is a "nulling and cancellation" step, where the optimal ordering for the sequential estimation and detection of the received signals is determined. To reduce the computational cost of BLAST, we develop an efficient square-root algorithm for the nulling and cancellation step. The main features of the algorithm include efficiency: the computational cost is reduced by a factor of 0.7M, where M is the number of transmit antennas; and numerical stability: the algorithm is division-free and uses only orthogonal transformations. In a 14-antenna system designed for transmission of 1 Mbit/s over a 30 kHz channel, the nulling and cancellation computation is reduced from 190 MFlops/s to 19 MFlops/s, with the overall computations being reduced from 220 MFlops/s to 49 MFlops/s. The numerical stability of the algorithm also makes it attractive for implementation in fixed-point (rather than floating-point) architectures.
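
For context, a sketch of the conventional ordered nulling-and-cancellation step that the square-root algorithm accelerates (zero-forcing variant shown for simplicity; this is not the paper's square-root algorithm itself):

```python
import numpy as np

def vblast_detect(H, y, slicer):
    """Ordered nulling and cancellation: at each stage, null with the
    pseudo-inverse, detect the layer with the best post-nulling SNR, then
    cancel its contribution.  H: (N x M) channel matrix, y: received vector,
    slicer: maps a soft estimate to the nearest constellation point."""
    H = np.asarray(H, dtype=complex)
    y = np.array(y, dtype=complex)                       # working copy
    M = H.shape[1]
    remaining = list(range(M))
    s_hat = np.zeros(M, dtype=complex)
    while remaining:
        G = np.linalg.pinv(H[:, remaining])                  # zero-forcing nulling matrix
        k = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))   # row with best post-nulling SNR
        idx = remaining[k]
        s_hat[idx] = slicer(G[k] @ y)                        # detect this layer
        y = y - H[:, idx] * s_hat[idx]                       # cancel its contribution
        remaining.pop(k)
    return s_hat
```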

Journal ArticleDOI
TL;DR: It is demonstrated that the entire optical network design problem can be considerably simplified and made computationally tractable, and that terminating the optimization within the first few iterations of the branch-and-bound method provides high-quality solutions.
Abstract: We present algorithms for the design of optimal virtual topologies embedded on wide-area wavelength-routed optical networks. The physical network architecture employs wavelength-conversion-enabled wavelength-routing switches (WRS) at the routing nodes, which allow the establishment of circuit-switched all-optical wavelength-division multiplexed (WDM) channels, called lightpaths. We assume packet-based traffic in the network, such that a packet travelling from its source to its destination may have to multihop through one or more such lightpaths. We present an exact integer linear programming (ILP) formulation for the complete virtual topology design, including choice of the constituent lightpaths, routes for these lightpaths, and intensity of packet flows through these lightpaths. By minimizing the average packet hop distance in our objective function and by relaxing the wavelength-continuity constraints, we demonstrate that the entire optical network design problem can be considerably simplified and made computationally tractable. Although an ILP may take an exponential amount of time to obtain an exact optimal solution, we demonstrate that terminating the optimization within the first few iterations of the branch-and-bound method provides high-quality solutions. We ran experiments using the CPLEX optimization package on the NSFNET topology, a subset of the PACBELL network topology, as well as a third random topology to substantiate this conjecture. Minimizing the average packet hop distance is equivalent to maximizing the total network throughput under balanced flows through the lightpaths. The problem formulation can be used to design a balanced network, such that the utilizations of both transceivers and wavelengths in the network are maximized, thus reducing the cost of the network equipment. We analyze the trade-offs in budgeting of resources (transceivers and switch sizes) in the optical network, and demonstrate how an improperly designed network may have low utilization of any one of these resources. We also use the problem formulation to provide a reconfiguration methodology in order to adapt the virtual topology to changing traffic conditions.

Proceedings ArticleDOI
26 Mar 2000
TL;DR: This paper presents a new algorithm for dynamic routing of bandwidth-guaranteed tunnels when tunnel routing requests arrive one-by-one and there is no a priori knowledge regarding future requests, and shows that this problem is NP-hard.
Abstract: This paper presents a new algorithm for dynamic routing of bandwidth-guaranteed tunnels when tunnel routing requests arrive one-by-one and there is no a priori knowledge regarding future requests. This problem is motivated by service provider needs for fast deployment of bandwidth-guaranteed services and the consequent need in backbone networks for fast provisioning of bandwidth-guaranteed paths. Offline routing algorithms cannot be used since they require a priori knowledge of all tunnel requests that are to be routed. Instead, on-line algorithms that handle requests arriving one-by-one and that satisfy as many potential future demands as possible are needed. The newly developed algorithm is an on-line algorithm and is based on the idea that a newly routed tunnel must follow a route that does not "interfere too much" with a route that may be critical to satisfy a future demand. We show that this problem is NP-hard. We then develop a path selection heuristic that is based on the idea of deferred loading of certain "critical" links. These critical links are identified by the algorithm as links that, if heavily loaded, would make it impossible to satisfy future demands between certain ingress-egress pairs. Like min-hop routing, the presented algorithm uses link-state information and some auxiliary capacity information for path selection. Unlike previous algorithms, the proposed algorithm exploits any available knowledge of the network ingress-egress points of potential future demands even though the demands themselves are unknown.
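
A hedged sketch of the critical-link idea using networkx, under simplifying assumptions: critical links are approximated by minimum-cut edges of the other ingress-egress pairs and are penalized in the path cost (rather than handled exactly as in the paper's heuristic); edge attribute names and the penalty value are ours:

```python
import networkx as nx

def route_tunnel(G, src, dst, demand, ie_pairs, penalty=10.0):
    """Sketch: G is an nx.DiGraph whose every edge has a 'capacity' attribute
    (residual bandwidth); ie_pairs lists the other ingress-egress pairs whose
    future demands we want to protect.  Edges on a minimum cut of some other
    pair are treated as critical and penalized before shortest-path selection."""
    critical = set()
    for s, t in ie_pairs:
        if (s, t) == (src, dst):
            continue
        _, (reach, non_reach) = nx.minimum_cut(G, s, t, capacity='capacity')
        critical |= {(u, v) for u, v in G.edges if u in reach and v in non_reach}

    # Route only over edges with enough residual capacity, preferring
    # non-critical links (deferred loading of critical ones).
    R = nx.DiGraph()
    for u, v, data in G.edges(data=True):
        if data['capacity'] >= demand:
            R.add_edge(u, v, weight=1.0 + (penalty if (u, v) in critical else 0.0))
    path = nx.shortest_path(R, src, dst, weight='weight')
    for u, v in zip(path, path[1:]):      # reserve bandwidth along the chosen path
        G[u][v]['capacity'] -= demand
    return path
```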

Proceedings ArticleDOI
Mockus, Votta
01 Jan 2000
TL;DR: This study yields several suggestions on how to make version control data useful in diagnosing the state of a software project without significantly increasing the overhead for the developer using the change management system.
Abstract: Large scale software products must constantly change in order to adapt to a changing environment. Studies of historic data from legacy software systems have identified three specific causes of this change: adding new features; correcting faults; and restructuring code to accommodate future changes. Our hypothesis is that a textual description field of a change is essential to understanding why that change was performed. Also, we expect that difficulty, size, and interval would vary strongly across different types of changes. To test these hypotheses we have designed a program which automatically classifies maintenance activity based on a textual description of changes. Developer surveys showed that the automatic classification was in agreement with developer opinions. Tests of the classifier on a different product found that size and interval for different types of changes did not vary across two products. We have found strong relationships between the type and size of a change and the time required to carry it out. We also discovered a relatively large amount of perfective changes in the system we examined. From this study we have arrived at several suggestions on how to make version control data useful in diagnosing the state of a software project, without significantly increasing the overhead for the developer using the change management system.

Journal ArticleDOI
16 May 2000
TL;DR: This paper proposes three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic that incorporates novel optimizations that improve efficiency greatly.
Abstract: Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive and explored a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.

Book ChapterDOI
22 Aug 2000
TL;DR: A family of optimizations implemented in a translation from a linear temporal logic to Büchi automata can enhance the efficiency of model checking, as practiced in tools such as SPIN.
Abstract: We describe a family of optimizations implemented in a translation from a linear temporal logic to Büchi automata. Such optimized automata can enhance the efficiency of model checking, as practiced in tools such as SPIN. Some of our optimizations are applied during preprocessing of temporal formulas, while other key optimizations are applied directly to the resulting Büchi automata independent of how they arose. Among these latter optimizations we apply a variant of fair simulation reduction based on color refinement. We have implemented our optimizations in a translation of an extension to LTL described in [Ete99]. Inspired by this work, a subset of the optimizations outlined here has been added to a recent version of SPIN. Both implementations begin with an underlying algorithm of [GPVW95]. We describe the results of tests we have conducted, both to see how the optimizations improve the sizes of resulting automata, as well as to see how the smaller sizes for the automata affect the running time of SPIN's explicit state model checking algorithm. Our translation is available via a web-server which includes a GUI that depicts the resulting automata: http://cm.bell-labs.com/cm/cs/what/spin/eqltl.html

Proceedings ArticleDOI
01 Jul 2000
TL;DR: This work presents an algorithm to approximate any surface arbitrarily closely with a normal semi-regular mesh, which is useful in numerous applications such as compression, filtering, rendering, texturing, and modeling.
Abstract: Normal meshes are new fundamental surface descriptions inspired by differential geometry. A normal mesh is a multiresolution mesh where each level can be written as a normal offset from a coarser version. Hence the mesh can be stored with a single float per vertex. We present an algorithm to approximate any surface arbitrarily closely with a normal semi-regular mesh. Normal meshes can be useful in numerous applications such as compression, filtering, rendering, texturing, and modeling.

Journal ArticleDOI
TL;DR: There is a surprising consistency over time in the relative amount of web traffic from the server along a path, lending stability to the TERC location solution; these techniques can be used by network providers to reduce the traffic load in their networks.
Abstract: This paper studies the problem of where to place network caches. Emphasis is given to caches that are transparent to the clients since they are easier to manage and they require no cooperation from the clients. Our goal is to minimize the overall flow or the average delay by placing a given number of caches in the network. We formulate these location problems both for general caches and for transparent en-route caches (TERCs), and identify that, in general, they are intractable. We give optimal algorithms for line and ring networks, and present closed form formulae for some special cases. We also present a computationally efficient dynamic programming algorithm for the single server case. This last case is of particular practical interest. It models a network that wishes to minimize the average access delay for a single web server. We experimentally study the effects of our algorithm using real web server data. We observe that a small number of TERCs are sufficient to reduce the network traffic significantly. Furthermore, there is a surprising consistency over time in the relative amount of web traffic from the server along a path, lending a stability to our TERC location solution. Our techniques can be used by network providers to reduce traffic load in their network.

Journal ArticleDOI
TL;DR: Data rate adaptation procedures for CDMA (IS-95), wideband CDMA (cdma2000 and UMTS WCDMA), TDMA (IS-136), and GSM (GPRS and EDGE) are described.
Abstract: Today's cellular systems are designed to achieve 90-95 percent coverage for voice users (i.e., the ratio of signal to interference plus noise must be above a design target over 90 to 95 percent of the cell area). This ensures that the desired data rate which achieves good voice quality can be provided "everywhere". As a result, SINRs that are much larger than the target are achieved over a large portion of the cellular coverage area. For a packet data service, the larger SINR can be used to provide higher data rates by reducing coding or spreading and/or increasing the constellation density. It is straight-forward to see that cellular spectral efficiency (in terms of b/s/Hz/sector) can be increased by a factor of two or more if users with better links are served at higher data rates. Procedures that exploit this are already in place for all the major cellular standards in the world. In this article, we describe data rate adaptation procedures for CDMA (IS-95), wideband CDMA (cdma2000 and UMTS WCDMA), TDMA (IS-136), and GSM (GPRS and EDGE).

Journal ArticleDOI
Audris Mockus, David M. Weiss
TL;DR: The model is built on historic information and used to predict the risk of new changes; applied to 5ESS® software updates, it shows that change diffusion and developer experience are essential to predicting failures.
Abstract: Reducing the number of software failures is one of the most challenging problems of software production. We assume that software development proceeds as a series of changes and model the probability that a change to software will cause a failure. We use predictors based on the properties of a change itself. Such predictors include size in lines of code added, deleted, and unmodified; diffusion of the change and its component subchanges, as reflected in the number of files, modules, and subsystems touched, or changed; several measures of developer experience; and the type of change and its subchanges (fault fixes or new code). The model is built on historic information and is used to predict the risk of new changes. In this paper we apply the model to 5ESS® software updates and find that change diffusion and developer experience are essential to predicting failures. The predictive model is implemented as a Web-based tool to allow timely prediction of change quality. The ability to predict the quality of change enables us to make appropriate decisions regarding inspection, testing, and delivery. Historic information on software changes is recorded in many commercial software projects, suggesting that our results can be easily and widely applied in practice.
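
A minimal stand-in for this kind of predictive model using scikit-learn; the CSV file, column names, and the choice of logistic regression are assumptions for illustration, not the paper's exact model:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical change-history table; column names are placeholders for the kinds
# of predictors described above (size, diffusion, experience, type of change).
changes = pd.read_csv("changes.csv")                    # one row per software change
features = ["lines_added", "lines_deleted", "files_touched",
            "modules_touched", "developer_experience", "is_fault_fix"]
X, y = changes[features], changes["caused_failure"]     # y: 1 if the change caused a failure

model = LogisticRegression(max_iter=1000).fit(X, y)     # simple stand-in for the paper's model
changes["predicted_risk"] = model.predict_proba(X)[:, 1]
```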

Journal ArticleDOI
TL;DR: In this paper, a simple but powerful evanescent-mode analysis shows that the length λ over which the source and drain perturb the channel potential is 1/π of the effective device thickness in the double-gate case and 1/4.810 of the effective diameter in the cylindrical case, in excellent agreement with PADRE device simulations.
Abstract: Short-channel effects in fully-depleted double-gate (DG) and cylindrical, surrounding-gate (Cyl) MOSFETs are governed by the electrostatic potential as confined by the gates, and thus by the device dimensions. The simple but powerful evanescent-mode analysis shows that the length λ, over which the source and drain perturb the channel potential, is 1/π of the effective device thickness in the double-gate case, and 1/4.810 of the effective diameter in the cylindrical case, in excellent agreement with PADRE device simulations. Thus for equivalent silicon and gate oxide thicknesses, evanescent-mode analysis indicates that Cyl-MOSFETs can be scaled to 35% shorter channel lengths than DG-MOSFETs.
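
A one-function sketch of the quoted scale-length results (the 1/π and 1/4.810 factors are taken from the abstract; the geometry labels are ours):

```python
import math

def scale_length(effective_size, geometry="double_gate"):
    """lambda = effective device thickness / pi for the double-gate case,
    and effective diameter / 4.810 for the cylindrical case (per the abstract)."""
    return effective_size / (math.pi if geometry == "double_gate" else 4.810)
```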

Posted Content
TL;DR: This work defines and shows how to construct nonbinary quantum stabilizer codes, establishing a relationship between self-orthogonal codes over GF(q²) and q-ary quantum codes for any prime power q, based on nonbinary error bases.
Abstract: We define and show how to construct nonbinary quantum stabilizer codes. Our approach is based on nonbinary error bases. It generalizes the relationship between self-orthogonal codes over GF(4) and binary quantum codes to one between self-orthogonal codes over GF(q²) and q-ary quantum codes for any prime power q.

Journal ArticleDOI
A. L. Chiu, Eytan Modiano
TL;DR: This work develops traffic grooming algorithms for unidirectional SONET/WDM ring networks and obtains an optimal algorithm which minimizes the number of ADM's by efficiently multiplexing and switching the traffic at the hub.
Abstract: We develop traffic grooming algorithms for unidirectional SONET/WDM ring networks. The objective is to assign calls to wavelengths in a way that minimizes the total cost of electronic equipment [e.g., the number of SONET add/drop multiplexers (ADM's)]. We show that the general traffic grooming problem is NP-complete. However, for some special cases we obtain algorithms that result in a significant reduction in the number of ADM's. When the traffic from all nodes is destined to a single node, and all traffic rates are the same, we obtain a solution that minimizes the number of ADM's. In the more general case of all-to-all uniform traffic we obtain a lower bound on the number of ADM's required, and provide a heuristic algorithm that performs closely to that bound. To account for more realistic traffic scenarios, we also consider distance-dependent traffic, where the traffic load between two nodes is inversely proportional to the distance between them, and again provide a nearly optimal heuristic algorithm that results in substantial ADM savings. Finally, we consider the use of a hub node, where traffic can be switched between different wavelengths, and obtain an optimal algorithm which minimizes the number of ADM's by efficiently multiplexing and switching the traffic at the hub. Moreover, we show that any solution not using a hub can be transformed into a solution with a hub using fewer or the same number of ADM's.

Proceedings ArticleDOI
14 May 2000
TL;DR: The software allows the administrator to easily discover and test the global firewall policy (either a deployed policy or a planned one); it operates on a more understandable level of abstraction and deals with all the firewalls at once.
Abstract: Today, even a moderately sized corporate intranet contains multiple firewalls and routers, which are all used to enforce various aspects of the global corporate security policy. Configuring these devices to work in unison is difficult, especially if they are made by different vendors. Even testing or reverse engineering an existing configuration (say, when a new security administrator takes over) is hard. Firewall configuration files are written in low-level formalisms, whose readability is comparable to assembly code, and the global policy is spread over all the firewalls that are involved. To alleviate some of these difficulties, we designed and implemented a novel firewall analysis tool. Our software allows the administrator to easily discover and test the global firewall policy (either a deployed policy or a planned one). Our tool uses a minimal description of the network topology and directly parses the various vendor-specific low-level configuration files. It interacts with the user through a query-and-answer session, which is conducted at a much higher level of abstraction. A typical question our tool can answer is "from which machines can our DMZ be reached, and with which services?" Thus, the tool complements existing vulnerability analysis tools: it can be used before a policy is actually deployed, it operates on a more understandable level of abstraction, and it deals with all the firewalls at once.

Proceedings ArticleDOI
26 Mar 2000
TL;DR: It is shown that a partial information scenario which uses only aggregated and not per-path information provides sufficient information for a suitably developed algorithm to be able to perform almost as well as the complete information scenario.
Abstract: This paper presents new algorithms for dynamic routing of restorable bandwidth-guaranteed paths. A straightforward solution for the restoration problem is to find two disjoint paths. However, this results in excessive resource usage for backup paths and does not satisfy the implicit service provider requirement of optimizing network resource utilization so as to increase the number of potential future demands that can be routed. We give a new integer programming formulation for this problem. Complete path routing knowledge is a reasonable assumption for a centralized routing algorithm. However, it requires maintenance of non-aggregated or per-path information, which is often not desirable, particularly when distributed routing is preferred. We show that a partial information scenario which uses only aggregated and not per-path information provides sufficient information for a suitably developed algorithm to be able to perform almost as well as the complete information scenario. In this partial information scenario the routing algorithm only knows what fraction of each link's bandwidth is currently used by active paths and what fraction is currently used by backup paths. Obtaining this information is feasible using proposed traffic engineering extensions to routing protocols. We formulate the dynamic restorable bandwidth routing problem in this partial information scenario and develop efficient routing algorithms. We compare the routing performance of this algorithm to a bound obtained using complete information. Our partial information-based algorithm performs very well, and its performance in terms of the number of rejected requests is very close to the full information bound.

Proceedings ArticleDOI
01 Feb 2000
TL;DR: The main result of this paper is a polynomial time approximation scheme for MKP, which helps demarcate the boundary at which instances of GAP become APX-hard.
Abstract: The Multiple Knapsack problem (MKP) is a natural and well-known generalization of the single knapsack problem and is defined as follows. We are given a set of items and bins (knapsacks) such that each item has a profit and a size, and each bin has a capacity. The goal is to find a subset of items of maximum profit such that they have a feasible packing in the bins. MKP is a special case of the Generalized Assignment problem (GAP), where the profit and the size of an item can vary based on the specific bin that it is assigned to. GAP is APX-hard and a 2-approximation for it is implicit in the work of Shmoys and Tardos [26]; thus far, this was also the best known approximation for MKP. The main result of this paper is a polynomial time approximation scheme for MKP. Apart from its inherent theoretical interest as a common generalization of the well-studied knapsack and bin packing problems, it appears to be the strongest special case of GAP that is not APX-hard. We substantiate this by showing that slight generalizations of MKP are APX-hard. Thus our results help demarcate the boundary at which instances of GAP become APX-hard. An interesting aspect of our approach is a PTAS-preserving reduction from an arbitrary instance of MKP to an instance with distinct sizes and profits.