
Showing papers on "Convergence (routing) published in 2006"


Journal ArticleDOI
TL;DR: The normalized and signed gradient dynamical systems associated with a differentiable function are characterized, their asymptotic convergence properties are analyzed, and conditions that guarantee finite-time convergence are identified.

779 citations


01 Feb 2006
TL;DR: This document describes a method by which a Service Provider may use an IP backbone to provide IP Virtual Private Networks (VPNs) for its customers using a "peer model", in which the customers' edge routers send their routes to the Service Provider's edge routers (PE routers).
Abstract: This document describes a method by which a Service Provider may use an IP backbone to provide IP Virtual Private Networks (VPNs) for its customers. This method uses a "peer model", in which the customers' edge routers (CE routers) send their routes to the Service Provider's edge routers (PE routers); there is no "overlay" visible to the customer's routing algorithm, and CE routers at different sites do not peer with each other. Data packets are tunneled through the backbone, so that the core routers do not need to know the VPN routes. [STANDARDS-TRACK]

463 citations


Proceedings ArticleDOI
01 Dec 2006
TL;DR: It is discovered that the more complex proportional-integral algorithm has performance benefits over the simpler proportional algorithm.
Abstract: We analyze two different estimation algorithms for dynamic average consensus in sensing and communication networks, a proportional algorithm and a proportional-integral algorithm. We investigate the stability properties of these estimators under changing inputs and network topologies as well as their convergence properties under constant or slowly-varying inputs. In doing so, we discover that the more complex proportional-integral algorithm has performance benefits over the simpler proportional algorithm.
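As a rough illustration of the simpler of the two schemes, a discrete-time proportional consensus estimator can be sketched as follows. The gains `gamma` and `eps`, the graph, and the inputs are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def proportional_consensus(U, A, gamma=0.05, eps=0.3, steps=5000):
    """Discrete-time sketch of a proportional dynamic average consensus
    estimator: each node tracks its own input and couples to neighbors."""
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
    x = np.zeros(len(U))
    for _ in range(steps):
        x = x + gamma * (U - x) - eps * (L @ x)
    return x

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-node path graph
U = np.array([1.0, 2.0, 3.0])                           # constant inputs
est = proportional_consensus(U, A)
# the mean of the estimates matches the input average, but the individual
# estimates keep a small steady-state disagreement; removing that residual
# error is what motivates adding the integral term
```

The proportional-integral variant augments each node with an integrator state, which drives the steady-state disagreement to zero for constant inputs.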

448 citations


Journal ArticleDOI
TL;DR: This paper analyzes these new methods for symmetric positive definite problems and shows their relation to other modern domain decomposition methods like the new Finite Element Tearing and Interconnect (FETI) variants.
Abstract: Optimized Schwarz methods are a new class of Schwarz methods with greatly enhanced convergence properties. They converge uniformly faster than classical Schwarz methods and their convergence rates are asymptotically much better than the convergence rates of classical Schwarz methods if the overlap is of the order of the mesh parameter, which is often the case in practical applications. They achieve this performance by using new transmission conditions between subdomains which greatly enhance the information exchange between subdomains and are motivated by the physics of the underlying problem. We analyze in this paper these new methods for symmetric positive definite problems and show their relation to other modern domain decomposition methods like the new Finite Element Tearing and Interconnect (FETI) variants.

446 citations


Journal ArticleDOI
TL;DR: A PDE-based level set method that requires minimizing a smooth convex functional under a quadratic constraint, with numerical results for the segmentation of digital images.
Abstract: In this paper, we propose a PDE-based level set method. Traditionally, interfaces are represented by the zero level set of continuous level set functions. Instead, we let the interfaces be represented by discontinuities of piecewise constant level set functions. Each level set function can at convergence only take two values, i.e., it can only be 1 or -1; thus, our method is related to phase-field methods. Some of the properties of standard level set methods are preserved in the proposed method, while others are not. Using this new method for interface problems, we need to minimize a smooth convex functional under a quadratic constraint. The level set functions are discontinuous at convergence, but the minimization functional is smooth. We show numerical results using the method for segmentation of digital images.

382 citations


Journal ArticleDOI
TL;DR: In this paper, the radial basis functions (RBFs) in scattered data fitting and function approximation are incorporated into the conventional level set methods to construct a more efficient approach for structural topology optimization.
Abstract: Level set methods have become an attractive design tool in shape and topology optimization for obtaining lighter and more efficient structures. In this paper, the popular radial basis functions (RBFs) in scattered data fitting and function approximation are incorporated into the conventional level set methods to construct a more efficient approach for structural topology optimization. RBF implicit modelling with multiquadric (MQ) splines is developed to define the implicit level set function with a high level of accuracy and smoothness. An RBF-level set optimization method is proposed to transform the Hamilton-Jacobi partial differential equation (PDE) into a system of ordinary differential equations (ODEs) over the entire design domain by using a collocation formulation of the method of lines. Owing to this mathematical convenience, the original time-dependent initial value problem is changed to an interpolation problem for the initial values of the generalized expansion coefficients. A physically meaningful and efficient extension velocity method is presented to avoid possible problems in the level set methods without reinitialization. The proposed method is implemented in the framework of minimum compliance design that has been extensively studied in topology optimization and its efficiency and accuracy over the conventional level set methods are highlighted. Numerical examples show the success of the present RBF-level set method in the accuracy, convergence speed and insensitivity to initial designs in topology optimization of two-dimensional (2D) structures. It is suggested that the introduction of the radial basis functions to the level set methods can be promising in structural topology optimization.

295 citations


Journal ArticleDOI
11 Aug 2006
TL;DR: A multi-path inter-domain routing protocol called MIRO is presented that offers substantial flexibility, while giving transit domains control over the flow of traffic through their infrastructure and avoiding state explosion in disseminating reachability information.
Abstract: The Internet consists of thousands of independent domains with different, and sometimes competing, business interests. However, the current interdomain routing protocol (BGP) limits each router to using a single route for each destination prefix, which may not satisfy the diverse requirements of end users. Recent proposals for source routing offer an alternative where end hosts or edge routers select the end-to-end paths. However, source routing leaves transit domains with very little control and introduces difficult scalability and security challenges. In this paper, we present a multi-path inter-domain routing protocol called MIRO that offers substantial flexibility, while giving transit domains control over the flow of traffic through their infrastructure and avoiding state explosion in disseminating reachability information. In MIRO, routers learn default routes through the existing BGP protocol, and arbitrary pairs of domains can negotiate the use of additional paths (bound to tunnels in the data plane) tailored to their special needs. MIRO retains the simplicity of BGP for most traffic, and remains backwards compatible with BGP to allow for incremental deployability. Experiments with Internet topology and routing data illustrate that MIRO offers tremendous flexibility for path selection with reasonable overhead.

290 citations


Journal ArticleDOI
TL;DR: A general result is proved on global exponential convergence toward a unique equilibrium point of the neural network solutions by means of a Lyapunov-like approach, and new results on global convergence in finite time are established.

286 citations


Journal ArticleDOI
TL;DR: A fully derivative-free spectral residual method for solving large-scale nonlinear systems of equations that uses in a systematic way the residual vector as a search direction, a spectral steplength that produces a nonmonotone process and a globalization strategy that allows for this nonmonotone behavior.
Abstract: A fully derivative-free spectral residual method for solving large-scale nonlinear systems of equations is presented. It uses in a systematic way the residual vector as a search direction, a spectral steplength that produces a nonmonotone process and a globalization strategy that allows for this nonmonotone behavior. The global convergence analysis of the combined scheme is presented. An extensive set of numerical experiments that indicate that the new combination is competitive and frequently better than well-known Newton-Krylov methods for large-scale problems is also presented.
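A bare-bones version of the spectral residual idea, using the residual as the search direction with a Barzilai-Borwein spectral steplength but omitting the paper's nonmonotone globalization strategy, might look like this. The small linear test system is an assumed toy example for which this plain iteration is known to converge:

```python
import numpy as np

def spectral_residual(F, x0, tol=1e-10, max_iter=1000):
    """Sketch of a derivative-free spectral residual iteration for F(x)=0:
    the residual itself is the search direction, scaled by a spectral
    (Barzilai-Borwein) steplength. No nonmonotone line search included."""
    x = np.asarray(x0, float)
    r = F(x)
    sigma = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        x_new = x - sigma * r            # step along the residual
        r_new = F(x_new)
        s, y = x_new - x, r_new - r
        if abs(s @ y) > 1e-16:
            sigma = (s @ s) / (s @ y)    # spectral (BB) steplength
        x, r = x_new, r_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([5.0, 5.0])
root = spectral_residual(lambda x: A @ x - b, [0.0, 0.0])
```

On harder nonlinear systems the nonmonotone line search of the full method is what keeps such steps from diverging, which is why the paper treats it as an essential component.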

275 citations


Journal ArticleDOI
TL;DR: This correspondence addresses the problem of locating an acoustic source using a sensor network in a distributed manner without transmitting the full data set to a central point for processing by applying a distributed version of the projection-onto-convex-sets (POCS) method.
Abstract: This correspondence addresses the problem of locating an acoustic source using a sensor network in a distributed manner, i.e., without transmitting the full data set to a central point for processing. This problem has been traditionally addressed through the maximum-likelihood framework or nonlinear least squares. These methods, even though asymptotically optimal under certain conditions, pose a difficult global optimization problem. It is shown that the associated objective function may have multiple local optima and saddle points, and hence any local search method might stagnate at a suboptimal solution. In this correspondence, we formulate the problem as a convex feasibility problem and apply a distributed version of the projection-onto-convex-sets (POCS) method. We give a closed-form expression for the projection phase, which usually constitutes the heaviest computational aspect of POCS. Conditions are given under which, when the number of samples increases to infinity or in the absence of measurement noise, the convex feasibility problem has a unique solution at the true source location. In general, the method converges to a limit point or a limit cycle in the neighborhood of the true location. Simulation results show convergence to the global optimum with extremely fast convergence rates compared to the previous methods.
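To make the POCS step concrete, here is a hedged sketch in which each range measurement defines a disk around its sensor and the projection onto each disk has a simple closed form. The sensor geometry and noiseless ranges are invented for illustration and the cyclic-projection form shown here is a textbook variant, not necessarily the paper's distributed formulation:

```python
import numpy as np

def pocs_localize(sensors, ranges, x0, sweeps=200):
    """Cyclic POCS sketch: sensor i defines the disk ||x - s_i|| <= r_i;
    project the current estimate onto each disk in turn."""
    x = np.asarray(x0, float)
    for _ in range(sweeps):
        for s, r in zip(sensors, ranges):
            d = np.linalg.norm(x - s)
            if d > r:                       # outside the disk:
                x = s + r * (x - s) / d     # project onto its boundary
    return x

sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
source = np.array([1.0, 1.0])
ranges = np.linalg.norm(sensors - source, axis=1)   # noiseless ranges
est = pocs_localize(sensors, ranges, x0=[3.0, 2.0])
```

In the noiseless case sketched here the three disks intersect only at the true source, matching the uniqueness condition discussed in the abstract; with noisy ranges the intersection grows and the iterates settle near, rather than at, the true location.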

272 citations


Journal ArticleDOI
TL;DR: This work provides a convergence analysis for widely used registration algorithms such as ICP, using either closest points or tangent planes at closest points and for a recently developed approach based on quadratic approximants of the squared distance function.
Abstract: The computation of a rigid body transformation which optimally aligns a set of measurement points with a surface and related registration problems are studied from the viewpoint of geometry and optimization. We provide a convergence analysis for widely used registration algorithms such as ICP, using either closest points (Besl and McKay, 1992) or tangent planes at closest points (Chen and Medioni, 1991) and for a recently developed approach based on quadratic approximants of the squared distance function (Pottmann et al., 2004). ICP based on closest points exhibits local linear convergence only. Its counterpart which minimizes squared distances to the tangent planes at closest points is a Gauss-Newton iteration; it achieves local quadratic convergence for a zero residual problem and, if enhanced by regularization and step size control, comes close to quadratic convergence in many realistic scenarios. Quadratically convergent algorithms are based on the approach in (Pottmann et al., 2004). The theoretical results are supported by a number of experiments; there, we also compare the algorithms with respect to global convergence behavior, stability and running time.
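A minimal point-to-point ICP sketch, using brute-force closest points and an SVD-based rigid fit, illustrates the locally linearly convergent iteration analyzed above. The grid data and the small misalignment are assumed toy inputs:

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation and translation aligning P to Q (SVD/Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Point-to-point ICP: match each point to its closest point in Q,
    then rigidly re-align, and repeat."""
    P = P.copy()
    for _ in range(iters):
        d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid(P, Q[np.argmin(d2, axis=1)])
        P = P @ R.T + t
    return P

g = np.arange(0.0, 10.0, 2.0)
Q = np.array([[x, y] for x in g for y in g])        # 5x5 target grid
theta = 0.05                                        # small misalignment
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
P0 = Q @ R0.T + np.array([0.3, -0.2])
aligned = icp(P0, Q)
```

With a misalignment this small every closest-point match is already correct, so the sketch converges immediately; for larger perturbations the linear-only local convergence that the paper proves becomes visible.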

Journal ArticleDOI
TL;DR: The paper considers the objective of optimally specifying redundant control effectors under constraints, a problem commonly referred to as control allocation, posed as a mixed ℓ2-norm optimization objective and converted to a quadratic programming formulation.
Abstract: The paper considers the objective of optimally specifying redundant control effectors under constraints, a problem commonly referred to as control allocation. The problem is posed as a mixed ℓ2-norm optimization objective and converted to a quadratic programming formulation. The implementation of an interior-point algorithm is presented. Alternative methods including fixed-point and active set methods are used to evaluate the reliability, accuracy and efficiency of a primal-dual interior-point method. While the computational load of the interior-point method is found to be greater for problems of small size, convergence to the optimal solution is also more uniform and predictable. In addition, the properties of the algorithm scale favorably with problem size.

Journal ArticleDOI
TL;DR: This paper reviews the literature on deterministic and stochastic stepsize rules, and derives formulas for optimal stepsizes for minimizing estimation error, and an approximation is proposed for the case where the parameters are unknown.
Abstract: We address the problem of determining optimal stepsizes for estimating parameters in the context of approximate dynamic programming. The sufficient conditions for convergence of the stepsize rules have been known for 50 years, but practical computational work tends to use formulas with parameters that have to be tuned for specific applications. The problem is that in most applications in dynamic programming, observations for estimating a value function typically come from a data series that can be initially highly transient. The degree of transience affects the choice of stepsize parameters that produce the fastest convergence. In addition, the degree of initial transience can vary widely among the value function parameters for the same dynamic program. This paper reviews the literature on deterministic and stochastic stepsize rules, and derives formulas for optimal stepsizes for minimizing estimation error. These formulas assume certain parameters are known, and an approximation is proposed for the case where the parameters are unknown. Experimental work shows that the approximation provides faster convergence than other popular formulas.
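The tuning issue can be illustrated with a small sketch: smoothing a transient series with the classic 1/n rule versus a generalized harmonic stepsize a/(a+n-1). The series and the value a=10 are assumptions for illustration, not the paper's derived optimal rule:

```python
import numpy as np

def smooth(obs, stepsize):
    """Stochastic-approximation smoothing: theta <- (1-a_n)*theta + a_n*x_n."""
    theta = 0.0
    for n, x in enumerate(obs, start=1):
        a = stepsize(n)
        theta = (1 - a) * theta + a * x
    return theta

# transient series: starts near 0 and drifts toward a limit of 10,
# mimicking value-function observations early in dynamic programming
obs = 10.0 * (1.0 - 0.95 ** np.arange(200))

one_over_n = smooth(obs, lambda n: 1.0 / n)              # plain averaging
harmonic   = smooth(obs, lambda n: 10.0 / (10.0 + n - 1))  # a/(a+n-1), a=10
```

The 1/n rule reproduces the sample mean and so lags well below the limit on transient data, while the slower-decaying harmonic rule weights recent observations more and tracks the limit more closely, which is the behavior the optimal-stepsize formulas are designed to balance.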

Journal ArticleDOI
TL;DR: Methods for improving efficiency through the use of partially converged computational fluid dynamics results allow surrogate models to be built in a fraction of the time required for models based on converged results.
Abstract: Efficient methods for global aerodynamic optimization using computational fluid dynamics simulations should aim to reduce both the time taken to evaluate design concepts and the number of evaluations needed for optimization. This paper investigates methods for improving such efficiency through the use of partially converged computational fluid dynamics results. These allow surrogate models to be built in a fraction of the time required for models based on converged results. The proposed optimization methodologies increase the speed of convergence to a global optimum while the computer resources expended in areas of poor designs are reduced. A strategy which combines a global approximation built using partially converged simulations with expected improvement updates of converged simulations is shown to outperform a traditional surrogate-based optimization.

Journal ArticleDOI
TL;DR: This paper shows that the good convergence properties of the one-unit case are also shared by the full algorithm with symmetrical normalization and the global behavior is illustrated numerically for two sources and two mixtures in several typical cases.
Abstract: The fast independent component analysis (FastICA) algorithm is one of the most popular methods to solve problems in ICA and blind source separation. It has been shown experimentally that it outperforms most of the commonly used ICA algorithms in convergence speed. A rigorous local convergence analysis has been presented only for the so-called one-unit case, in which just one of the rows of the separating matrix is considered. However, in the FastICA algorithm, there is also an explicit normalization step, and it may be questioned whether the extra rotation caused by the normalization will affect the convergence speed. The purpose of this paper is to show that this is not the case and the good convergence properties of the one-unit case are also shared by the full algorithm with symmetrical normalization. A local convergence analysis is given for the general case, and the global behavior is illustrated numerically for two sources and two mixtures in several typical cases.
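For reference, the one-unit iteration whose convergence the analysis starts from can be sketched as below, with the tanh nonlinearity and the explicit normalization step mentioned in the abstract. The two-source mixture is a made-up example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# two independent non-Gaussian sources: uniform and Laplacian
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[2.0, 1.0], [1.0, 1.5]])      # mixing matrix
X = A @ S                                    # observed mixtures

# whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# one-unit FastICA fixed-point iteration with g = tanh
w = np.array([1.0, 0.5])
w /= np.linalg.norm(w)
for _ in range(50):
    gz = np.tanh(w @ Z)
    w = (Z * gz).mean(axis=1) - (1 - gz ** 2).mean() * w
    w /= np.linalg.norm(w)                   # explicit normalization step

y = w @ Z                                    # one recovered component
```

The paper's point is that running this update on all rows at once, with symmetric normalization of the whole matrix, retains the fast local convergence of this single-row version.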

Proceedings ArticleDOI
19 Apr 2006
TL;DR: This work proposes a space-time diffusion scheme that relies only on peer-to-peer communication and allows every node of a sensor network to asymptotically compute the global maximum-likelihood estimate of unknown parameters from measurements corrupted by independent Gaussian noises.
Abstract: We consider a sensor network in which each sensor takes measurements, at various times, of some unknown parameters, corrupted by independent Gaussian noises. Each node can take a finite or infinite number of measurements, at arbitrary times (i.e., asynchronously). We propose a space-time diffusion scheme that relies only on peer-to-peer communication, and allows every node to asymptotically compute the global maximum-likelihood estimate of the unknown parameters. At each iteration, information is diffused across the network by a temporal update step and a spatial update step. Both steps update each node's state by a weighted average of its current value and locally available data: new measurements for the time update, and neighbors' data for the spatial update. At any time, any node can compute a local weighted least-squares estimate of the unknown parameters, which converges to the global maximum-likelihood solution. With an infinite number of measurements, these estimates converge to the true parameter values in the sense of mean-square convergence. We show that this scheme is robust to unreliable communication links, and works in a network with dynamically changing topology.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This work proposes three new algorithms for the distributed averaging and consensus problems: two for the fixed-graph case, and one for the dynamic-topology case, which is the first to be accompanied by a polynomial-time bound on the convergence time.
Abstract: We propose three new algorithms for the distributed averaging and consensus problems: two for the fixed-graph case, and one for the dynamic-topology case. The convergence rates of our fixed-graph algorithms compare favorably with other known methods, while our algorithm for the dynamic-topology case is the first to be accompanied by a polynomial-time bound on the convergence time.
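As background, the fixed-graph baseline such algorithms are measured against is the linear iteration x ← Wx with a doubly stochastic weight matrix, for example Metropolis weights. The sketch below is this standard baseline on an assumed example graph, not one of the paper's three new algorithms:

```python
import numpy as np

def metropolis_weights(A):
    """Metropolis weight matrix for an undirected graph: a standard
    doubly stochastic choice guaranteeing convergence to the average."""
    n = len(A)
    deg = A.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)        # connected 4-node graph
W = metropolis_weights(A)
x = np.array([4.0, 8.0, 0.0, 12.0])        # initial node values
for _ in range(300):
    x = W @ x                              # each node averages with neighbors
```

The convergence rate of such iterations is governed by the second-largest eigenvalue modulus of W, which is the quantity against which the paper's polynomial-time bound is stated.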

Journal ArticleDOI
TL;DR: A new class of discontinuous Galerkin methods (DG) is developed which can be seen as a compromise between standard DG and the finite element (FE) method in the way that it is explicit like standard DG and energy conserving like FE.
Abstract: We have developed and analyzed a new class of discontinuous Galerkin methods (DG) which can be seen as a compromise between standard DG and the finite element (FE) method in the way that it is explicit like standard DG and energy conserving like FE. In the literature there are many methods that achieve some of the goals of explicit time marching, unstructured grid, energy conservation, and optimal higher order accuracy, but as far as we know only our new algorithms satisfy all the conditions. We propose a new stability requirement for our DG. The stability analysis is based on the careful selection of the two FE spaces which verify the new stability condition. The convergence rate is optimal with respect to the order of the polynomials in the FE spaces. Moreover, the convergence behavior is illustrated by a series of numerical experiments.

Journal ArticleDOI
TL;DR: Investigations made in this paper help to better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
Abstract: This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov function for the training of feedforward neural networks. It is observed that such algorithms have interesting parallel with the popular backpropagation (BP) algorithm where the fixed learning rate is replaced by an adaptive learning rate computed using convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with an aim to avoid local minima. This modification also helps in improving the convergence speed in some cases. Conditions for achieving global minimum for these kind of algorithms have been studied in detail. The performances of the proposed algorithms are compared with BP algorithm and extended Kalman filtering (EKF) on three bench-mark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of number of learning iterations and computational time required for convergence. It is found that the proposed algorithms (LF I and II) are much faster in convergence than other two algorithms to attain same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function and effect of adaptive learning rate for faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.

Journal ArticleDOI
TL;DR: This paper establishes the convergence of a multi point flux approximation control volume method on rough quadrilateral grids where the cells are not required to approach parallelograms in the asymptotic limit.
Abstract: This paper establishes the convergence of a multi point flux approximation control volume method on rough quadrilateral grids. By rough grids we refer to a family of refined quadrilateral grids where the cells are not required to approach parallelograms in the asymptotic limit. In contrast to previous convergence results for these methods we consider here a version where the flux approximation is derived directly in the physical space, and not on a reference cell. As a consequence, less regular grids are allowed. However, the extra cost is that the symmetry of the method is lost.

Proceedings ArticleDOI
21 May 2006
TL;DR: This paper asks whether the population of agents responsible for routing the traffic can jointly compute or better learn a Wardrop equilibrium efficiently, and presents a lower bound demonstrating the necessity of adaptive sampling by showing that static sampling methods result in a slowdown that is exponential in the size of the network.
Abstract: We study rerouting policies in a dynamic round-based variant of a well known game theoretic traffic model due to Wardrop. Previous analyses (mostly in the context of selfish routing) based on Wardrop's model focus mostly on the static analysis of equilibria. In this paper, we ask the question whether the population of agents responsible for routing the traffic can jointly compute or better learn a Wardrop equilibrium efficiently. The rerouting policies that we study are of the following kind. In each round, each agent samples an alternative routing path and compares the latency on this path with its current latency. If the agent observes that it can improve its latency then it switches with some probability depending on the possible improvement to the better path. We can show various positive results based on a rerouting policy using an adaptive sampling rule that implicitly amplifies paths that carry a large amount of traffic in the Wardrop equilibrium. For general asymmetric games, we show that a simple replication protocol in which agents adopt strategies of more successful agents reaches a certain kind of bicriteria equilibrium within a time bound that is independent of the size and the structure of the network but only depends on a parameter of the latency functions, that we call the relative slope. For symmetric games, this result has an intuitive interpretation: Replication approximately satisfies almost everyone very quickly. In order to achieve convergence to a Wardrop equilibrium besides replication one also needs an exploration component discovering possibly unused strategies. We present a sampling based replication-exploration protocol and analyze its convergence time for symmetric games. For example, if the latency functions are defined by positive polynomials in coefficient representation, the convergence time is polynomial in the representation length of the latency functions.
To the best of our knowledge, all previous results on the speed of convergence towards Wardrop equilibria, even when restricted to linear latency functions, were pseudopolynomial. In addition to the upper bounds on the speed of convergence, we can also present a lower bound demonstrating the necessity of adaptive sampling by showing that static sampling methods result in a slowdown that is exponential in the size of the network. A further lower bound illustrates that the relative slope is, in fact, the relevant parameter that determines the speed of convergence.

Posted Content
TL;DR: In this article, the authors consider semi-supervised classification when part of the available data is unlabeled and make an assumption relating the behavior of the regression function to that of the marginal distribution.
Abstract: We consider semi-supervised classification when part of the available data is unlabeled. These unlabeled data can be useful for the classification problem when we make an assumption relating the behavior of the regression function to that of the marginal distribution. Seeger (2000) proposed the well-known "cluster assumption" as a reasonable one. We propose a mathematical formulation of this assumption and a method based on density level sets estimation that takes advantage of it to achieve fast rates of convergence both in the number of unlabeled examples and the number of labeled examples.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: It is demonstrated that multi-hop relay protocols can enlarge the algebraic connectivity of the communication network without physically changing the network topology.
Abstract: Consensus protocols are distributed algorithms in networked multi-agent systems. Based on the local information, agents automatically converge to a common consensus state and the convergence speed is determined by the algebraic connectivity of the communication network. In order to achieve a fast consensus seeking, we propose the multi-hop relay protocols, where each agent can expand its knowledge by employing multi-hop paths in the network. We demonstrate that multi-hop relay protocols can enlarge the algebraic connectivity without physically changing the network topology. Moreover, communication delays are discussed and a tradeoff is identified between the convergence speed and the time delay sensitivity.
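The dependence on algebraic connectivity is easy to check numerically. The sketch below is an illustrative construction, not the paper's protocol (which exploits multi-hop information without adding physical links): it compares the Fiedler value of a path graph with and without explicit two-hop couplings:

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian (Fiedler value),
    which governs the convergence speed of consensus protocols."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

n = 6
A1 = np.zeros((n, n))
for i in range(n - 1):
    A1[i, i + 1] = A1[i + 1, i] = 1.0      # path graph

A2 = A1.copy()
for i in range(n - 2):
    A2[i, i + 2] = A2[i + 2, i] = 1.0      # add two-hop "relay" couplings

lam_path = algebraic_connectivity(A1)
lam_relay = algebraic_connectivity(A2)     # strictly larger: faster consensus
```

The increase in the Fiedler value with two-hop coupling mirrors the paper's claim that multi-hop relay enlarges the effective algebraic connectivity, at the cost of the delay sensitivity discussed in the abstract.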

Journal ArticleDOI
01 Mar 2006-EPL
TL;DR: The effect of a non-trivial topology on the dynamics of the so-called Naming Game, a recently introduced model which addresses the issue of how shared conventions emerge spontaneously in a population of agents, is analyzed.
Abstract: In this paper we analyze the effect of a non-trivial topology on the dynamics of the so-called Naming Game, a recently introduced model which addresses the issue of how shared conventions emerge spontaneously in a population of agents. We consider in particular the small-world topology and study the convergence towards the global agreement as a function of the population size N as well as of the parameter p which sets the rate of rewiring leading to the small-world network. As long as p ≫ 1/N, there exists a crossover time scaling as N/p^2 which separates an early one-dimensional-like dynamics from a late-stage mean-field-like behavior. At the beginning of the process, the local quasi-one-dimensional topology induces a coarsening dynamics which allows for a minimization of the cognitive effort (memory) required of the agents. In the late stages, on the other hand, the mean-field-like topology leads to a speed-up of the convergence process with respect to the one-dimensional case.
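A minimal Naming Game on a complete graph (the mean-field limit, rather than the small-world topology studied above) can be sketched in a few lines; the population size and step budget are illustrative assumptions:

```python
import random

def naming_game(n_agents=30, max_steps=200000, seed=3):
    """Mean-field Naming Game sketch: a random speaker utters a known or
    newly invented name; on success both agents collapse their inventories
    to the winning name, on failure the hearer learns it. Returns the
    number of games played until global agreement (or None)."""
    random.seed(seed)
    inventories = [set() for _ in range(n_agents)]
    next_name = 0
    for step in range(max_steps):
        s, h = random.sample(range(n_agents), 2)   # speaker, hearer
        if not inventories[s]:
            inventories[s].add(next_name)          # invent a new name
            next_name += 1
        name = random.choice(sorted(inventories[s]))
        if name in inventories[h]:                 # success: both agree
            inventories[s] = {name}
            inventories[h] = {name}
        else:                                      # failure: hearer learns
            inventories[h].add(name)
        if all(len(inv) == 1 for inv in inventories) and \
           len({next(iter(inv)) for inv in inventories}) == 1:
            return step + 1
    return None

steps = naming_game()
```

On a small-world substrate the same interaction rule is restricted to network neighbors, which is what produces the coarsening-then-mean-field crossover described in the abstract.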

Journal ArticleDOI
TL;DR: A new quadratic string method for finding the minimum-energy path that eliminates the need for predetermining such parameters as step size and spring constants, and is applicable to reactions with multiple barriers.
Abstract: Based on a multiobjective optimization framework, we develop a new quadratic string method for finding the minimum-energy path. In the method, each point on the minimum-energy path is minimized by integration in the descent direction perpendicular to path. Each local integration is done on a quadratic surface approximated by a damped Broyden-Fletcher-Goldfarb-Shanno updated Hessian, allowing the algorithm to take many steps between energy and gradient calls. The integration is performed with an adaptive step-size solver, which is restricted in length to the trust radius of the approximate Hessian. The full algorithm is shown to be capable of practical superlinear convergence, in contrast to the linear convergence of other methods. The method also eliminates the need for predetermining such parameters as step size and spring constants, and is applicable to reactions with multiple barriers. The effectiveness of this method is demonstrated for the Muller-Brown potential, a seven-atom Lennard-Jones cluster, and the enolation of acetaldehyde to vinyl alcohol.

Journal ArticleDOI
TL;DR: A new level set-based partial differential equation (PDE) for bimodal segmentation, whose level set function converges to one of two fixed values determined by the amount of shifting of the Heaviside functions.
Abstract: In this paper, we propose a new level set-based partial differential equation (PDE) for the purpose of bimodal segmentation. The PDE is derived from an energy functional which is a modified version of the fitting term of the Chan-Vese model. The energy functional is designed to obtain a stationary global minimum, i.e., the level set function which evolves by the Euler-Lagrange equation of the energy functional has a unique convergence state. The existence of a global minimum makes the algorithm invariant to the initialization of the level set function, whereas the existence of a convergence state makes it possible to set a termination criterion on the algorithm. Furthermore, since the level set function converges to one of the two fixed values which are determined by the amount of the shifting of the Heaviside functions, an initialization of the level set function close to those values can result in fast convergence.

Proceedings ArticleDOI
30 Oct 2006
TL;DR: A simple alternative approach inspired by opposition-based learning that simultaneously considers each network transfer function and its opposite, resulting in an improvement in convergence rate over traditional backpropagation learning with momentum.
Abstract: The backpropagation algorithm is a very popular approach to learning in feed-forward multi-layer perceptron networks. However, in many scenarios the time required to adequately learn the task is considerable. Many existing approaches have improved the convergence rate by altering the learning algorithm. We present a simple alternative approach inspired by opposition-based learning that simultaneously considers each network transfer function and its opposite. The effect is an improvement in convergence rate over traditional backpropagation learning with momentum. We use four common benchmark problems to illustrate the improvement in convergence time.

Proceedings ArticleDOI
25 Oct 2006
TL;DR: The measurement results show that the convergence time of route fail-over events is similar to that of new route announcements and is significantly shorter than that of route failures, which is contrary to the widely held view from previous experiments but confirms earlier analytical results.
Abstract: A number of previous measurement studies [10, 12, 17] have shown the existence of path exploration and slow convergence in the global Internet routing system, and a number of protocol enhancements have been proposed to remedy the problem [21, 15, 4, 20, 5]. However, all the previous measurements were conducted over a small number of testing prefixes. There has been no systematic study to quantify the pervasiveness of BGP slow convergence in the operational Internet, nor is there any known effort to deploy any of the proposed solutions. In this paper we present our measurement results from identifying BGP slow convergence events across the entire global routing table. Our data shows that the severity of path exploration and slow convergence varies depending on where prefixes are originated and where the observations are made in the Internet routing hierarchy. In general, routers in tier-1 ISPs observe less path exploration, hence shorter convergence delays, than routers in edge ASes, and prefixes originated from tier-1 ISPs also experience less path exploration than those originated from edge ASes. Our data also shows that the convergence time of route fail-over events is similar to that of new route announcements, and significantly shorter than that of route failures, which confirms our earlier analytical results [19]. In addition, we also developed a usage-time based path preference inference method which can be used by future studies of BGP dynamics.
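The notion of a convergence event in such measurements can be sketched as per-prefix update clustering: updates closer together than a quiet-period threshold belong to the same event, and the event's duration approximates its convergence delay. The function and the 70-second default below are illustrative assumptions, not the paper's exact methodology:

```python
def cluster_events(timestamps, gap=70.0):
    """Group BGP updates for one prefix into convergence events.
    Consecutive updates separated by less than `gap` seconds belong to
    the same event; returns a list of (start, end, n_updates) tuples,
    so end - start is the event's convergence delay."""
    events = []
    for t in sorted(timestamps):
        if events and t - events[-1][1] < gap:
            start, _, n = events[-1]
            events[-1] = (start, t, n + 1)  # extend the current event
        else:
            events.append((t, t, 1))        # quiet period ended: new event
    return events
```

For example, update times `[0, 30, 60, 200, 210]` split into one three-update event lasting 60 s and one two-update event lasting 10 s; long events with many updates are the signature of path exploration.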

Journal ArticleDOI
TL;DR: The proposed approach, based on the Fliess canonical form, allows observers to give an estimate of the discrete location of the system, which indicates the dynamic evolution.
Abstract: The main topic of this paper is the problem of constructing observers for switched mechanical systems, which includes, as a specific case, the design of observers based on the high order sliding mode technique. The high order sliding mode is used to overcome the chattering phenomenon, which induces irrelevant and undesirable effects in mechanical systems. The proposed approach, based on the Fliess canonical form, also allows observers to give an estimate of the discrete location of the system, which indicates the dynamic evolution. The convergence of the observers is proved and a stick–mass–friction system is used to illustrate the efficiency of the proposed hybrid observers.
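For a second-order mechanical system with measured position $x_1$ and unknown velocity $x_2$, the high order sliding mode construction can be illustrated by the generic super-twisting observer from the sliding mode literature (the gains $\lambda$, $\alpha$ and the model term $f$ are placeholders, not the paper's exact design):

```latex
\begin{aligned}
\dot{\hat{x}}_1 &= \hat{x}_2 + \lambda\,|x_1 - \hat{x}_1|^{1/2}\operatorname{sign}(x_1 - \hat{x}_1),\\
\dot{\hat{x}}_2 &= f(t, x_1, \hat{x}_2, u) + \alpha\,\operatorname{sign}(x_1 - \hat{x}_1).
\end{aligned}
```

Because the discontinuous term acts only inside an integrator, the injected correction is continuous, which is what suppresses chattering while still driving the estimation error to zero in finite time for sufficiently large gains.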

Journal ArticleDOI
TL;DR: An adaptive nonconforming finite element method is developed and analyzed that provides an error reduction due to the refinement process and thus guarantees convergence of the nonconforming finite element approximations; it neither requires regularity of the solution nor uses duality arguments.
Abstract: An adaptive nonconforming finite element method is developed and analyzed that provides an error reduction due to the refinement process and thus guarantees convergence of the nonconforming finite element approximations. The analysis is carried out for the lowest order Crouzeix-Raviart elements and leads to the linear convergence of an appropriate adaptive nonconforming finite element algorithm with respect to the number of refinement levels. Important tools in the convergence proof are a discrete local efficiency and a quasi-orthogonality property. The proof neither requires regularity of the solution nor uses duality arguments. As a consequence of the data control, no particular mesh design has to be monitored.
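The error reduction claimed for the refinement process is typically a contraction of the (broken) energy-norm error up to data oscillation; a generic statement of this kind, with hypothetical constants $0 < \rho < 1$ and $C > 0$ rather than the paper's exact ones, reads:

```latex
\| u - u_{\ell+1} \|_{\mathrm{NC}}^2 \;\le\; \rho\, \| u - u_{\ell} \|_{\mathrm{NC}}^2 + C\,\mathrm{osc}_{\ell}^2 .
```

Iterating such a bound over the refinement levels $\ell$ is what yields the linear convergence with respect to the number of refinement levels stated in the abstract.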