
Showing papers on "Convergence (routing) published in 2006"


Journal ArticleDOI
TL;DR: The normalized and signed gradient dynamical systems associated with a differentiable function are characterized, their asymptotic convergence properties are established, and conditions that guarantee finite-time convergence are identified.

779 citations


01 Feb 2006
TL;DR: This document describes a method by which a Service Provider may use an IP backbone to provide IP Virtual Private Networks (VPNs) for its customers using a "peer model", in which the customers' edge routers send their routes to the Service Provider's edge routers (PE routers).
Abstract: This document describes a method by which a Service Provider may use an IP backbone to provide IP Virtual Private Networks (VPNs) for its customers. This method uses a "peer model", in which the customers' edge routers (CE routers) send their routes to the Service Provider's edge routers (PE routers); there is no "overlay" visible to the customer's routing algorithm, and CE routers at different sites do not peer with each other. Data packets are tunneled through the backbone, so that the core routers do not need to know the VPN routes. [STANDARDS-TRACK]

463 citations


Proceedings ArticleDOI
01 Dec 2006
TL;DR: It is discovered that the more complex proportional-integral algorithm has performance benefits over the simpler proportional algorithm.
Abstract: We analyze two different estimation algorithms for dynamic average consensus in sensing and communication networks, a proportional algorithm and a proportional-integral algorithm. We investigate the stability properties of these estimators under changing inputs and network topologies as well as their convergence properties under constant or slowly-varying inputs. In doing so, we discover that the more complex proportional-integral algorithm has performance benefits over the simpler proportional algorithm.
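A discrete-time sketch of the simpler proportional estimator can make the trade-off concrete. The graph, gains, and constant inputs below are illustrative assumptions, not the paper's setup; the proportional-integral variant adds an integrator state to drive the remaining steady-state bias to zero.

```python
import numpy as np

# Proportional dynamic average consensus (sketch, not the paper's exact form).
# Each node i tracks the network-wide average of the inputs u_i using only
# neighbor states: x_i += eps * (gamma*(u_i - x_i) - sum_j (x_i - x_j)).

A = np.array([[0, 1, 0, 0],          # path graph on 4 nodes (assumed topology)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A            # graph Laplacian
u = np.array([1.0, 2.0, 3.0, 4.0])   # constant local inputs; true average is 2.5

gamma, eps = 0.05, 0.1               # small gamma keeps the steady-state bias small
x = np.zeros(4)
for _ in range(5000):
    x = x + eps * (gamma * (u - x) - L @ x)

print(x)  # every estimate close to mean(u) = 2.5
```

With a small gain `gamma` the estimates cluster near the true average but retain a bias of order `gamma`; removing that residual bias is exactly the benefit the proportional-integral algorithm offers.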

448 citations


Journal ArticleDOI
TL;DR: This paper analyzes these new methods for symmetric positive definite problems and shows their relation to other modern domain decomposition methods like the new Finite Element Tearing and Interconnect (FETI) variants.
Abstract: Optimized Schwarz methods are a new class of Schwarz methods with greatly enhanced convergence properties. They converge uniformly faster than classical Schwarz methods and their convergence rates are asymptotically much better than the convergence rates of classical Schwarz methods if the overlap is of the order of the mesh parameter, which is often the case in practical applications. They achieve this performance by using new transmission conditions between subdomains which greatly enhance the information exchange between subdomains and are motivated by the physics of the underlying problem. We analyze in this paper these new methods for symmetric positive definite problems and show their relation to other modern domain decomposition methods like the new Finite Element Tearing and Interconnect (FETI) variants.

446 citations


Proceedings ArticleDOI
01 Sep 2006
TL;DR: In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters, and a theoretical framework is introduced to understand and explain the differences between the resampling algorithms.
Abstract: In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms with respect to their resampling quality and computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in terms of resampling quality and computational complexity.
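Of the four schemes compared, the systematic scheme that the paper favours is also among the simplest to implement. A minimal sketch (our illustration, not the paper's code):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one uniform draw, n evenly spaced pointers.

    Returns the indices of the particles selected for the new ensemble.
    Illustrative sketch of the scheme the paper found favourable; the paper
    itself compares four algorithms rather than giving this code.
    """
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # stratified pointer comb
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    return np.searchsorted(cumulative, positions, side="right")

rng = np.random.default_rng(0)
w = np.array([0.1, 0.2, 0.6, 0.1])
print(systematic_resample(w, rng))  # high-weight particles appear repeatedly
```

A single uniform draw determines all pointers, which is what gives the scheme its low variance and O(n) cost.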

439 citations


Journal ArticleDOI
TL;DR: A PDE-based level set method is proposed that minimizes a smooth convex functional under a quadratic constraint, with numerical results shown for segmentation of digital images.
Abstract: In this paper, we propose a PDE-based level set method. Traditionally, interfaces are represented by the zero level set of continuous level set functions. Instead, we let the interfaces be represented by discontinuities of piecewise constant level set functions. Each level set function can at convergence only take two values, i.e., it can only be 1 or -1; thus, our method is related to phase-field methods. Some of the properties of standard level set methods are preserved in the proposed method, while others are not. Using this new method for interface problems, we need to minimize a smooth convex functional under a quadratic constraint. The level set functions are discontinuous at convergence, but the minimization functional is smooth. We show numerical results using the method for segmentation of digital images.

382 citations


Journal ArticleDOI
TL;DR: In this paper, the radial basis functions (RBFs) in scattered data fitting and function approximation are incorporated into the conventional level set methods to construct a more efficient approach for structural topology optimization.
Abstract: Level set methods have become an attractive design tool in shape and topology optimization for obtaining lighter and more efficient structures. In this paper, the popular radial basis functions (RBFs) in scattered data fitting and function approximation are incorporated into the conventional level set methods to construct a more efficient approach for structural topology optimization. RBF implicit modelling with multiquadric (MQ) splines is developed to define the implicit level set function with a high level of accuracy and smoothness. A RBF-level set optimization method is proposed to transform the Hamilton-Jacobi partial differential equation (PDE) into a system of ordinary differential equations (ODEs) over the entire design domain by using a collocation formulation of the method of lines. With the mathematical convenience, the original time dependent initial value problem is changed to an interpolation problem for the initial values of the generalized expansion coefficients. A physically meaningful and efficient extension velocity method is presented to avoid possible problems without reinitialization in the level set methods. The proposed method is implemented in the framework of minimum compliance design that has been extensively studied in topology optimization and its efficiency and accuracy over the conventional level set methods are highlighted. Numerical examples show the success of the present RBF-level set method in the accuracy, convergence speed and insensitivity to initial designs in topology optimization of two-dimensional (2D) structures. It is suggested that the introduction of the radial basis functions to the level set methods can be promising in structural topology optimization.

295 citations


Journal ArticleDOI
11 Aug 2006
TL;DR: A multi-path inter-domain routing protocol called MIRO is presented that offers substantial flexibility, while giving transit domains control over the flow of traffic through their infrastructure and avoiding state explosion in disseminating reachability information.
Abstract: The Internet consists of thousands of independent domains with different, and sometimes competing, business interests. However, the current interdomain routing protocol (BGP) limits each router to using a single route for each destination prefix, which may not satisfy the diverse requirements of end users. Recent proposals for source routing offer an alternative where end hosts or edge routers select the end-to-end paths. However, source routing leaves transit domains with very little control and introduces difficult scalability and security challenges. In this paper, we present a multi-path inter-domain routing protocol called MIRO that offers substantial flexibility, while giving transit domains control over the flow of traffic through their infrastructure and avoiding state explosion in disseminating reachability information. In MIRO, routers learn default routes through the existing BGP protocol, and arbitrary pairs of domains can negotiate the use of additional paths (bound to tunnels in the data plane) tailored to their special needs. MIRO retains the simplicity of BGP for most traffic, and remains backwards compatible with BGP to allow for incremental deployability. Experiments with Internet topology and routing data illustrate that MIRO offers tremendous flexibility for path selection with reasonable overhead.

290 citations


Journal ArticleDOI
TL;DR: A general result on global exponential convergence of the neural network solutions toward a unique equilibrium point is proved by means of the Lyapunov-like approach, and new results on global convergence in finite time are established.

286 citations


Journal ArticleDOI
TL;DR: A fully derivative-free spectral residual method for solving largescale nonlinear systems of equations that uses in a systematic way the residual vector as a search direction, a spectral steplength that produces a nonmonotone process and a globalization strategy that allows for this nonmonothone behavior.
Abstract: A fully derivative-free spectral residual method for solving largescale nonlinear systems of equations is presented. It uses in a systematic way the residual vector as a search direction, a spectral steplength that produces a nonmonotone process and a globalization strategy that allows for this nonmonotone behavior. The global convergence analysis of the combined scheme is presented. An extensive set of numerical experiments that indicate that the new combination is competitive and frequently better than well-known Newton-Krylov methods for largescale problems is also presented.
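The core iteration, residual direction plus a spectral (Barzilai-Borwein-type) steplength, can be sketched compactly. The nonmonotone line-search globalization the paper relies on is omitted here, so this is only a locally reliable sketch; the safeguard interval and the demo system are our assumptions.

```python
import numpy as np

def spectral_residual_solve(F, x0, iters=60):
    """Derivative-free spectral residual iteration (simplified sketch).

    Uses the residual F(x) itself as the search direction with a spectral
    steplength sigma = s's / s'y.  The paper adds a globalization strategy
    (nonmonotone line search) that is omitted in this sketch.
    """
    x, Fx, sigma = x0, F(x0), 1.0
    for _ in range(iters):
        x_new = x - sigma * Fx
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        denom = s @ y
        if abs(denom) > 1e-12:
            sigma = min(max((s @ s) / denom, 0.1), 1.0)  # safeguarded spectral step
        x, Fx = x_new, F_new
    return x

# Demo system (our choice): F(x) = x + tanh(x) - b, whose Jacobian is well
# conditioned, so the local iteration converges without the line search.
x_star = np.array([0.5, -0.3])
b = x_star + np.tanh(x_star)
F = lambda x: x + np.tanh(x) - b
x = spectral_residual_solve(F, np.zeros(2))
print(x, np.linalg.norm(F(x)))
```

No Jacobian, no matrix storage: each iteration costs one residual evaluation and a few vector operations, which is the property that makes the method attractive for large-scale systems.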

275 citations


Journal ArticleDOI
TL;DR: This correspondence addresses the problem of locating an acoustic source using a sensor network in a distributed manner without transmitting the full data set to a central point for processing by applying a distributed version of the projection-onto-convex-sets (POCS) method.
Abstract: This correspondence addresses the problem of locating an acoustic source using a sensor network in a distributed manner, i.e., without transmitting the full data set to a central point for processing. This problem has been traditionally addressed through the maximum-likelihood framework or nonlinear least squares. These methods, even though asymptotically optimal under certain conditions, pose a difficult global optimization problem. It is shown that the associated objective function may have multiple local optima and saddle points, and hence any local search method might stagnate at a suboptimal solution. In this correspondence, we formulate the problem as a convex feasibility problem and apply a distributed version of the projection-onto-convex-sets (POCS) method. We give a closed-form expression for the projection phase, which usually constitutes the heaviest computational aspect of POCS. Conditions are given under which, when the number of samples increases to infinity or in the absence of measurement noise, the convex feasibility problem has a unique solution at the true source location. In general, the method converges to a limit point or a limit cycle in the neighborhood of the true location. Simulation results show convergence to the global optimum with extremely fast convergence rates compared to the previous methods
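The projection step admits a closed form whenever the constraint sets are disks. The range-measurement model below is an illustrative assumption (the correspondence itself works with an acoustic-energy model), but the cyclic-projection structure is the same.

```python
import numpy as np

# Cyclic POCS for source localization from range measurements (sketch).
# Each measurement defines a disk ||x - s_i|| <= r_i; projecting onto a disk
# has the closed form below.  Sensor layout and noise-free ranges are
# illustrative assumptions.

sensors = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
source = np.array([0.3, 0.4])                       # ground truth (demo only)
radii = np.linalg.norm(sensors - source, axis=1)    # noise-free ranges

def project_onto_disk(x, center, r):
    d = np.linalg.norm(x - center)
    return x if d <= r else center + r * (x - center) / d

x = np.array([1.0, 1.0])                            # arbitrary initial guess
for _ in range(2000):                               # cyclic projections
    for s, r in zip(sensors, radii):
        x = project_onto_disk(x, s, r)

print(x)  # converges toward the true source at (0.3, 0.4)
```

With noise-free ranges and the source inside the sensors' convex hull, the disks intersect only at the true location, so the projections converge there; with noise, the iterates settle in a neighborhood of it, as the correspondence describes.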

Journal ArticleDOI
TL;DR: This work provides a convergence analysis for widely used registration algorithms such as ICP, using either closest points or tangent planes at closest points and for a recently developed approach based on quadratic approximants of the squared distance function.
Abstract: The computation of a rigid body transformation which optimally aligns a set of measurement points with a surface and related registration problems are studied from the viewpoint of geometry and optimization. We provide a convergence analysis for widely used registration algorithms such as ICP, using either closest points (Besl and McKay, 1992) or tangent planes at closest points (Chen and Medioni, 1991) and for a recently developed approach based on quadratic approximants of the squared distance function (Pottmann et al., 2004). ICP based on closest points exhibits local linear convergence only. Its counterpart which minimizes squared distances to the tangent planes at closest points is a Gauss-Newton iteration; it achieves local quadratic convergence for a zero residual problem and, if enhanced by regularization and step size control, comes close to quadratic convergence in many realistic scenarios. Quadratically convergent algorithms are based on the approach in (Pottmann et al., 2004). The theoretical results are supported by a number of experiments; there, we also compare the algorithms with respect to global convergence behavior, stability and running time.
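The point-to-point ICP variant whose local linear convergence the paper analyses fits in a few lines: nearest-neighbour matching followed by a closed-form (SVD/Procrustes) alignment. The 2-D synthetic cloud and the small initial misalignment below are our assumptions for the demo.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (SVD/Procrustes)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # keep a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Bare-bones point-to-point ICP; inherits the local linear convergence
    discussed in the paper and, like the paper notes, needs a reasonable
    initial alignment."""
    for _ in range(iters):
        # brute-force nearest neighbours of each point of P in Q
        nn = np.argmin(((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(P, Q[nn])
        P = P @ R.T + t
    return P

rng = np.random.default_rng(1)
Q = rng.random((25, 2))                       # target cloud
a = np.deg2rad(2.0)                           # small initial misalignment
R0 = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
P = (Q - Q.mean(0)) @ R0.T + Q.mean(0) + [0.02, -0.015]
P_aligned = icp(P, Q)
print(np.abs(P_aligned - Q).max())            # near-exact overlay
```

The tangent-plane and squared-distance-function variants in the paper replace the point-to-point least-squares step, which is what upgrades the convergence order.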

Journal ArticleDOI
TL;DR: The paper considers the objective of optimally specifying redundant control effectors under constraints, a problem commonly referred to as control allocation, posed as a mixed ℓ2-norm optimization objective and converted to a quadratic programming formulation.
Abstract: The paper considers the objective of optimally specifying redundant control effectors under constraints, a problem commonly referred to as control allocation. The problem is posed as a mixed ℓ2-norm optimization objective and converted to a quadratic programming formulation. The implementation of an interior-point algorithm is presented. Alternative methods including fixed-point and active set methods are used to evaluate the reliability, accuracy and efficiency of a primal-dual interior-point method. While the computational load of the interior-point method is found to be greater for problems of small size, convergence to the optimal solution is also more uniform and predictable. In addition, the properties of the algorithm scale favorably with problem size.

Journal ArticleDOI
TL;DR: This paper reviews the literature on deterministic and stochastic stepsize rules, and derives formulas for optimal stepsizes for minimizing estimation error, and an approximation is proposed for the case where the parameters are unknown.
Abstract: We address the problem of determining optimal stepsizes for estimating parameters in the context of approximate dynamic programming. The sufficient conditions for convergence of the stepsize rules have been known for 50 years, but practical computational work tends to use formulas with parameters that have to be tuned for specific applications. The problem is that in most applications in dynamic programming, observations for estimating a value function typically come from a data series that can be initially highly transient. The degree of transience affects the choice of stepsize parameters that produce the fastest convergence. In addition, the degree of initial transience can vary widely among the value function parameters for the same dynamic program. This paper reviews the literature on deterministic and stochastic stepsize rules, and derives formulas for optimal stepsizes for minimizing estimation error. These formulas assume certain parameters are known, and an approximation is proposed for the case where the parameters are unknown. Experimental work shows that the approximation provides faster convergence than other popular formulas.
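Two of the classical deterministic rules the paper reviews are easy to contrast directly. The data series and the tunable constant below are illustrative assumptions; the paper's own contribution is an optimal stochastic rule, not shown here.

```python
import numpy as np

def smooth(observations, stepsize):
    """Stochastic-approximation smoothing: theta_n = (1-a_n)*theta_{n-1} + a_n*Y_n."""
    theta = 0.0
    for n, y in enumerate(observations, start=1):
        a = stepsize(n)
        theta = (1 - a) * theta + a * y
    return theta

y = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # toy observation series

# The 1/n rule reproduces the sample mean exactly, which is optimal for
# stationary data but adapts too slowly when the series is transient.
mean_est = smooth(y, lambda n: 1.0 / n)

# The generalized harmonic rule a/(a+n-1) (tunable constant a, here a=5)
# decays more slowly, weighting recent observations more heavily; this is
# the kind of tuned formula the paper says practical work relies on.
harmonic_est = smooth(y, lambda n: 5.0 / (5.0 + n - 1))

print(mean_est, np.mean(y), harmonic_est)
```

On this deliberately drifting series the harmonic rule tracks the larger recent values, while the 1/n rule returns the plain average of everything seen so far.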

Journal ArticleDOI
TL;DR: Methods for improving efficiency through the use of partially converged computational fluid dynamics results allow surrogate models to be built in a fraction of the time required for models based on converged results.
Abstract: Efficient methods for global aerodynamic optimization using computational fluid dynamics simulations should aim to reduce both the time taken to evaluate design concepts and the number of evaluations needed for optimization. This paper investigates methods for improving such efficiency through the use of partially converged computational fluid dynamics results. These allow surrogate models to be built in a fraction of the time required for models based on converged results. The proposed optimization methodologies increase the speed of convergence to a global optimum while the computer resources expended in areas of poor designs are reduced. A strategy which combines a global approximation built using partially converged simulations with expected improvement updates of converged simulations is shown to outperform a traditional surrogate-based optimization.

Journal ArticleDOI
TL;DR: This paper shows that the good convergence properties of the one-unit case are also shared by the full algorithm with symmetrical normalization and the global behavior is illustrated numerically for two sources and two mixtures in several typical cases.
Abstract: The fast independent component analysis (FastICA) algorithm is one of the most popular methods to solve problems in ICA and blind source separation. It has been shown experimentally that it outperforms most of the commonly used ICA algorithms in convergence speed. A rigorous local convergence analysis has been presented only for the so-called one-unit case, in which just one of the rows of the separating matrix is considered. However, in the FastICA algorithm, there is also an explicit normalization step, and it may be questioned whether the extra rotation caused by the normalization will affect the convergence speed. The purpose of this paper is to show that this is not the case and the good convergence properties of the one-unit case are also shared by the full algorithm with symmetrical normalization. A local convergence analysis is given for the general case, and the global behavior is illustrated numerically for two sources and two mixtures in several typical cases.
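The one-unit fixed-point iteration at the heart of the analysis is short. The uniform sources, mixing matrix, and tanh nonlinearity below are our illustrative choices; the paper's point is that the full symmetrically normalized algorithm converges as fast as this one-unit version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-Gaussian (uniform) sources, linearly mixed: an illustrative setup.
S = rng.uniform(-1, 1, size=(2, 5000))
A = np.array([[1.0, 0.5], [0.5, 1.0]])
X = A @ S

# Whitening (zero mean, identity covariance), a standard FastICA prerequisite.
X = X - X.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(np.cov(X))
Z = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T @ X

# One-unit FastICA fixed-point iteration with the tanh nonlinearity:
# w <- E[z g(w'z)] - E[g'(w'z)] w, followed by normalization.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    wz = w @ Z
    w_new = (Z * np.tanh(wz)).mean(axis=1) - (1 - np.tanh(wz) ** 2).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-12   # fixed point up to sign
    w = w_new
    if converged:
        break

recovered = w @ Z
corr = max(abs(np.corrcoef(recovered, S[0])[0, 1]),
           abs(np.corrcoef(recovered, S[1])[0, 1]))
print(corr)  # close to 1: one source recovered up to sign and scale
```

The explicit normalization of `w` each step is exactly the extra rotation whose effect on convergence speed the paper shows to be benign.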

Proceedings ArticleDOI
19 Apr 2006
TL;DR: This work proposes a space-time diffusion scheme that relies only on peer-to-peer communication and allows every node of a sensor network to asymptotically compute the global maximum-likelihood estimate of unknown parameters whose measurements are corrupted by independent Gaussian noises.
Abstract: We consider a sensor network in which each sensor takes measurements, at various times, of some unknown parameters, corrupted by independent Gaussian noises. Each node can take a finite or infinite number of measurements, at arbitrary times (i.e., asynchronously). We propose a space-time diffusion scheme that relies only on peer-to-peer communication, and allows every node to asymptotically compute the global maximum-likelihood estimate of the unknown parameters. At each iteration, information is diffused across the network by a temporal update step and a spatial update step. Both steps update each node's state by a weighted average of its current value and locally available data: new measurements for the time update, and neighbors' data for the spatial update. At any time, any node can compute a local weighted least-squares estimate of the unknown parameters, which converges to the global maximum-likelihood solution. With an infinite number of measurements, these estimates converge to the true parameter values in the sense of mean-square convergence. We show that this scheme is robust to unreliable communication links, and works in a network with dynamically changing topology.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This work proposes three new algorithms for the distributed averaging and consensus problems: two for the fixed-graph case, and one for the dynamic-topology case, which is the first to be accompanied by a polynomial-time bound on the convergence time.
Abstract: We propose three new algorithms for the distributed averaging and consensus problems: two for the fixed-graph case, and one for the dynamic-topology case. The convergence rates of our fixed-graph algorithms compare favorably with other known methods, while our algorithm for the dynamic-topology case is the first to be accompanied by a polynomial-time bound on the convergence time.
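For context, the standard fixed-graph baseline these algorithms are compared against is linear iteration with symmetric, doubly stochastic weights. The Metropolis-weight sketch below is that baseline, not one of the paper's three algorithms; graph and initial values are assumptions.

```python
import numpy as np

# Fixed-graph distributed averaging with Metropolis weights (a standard
# baseline scheme; the paper's own algorithms differ).  Symmetric, doubly
# stochastic weights preserve the average at every step and guarantee
# convergence to it on any connected graph.

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]       # path graph on 5 nodes
deg = np.zeros(5)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((5, 5))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
W += np.diag(1 - W.sum(axis=1))                # self-weights complete each row

x = np.array([10.0, 0.0, 0.0, 0.0, 0.0])       # initial node values, mean = 2
for _ in range(500):
    x = W @ x                                   # each node averages with neighbors

print(x)  # every entry close to the average 2.0
```

Each node only needs its neighbors' degrees and current values, so the scheme is fully distributed; the convergence time, which such fixed-weight schemes leave implicit, is exactly what the paper bounds polynomially.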

Journal ArticleDOI
TL;DR: A new class of discontinuous Galerkin methods (DG) is developed which can be seen as a compromise between standard DG and the finite element (FE) method in the way that it is explicit likestandard DG and energy conserving like FE.
Abstract: We have developed and analyzed a new class of discontinuous Galerkin methods (DG) which can be seen as a compromise between standard DG and the finite element (FE) method in the way that it is explicit like standard DG and energy conserving like FE. In the literature there are many methods that achieve some of the goals of explicit time marching, unstructured grid, energy conservation, and optimal higher order accuracy, but as far as we know only our new algorithms satisfy all the conditions. We propose a new stability requirement for our DG. The stability analysis is based on the careful selection of the two FE spaces which verify the new stability condition. The convergence rate is optimal with respect to the order of the polynomials in the FE spaces. Moreover, the convergence is demonstrated by a series of numerical experiments.

Journal ArticleDOI
TL;DR: Investigations made in this paper help to better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
Abstract: This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov function for the training of feedforward neural networks. It is observed that such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, where the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with an aim to avoid local minima. This modification also helps in improving the convergence speed in some cases. Conditions for achieving a global minimum for these kinds of algorithms have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. It is found that the proposed algorithms (LF I and II) are much faster in convergence than the other two algorithms in attaining the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.

Journal ArticleDOI
TL;DR: An improved iterative linear matrix inequality (ILMI) algorithm for the static output feedback (SOF) stabilization problem without introducing any additional variables is proposed and extended to solve the SOF H∞ controller design problem.
Abstract: An improved iterative linear matrix inequality (ILMI) algorithm for the static output feedback (SOF) stabilization problem, without introducing any additional variables, is proposed in this note. The proposed ILMI algorithm is also extended to solve the SOF H∞ controller design problem. Both are applied to multivariable PID controllers. Numerical examples show that the proposed algorithms yield better results and faster convergence than the existing ones.

Journal ArticleDOI
TL;DR: In this paper, the authors modify an iterative method of Mann's type introduced by Nakajo and Takahashi and prove strong convergence of their modified Mann's iteration processes for asymptotically nonexpansive mappings and semigroups.
Abstract: The Mann iterations for nonexpansive mappings have in general only weak convergence in a Hilbert space. We modify an iterative method of Mann's type introduced by Nakajo and Takahashi [Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups, J. Math. Anal. Appl. 279 (2003) 372–379] for nonexpansive mappings, and prove strong convergence of our modified Mann's iteration processes for asymptotically nonexpansive mappings and semigroups.
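A finite-dimensional toy illustrates why Mann averaging is used at all for nonexpansive maps. The rotation example and the constant coefficient 1/2 are our choices; the paper's setting is asymptotically nonexpansive mappings in a Hilbert space, where the modified (hybrid) scheme is needed for strong convergence.

```python
import numpy as np

# Mann iteration x_{n+1} = a_n x_n + (1 - a_n) T x_n for a nonexpansive map.
# Demo map (our choice, not from the paper): T = rotation by 90 degrees in
# the plane.  T is nonexpansive with unique fixed point 0, yet the plain
# Picard iteration x_{n+1} = T x_n just circles forever; Mann averaging
# contracts toward the fixed point.

T = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
alpha = 0.5                               # constant coefficient, sum a_n(1-a_n) = inf

x_picard = np.array([1.0, 0.0])
x_mann = np.array([1.0, 0.0])
for _ in range(60):
    x_picard = T @ x_picard
    x_mann = alpha * x_mann + (1 - alpha) * (T @ x_mann)

print(np.linalg.norm(x_picard), np.linalg.norm(x_mann))
```

Here the averaged map (I + T)/2 has norm 1/√2 < 1, so the Mann sequence converges geometrically while the Picard orbit stays on the unit circle; in infinite dimensions the convergence is generally only weak, which motivates the paper's modification.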

Journal ArticleDOI
TL;DR: In this paper, the authors present a method for simulating the evolution of HII regions driven by point sources of ionizing radiation in magnetohydrodynamic media, implemented in the three-dimensional Athena MHD code.
Abstract: We present a method for simulating the evolution of HII regions driven by point sources of ionizing radiation in magnetohydrodynamic media, implemented in the three-dimensional Athena MHD code. We compare simulations using our algorithm to analytic solutions and show that the method passes rigorous tests of accuracy and convergence. The tests reveal several conditions that an ionizing radiation-hydrodynamic code must satisfy to reproduce analytic solutions. As a demonstration of our new method, we present the first three-dimensional, global simulation of an HII region expanding into a magnetized gas. The simulation shows that magnetic fields suppress sweeping up of gas perpendicular to magnetic field lines, leading to small density contrasts and extremely weak shocks at the leading edge of the HII region's expanding shell.

Journal ArticleDOI
TL;DR: This paper establishes the convergence of a multi point flux approximation control volume method on rough quadrilateral grids where the cells are not required to approach parallelograms in the asymptotic limit.
Abstract: This paper establishes the convergence of a multi point flux approximation control volume method on rough quadrilateral grids. By rough grids we refer to a family of refined quadrilateral grids where the cells are not required to approach parallelograms in the asymptotic limit. In contrast to previous convergence results for these methods we consider here a version where the flux approximation is derived directly in the physical space, and not on a reference cell. As a consequence, less regular grids are allowed. However, the extra cost is that the symmetry of the method is lost.

Proceedings ArticleDOI
21 May 2006
TL;DR: This paper asks whether the population of agents responsible for routing the traffic can jointly compute or better learn a Wardrop equilibrium efficiently, and presents a lower bound demonstrating the necessity of adaptive sampling by showing that static sampling methods result in a slowdown that is exponential in the size of the network.
Abstract: We study rerouting policies in a dynamic round-based variant of a well known game theoretic traffic model due to Wardrop. Previous analyses (mostly in the context of selfish routing) based on Wardrop's model focus mostly on the static analysis of equilibria. In this paper, we ask whether the population of agents responsible for routing the traffic can jointly compute, or better learn, a Wardrop equilibrium efficiently. The rerouting policies that we study are of the following kind. In each round, each agent samples an alternative routing path and compares the latency on this path with its current latency. If the agent observes that it can improve its latency, then it switches to the better path with some probability depending on the possible improvement. We can show various positive results based on a rerouting policy using an adaptive sampling rule that implicitly amplifies paths that carry a large amount of traffic in the Wardrop equilibrium. For general asymmetric games, we show that a simple replication protocol in which agents adopt strategies of more successful agents reaches a certain kind of bicriteria equilibrium within a time bound that is independent of the size and the structure of the network and depends only on a parameter of the latency functions that we call the relative slope. For symmetric games, this result has an intuitive interpretation: replication approximately satisfies almost everyone very quickly. In order to achieve convergence to a Wardrop equilibrium, besides replication one also needs an exploration component discovering possibly unused strategies. We present a sampling-based replication-exploration protocol and analyze its convergence time for symmetric games. For example, if the latency functions are defined by positive polynomials in coefficient representation, the convergence time is polynomial in the representation length of the latency functions.
To the best of our knowledge, all previous results on the speed of convergence towards Wardrop equilibria, even when restricted to linear latency functions, were pseudopolynomial. In addition to the upper bounds on the speed of convergence, we also present a lower bound demonstrating the necessity of adaptive sampling by showing that static sampling methods result in a slowdown that is exponential in the size of the network. A further lower bound illustrates that the relative slope is, in fact, the relevant parameter that determines the speed of convergence.
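A two-link toy network shows the flavour of the replication dynamics. The latency functions, migration rate, and mean-field (infinitesimal-agent) simplification below are our assumptions; the paper analyzes the full sampling-based protocol with exploration.

```python
import numpy as np

# Replication-style dynamics for a two-link routing game (toy illustration,
# not the paper's exact protocol).  Latencies l1(x) = 1 + x and l2(x) = 2x
# give a Wardrop equilibrium at x1 = 1/3, where both used paths have
# latency 4/3.  Agents sample another agent and migrate toward the
# lower-latency path with probability proportional to the improvement.

x1 = 0.5                                  # initial fraction of traffic on link 1
lam = 0.1                                 # migration rate (assumed)
for _ in range(2000):
    l1, l2 = 1 + x1, 2 * (1 - x1)
    x1 = x1 + lam * x1 * (1 - x1) * (l2 - l1)

print(x1)  # approaches the Wardrop equilibrium 1/3
```

The drift term x1*(1-x1)*(l2-l1) is the replication signature: migration is proportional both to how many agents can imitate and to the latency improvement, so it vanishes exactly at the Wardrop equilibrium.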

Posted Content
TL;DR: In this article, the authors consider semi-supervised classification when part of the available data is unlabeled and make an assumption relating the behavior of the regression function to that of the marginal distribution.
Abstract: We consider semi-supervised classification when part of the available data is unlabeled. These unlabeled data can be useful for the classification problem when we make an assumption relating the behavior of the regression function to that of the marginal distribution. Seeger (2000) proposed the well-known "cluster assumption" as a reasonable one. We propose a mathematical formulation of this assumption and a method based on density level sets estimation that takes advantage of it to achieve fast rates of convergence both in the number of unlabeled examples and the number of labeled examples.

Journal ArticleDOI
TL;DR: The PageRank matrix is too large to be factored, so techniques based on matrix-vector products must be applied; a variant of the restarted refined Arnoldi method that does not involve Ritz value computations is proposed.
Abstract: We consider the problem of computing PageRank. The matrix involved is large and cannot be factored, and hence techniques based on matrix-vector products must be applied. A variant of the restarted refined Arnoldi method is proposed, which does not involve Ritz value computations. Numerical examples illustrate the performance and convergence behavior of the algorithm.
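The matrix-vector-product constraint is concrete in the baseline the Arnoldi variant is designed to accelerate: power iteration applied matrix-free. The tiny link graph and the conventional damping factor 0.85 are illustrative assumptions.

```python
import numpy as np

# Matrix-free PageRank by power iteration, the baseline that Krylov methods
# such as the proposed Arnoldi variant aim to accelerate.  The Google matrix
# is never formed; we only apply it to vectors.

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # node -> outlinks (no dangling nodes)
n, d = 4, 0.85                                    # damping factor: conventional choice

def google_matvec(v):
    """Apply G = d*P + ((1-d)/n) * 1 1^T using only products, where P is the
    column-stochastic link matrix."""
    out = np.full(n, (1 - d) / n * v.sum())
    for i, outlinks in links.items():
        for j in outlinks:
            out[j] += d * v[i] / len(outlinks)
    return out

v = np.full(n, 1.0 / n)
for _ in range(200):
    v = google_matvec(v)
    v /= v.sum()                                  # keep it a probability vector

print(v)  # PageRank vector; entries sum to 1
```

Power iteration converges at the damping rate d per step; the paper's restarted refined Arnoldi variant extracts more progress from the same matrix-vector products.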

Journal ArticleDOI
TL;DR: An Eulerian network model for air traffic flow in the National Airspace System is developed and used to design flow control schemes which could be used by Air Traffic Controllers to optimize traffic flow.
Abstract: An Eulerian network model for air traffic flow in the National Airspace System is developed and used to design flow control schemes which could be used by Air Traffic Controllers to optimize traffic flow. The model relies on a modified version of the Lighthill-Whitham-Richards (LWR) partial differential equation (PDE), which contains a velocity control term inside the divergence operator. This PDE can be related to aircraft count, which is a key metric in air traffic control. An analytical solution to the LWR PDE is constructed for a benchmark problem, to assess the gridsize required to compute a numerical solution at a prescribed accuracy. The Jameson-Schmidt-Turkel (JST) scheme is selected among other numerical schemes to perform simulations, and evidence of numerical convergence is assessed against this analytical solution. Linear numerical schemes are discarded because of their poor performance. The model is validated against actual air traffic data (ETMS data), by showing that the Eulerian description enables good aircraft count predictions, provided a good choice of numerical parameters is made. This model is then embedded as the key constraint in an optimization problem, that of maximizing the throughput at a destination airport while maintaining aircraft density below a legal threshold in a set of sectors of the airspace. The optimization problem is solved by constructing the adjoint problem of the linearized network control problem, which provides an explicit formula for the gradient. Constraints are enforced using a logarithmic barrier. Simulations of actual air traffic data and control scenarios involving several airports between Chicago and the U.S. East Coast demonstrate the feasibility of the method

Journal ArticleDOI
TL;DR: STMD shows a superior ability to find local minima in proteins, new global minima are found for the 55-bead AB model in two and three dimensions, and calculations of the occupation probabilities of individual protein inherent structures provide new insights into folding and misfolding.
Abstract: A simulation method is presented that achieves a flat energy distribution by updating the statistical temperature instead of the density of states in Wang-Landau sampling. A novel molecular dynamics algorithm (STMD) applicable to complex systems and a Monte Carlo algorithm are developed from this point of view. Accelerated convergence for large energy bins, essential for large systems, is demonstrated in tests on the Ising model, the Lennard-Jones fluid, and bead models of proteins. STMD shows a superior ability to find local minima in proteins and new global minima are found for the 55 bead AB model in two and three dimensions. Calculations of the occupation probabilities of individual protein inherent structures provide new insights into folding and misfolding.

Journal ArticleDOI
TL;DR: A simple mechanosensory model for the task-level dynamics of wall following of the American cockroach is presented, which predicts that stabilizing neural feedback requires both proportional feedback and derivative feedback information from the antenna.
Abstract: The American cockroach, Periplaneta americana, is reported to follow walls at a rate of up to 25 turns s(-1). During high-speed wall following, a cockroach holds its antenna relatively still at the base while the flagellum bends in response to upcoming protrusions. We present a simple mechanosensory model for the task-level dynamics of wall following. In the model a torsional, mass-damper system describes the cockroach's turning dynamics, and a simplified antenna measures distance from the cockroach's centerline to a wall. The model predicts that stabilizing neural feedback requires both proportional feedback (difference between the actual and desired distance to wall) and derivative feedback (velocity of wall convergence) information from the antenna. To test this prediction, we fit a closed-loop proportional-derivative control model to trials in which blinded cockroaches encountered an angled wall (30 degrees or 45 degrees ) while running. We used the average state of the cockroach in each of its first four strides after first contacting the angled wall to predict the state in each subsequent stride. Nonlinear statistical regression provided best-fit model parameters. We rejected the hypothesis that proportional feedback alone was sufficient. A derivative (velocity) feedback term in the control model was necessary for stability.
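The paper's central prediction, that proportional feedback alone cannot stabilize wall following, can be illustrated with a minimal planar model. The constant-speed kinematics, gains, and integration settings below are our assumptions, not fits to the cockroach data.

```python
import numpy as np

# Minimal illustration of the paper's prediction: a runner at constant
# forward speed v regulates its distance to the wall by steering.  In the
# linearized loop, proportional (P) feedback on distance error alone gives
# an undamped oscillator; adding derivative (D) feedback on the velocity of
# wall convergence damps it.  Model and gains are illustrative assumptions.

def run(Kp, Kd, v=0.5, dt=0.01, T=20.0, e0=0.2):
    e, theta = e0, 0.0          # distance error and heading relative to wall
    trace = []
    for _ in range(int(T / dt)):
        e_dot = v * np.sin(theta)              # rate of wall convergence
        theta += dt * (-Kp * e - Kd * e_dot)   # steering command (PD law)
        e += dt * e_dot
        trace.append(e)
    return np.array(trace)

e_pd = run(Kp=4.0, Kd=4.0)      # proportional + derivative feedback
e_p = run(Kp=4.0, Kd=0.0)       # proportional feedback only

print(abs(e_pd[-1]), np.abs(e_p[-500:]).max())  # PD settles; P keeps oscillating
```

With Kd = 0 the error obeys an undamped oscillation about the desired distance, mirroring the paper's finding that a derivative (velocity) term from the antenna is necessary for stability.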