
Showing papers on "Adaptive algorithm published in 2016"


Journal ArticleDOI
TL;DR: A generalized correntropy that adopts the generalized Gaussian density (GGD) function as the kernel is proposed and some important properties are presented; an adaptive algorithm is derived and shown to be very stable, achieving zero probability of divergence (POD).
Abstract: As a robust nonlinear similarity measure in kernel space, correntropy has received increasing attention in domains of machine learning and signal processing. In particular, the maximum correntropy criterion (MCC) has recently been successfully applied in robust regression and filtering. The default kernel function in correntropy is the Gaussian kernel, which is, of course, not always the best choice. In this paper, we propose a generalized correntropy that adopts the generalized Gaussian density (GGD) function as the kernel, and present some important properties. We further propose the generalized maximum correntropy criterion (GMCC) and apply it to adaptive filtering. An adaptive algorithm, called the GMCC algorithm, is derived, and the stability problem and steady-state performance are studied. We show that the proposed algorithm is very stable and can achieve zero probability of divergence (POD). Simulation results confirm the theoretical expectations and demonstrate the desirable performance of the new algorithm.
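
For concreteness, below is a minimal numerical sketch of a stochastic-gradient adaptive FIR filter driven by a GGD-shaped kernel, in the spirit of the GMCC algorithm described above. The update form, the step size mu, and the kernel parameters alpha and lam are illustrative assumptions, not the paper's exact constants.

```python
import numpy as np

def gmcc_like_filter(x, d, order=4, mu=0.05, alpha=3.0, lam=1.0):
    """Adaptive FIR filter trained with a GGD-shaped (generalized correntropy)
    cost: maximize E[exp(-lam * |e|**alpha)] by stochastic gradient ascent.
    alpha = 2 recovers a Gaussian-kernel (MCC-like) update."""
    w = np.zeros(order)
    err = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor [x[n], x[n-1], ...]
        e = d[n] - w @ u                   # a priori error
        # gradient of exp(-lam*|e|^alpha) w.r.t. w (constants folded into mu)
        w += mu * np.exp(-lam * abs(e) ** alpha) * abs(e) ** (alpha - 1) * np.sign(e) * u
        err[n] = e
    return w, err

# toy system identification under heavy-tailed (impulsive) noise
rng = np.random.default_rng(0)
N = 5000
x = rng.standard_normal(N)
h = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, h)[:N] + 0.05 * rng.standard_t(df=1.5, size=N)
w_hat, _ = gmcc_like_filter(x, d)
```

Note how the factor exp(-lam*|e|**alpha) shrinks the update for large errors, which is what keeps the recursion bounded in the presence of impulsive noise.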

513 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: A new automatic and adaptive algorithm for choosing the transformations of the samples used in data augmentation: for each sample, the main idea is to seek a small transformation that yields maximal classification loss on the transformed sample.
Abstract: Data augmentation is the process of generating samples by transforming training data, with the target of improving the accuracy and robustness of classifiers. In this paper, we propose a new automatic and adaptive algorithm for choosing the transformations of the samples used in data augmentation. Specifically, for each sample, our main idea is to seek a small transformation that yields maximal classification loss on the transformed sample. We employ a trust-region optimization strategy, which consists of solving a sequence of linear programs. Our data augmentation scheme is then integrated into a Stochastic Gradient Descent algorithm for training deep neural networks. We perform experiments on two datasets, and show that the proposed scheme outperforms random data augmentation algorithms in terms of accuracy and robustness, while yielding comparable or superior results with respect to existing selective sampling approaches.
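
As a rough illustration of choosing loss-maximizing augmentations, the sketch below replaces the paper's trust-region sequence of linear programs with a simple random search over small transformation parameters; `loss_fn` and `transform` are placeholder callables introduced here, not the authors' interfaces.

```python
import numpy as np

def worst_case_augment(x, y, loss_fn, transform, radius=0.1, n_trials=20, rng=None):
    """Return the transformed version of sample x that maximizes the current
    classifier's loss over small random transformations (falls back to x if
    no trial increases the loss).

    loss_fn(x, y)       -> scalar loss of the current model on (x, y)
    transform(x, theta) -> sample x transformed with small parameters theta
    """
    rng = rng or np.random.default_rng()
    best_x, best_loss = x, loss_fn(x, y)
    for _ in range(n_trials):
        theta = rng.uniform(-radius, radius, size=3)   # e.g. small rotation + shift
        x_t = transform(x, theta)
        loss_t = loss_fn(x_t, y)
        if loss_t > best_loss:
            best_x, best_loss = x_t, loss_t
    return best_x
```

In the paper, the inner maximization is instead solved with a trust-region strategy, and the selected sample is then fed to the SGD update of the network being trained.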

172 citations


Proceedings ArticleDOI
22 May 2016
TL;DR: In this paper, a dual timescale model is proposed to characterize abrupt channel changes (e.g., blockage) and slow variations of AoDs and AoAs in a typical millimeter wave channel consisting of a few dominant paths.
Abstract: Millimeter wave provides a promising approach for meeting the ever-growing traffic demand in next generation wireless networks. It is crucial to obtain the channel state information in order to perform beamforming and combining to compensate for severe path loss in this band. In contrast to lower frequencies, a typical millimeter wave channel consists of a few dominant paths. Thus it is generally sufficient to estimate the path gains, angles of departure (AoDs), and angles of arrival (AoAs) of those paths. Proposed in this paper is a dual timescale model to characterize abrupt channel changes (e.g., blockage) and slow variations of AoDs and AoAs. This work focuses on tracking the slow variations and detecting abrupt changes. A Kalman filter based tracking algorithm and an abrupt change detection method are proposed. The tracking algorithm is compared with the adaptive algorithm due to Alkhateeb, Ayach, Leus and Heath (2014) in the case with a single radio frequency chain. Simulation results show that to achieve the same tracking performance, the proposed algorithm requires a much lower signal-to-noise ratio (SNR) and far fewer pilots than the other algorithm. Moreover, the change detection method can always detect abrupt changes with a moderate number of pilots and moderate SNR.
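
The following toy sketch shows the generic structure of such a tracker: a random-walk Kalman filter smooths the slow drift of an angle, and an unusually large innovation triggers re-acquisition. It assumes direct noisy angle observations rather than the paper's beamformed-pilot measurement model, and the noise levels and gate threshold are illustrative.

```python
import numpy as np

def track_angle(measurements, q=1e-4, r=1e-2, gate=4.0):
    """Toy scalar Kalman tracker for a slowly drifting angle (random-walk
    state model) with an innovation-based abrupt-change detector."""
    theta, P = measurements[0], 1.0
    estimates, change_flags = [], []
    for z in measurements:
        P += q                                   # predict: random walk, variance grows
        nu, S = z - theta, P + r                 # innovation and its variance
        change = abs(nu) > gate * np.sqrt(S)     # flag abrupt change (e.g. blockage)
        if change:
            theta, P = z, 1.0                    # re-acquire instead of smoothing
        else:
            K = P / S                            # Kalman gain
            theta += K * nu
            P *= (1.0 - K)
        estimates.append(theta)
        change_flags.append(change)
    return np.array(estimates), np.array(change_flags)

# usage: slow drift of an angle plus a sudden jump halfway through
rng = np.random.default_rng(0)
true = np.concatenate([np.linspace(0.5, 0.6, 200), np.linspace(1.4, 1.45, 200)])
z = true + 0.05 * rng.standard_normal(400)
est, flags = track_angle(z)
```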

142 citations


Journal ArticleDOI
TL;DR: The minimal-learning-parameter technique is applied to the dynamics, and simulation results show that the controller achieves good tracking performance with a minimal learning parameter in the case of actuator fault.

87 citations


Journal ArticleDOI
01 Jul 2016
TL;DR: This paper introduces an adaptive, energy-aware and distributed fault-tolerant topology-control algorithm, namely the Adaptive Disjoint Path Vector (ADPV) algorithm, for heterogeneous wireless sensor networks, and demonstrates that ADPV is superior in preserving supernode connectivity.
Abstract: This paper introduces an adaptive, energy-aware and distributed fault-tolerant topology-control algorithm, namely the Adaptive Disjoint Path Vector (ADPV) algorithm, for heterogeneous wireless sensor networks. In this heterogeneous model, we have resource-rich supernodes as well as ordinary sensor nodes that are supposed to be connected to the supernodes. Unlike its static alternative, the Disjoint Path Vector (DPV) algorithm, ADPV focuses on securing supernode connectivity in the presence of node failures, and it achieves this goal by dynamically adjusting the sensor nodes' transmission powers. The ADPV algorithm involves two phases: a single initialization phase, which occurs at the beginning, and restoration phases, which are invoked each time the network's supernode connectivity is broken. Restoration phases utilize alternative routes that are computed at the initialization phase with the help of a novel optimization based on the well-known set-packing problem. Through extensive simulations, we demonstrate that ADPV is superior in preserving supernode connectivity. In particular, ADPV achieves this goal up to a failure of 95% of the sensor nodes, while the performance of DPV is limited to 5%. In turn, our adaptive algorithm yields a two-fold increase in supernode-connected lifetimes compared to the DPV algorithm.

83 citations


Journal ArticleDOI
TL;DR: In this article, the position and attitude tracking of a rigid spacecraft is approached with coupled six-degrees-of-freedom dynamics described by dual quaternions by taking advantage of the compact, nonlinear, integrated relative motion dynamics, a simple proportional-derivative (PD) controller is designed at first in the absence of modeling uncertainties, and the presence of unknown mass and inertia as well as unknown constant disturbances is then taken into account.
Abstract: The position and attitude tracking of a rigid spacecraft is approached with coupled six-degrees-of-freedom dynamics described by dual quaternions. By taking advantage of the compact, nonlinear, integrated relative motion dynamics, a simple proportional-derivative (PD) controller is designed at first in the absence of modeling uncertainties. The presence of unknown mass and inertia as well as unknown constant disturbances is then taken into account. An adaptive controller is developed by combining the PD controller with an adaptive algorithm, which provides estimations of unknown parameters and disturbances. Both controllers ensure almost global asymptotic convergence of the relative pose tracking error. In addition, the proposed methods are not only computationally efficient but can also reduce the control energy consumption, since the gyroscopic terms involved in the system dynamics are preserved, rather than being cancelled by feedback. Numerical simulations demonstrate the effectiveness of the proposed methods.

73 citations


Journal ArticleDOI
TL;DR: An adaptive gain algorithm for second-order sliding-mode control (2-SMC), specifically a super-twisting (STW)-like controller, with uniform finite/fixed convergence time, that is robust to perturbations with unknown bounds is presented.
Abstract: This paper presents an adaptive gain algorithm for second-order sliding-mode control (2-SMC), specifically a super-twisting (STW)-like controller, with uniform finite/fixed convergence time, that is robust to perturbations with unknown bounds. It is shown that a second-order sliding mode is established, with exact finite-time convergence to the origin if the adaptive gain is not allowed to decrease, and convergence to a small vicinity of the origin if the adaptation algorithm is designed not to overestimate the control gain. The estimate of the fixed convergence time of the studied adaptive STW-like controller is derived based on Lyapunov analysis. The efficacy of the proposed adaptive algorithm is illustrated in a tutorial example, where the adaptive STW-like controller with uniform finite/fixed convergence time is compared to the adaptive STW controller with non-uniform finite convergence time.
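
A minimal simulation sketch of a super-twisting-like controller with a simple gain-adaptation rule is given below. The adaptation law (grow the gain while |s| is outside a small deadband, shrink it slowly inside) and all constants are illustrative assumptions and do not reproduce the paper's uniform finite/fixed-time design.

```python
import numpy as np

def adaptive_stw_demo(T=10.0, dt=1e-3, eps=1e-2):
    """Scalar sliding variable s with a bounded unknown disturbance:
        s_dot = u + d(t),  u = -k1*sqrt(|s|)*sign(s) + v,  v_dot = -k2*sign(s).
    The gain k1 is adapted online; k2 is tied to k1."""
    n = int(T / dt)
    s, v = 1.0, 0.0
    k1 = 0.1
    s_hist, k_hist = np.zeros(n), np.zeros(n)
    for i in range(n):
        t = i * dt
        d = 0.5 * np.sin(2.0 * t)                # unknown bounded disturbance
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v += -1.5 * k1 * np.sign(s) * dt         # k2 = 1.5 * k1
        # gain adaptation: grow while |s| is outside the deadband, shrink slowly inside
        k1 += (5.0 * dt) if abs(s) > eps else (-1.0 * dt)
        k1 = max(k1, 0.05)                       # keep the gain positive
        s += (u + d) * dt
        s_hist[i], k_hist[i] = s, k1
    return s_hist, k_hist

s_hist, k_hist = adaptive_stw_demo()
```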

60 citations


Proceedings ArticleDOI
14 Jun 2016
TL;DR: In this paper, the authors study the problem of optimal content placement over a network of caches, and propose a distributed, adaptive algorithm that performs stochastic gradient ascent on a concave relaxation of the expected caching gain and constructs a probabilistic content placement within a 1-1/e factor of the optimal.
Abstract: We study the problem of optimal content placement over a network of caches, a problem naturally arising in several networking applications, including ICNs, CDNs, and P2P systems. Given a demand of content request rates and paths followed, we wish to determine the content placement that maximizes the expected caching gain, i.e., the reduction of routing costs due to intermediate caching. The offline version of this problem is NP-hard and, in general, the demand and topology may be a priori unknown. Hence, a distributed, adaptive, constant approximation content placement algorithm is desired. We show that path replication, a simple algorithm frequently encountered in literature, can be arbitrarily suboptimal when combined with traditional eviction policies, like LRU, LFU, or FIFO. We propose a distributed, adaptive algorithm that performs stochastic gradient ascent on a concave relaxation of the expected caching gain, and constructs a probabilistic content placement within 1-1/e factor from the optimal, in expectation. Motivated by our analysis, we also propose a novel greedy eviction policy to be used with path replication, and show through numerical evaluations that both algorithms significantly outperform path replication with traditional eviction policies over a broad array of network topologies.
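
The projected stochastic gradient ascent idea can be sketched on a drastically simplified single-cache version of the problem, where the expected caching gain is linear in the cache marginals y; in the paper the objective is a concave relaxation over a whole network of caches and the subgradients are estimated from observed traffic. The item costs, capacity handling, and step size below are illustrative.

```python
import numpy as np

def project_capped_simplex(x, capacity):
    """Euclidean projection onto {y : 0 <= y <= 1, sum(y) <= capacity}."""
    y = np.clip(x, 0.0, 1.0)
    if y.sum() <= capacity:
        return y
    lo, hi = 0.0, x.max()                # bisect on the shift tau
    for _ in range(60):
        tau = 0.5 * (lo + hi)
        if np.clip(x - tau, 0.0, 1.0).sum() > capacity:
            lo = tau
        else:
            hi = tau
    return np.clip(x - hi, 0.0, 1.0)

def cache_placement_sga(requests, costs, n_items, capacity, step=0.05):
    """Projected stochastic gradient ascent on cache marginals y[i] = probability
    of storing item i. Each request for item i contributes a stochastic
    (sub)gradient costs[i] * e_i of the caching gain."""
    y = np.full(n_items, capacity / n_items)
    for i in requests:                   # stream of requested item ids
        g = np.zeros(n_items)
        g[i] = costs[i]                  # cost saved if item i is cached
        y = project_capped_simplex(y + step * g, capacity)
    return y                             # marginals of a randomized placement

# usage: 100 items, Zipf-like popularity stream, unit retrieval costs
rng = np.random.default_rng(0)
n_items, capacity = 100, 10
popularity = 1.0 / np.arange(1, n_items + 1)
requests = rng.choice(n_items, size=20000, p=popularity / popularity.sum())
y = cache_placement_sga(requests, np.ones(n_items), n_items, capacity)
```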

60 citations


Journal ArticleDOI
TL;DR: It is demonstrated with case studies that, under all considered scenarios, implementing a pre-signal with the proposed adaptive control algorithm will result in the least average person delay at the intersection.
Abstract: In urban areas, where road space is limited, it is important to provide efficient public and private transportation systems to maximize person throughput, for example from a signalized intersection. To this end, this research looks at providing bus priority using a dedicated bus lane which is terminated upstream of the intersection, and placing an additional signal at this location, called a pre-signal. Although pre-signals are already implemented in some countries (e.g. UK, Denmark, and Switzerland), an adaptive control algorithm which responds to varying traffic demands has not yet been proposed and analyzed in the literature. This research aims to fill that gap by developing an adaptive control algorithm for pre-signals tailored to real-time private and public transportation demands. The necessary infrastructure to operate an adaptive pre-signal is established, and guidelines for implementation are provided. The relevant parameters regarding the boundary conditions for the adaptive algorithm are first determined, and then quantified for a typical case using a micro-simulation model. It is demonstrated with case studies that, under all considered scenarios, implementing a pre-signal with the proposed adaptive control algorithm will result in the least average person delay at the intersection. The algorithm is expected to function well with a wide range of car demands, bus frequencies, and bus passenger occupancies. Moreover, the algorithm is robust to errors in these input values, so exact information is not required.

59 citations


Journal ArticleDOI
TL;DR: In this article, a nonlinear adaptive algorithm based on the Volterra expansion model (VFxlogLMP) is developed, derived by minimizing the $l_p$-norm of a logarithmic cost.

58 citations


Book ChapterDOI
06 Jan 2016
TL;DR: Numerical results show that the adaptive sparse grids perform similarly to the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields; moreover, their use as a control variate in a Monte Carlo simulation makes it possible to also tackle problems with rough coefficients efficiently, significantly improving the performance of a standard Monte Carlo scheme.
Abstract: In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points, and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable to drive the adaptation process. We then use this algorithm to solve an important test case in Uncertainty Quantification, namely the Darcy equation with a lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we have proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids have performance similar to that of the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation makes it possible to also tackle problems with rough coefficients efficiently, significantly improving the performance of a standard Monte Carlo scheme.
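
The control-variate construction can be sketched independently of sparse grids: if a cheap surrogate g approximates the expensive quantity of interest f and its mean E[g] is known accurately (in the paper, from the adaptive sparse grid quadrature), the Monte Carlo variance is reduced as below. The one-dimensional toy integrand and surrogate are assumptions for illustration only.

```python
import numpy as np

def control_variate_mc(f, g, g_mean, samples):
    """Monte Carlo estimate of E[f(X)] using g(X) as a control variate,
    where E[g(X)] = g_mean is known (in the paper: from the adaptive
    sparse grid quadrature of the surrogate)."""
    fx = np.array([f(x) for x in samples])
    gx = np.array([g(x) for x in samples])
    C = np.cov(fx, gx)
    beta = C[0, 1] / C[1, 1]                     # variance-optimal coefficient
    return fx.mean() - beta * (gx.mean() - g_mean)

# toy example with a 1-D standard normal input:
# f is the 'expensive' model, g a truncated-Taylor surrogate with known mean
rng = np.random.default_rng(1)
f = lambda z: np.exp(0.3 * z)
g = lambda z: 1.0 + 0.3 * z + 0.045 * z**2       # E[g] = 1.045 exactly
samples = rng.standard_normal(5000)
estimate = control_variate_mc(f, g, 1.045, samples)   # compare with exp(0.045)
```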

Proceedings ArticleDOI
21 Jul 2016
TL;DR: The adaptive and non-adaptive algorithms achieve the same approximation factor as the previous best algorithms of Blum et al. (EC 2015) for this problem, while requiring an exponentially smaller number of per-vertex queries (and rounds of adaptive queries for the adaptive algorithm).
Abstract: Motivated by an application in kidney exchange, we study the following stochastic matching problem: We are given a graph G(V, E) (not necessarily bipartite), where each edge in E is realized with some constant probability p > 0, and the goal is to find a maximum matching in the realized graph. An algorithm in this setting is allowed to make queries to edges in E to determine whether or not they are realized. We design an adaptive algorithm for this problem that, for any graph G, computes a (1−ε)-approximate maximum matching in the realized graph G_p with high probability, while making O(log(1/εp)/εp) queries per vertex, where the edges to query are chosen adaptively in O(log(1/εp)/εp) rounds. We further present a non-adaptive algorithm that makes O(log(1/εp)/εp) queries per vertex and computes a (1/2−ε)-approximate maximum matching in G_p with high probability. Both our adaptive and non-adaptive algorithms achieve the same approximation factor as the previous best algorithms of Blum et al. (EC 2015) for this problem, while requiring an exponentially smaller number of per-vertex queries (and rounds of adaptive queries for the adaptive algorithm). Our results settle an open problem raised by Blum et al. by achieving only a polynomial dependency on both ε and p. Moreover, the approximation guarantee of our algorithms is instance-wise as opposed to only being competitive in expectation as is the case for Blum et al. This is of particular relevance to the key application of stochastic matching in kidney exchange. We obtain our results via two main techniques, namely matching-covers and vertex sparsification, that may be of independent interest.

Journal ArticleDOI
TL;DR: An online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair, for two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics is presented.
Abstract: Regarding two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics, this paper presents an online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair. First, for known systems, the simultaneous policy updating algorithm (SPUA) is reviewed, and a new analytical method to prove its convergence is presented. Then, based on the SPUA, without using a priori knowledge of any system dynamics, an online algorithm is proposed to simultaneously learn in real time either the minimal nonnegative solution of the Hamilton–Jacobi–Isaacs (HJI) equation or, for linear systems as a special case, the generalized algebraic Riccati equation, along with the optimal policy pair. The approximate solution to the HJI equation and the admissible policy pair are reexpressed by the approximation theorem. The unknown constants or weights of each are identified simultaneously by resorting to the recursive least-squares method. The convergence of the online algorithm to the optimal solutions is established. A practical online algorithm is also developed. Simulation results illustrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: A novel adaptive neural network-based control scheme for the Furuta pendulum, a two degree-of-freedom underactuated system, is introduced; a rigorous analysis of the internal and external dynamics proves the uniform ultimate boundedness of the error trajectories.
Abstract: The purpose of this paper is to introduce a novel adaptive neural network-based control scheme for the Furuta pendulum, which is a two degree-of-freedom underactuated system. Adaptation laws for the input and output weights are also provided. The proposed controller is able to guarantee tracking of a reference signal for the arm while the pendulum remains in the upright position. The key aspect of the derivation of the controller is the definition of an output function that depends on the position and velocity errors. The internal and external dynamics are rigorously analyzed, thereby proving the uniform ultimate boundedness of the error trajectories. By using real-time experiments, the new scheme is compared with other control methodologies, therein demonstrating the improved performance of the proposed adaptive algorithm.

Journal ArticleDOI
TL;DR: The theoretical analysis and simulation results reveal somewhat surprising findings on how the interelement spacing and the number of receiver antennas affect the performance of large-scale adaptive array antennas with mutual coupling.
Abstract: In this paper, the performance of large-scale adaptive array antennas in the presence of mutual coupling is explored. An expression for the output signal-to-interference-noise ratio (SINR) of the adaptive array in the presence of strong interference signals, taking into account the mutual coupling between the array elements, is derived. A low-complexity antenna selection algorithm is then proposed to select the optimal subset until suitable performance is obtained, under the condition that the spacing between two adjacent antennas is fixed. The combined effect of reducing the distance between the antenna elements while increasing the number of elements in a very limited physical space is also investigated. An antenna array consisting of dipole antennas placed side by side in a linear pattern is assumed. The convergence speed of the adaptive algorithm is further evaluated by bounding and estimating the eigenvalues of its signal covariance matrix in the presence of mutual coupling. A novel theoretical study, which may be very valuable for the actual deployment of large-scale adaptive array antennas, is provided in comparison with previous studies. The theoretical analysis and simulation results reveal somewhat surprising findings on how the interelement spacing and the number of receiver antennas affect the performance of large-scale adaptive array antennas with mutual coupling.

Journal ArticleDOI
TL;DR: In this paper, a continuous adaptive higher-order sliding mode (AHOSM) controller is proposed and validated via simulations of a hypersonic missile in the terminal phase.
Abstract: Hypersonic missile control in the terminal phase is addressed using continuous adaptive higher-order sliding mode (AHOSM) control. The AHOSM self-tuning controller is proposed and studied. The double-layer adaptive algorithm is based on equivalent control concepts and ensures non-overestimation of the control gain to help mitigate control chattering. The proposed continuous AHOSM control is validated via simulations of a hypersonic missile in the terminal phase. Robustness and high-accuracy output tracking in the presence of matched and unmatched external disturbances and missile model uncertainties are demonstrated.

Journal ArticleDOI
TL;DR: An adaptive algorithm is proposed which employs the computed error estimates and adaptive meshes to control the approximation error, while the stochastic error is controlled such that the determined bounds are guaranteed in probability.
Abstract: The focus of this work is the introduction of some computable a posteriori error control to the popular multilevel Monte Carlo sampling for PDE with stochastic data. We are especially interested in applications where some quantity of interest should be estimated accurately. Based on a spatial discretization by the finite element method, a goal functional is defined which encodes the quantity of interest. The devised goal-oriented a posteriori error estimator enables one to determine guaranteed path-wise a posteriori error bounds for this quantity. An adaptive algorithm is proposed which employs the computed error estimates and adaptive meshes to control the approximation error. Moreover, the stochastic error is controlled such that the determined bounds are guaranteed in probability. The approach allows for the adaptive refinement of the mesh hierarchy used in the multilevel Monte Carlo simulation which is used for a problem-dependent construction of discretization levels. Numerical experiments illustrate...
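
For context, a generic multilevel Monte Carlo loop with the standard variance-based sample allocation is sketched below; it deliberately omits the paper's contribution, namely the goal-oriented a posteriori error estimator and the adaptively refined mesh hierarchy, and the cost model and tolerance are illustrative.

```python
import numpy as np

def mlmc(sample_level, n_levels, eps=1e-3, n_pilot=100, rng=None):
    """Generic multilevel Monte Carlo estimator of E[Q].

    sample_level(l, n, rng) -> n coupled samples of Y_l = Q_l - Q_{l-1}
    (with Y_0 = Q_0). Sample sizes follow the standard allocation
    N_l ~ eps**-2 * sqrt(V_l / C_l) * sum_k sqrt(V_k * C_k),
    with the per-sample cost C_l modeled here as 2**l."""
    rng = rng or np.random.default_rng()
    var, cost = [], []
    for l in range(n_levels):                        # pilot run per level
        y = sample_level(l, n_pilot, rng)
        var.append(y.var())
        cost.append(2.0 ** l)
    total = sum(np.sqrt(v * c) for v, c in zip(var, cost))
    estimate = 0.0
    for l in range(n_levels):
        n_l = int(np.ceil(2.0 / eps ** 2 * np.sqrt(var[l] / cost[l]) * total))
        estimate += sample_level(l, max(n_l, n_pilot), rng).mean()
    return estimate

def toy_level(l, n, rng):
    """Coupled toy samples: a 'level-l discretization' of sin(X) whose bias
    shrinks as the level increases (true expectation is 0)."""
    x = rng.standard_normal(n)
    q = lambda lev: np.sin(x) + 2.0 ** (-lev) * np.cos(x)
    return q(l) if l == 0 else q(l) - q(l - 1)

print(mlmc(toy_level, n_levels=5))
```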

Posted Content
TL;DR: KADABRA is a new algorithm to approximate betweenness centrality in directed and undirected graphs, which significantly outperforms all previous approaches on real-world complex networks, and also handles more general problems, such as computing the most central nodes.
Abstract: We present KADABRA, a new algorithm to approximate betweenness centrality in directed and undirected graphs, which significantly outperforms all previous approaches on real-world complex networks. The efficiency of the new algorithm relies on two new theoretical contributions, of independent interest. The first contribution focuses on sampling shortest paths, a subroutine used by most algorithms that approximate betweenness centrality. We show that, on realistic random graph models, we can perform this task in time $|E|^{\frac{1}{2}+o(1)}$ with high probability, obtaining a significant speedup with respect to the $\Theta(|E|)$ worst-case performance. We experimentally show that this new technique achieves similar speedups on real-world complex networks, as well. The second contribution is a new rigorous application of the adaptive sampling technique. This approach decreases the total number of shortest paths that need to be sampled to compute all betweenness centralities with a given absolute error, and it also handles more general problems, such as computing the $k$ most central nodes. Furthermore, our analysis is general, and it might be extended to other settings.
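
The basic sampling primitive can be sketched as follows: repeatedly pick a random node pair, recover one shortest path between them by BFS, and credit its interior vertices. KADABRA additionally samples uniformly among all shortest paths, uses a balanced bidirectional BFS, and stops adaptively; none of that is reproduced here, and the adjacency-list dictionary is an assumed input format.

```python
import random
from collections import deque

def sample_betweenness(adj, n_samples, rng=None):
    """Crude estimate of (normalized) betweenness centrality by sampling
    node pairs and one shortest path per pair via BFS.
    adj: dict mapping node -> iterable of neighbours (unweighted graph)."""
    rng = rng or random.Random(0)
    nodes = list(adj)
    score = {v: 0.0 for v in nodes}
    for _ in range(n_samples):
        s, t = rng.sample(nodes, 2)
        pred, queue = {s: None}, deque([s])   # BFS, one predecessor per node
        while queue and t not in pred:
            u = queue.popleft()
            for w in adj[u]:
                if w not in pred:
                    pred[w] = u
                    queue.append(w)
        if t not in pred:
            continue                          # s and t are disconnected
        v = pred[t]
        while v is not None and v != s:       # credit interior vertices only
            score[v] += 1.0 / n_samples
            v = pred[v]
    return score

# toy usage on a small path-plus-triangle graph
adj = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3, 5], 5: [4]}
print(sample_betweenness(adj, 2000))
```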

Journal ArticleDOI
TL;DR: This article introduces and analyzes a new adaptive algorithm for solving symmetric positive definite linear systems in cases where several preconditioners are available or the usual preconditioner is a sum of contributions; it is observed to be optimal in terms of local solves.
Abstract: This article introduces and analyzes a new adaptive algorithm for solving symmetric positive definite linear systems in cases where several preconditioners are available or the usual preconditioner is a sum of contributions. A new theoretical result allows us to select, at each iteration, whether a classical preconditioned conjugate gradient (CG) iteration is sufficient (i.e., the error decreases by a factor of at least some chosen ratio) or whether convergence needs to be accelerated by performing an iteration of multipreconditioned CG [4]. This is first presented in an abstract framework with the one strong assumption being that a bound for the smallest eigenvalue of the preconditioned operator is available. Then, the algorithm is applied to the balancing domain decomposition method and its behavior is illustrated numerically. In particular, it is observed to be optimal in terms of local solves, for both well-conditioned and ill-conditioned test cases, which makes it a good candidate to be a default par...

Journal ArticleDOI
TL;DR: A new CC algorithm is proposed in which part of the available computational budget is spent on adapting both the dimensionality of subcomponents and the number of evolved individuals during the optimization process; it can outperform a state-of-the-art algorithm based on adaptive equally sized decompositions.

Journal ArticleDOI
TL;DR: This letter designs an adaptive algorithm which iteratively performs sliding-window channel estimation, partial FFT combining and data detection across subcarriers, and introduces a new parameter, the "residual ICI span", to counteract the post-combining ICI and provide better system performance.
Abstract: Partial FFT demodulation is a newly-emerging technique to mitigate the inter-carrier interference (ICI) of orthogonal frequency division multiplexing (OFDM) systems over time-varying underwater acoustic channels. In this letter, we extend the partial FFT demodulation method for a single-input single-output (SISO) configuration to the multiple-input multiple-output (MIMO) case. By assuming no channel knowledge, we design an adaptive algorithm which performs sliding-window channel estimation, partial FFT combining and data detection across subcarriers iteratively. Furthermore, a new parameter “residual ICI span” is introduced to counteract the post-combining ICI and provide a better system performance.

Posted Content
TL;DR: In this paper, the authors design and analyze algorithms for online linear optimization that have optimal regret and at the same time do not need to know any upper or lower bounds on the norm of the loss vectors.
Abstract: We design and analyze algorithms for online linear optimization that have optimal regret and at the same time do not need to know any upper or lower bounds on the norm of the loss vectors. Our algorithms are instances of the Follow the Regularized Leader (FTRL) and Mirror Descent (MD) meta-algorithms. We achieve adaptiveness to the norms of the loss vectors by scale invariance, i.e., our algorithms make exactly the same decisions if the sequence of loss vectors is multiplied by any positive constant. The algorithm based on FTRL works for any decision set, bounded or unbounded. For unbounded decisions sets, this is the first adaptive algorithm for online linear optimization with a non-vacuous regret bound. In contrast, we show lower bounds on scale-free algorithms based on MD on unbounded domains.
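
A simple way to see scale invariance on a bounded decision set is an online gradient method whose step size is normalized by the cumulative gradient norms, as sketched below; this AdaGrad-norm-style illustration conveys the property only and is not the paper's FTRL/MD algorithms or their unbounded-domain guarantees.

```python
import numpy as np

def scale_free_ogd(loss_vectors, radius=1.0):
    """Online linear optimization over the Euclidean ball of given radius.
    Step size radius / sqrt(sum of squared loss-vector norms): multiplying
    every loss vector by a constant c > 0 leaves the iterates unchanged."""
    w, sq_sum = None, 0.0
    iterates = []
    for g in loss_vectors:
        g = np.asarray(g, dtype=float)
        if w is None:
            w = np.zeros_like(g)
        iterates.append(w.copy())            # play w_t, then observe g_t
        sq_sum += g @ g
        if sq_sum > 0:
            w = w - (radius / np.sqrt(sq_sum)) * g
        norm = np.linalg.norm(w)
        if norm > radius:                    # project back onto the ball
            w *= radius / norm
    return iterates

# doubling every loss vector does not change the decisions:
gs = [np.array([1.0, -2.0]), np.array([0.5, 0.3]), np.array([-1.0, 1.0])]
a = scale_free_ogd(gs)
b = scale_free_ogd([2 * g for g in gs])
assert all(np.allclose(x, y) for x, y in zip(a, b))
```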

Journal ArticleDOI
TL;DR: Robust diffusion estimation algorithms using robust functions such as Huber's cost function and an error-saturation nonlinearity are proposed here in order to make the network fault-tolerant; the results show that the fault-tolerant distributed estimation method is robust to node failure.

Journal ArticleDOI
TL;DR: An adaptive algorithm is formulated which steers the local mesh-refinement and the multiplicity of the knots; numerical experiments show that the proposed adaptive strategy leads to optimal convergence and that the related IGA boundary element methods are superior to standard boundary element methods with piecewise polynomials.
Abstract: We derive and discuss a posteriori error estimators for Galerkin and collocation IGA boundary element methods for weakly singular integral equations of the first-kind in 2D. While recent own work considered the Faermann residual error estimator for Galerkin IGA boundary element methods, the present work focuses more on collocation and weighted-residual error estimators, which provide reliable upper bounds for the energy error. Our analysis allows piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. We formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments show that the proposed adaptive strategy leads to optimal convergence, and related IGA boundary element methods are superior to standard boundary element methods with piecewise polynomials.
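
The solve-estimate-mark-refine structure that such an adaptive algorithm follows can be sketched generically; the Dörfler (bulk-chasing) marking below is a standard choice used here purely to illustrate the loop, and it does not reproduce the paper's knot-multiplicity steering for NURBS.

```python
import numpy as np

def doerfler_mark(eta, theta=0.5):
    """Return indices of a (nearly) minimal set M of elements with
    sum(eta[M]**2) >= theta * sum(eta**2) (bulk-chasing marking)."""
    order = np.argsort(eta)[::-1]                 # largest indicators first
    cumulative = np.cumsum(eta[order] ** 2)
    k = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:k]

def adaptive_loop(solve, estimate, refine, mesh, tol=1e-4, max_iter=30):
    """Generic adaptive loop: SOLVE -> ESTIMATE -> MARK -> REFINE.
    solve(mesh) -> discrete solution, estimate(mesh, u) -> per-element
    error indicators, refine(mesh, marked) -> refined mesh (all user-supplied)."""
    for _ in range(max_iter):
        u = solve(mesh)
        eta = estimate(mesh, u)                   # local error indicators
        if np.sqrt(np.sum(eta ** 2)) < tol:
            return u, mesh
        mesh = refine(mesh, doerfler_mark(eta))
    return u, mesh

# marking example: indicators on 6 elements, keep enough to cover 50% of the total
print(doerfler_mark(np.array([0.5, 0.1, 0.4, 0.05, 0.3, 0.2])))
```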

Journal ArticleDOI
TL;DR: In this paper, the authors discuss two adaptive methods based on the step-doubling technique, which are, in many cases, immensely faster than the corresponding standard method with fixed timesteps and allow a tolerance level to be set for the numerical errors that turns out to be a good indicator of the actual errors.
Abstract: The computation time required by standard finite difference methods with fixed timesteps for solving fractional diffusion equations is usually very large because the number of operations required to find the solution scales as the square of the number of timesteps. Besides, the solutions of these problems usually involve markedly different time scales, which leads to quite inhomogeneous numerical errors. A natural way to address these difficulties is by resorting to adaptive numerical methods where the size of the timesteps is chosen according to the behaviour of the solution. A key feature of these methods is then the efficiency of the adaptive algorithm employed to dynamically set the size of every timestep. Here we discuss two adaptive methods based on the step-doubling technique. These methods are, in many cases, immensely faster than the corresponding standard method with fixed timesteps and they allow a tolerance level to be set for the numerical errors that turns out to be a good indicator of the actual errors.
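
Step doubling itself is a standard device: advance one full step and two half steps, use their difference as an error estimate, and grow or shrink the timestep to keep that estimate near a prescribed tolerance. The sketch below applies it to an explicit Euler step of an ordinary differential equation as a stand-in for the paper's fractional-diffusion schemes; the safety factor and error exponent are the usual textbook choices.

```python
import numpy as np

def euler_step(f, t, y, dt):
    return y + dt * f(t, y)

def step_doubling_integrate(f, t0, y0, t_end, dt=1e-3, tol=1e-6, safety=0.9):
    """Adaptive explicit Euler via step doubling: compare one step of size dt
    with two steps of size dt/2 and adapt dt to keep the difference near tol."""
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    while t < t_end:
        dt = min(dt, t_end - t)
        y_big = euler_step(f, t, y, dt)
        y_half = euler_step(f, t, y, 0.5 * dt)
        y_small = euler_step(f, t + 0.5 * dt, y_half, 0.5 * dt)
        err = np.max(np.abs(y_small - y_big)) + 1e-300   # avoid division by zero
        if err <= tol:                                   # accept the step
            t, y = t + dt, y_small
            ts.append(t)
            ys.append(y.copy())
        # grow or shrink dt (first-order method => local error ~ dt**2)
        dt *= safety * np.sqrt(tol / err)
    return np.array(ts), np.array(ys)

# usage: dy/dt = -y, whose time scales become much less demanding as y decays
ts, ys = step_doubling_integrate(lambda t, y: -y, 0.0, [1.0], 10.0)
```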

Journal ArticleDOI
TL;DR: In this article, the authors revisit and modify the use of various components of the simple adaptive control approach and show how one can use passivity concepts such that, while robustness to disturbances is maintained, asymptotically perfect tracking is still achieved in ideal conditions.
Abstract: Adaptive controllers have been developed to guarantee stability and asymptotically perfect tracking under ideal conditions. In particular, the simple adaptive control methodology has been developed to avoid the use of identifiers, observer-based controllers and, in general, to avoid using large-order adaptive controllers in the control loop. In spite of initially successful applications, it is known that the basic adaptive algorithm may lead to divergence of the adaptive gains in such non-ideal conditions as the presence of disturbances. A sigma-term adjustment has been used to maintain boundedness under disturbances, yet it is known to also eliminate perfect following even in ideal situations. Furthermore, bursting and other chaotic-like phenomena observed in connection with the sigma-term may also give pause to potential users of adaptive control. This paper revisits and modifies the use of various components of the simple adaptive control approach and shows how one can use passivity concepts such that, while robustness to disturbances is maintained, asymptotically perfect tracking is still achieved in ideal conditions.
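
The role of the sigma-term can be seen in a scalar toy simulation: without it the adaptive gain drifts under a persistent disturbance, with it the gain stays bounded but a small residual tracking error remains. The plant, gains, and adaptation constants below are illustrative and do not reproduce the paper's modified passivity-based scheme.

```python
import numpy as np

def simple_adaptive_control(sigma=0.0, T=30.0, dt=1e-3):
    """Scalar plant x_dot = a*x + b*u tracking a reference model
    xm_dot = -xm + r, with an adaptive error-feedback gain:
        u = -k*e + r,   k_dot = gamma*e**2 - sigma*k   (sigma-modification)."""
    a, b, gamma = 1.0, 1.0, 50.0
    x, xm, k = 0.0, 0.0, 0.0
    n = int(T / dt)
    e_hist, k_hist = np.zeros(n), np.zeros(n)
    for i in range(n):
        t = i * dt
        r = np.sin(0.5 * t)
        d = 0.2                              # constant output disturbance
        e = (x + d) - xm                     # measured tracking error
        u = -k * e + r
        x += (a * x + b * u) * dt
        xm += (-xm + r) * dt
        k += (gamma * e**2 - sigma * k) * dt
        e_hist[i], k_hist[i] = e, k
    return e_hist, k_hist

e0, k0 = simple_adaptive_control(sigma=0.0)   # gain keeps creeping up under the disturbance
e1, k1 = simple_adaptive_control(sigma=1.0)   # gain stays bounded, small residual error
```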

Journal ArticleDOI
TL;DR: The agPCE method was indeed able to perform UQ and SA at a significantly lower computational cost than the gPCE while still retaining accurate results; cost reductions ranged between 70-80% and 50-90% for the AAA and AVF models, respectively.
Abstract: When applying models to patient-specific situations, the impact of model input uncertainty on the model output uncertainty has to be assessed. Proper uncertainty quantification (UQ) and sensitivity analysis (SA) techniques are indispensable for this purpose. An efficient approach for UQ and SA is the generalized polynomial chaos expansion (gPCE) method, where model response is expanded into a finite series of polynomials that depend on the model input (i.e., a meta-model). However, because of the intrinsic high computational cost of three-dimensional (3D) cardiovascular models, performing the number of model evaluations required for the gPCE is often computationally prohibitively expensive. Recently, Blatman and Sudret (2010, "An Adaptive Algorithm to Build Up Sparse Polynomial Chaos Expansions for Stochastic Finite Element Analysis," Probab. Eng. Mech., 25(2), pp. 183-197) introduced the adaptive sparse gPCE (agPCE) in the field of structural engineering. This approach reduces the computational cost with respect to the gPCE, by only including polynomials that significantly increase the meta-model's quality. In this study, we demonstrate the agPCE by applying it to a 3D abdominal aortic aneurysm (AAA) wall mechanics model and a 3D model of flow through an arteriovenous fistula (AVF). The agPCE method was indeed able to perform UQ and SA at a significantly lower computational cost than the gPCE, while still retaining accurate results. Cost reductions ranged between 70-80% and 50-90% for the AAA and AVF model, respectively.

Proceedings Article
01 Jan 2016
TL;DR: Ada Newton, as discussed by the authors, is an adaptive algorithm that uses Newton's method with adaptive sample sizes: the training set is enlarged by a factor larger than one in such a way that the minimizer for the current training set lies in the local neighborhood of the optimal argument for the next training set.
Abstract: We consider empirical risk minimization for large-scale datasets. We introduce Ada Newton as an adaptive algorithm that uses Newton's method with adaptive sample sizes. The main idea of Ada Newton is to increase the size of the training set by a factor larger than one in a way that the minimization variable for the current training set is in the local neighborhood of the optimal argument of the next training set. This makes it possible to exploit the quadratic convergence property of Newton's method and reach the statistical accuracy of each training set with only one iteration of Newton's method. We show theoretically that we can iteratively increase the sample size while applying single Newton iterations without line search and staying within the statistical accuracy of the regularized empirical risk. In particular, we can double the size of the training set in each iteration when the number of samples is sufficiently large. Numerical experiments on various datasets confirm the possibility of increasing the sample size by a factor of 2 at each iteration, which implies that Ada Newton achieves the statistical accuracy of the full training set with about two passes over the dataset.
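
The sample-size-doubling idea can be sketched for l2-regularized logistic regression: solve a small subproblem with Newton's method, then repeatedly double the subset and take a single Newton step warm-started at the previous solution. The dataset, regularization, and growth schedule below are illustrative assumptions, not the authors' statistical-accuracy-based schedule.

```python
import numpy as np

def logistic_newton_step(w, X, y, lam):
    """One Newton step for l2-regularized logistic regression (labels in {0, 1})."""
    z = np.clip(X @ w, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))
    grad = X.T @ (p - y) / len(y) + lam * w
    weights = p * (1.0 - p)                          # per-sample Hessian weights
    H = (X.T * weights) @ X / len(y) + lam * np.eye(X.shape[1])
    return w - np.linalg.solve(H, grad)

def ada_newton_like(X, y, lam=1e-2, n0=64, factor=2):
    """Grow the training subset geometrically and take one warm-started
    Newton step per subset (Ada Newton-style sample-size schedule)."""
    n, d = X.shape
    w = np.zeros(d)
    m = min(n0, n)
    for _ in range(10):                              # solve the initial small problem well
        w = logistic_newton_step(w, X[:m], y[:m], lam)
    while m < n:
        m = min(factor * m, n)
        w = logistic_newton_step(w, X[:m], y[:m], lam)   # a single step per enlargement
    return w

# toy usage on synthetic data
rng = np.random.default_rng(0)
n, d = 20000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)
w_hat = ada_newton_like(X, y)
```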

Journal ArticleDOI
TL;DR: A heuristic-based approach to self-adapt and reconfigure the wake-up schedule of the nodes in wireless body area networks (WBANs) is presented, together with a latency-energy-optimized traffic-aware dynamic medium access control protocol.
Abstract: This paper presents a heuristic-based approach to self-adapt and reconfigure the wake-up schedule of the nodes in wireless body area networks (WBANs). A latency-energy-optimized traffic-aware dynamic medium access control protocol is presented. The protocol is based on an adaptive algorithm that allows the sensor nodes to adapt their wake-up and sleep patterns efficiently under static and dynamic traffic variations. The heuristic approach helps to characterize the algorithmic parameters with the objective of investigating the behavior of the convergence patterns of the WBAN nodes in a non-linear system. An open-loop form is developed by keeping the wake-up interval ($I_{\textrm{wu}}$) fixed, followed by the closed-loop adaptive system which updates $I_{\textrm{wu}}$ at every wake-up instant. An exhaustive search is conducted for different initial wake-up interval values, which shows that (on average) the algorithm parameters behave monotonically in the open-loop system, whereas they follow a decaying function in the closed-loop form. Various performance metrics, such as energy consumption, packet delay, packet delivery ratio, and convergence speed (for reaching a steady state), are evaluated. It is observed that the convergence time varies from 8 to 72 s under a fixed packet transmission rate, whereas the algorithm re-converges (within 8 s) whenever the transmission rate changes.