
Showing papers on "Convergence (routing) published in 2020"


Proceedings Article
01 Dec 2020
TL;DR: This paper provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency and proposes FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
Abstract: In federated optimization, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round. Naive weighted aggregation of such models causes objective inconsistency, that is, the global model converges to a stationary point of a mismatched objective function which can be arbitrarily different from the true objective. This paper provides a general framework to analyze the convergence of federated heterogeneous optimization algorithms. It subsumes previously proposed methods such as FedAvg and FedProx and provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency. Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
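The normalized-averaging idea can be sketched in a few lines. This is an illustrative simplification of FedNova-style aggregation (function and variable names are ours), not the paper's implementation:

```python
import numpy as np

def fednova_style_aggregate(global_w, client_ws, client_steps, client_weights):
    """Normalized averaging sketch: each client's cumulative update is
    divided by its number of local steps, so clients that ran more local
    updates do not bias the global objective; an effective step count
    then restores the average update magnitude."""
    p = np.asarray(client_weights, dtype=float)
    p = p / p.sum()
    # per-client normalized update directions
    dirs = [(w - global_w) / tau for w, tau in zip(client_ws, client_steps)]
    tau_eff = float(np.dot(p, client_steps))  # effective number of local steps
    return global_w + tau_eff * sum(pi * d for pi, d in zip(p, dirs))
```

With equally weighted clients whose normalized directions agree, the aggregate moves along that common direction regardless of how many local steps each client ran, which is the sense in which objective inconsistency is removed.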

468 citations


Journal ArticleDOI
TL;DR: This is the first theoretical work that shows the consistency of PINNs, and it is shown that the sequence of minimizers strongly converges to the PDE solution in $C^0$.
Abstract: Physics informed neural networks (PINNs) are deep learning based techniques for solving partial differential equations (PDEs) encountered in computational science and engineering. Guided by data and physical laws, PINNs find a neural network that approximates the solution to a system of PDEs. Such a neural network is obtained by minimizing a loss function in which any prior knowledge of PDEs and data are encoded. Despite its remarkable empirical success in one-, two-, or three-dimensional problems, there is little theoretical justification for PINNs. As the number of data grows, PINNs generate a sequence of minimizers which correspond to a sequence of neural networks. We want to answer the question: Does the sequence of minimizers converge to the solution to the PDE? We consider two classes of PDEs: linear second-order elliptic and parabolic. By adapting the Schauder approach and the maximum principle, we show that the sequence of minimizers strongly converges to the PDE solution in $C^0$. Furthermore, we show that if each minimizer satisfies the initial/boundary conditions, the convergence mode becomes $H^1$. Computational examples are provided to illustrate our theoretical findings. To the best of our knowledge, this is the first theoretical work that shows the consistency of PINNs.
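The loss being minimized can be sketched for a 1-D model problem. This toy version uses finite differences in place of the automatic differentiation a real PINN would apply to its network; the Poisson test problem and all names are ours:

```python
import numpy as np

def pinn_style_loss(u, xs_interior, xs_boundary, f, g, h=1e-4):
    """PINN-style loss for the 1-D Poisson problem u''(x) = f(x) with
    Dirichlet data u = g on the boundary: mean squared PDE residual on
    interior points plus mean squared boundary mismatch."""
    uv = np.vectorize(u)
    residual = (uv(xs_interior + h) - 2 * uv(xs_interior)
                + uv(xs_interior - h)) / h**2 - f(xs_interior)
    boundary = uv(xs_boundary) - g(xs_boundary)
    return np.mean(residual**2) + np.mean(boundary**2)
```

A candidate u that solves the PDE and matches the boundary data drives both terms to (numerically) zero, which is the sense in which minimizers of this loss approximate the PDE solution.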

153 citations


Journal ArticleDOI
20 Apr 2020
TL;DR: This paper empirically shows that the Conditional Value-at-Risk as an aggregation function leads to faster convergence to better solutions for all combinatorial optimization problems tested in this study.
Abstract: Hybrid quantum/classical variational algorithms can be implemented on noisy intermediate-scale quantum computers and can be used to find solutions for combinatorial optimization problems. Approaches discussed in the literature minimize the expectation of the problem Hamiltonian for a parameterized trial quantum state. The expectation is estimated as the sample mean of a set of measurement outcomes, while the parameters of the trial state are optimized classically. This procedure is fully justified for quantum mechanical observables such as molecular energies. In the case of classical optimization problems, which yield diagonal Hamiltonians, we argue that aggregating the samples in a different way than the expected value is more natural. In this paper we propose the Conditional Value-at-Risk as an aggregation function. We empirically show - using classical simulation as well as real quantum hardware - that this leads to faster convergence to better solutions for all combinatorial optimization problems tested in our study. We also provide analytical results to explain the observed difference in performance between different variational algorithms.
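The aggregation itself is simple to state: instead of the sample mean of all measured energies, average only the best alpha-fraction. A minimal sketch (for minimization, so "best" means lowest):

```python
import numpy as np

def cvar_aggregate(samples, alpha):
    """Conditional Value-at-Risk of a set of measurement outcomes:
    the mean of the lowest ceil(alpha * n) values.  alpha = 1 recovers
    the ordinary sample mean; small alpha focuses the optimizer on the
    best outcomes observed so far."""
    s = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(s))))
    return s[:k].mean()
```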

135 citations


Journal ArticleDOI
TL;DR: A sufficient condition for Lyapunov stability is also given for this arbitrarily-chosen-time stable system, and the efficacy of the proposed method is illustrated on a practical system, viz., a magnetic suspension system.

122 citations


Journal ArticleDOI
TL;DR: A novel computational paradigm based on a Morlet wavelet neural network, optimized with the combined strengths of genetic algorithms (GAs) and the interior-point algorithm (IPA), is presented for solving the second-order Lane–Emden equation.

116 citations


Journal ArticleDOI
11 May 2020
TL;DR: In this paper, an individual Coupled Adaptive Number of Shots (iCANS) optimizer is proposed for variational hybrid quantum-classical algorithms (VHQCAs) that frugally selects the number of measurements (i.e., the number of shots) both for a given iteration and for a given partial derivative in a stochastic gradient descent.
Abstract: Variational hybrid quantum-classical algorithms (VHQCAs) have the potential to be useful in the era of near-term quantum computing. However, recently there has been concern regarding the number of measurements needed for convergence of VHQCAs. Here, we address this concern by investigating the classical optimizer in VHQCAs. We introduce a novel optimizer called individual Coupled Adaptive Number of Shots (iCANS). This adaptive optimizer frugally selects the number of measurements (i.e., number of shots) both for a given iteration and for a given partial derivative in a stochastic gradient descent. We numerically simulate the performance of iCANS for the variational quantum eigensolver and for variational quantum compiling, with and without noise. In all cases, and especially in the noisy case, iCANS tends to out-perform state-of-the-art optimizers for VHQCAs. We therefore believe this adaptive optimizer will be useful for realistic VHQCA implementations, where the number of measurements is limited.
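The shot-allocation idea can be sketched as follows. This is a hedged simplification of an iCANS-style rule, not the paper's exact algorithm: shots per partial derivative grow with the estimated variance and shrink with the squared gradient estimate, so flat, noisy directions are not over-measured. All names and the clipping bounds are ours.

```python
import numpy as np

def allocate_shots(grad_est, var_est, lr, lipschitz, s_min=2, s_max=10_000):
    """Frugal shot allocation sketch: each partial derivative i receives
    roughly scale * var_i / grad_i^2 shots, clipped to [s_min, s_max],
    where scale depends on the learning rate and a Lipschitz constant."""
    g = np.asarray(grad_est, dtype=float)
    v = np.asarray(var_est, dtype=float)
    scale = 2.0 * lipschitz * lr / max(2.0 - lipschitz * lr, 1e-12)
    shots = np.ceil(scale * v / np.maximum(g**2, 1e-12))
    return np.clip(shots, s_min, s_max).astype(int)
```

A direction with a small gradient estimate (here 0.1 vs. 1.0 at equal variance) is assigned many more shots, which is the frugality-versus-accuracy trade the optimizer automates.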

109 citations


Journal ArticleDOI
TL;DR: A deep-reinforcement-learning-based quality-of-service (QoS)-aware secure routing protocol (DQSP) is proposed, which can extract knowledge from history traffic demands by interacting with the underlying network environment, and dynamically optimize the routing policy.
Abstract: Recently, with the proliferation of communication devices, Internet of Things (IoT) has become an emerging technology which facilitates massive devices to be enabled with connectivity by heterogeneous networks. However, it is usually a technical challenge for traditional networks to handle such a huge number of devices in an efficient manner. Recently, the software-defined network (SDN) technique with its agility and elasticity has been incorporated into IoT to meet the potential scale and flexibility requirements and form a novel IoT architecture also known as SDN-IoT. As the size of SDN-IoT increases, efficient routing protocols with low latency and high security are required, while the default routing protocols of SDN are still vulnerable to dynamic change of flow control rules especially when the network is under attack. To address the above issues, a deep-reinforcement-learning-based quality-of-service (QoS)-aware secure routing protocol (DQSP) is proposed in this article. While guaranteeing the QoS, our method can extract knowledge from history traffic demands by interacting with the underlying network environment, and dynamically optimize the routing policy. Extensive simulation experiments have been conducted with respect to several network performance metrics, demonstrating that our DQSP has good convergence and high effectiveness. Moreover, DQSP outperforms the traditional OSPF routing protocol, at least 10% relative performance gains in most cases.

96 citations


Journal ArticleDOI
TL;DR: A unified algorithmic framework that combines variance reduction with gradient tracking to achieve robust performance and fast convergence and provides explicit theoretical guarantees of the corresponding methods when the objective functions are smooth and strongly convex.
Abstract: Decentralized methods to solve finite-sum minimization problems are important in many signal processing and machine learning tasks where the data samples are distributed across a network of nodes, and raw data sharing is not permitted due to privacy and/or resource constraints. In this article, we review decentralized stochastic first-order methods and provide a unified algorithmic framework that combines variance reduction with gradient tracking to achieve robust performance and fast convergence. We provide explicit theoretical guarantees of the corresponding methods when the objective functions are smooth and strongly convex and show their applicability to nonconvex problems via numerical experiments. Throughout the article, we provide intuitive illustrations of the main technical ideas by casting appropriate tradeoffs and comparisons among the methods of interest and by highlighting applications to decentralized training of machine learning models.
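The gradient-tracking ingredient can be sketched in isolation. This toy version (names ours) omits the variance-reduction part and uses exact local gradients:

```python
import numpy as np

def gradient_tracking(grads, x0, W, lr, iters):
    """Decentralized gradient tracking sketch: each node mixes its iterate
    x and an auxiliary tracker y with its neighbors through the doubly
    stochastic matrix W; y tracks the network-average gradient, so every
    node descends on the global objective, not just its local one."""
    x = np.array(x0, dtype=float)
    y = np.array([g(xi) for g, xi in zip(grads, x)])  # trackers start at local grads
    for _ in range(iters):
        g_old = np.array([g(xi) for g, xi in zip(grads, x)])
        x = W @ x - lr * y
        g_new = np.array([g(xi) for g, xi in zip(grads, x)])
        y = W @ y + g_new - g_old
    return x
```

For two nodes with local gradients x - 1 and x + 1 (so the global minimizer is 0), every node converges to the global solution even though neither local objective has its minimum there.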

95 citations



Journal ArticleDOI
TL;DR: Experimental results reveal that the proposed chaos-enhanced bat algorithm is superior not only to well-established algorithms such as the original method but also to the latest improved approaches, and can deal with unconstrained and constrained feature spaces effectively.

87 citations


Journal ArticleDOI
TL;DR: A novel distributed dynamic event-triggered Newton-Raphson algorithm is proposed to solve the double-mode energy management problem in a fully distributed fashion and it is proved that each participant can asymptotically converge to the global optimal point.
Abstract: The islanded and network-connected modes are expected to be modeled in a unified form, as well as in a distributed fashion, for a multi-energy system. In this way, the adaptability and flexibility of the multi-energy system can be enhanced. To this aim, this paper establishes a double-mode energy management model for the multi-energy system. It is formed by many energy bodies. With such a model, each participant is able to adaptively respond to the change of mode switching. Furthermore, a novel distributed dynamic event-triggered Newton-Raphson algorithm is proposed to solve the double-mode energy management problem in a fully distributed fashion. In this method, the idea of Newton descent along with the dynamic event-triggered communication strategy are introduced and embedded in the execution of the proposed algorithm. With this effort, each participant can adequately utilize the second-order information to speed up convergence. The optimality is not affected. Meanwhile, the proposed algorithm can be implemented with asynchronous communication and without needing special initialization conditions. It exhibits better flexibility and adaptability especially when the system modes are changed. In addition, the continuous-time algorithm is executed with discrete-time communication driven by the proposed dynamic event-triggered mechanism. It results in reduced communication interaction and avoids needing continuous-time information transmission. It is also proved that each participant can asymptotically converge to the global optimal point. Finally, simulation results show the effectiveness of the proposed model and illustrate the faster convergence feature of the proposed algorithm.

Journal ArticleDOI
10 Jul 2020-Fractals
TL;DR: In this article, an attractive and reliable analytical technique is implemented for constructing numerical solutions of the fractional Lienard model enclosed with suitable nonhomogeneous initial conditions.
Abstract: In this paper, an attractive reliable analytical technique is implemented for constructing numerical solutions for the fractional Lienard’s model enclosed with suitable nonhomogeneous initial conditions...

Journal ArticleDOI
TL;DR: This article proposes a high-order pseudopartial derivative-based model-free adaptive iterative learning controller (HOPPD-MFAILC) that can track the desired trajectory with improved convergence and tracking performance.
Abstract: Pneumatic artificial muscles (PAMs) have been widely used in actuation of medical devices due to their intrinsic compliance and high power-to-weight ratio features. However, the nonlinearity and time-varying nature of PAMs make it challenging to maintain high-performance tracking control. In this article, a high-order pseudopartial derivative-based model-free adaptive iterative learning controller (HOPPD-MFAILC) is proposed to achieve fast convergence speed. The dynamics of PAM is converted into a dynamic linearization model during iterations; meanwhile, a high-order estimation algorithm is designed to estimate the pseudopartial derivative component of the linearization model by only utilizing the input and output data in previous iterations. The stability and convergence performance of the controller are verified through theoretical analysis. Simulation and experimental results on PAM demonstrate that the proposed HOPPD-MFAILC can track the desired trajectory with improved convergence and tracking performance.
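The iterative-learning idea underneath can be sketched with the classical P-type update, which corrects the whole input trajectory by the previous trial's tracking error. The paper's HOPPD-MFAILC additionally estimates a pseudopartial derivative online, which this sketch (names ours) omits:

```python
import numpy as np

def p_type_ilc(plant, u0, y_ref, gain, trials):
    """P-type iterative learning control sketch: after each repeated
    trial, add gain * (tracking error) to the input trajectory.  When
    the resulting error recursion is contractive, the error shrinks
    from trial to trial."""
    u = np.array(u0, dtype=float)
    e = y_ref - plant(u)
    for _ in range(trials):
        u = u + gain * e          # learn from the previous trial
        e = y_ref - plant(u)      # error of the new trial
    return u, e
```

For a static plant y = 0.5 u, the error halves every trial, so the learned input converges to the one that reproduces the reference exactly.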

Journal ArticleDOI
TL;DR: This paper reviews the progress of smoothed particle hydrodynamics (SPH), a mesh-free Lagrangian method suitable for complex flows with interfaces and multiple phases, towards high-order converged simulations.
Abstract: This paper presents a review of the progress of smoothed particle hydrodynamics (SPH) towards high-order converged simulations. As a mesh-free Lagrangian method suitable for complex flows with interfaces and multiple phases, SPH has developed considerably in the past decade. While original applications were in astrophysics, early engineering applications showed the versatility and robustness of the method without emphasis on accuracy and convergence. The early method was of weakly compressible form resulting in noisy pressures due to spurious pressure waves. This was effectively removed in the incompressible (divergence-free) form which followed; since then the weakly compressible form has been advanced, reducing pressure noise. Now numerical convergence studies are standard. While the method is computationally demanding on conventional processors, it is well suited to parallel processing on massively parallel computing and graphics processing units. Applications are diverse and encompass wave-structure interaction, geophysical flows due to landslides, nuclear sludge flows, welding, gearbox flows and many others. In the state of the art, convergence is typically between the first- and second-order theoretical limits. Recent advances are improving convergence to fourth order (and higher) and these will also be outlined. This can be necessary to resolve multi-scale aspects of turbulent flow.
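At the heart of SPH is kernel interpolation over particles; the standard 1-D cubic-spline kernel illustrates it (this is a textbook form, not something specific to this review):

```python
import numpy as np

def cubic_spline_kernel_1d(x, h):
    """Standard 1-D cubic-spline SPH kernel with smoothing length h and
    compact support 2h, normalized so it integrates to one."""
    q = np.abs(np.asarray(x, dtype=float)) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w
```

Field quantities are then interpolated as A(x) ≈ Σ_j (m_j/ρ_j) A_j W(x - x_j, h); convergence studies of the kind surveyed above refine both h and the particle spacing together.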

Journal ArticleDOI
TL;DR: An integrated bi-modal computing paradigm based on Nonlinear Autoregressive Radial Basis Functions (NAR-RBFs) neural network model, a new family of deep learning with the strength of hybrid artificial neural network is presented for the solution of nonlinear chaotic dusty system (NCDS) of tiny ionized gas particles arising in fusion devices, industry, astronomy and space.
Abstract: Robust modeling of a multimodal dynamic system is a challenging and fast-growing area of research. In this study, an integrated bi-modal computing paradigm based on a Nonlinear Autoregressive Radial Basis Functions (NAR-RBFs) neural network model, a new family of deep learning with the strength of hybrid artificial neural networks, is presented for the solution of the nonlinear chaotic dusty system (NCDS) of tiny ionized gas particles arising in fusion devices, industry, astronomy, and space. In the proposed methodology, special transformations are introduced for a class of differential equations, which convert the local optimum to a global optimum. The proposed NAR-RBFs neural network model is implemented on the bi-modal NCDS represented by the Van der Pol-Mathieu equation (VdP-ME) for different scenarios based on variation in dust gain production and loss, for both small and large time domains. Excellent agreement of the proposed bi-modal computing paradigm with standard state-of-the-art numerical solvers is verified by attaining RMSE down to 1E-38 for the nonlinear VdP-ME. Accuracy of the proposed model in the critical time domain is also validated by convergence, stability, and consistency analysis on statistics calculated from absolute error, root-mean-square error, and analysis-of-variance metrics.

Journal ArticleDOI
TL;DR: GT-VR is a stochastic and decentralized framework to minimize a finite-sum of functions available over a network of nodes; it is particularly suitable for problems where large-scale, potentially private data cannot be collected or processed at a centralized server.
Abstract: This paper describes a novel algorithmic framework to minimize a finite-sum of functions available over a network of nodes. The proposed framework, that we call GT-VR , is stochastic and decentralized, and thus is particularly suitable for problems where large-scale, potentially private data, cannot be collected or processed at a centralized server. The GT-VR framework leads to a family of algorithms with two key ingredients: (i) local variance reduction , that enables estimating the local batch gradients from arbitrarily drawn samples of local data; and, (ii) global gradient tracking , which fuses the gradient information across the nodes. Naturally, combining different variance reduction and gradient tracking techniques leads to different algorithms of interest with valuable practical tradeoffs and design considerations. Our focus in this paper is on two instantiations of the ${\bf \mathtt {GT-VR}}$ framework, namely GT-SAGA and GT-SVRG , that, similar to their centralized counterparts ( SAGA and SVRG ), exhibit a compromise between space and time. We show that both GT-SAGA and GT-SVRG achieve accelerated linear convergence for smooth and strongly convex problems and further describe the regimes in which they achieve non-asymptotic, network-independent linear convergence rates that are faster with respect to the existing decentralized first-order schemes. Moreover, we show that both algorithms achieve a linear speedup in such regimes compared to their centralized counterparts that process all data at a single node. Extensive simulations illustrate the convergence behavior of the corresponding algorithms.

Journal ArticleDOI
TL;DR: A new sliding surface is first proposed and a robust control is developed to ensure global approximate fixed-time convergence of tracking errors; it is proven that the position tracking errors globally converge to an arbitrarily small set within a uniformly bounded time and then go to zero exponentially.

Journal ArticleDOI
TL;DR: This paper proposes a least-squares modulated composite learning robot control based on Moore–Penrose pseudoinverse to improve the performance of parameter convergence and demonstrates the effectiveness and superiority of the proposed approach.

Journal ArticleDOI
TL;DR: This paper provides a unified framework for the analysis and design of parameter estimators and shows that they lie at the core of some modified schemes recently proposed in the literature, and uses this framework to propose some new schemes with relaxed conditions for convergence and improved transient performance.

Journal ArticleDOI
TL;DR: This paper considers a class of representative problems and proposes a novel iterative algorithm for PinT computation that can solve the PDEs for all the discrete time points simultaneously via the diagonalization technique proposed recently.

Proceedings Article
15 Jun 2020
TL;DR: A new class of implicit-depth model based on the theory of monotone operators, the Monotone Operator Equilibrium Network (MON), is developed, which vastly outperforms Neural ODE-based models while also being more computationally efficient.
Abstract: Implicit-depth models such as Deep Equilibrium Networks have recently been shown to match or exceed the performance of traditional deep networks while being much more memory efficient. However, these models suffer from unstable convergence to a solution and lack guarantees that a solution exists. On the other hand, Neural ODEs, another class of implicit-depth models, do guarantee existence of a unique solution but perform poorly compared with traditional networks. In this paper, we develop a new class of implicit-depth model based on the theory of monotone operators, the Monotone Operator Equilibrium Network (MON). We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem, which admits efficient solvers with guaranteed, stable convergence. We then develop a parameterization of the network which ensures that all operators remain monotone, which guarantees the existence of a unique equilibrium point. Finally, we show how to instantiate several versions of these models, and implement the resulting iterative solvers, for structured linear operators such as multi-scale convolutions. The resulting models vastly outperform the Neural ODE-based models while also being more computationally efficient. Code is available at this http URL.
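The guaranteed solver can be sketched as a forward-backward splitting iteration. This toy version (names ours) finds z* = relu(W z* + b) and enforces the monotonicity condition crudely by rescaling W; the paper's parameterization and solvers are more sophisticated:

```python
import numpy as np

def mon_forward_backward(W, b, alpha=0.5, iters=500):
    """Forward-backward splitting sketch for an equilibrium z* = relu(W z* + b).
    When I - W is strongly monotone (here crudely ensured via ||W|| < 1),
    the damped fixed-point iteration below converges to the unique
    equilibrium regardless of the starting point."""
    W = np.asarray(W, dtype=float)
    norm = np.linalg.norm(W, 2)
    if norm >= 1.0:
        W = 0.9 * W / norm  # rescale so the equilibrium is unique (sketch only)
    z = np.zeros(len(b))
    for _ in range(iters):
        z = np.maximum(0.0, (1.0 - alpha) * z + alpha * (W @ z + b))
    return z
```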

Journal ArticleDOI
TL;DR: In this article, the authors study the training process of deep neural networks from the Fourier analysis perspective and demonstrate that DNNs often fit target functions from low to high frequencies on high-dimensional benchmark datasets such as MNIST/CIFAR10 and deep neural networks such as VGG16.
Abstract: We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective. We demonstrate a very universal Frequency Principle (F-Principle) --- DNNs often fit target functions from low to high frequencies --- on high-dimensional benchmark datasets such as MNIST/CIFAR10 and deep neural networks such as VGG16. This F-Principle of DNNs is opposite to the behavior of most conventional iterative numerical schemes (e.g., Jacobi method), which exhibit faster convergence for higher frequencies for various scientific computing problems. With a simple theory, we illustrate that this F-Principle results from the regularity of the commonly used activation functions. The F-Principle implies an implicit bias that DNNs tend to fit training data by a low-frequency function. This understanding provides an explanation of good generalization of DNNs on most real datasets and bad generalization of DNNs on parity function or randomized dataset.
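The F-Principle is typically visualized by tracking the per-frequency error during training; a sketch of that measurement on a uniform 1-D grid (names ours, and the diagnostic itself is generic rather than taken from the paper):

```python
import numpy as np

def frequency_error(pred, target):
    """Relative error between prediction and target at each Fourier
    frequency, computed on a uniform grid.  Plotting this over training
    iterations is one way to observe low-frequency errors shrinking
    before high-frequency ones."""
    fp, ft = np.fft.rfft(pred), np.fft.rfft(target)
    return np.abs(fp - ft) / np.maximum(np.abs(ft), 1e-12)
```

A model that has fitted only the low-frequency part of the target shows near-zero error at the fitted bins and error close to 1 at the unfitted high-frequency bins.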

Journal ArticleDOI
TL;DR: A new family of very high order accurate direct Arbitrary-Lagrangian-Eulerian Finite Volume and Discontinuous Galerkin schemes for the solution of nonlinear hyperbolic PDE systems on moving two-dimensional Voronoi meshes that are regenerated at each time step and which explicitly allow topology changes in time.

Journal ArticleDOI
TL;DR: A distributed optimization algorithm, combined with a continuous integral sliding-mode control scheme, is proposed to solve the finite-time optimization problem of multiagent systems in the presence of disturbances, while rejecting local disturbance signals.
Abstract: This paper presents continuous distributed algorithms to solve the finite-time distributed convex optimization problems of multiagent systems in the presence of disturbances. The objective is to design distributed algorithms such that a team of agents seeks to minimize the sum of local objective functions in a finite-time and robust manner. Specifically, a distributed optimization algorithm, combined with a continuous integral sliding-mode control scheme, is proposed to solve this finite-time optimization problem, while rejecting local disturbance signals. The developed algorithm is further applied to solve economic dispatch and resource allocation problems, and proven that under proposed schemes, the optimal solution can be achieved in finite time, while satisfying both global equality and local inequality constraints. Examples and numerical simulations are provided to show the effectiveness of the proposed methods.

Posted Content
TL;DR: It is shown that although the minimizers of cross-entropy and related classification losses are off at infinity, network weights learned by gradient flow converge in direction, with an immediate corollary that network predictions, training errors, and the margin distribution also converge.
Abstract: In this paper, we show that although the minimizers of cross-entropy and related classification losses are off at infinity, network weights learned by gradient flow converge in direction, with an immediate corollary that network predictions, training errors, and the margin distribution also converge. This proof holds for deep homogeneous networks -- a broad class of networks allowing for ReLU, max-pooling, linear, and convolutional layers -- and we additionally provide empirical support not just close to the theory (e.g., the AlexNet), but also on non-homogeneous networks (e.g., the DenseNet). If the network further has locally Lipschitz gradients, we show that these gradients also converge in direction, and asymptotically align with the gradient flow path, with consequences on margin maximization, convergence of saliency maps, and a few other settings. Our analysis complements and is distinct from the well-known neural tangent and mean-field theories, and in particular makes no requirements on network width and initialization, instead merely requiring perfect classification accuracy. The proof proceeds by developing a theory of unbounded nonsmooth Kurdyka-Łojasiewicz inequalities for functions definable in an o-minimal structure, and is also applicable outside deep learning.

Journal ArticleDOI
TL;DR: In this paper, an adaptive filtering-based recursive identification algorithm for joint estimation of states and parameters of bilinear state-space systems with an autoregressive moving average noise was developed.
Abstract: This study develops an adaptive filtering-based recursive identification algorithm for joint estimation of states and parameters of bilinear state-space systems with an autoregressive moving average noise. In order to handle the correlated noise and unmeasurable states in parameter estimation, an adaptive filter is established to whiten the coloured noise and a bilinear state observer is constructed to update the unavailable states recursively. Then a hierarchical generalised extended least squares (HGELS) algorithm and an adaptive filtering-based HGELS algorithm are developed for simultaneously estimating the unknown states and parameters. The convergence analysis indicates that the parameter estimates can converge to their true values. A numerical example illustrates the convergence results.

Journal ArticleDOI
TL;DR: An improved algorithm to the variational iteration algorithm-II (VIA-II) for the numerical treatment of diffusion as well as convection-diffusion equations is presented, which yields accurate results, converges rapidly, and offers better robustness in comparison with other methods used in the literature.
Abstract: Variational iteration method has been extensively employed to deal with linear and nonlinear differential equations of integer and fractional order. The key property of the technique is its ability and flexibility to investigate linear and nonlinear models conveniently and accurately. The current study presents an improved algorithm to the variational iteration algorithm-II (VIA-II) for the numerical treatment of diffusion as well as convection-diffusion equations. This newly introduced modification is termed as the modified variational iteration algorithm-II (MVIA-II). The convergence of the MVIA-II is studied in the case of solving nonlinear equations. The main advantage of the MVIA-II improvement is an auxiliary parameter which makes sure a fast convergence of the standard VIA-II iteration algorithm. In order to verify the stability, accuracy, and computational speed of the method, the obtained solutions are compared numerically and graphically with the exact ones as well as with the results obtained by the previously proposed compact finite difference method and second kind Chebyshev wavelets. The comparison revealed that the modified version yields accurate results, converges rapidly, and offers better robustness in comparison with other methods used in the literature. Moreover, the basic idea depicted in this study is relied upon the possibility of the MVIA-II being utilized to handle nonlinear differential equations that arise in different fields of physical and biological sciences. A strong motivation for such applications is the fact that any discretization, transformation, or any assumptions are not required for this proposed algorithm in finding appropriate numerical solutions.
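The underlying variational iteration step can be illustrated on the simplest possible example, u' = u with u(0) = 1, where the Lagrange multiplier is λ = -1 and the correction functional reproduces the Taylor series of e^t term by term. This demonstrates the standard variational iteration method, not the paper's MVIA-II modification:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def vim_exp(iters):
    """Variational iteration for u' = u, u(0) = 1, carried out exactly on
    polynomial coefficients:
        u_{n+1}(t) = u_n(t) - integral_0^t (u_n'(s) - u_n(s)) ds
    Each sweep appends the next Taylor coefficient of e^t."""
    u = np.array([1.0])  # u_0(t) = 1
    for _ in range(iters):
        residual = P.polysub(P.polyder(u), u)   # u_n' - u_n
        u = P.polysub(u, P.polyint(residual))   # apply the correction
    return u
```

After four sweeps the coefficients are 1, 1, 1/2, 1/6, 1/24, i.e., the degree-4 Taylor polynomial of e^t, showing the rapid convergence such iteration schemes are valued for.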

Journal ArticleDOI
TL;DR: The iCIPT2 scheme couples iterative configuration interaction (iCI) with configuration selection for static correlation and state-specific Epstein-Nesbet second-order perturbation theory (PT2) for dynamic correlation; its efficacy is demonstrated numerically on benchmark examples including C2, O2, Cr2, and C6H6.
Abstract: Even when starting with very poor initial guess, the iterative configuration interaction (iCI) approach [J. Chem. Theory Comput. 12, 1169 (2016)] for strongly correlated electrons can converge from above to full CI (FCI) very quickly by constructing and diagonalizing a very small Hamiltonian matrix at each macro/micro-iteration. However, as a direct solver of the FCI problem, iCI is computationally very expensive. The problem can be mitigated by observing that a vast number of configurations have little weights in the wave function and hence do not contribute discernibly to the correlation energy. The real questions are as follows: (a) how to identify those important configurations as early as possible in the calculation and (b) how to account for the residual contributions of those unimportant configurations. It is generally true that if a high-quality yet compact variational space can be determined for describing static correlation, a low-order treatment of the residual dynamic correlation would then be sufficient. While this is common to all selected CI schemes, the "iCI with selection" scheme presented here has the following distinctive features: (1) the full spin symmetry is always maintained by taking configuration state functions (CSF) as the many-electron basis. (2) Although the selection is performed on individual CSFs, it is orbital configurations (oCFGs) that are used as the organizing units. (3) Given a coefficient pruning-threshold Cmin (which determines the size of the variational space for static correlation), the selection of important oCFGs/CSFs is performed iteratively until convergence. (4) At each iteration, for the growth of the wave function, the first-order interacting space is decomposed into disjoint subspaces so as to reduce memory requirement on the one hand and facilitate parallelization on the other hand. (5) Upper bounds (which involve only two-electron integrals) for the interactions between doubly connected oCFG pairs are used to screen each first-order interacting subspace before the first-order coefficients of individual CSFs are evaluated. (6) Upon convergence of the static correlation for a given Cmin, dynamic correlation is estimated using the state-specific Epstein-Nesbet second-order perturbation theory (PT2). The efficacy of the iCIPT2 scheme is demonstrated numerically using benchmark examples, including C2, O2, Cr2, and C6H6.

Journal ArticleDOI
TL;DR: A novel hybrid of machine learning (ML) with an evolutionary algorithm, namely Cuckoo search (CS), is proposed to solve the local-minimum problem of ML; the hybrid outperforms CS, ML, and other hybrid ML methods in terms of accuracy and considerably reduces computational cost compared to CS.

Journal ArticleDOI
TL;DR: Lyapunov stability analysis shows that the presented method guarantees fixed-time convergence of the tracking error to a small neighborhood around zero while all the other closed-loop signals keep bounded.
Abstract: This paper considers fixed-time control problem of nonstrict-feedback nonlinear system subjected to deadzone and output constraint. First, tan-type Barrier Lyapunov function (BLF) is constructed to keep system output within constraint. Next, unknown nonlinear function is approximated by radial basis function neural network (RBFNN). Using the property of Gaussian radial basis function, the upper bound of the term containing the unknown nonlinear function is derived and the updating law is proposed to estimate the square of the norm of the neural network weights. Then, virtual control inputs are developed using backstepping design and their derivatives are obtained by fixed-time differentiator. Finally, the actual control input is designed based on deadzone inverse approach. Lyapunov stability analysis shows that the presented method guarantees fixed-time convergence of the tracking error to a small neighborhood around zero while all the other closed-loop signals keep bounded. The presented control strategy addresses algebraic-loop problem, overcomes explosion of complexity and reduces the number of adaptation parameters, which is easy to be implemented with less computation burden. The presented control scheme is applied to academic system, real electromechanical system and aircraft longitudinal system and simulation results demonstrate its effectiveness.
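The tan-type BLF mentioned above has a standard closed form worth seeing: it behaves like the usual quadratic Lyapunov function for small errors but blows up as the error approaches the constraint bound, which is what keeps the output inside the constraint. A sketch of that function (a textbook form; the paper's exact construction may differ):

```python
import numpy as np

def tan_blf(e, kb):
    """Tan-type barrier Lyapunov function
        V = (kb^2 / pi) * tan(pi * e^2 / (2 * kb^2)).
    V ~ e^2 / 2 for small tracking error e, while V -> infinity as
    |e| -> kb, so keeping V bounded along trajectories keeps the error
    strictly inside the constraint (-kb, kb)."""
    ratio = np.pi * e**2 / (2.0 * kb**2)
    return (kb**2 / np.pi) * np.tan(ratio)
```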