
Showing papers on "Discretization published in 2019"


Journal ArticleDOI
TL;DR: In this paper, a method for learning optimized approximations to PDEs based on actual solutions to the known underlying equations is proposed, using neural networks to estimate spatial derivatives, which are optimized end to end to best satisfy the equations on a low-resolution grid.
Abstract: The numerical solution of partial differential equations (PDEs) is challenging because of the need to resolve spatiotemporal features over wide length- and timescales. Often, it is computationally intractable to resolve the finest features in the solution. The only recourse is to use approximate coarse-grained representations, which aim to accurately represent long-wavelength dynamics while properly accounting for unresolved small-scale physics. Deriving such coarse-grained equations is notoriously difficult and often ad hoc. Here we introduce data-driven discretization, a method for learning optimized approximations to PDEs based on actual solutions to the known underlying equations. Our approach uses neural networks to estimate spatial derivatives, which are optimized end to end to best satisfy the equations on a low-resolution grid. The resulting numerical methods are remarkably accurate, allowing us to integrate in time a collection of nonlinear equations in 1 spatial dimension at resolutions 4× to 8× coarser than is possible with standard finite-difference methods.
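As an illustration of the idea above (learned, solution-dependent stencil coefficients replacing fixed finite-difference weights), a minimal sketch follows; the network architecture, patch size, and zero-sum consistency constraint are illustrative assumptions rather than the authors' implementation, and the time integrator the paper couples this to is omitted.

```python
import torch
import torch.nn as nn

# Toy sketch of data-driven discretization: a small network maps a local patch of the
# coarse solution to stencil coefficients for du/dx, the coefficients are constrained
# to sum to zero (a basic consistency condition), and then applied to the patch.
class LearnedStencil(nn.Module):
    def __init__(self, stencil_size=5, dx=0.1):
        super().__init__()
        self.dx = dx
        self.net = nn.Sequential(
            nn.Linear(stencil_size, 32), nn.ReLU(),
            nn.Linear(32, stencil_size))

    def forward(self, patches):                              # patches: (batch, stencil_size)
        coeffs = self.net(patches)
        coeffs = coeffs - coeffs.mean(dim=1, keepdim=True)   # enforce sum of coefficients = 0
        return (coeffs * patches).sum(dim=1) / self.dx       # estimated du/dx on the coarse grid

model = LearnedStencil()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(64, 5)      # coarse-grid neighbourhoods (placeholder data)
target = torch.randn(64)          # reference derivatives, e.g. from a fine-grid solve
for _ in range(100):
    loss = ((model(patches) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```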

252 citations


Journal ArticleDOI
TL;DR: In this paper, a data-based approach to turbulence modeling by artificial neural networks is presented; the networks generalize from the data and learn approximations with a cross-correlation of up to 47%, and even 73% for the inner elements, demonstrating that the perfect closure can indeed be learned from the provided coarse-grid data.

216 citations


Journal ArticleDOI
TL;DR: This work proposes a new method for solving high-dimensional fully nonlinear second-order PDEs and shows the efficiency and the accuracy of the method in the cases of a 100-dimensional Black–Scholes–Barenblatt equation, a 100-dimensional Hamilton–Jacobi–Bellman equation, and a nonlinear expectation of a 100-dimensional G-Brownian motion.
Abstract: High-dimensional partial differential equations (PDEs) appear in a number of models from the financial industry, such as in derivative pricing models, credit valuation adjustment models, or portfolio optimization models. The PDEs in such applications are high-dimensional as the dimension corresponds to the number of financial assets in a portfolio. Moreover, such PDEs are often fully nonlinear due to the need to incorporate certain nonlinear phenomena such as default risks, transaction costs, volatility uncertainty (Knightian uncertainty), or trading constraints in the model. Such high-dimensional fully nonlinear PDEs are exceedingly difficult to solve as the computational effort for standard approximation methods grows exponentially with the dimension. In this work, we propose a new method for solving high-dimensional fully nonlinear second-order PDEs. Our method can in particular be used to sample from high-dimensional nonlinear expectations. The method is based on (1) a connection between fully nonlinear second-order PDEs and second-order backward stochastic differential equations (2BSDEs), (2) a merged formulation of the PDE and the 2BSDE problem, (3) a temporal forward discretization of the 2BSDE and a spatial approximation via deep neural nets, and (4) a stochastic gradient descent-type optimization procedure. Numerical results obtained using TensorFlow in Python illustrate the efficiency and the accuracy of the method in the cases of a 100-dimensional Black–Scholes–Barenblatt equation, a 100-dimensional Hamilton–Jacobi–Bellman equation, and a nonlinear expectation of a 100-dimensional G-Brownian motion.
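To make steps (3) and (4) concrete, the sketch below shows the same discretize-and-learn pattern for the simpler case of a semilinear PDE and a first-order BSDE (the deep BSDE scheme), not the full 2BSDE formulation of the paper; the drift, diffusion, generator, and network sizes are toy assumptions.

```python
import torch
import torch.nn as nn

# Simplified sketch: simulate X forward with Euler-Maruyama, parametrize Z_n = net(t_n, X_n),
# roll Y forward with the discretized BSDE, and penalize the mismatch with the terminal
# condition g(X_T). The unknown solution value u(0, x0) is a trainable scalar y0.
d, N, dt, batch = 10, 20, 0.05, 256
g = lambda x: (x ** 2).sum(dim=1)                 # terminal condition (toy choice)
f = lambda y, z: -0.5 * (z ** 2).sum(dim=1)       # generator / nonlinearity (toy choice)

z_net = nn.Sequential(nn.Linear(d + 1, 64), nn.ReLU(), nn.Linear(64, d))
y0 = nn.Parameter(torch.zeros(1))                 # approximation of u(0, x0)
opt = torch.optim.Adam(list(z_net.parameters()) + [y0], lr=1e-2)

for _ in range(200):
    x = torch.zeros(batch, d)
    y = y0.expand(batch)
    for n in range(N):
        t = torch.full((batch, 1), n * dt)
        z = z_net(torch.cat([t, x], dim=1))
        dw = torch.randn(batch, d) * dt ** 0.5
        y = y + f(y, z) * dt + (z * dw).sum(dim=1)
        x = x + dw                                # zero drift, unit diffusion
    loss = ((y - g(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```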

208 citations


Journal ArticleDOI
TL;DR: A new family of Monte Carlo methods is introduced, based upon a multidimensional version of the Zig-Zag process of Bierkens and Roberts (2017), a continuous-time piecewise deterministic Markov process.
Abstract: Standard MCMC methods can scale poorly to big data settings due to the need to evaluate the likelihood at each iteration. There have been a number of approximate MCMC algorithms that use sub-sampling ideas to reduce this computational burden, but with the drawback that these algorithms no longer target the true posterior distribution. We introduce a new family of Monte Carlo methods based upon a multidimensional version of the Zig-Zag process of [Ann. Appl. Probab. 27 (2017) 846–882], a continuous-time piecewise deterministic Markov process. While traditional MCMC methods are reversible by construction (a property which is known to inhibit rapid convergence) the Zig-Zag process offers a flexible nonreversible alternative which we observe to often have favourable convergence properties. We show how the Zig-Zag process can be simulated without discretisation error, and give conditions for the process to be ergodic. Most importantly, we introduce a sub-sampling version of the Zig-Zag process that is an example of an exact approximate scheme, that is, the resulting approximate process still has the posterior as its stationary distribution. Furthermore, if we use a control-variate idea to reduce the variance of our unbiased estimator, then the Zig-Zag process can be super-efficient: after an initial preprocessing step, essentially independent samples from the posterior distribution are obtained at a computational cost which does not depend on the size of the data.
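For intuition on simulating the process without discretisation error, the sketch below implements a one-dimensional Zig-Zag sampler for a standard Gaussian target, where the switching times can be obtained exactly by inverting the integrated event rate; the sub-sampling and control-variate machinery of the paper is not included.

```python
import numpy as np

# Minimal 1D Zig-Zag sampler for a standard Gaussian target, U(x) = x^2 / 2.
# Switching rate: lambda(x, theta) = max(0, theta * U'(x)) = max(0, theta * x).
# Along the path x(s) = x + theta * s the rate is max(0, a + s) with a = theta * x,
# so the first event time solves the integrated rate equation exactly.
def zigzag_gaussian(T=1000.0, x=0.0, theta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t, skeleton = 0.0, [(0.0, x, theta)]
    while t < T:
        a = theta * x
        e = rng.exponential()
        # Solve integral_0^tau max(0, a + s) ds = e for tau (no discretization error).
        tau = -a + np.sqrt(2 * e) if a < 0 else -a + np.sqrt(a * a + 2 * e)
        t += tau
        x += theta * tau            # move along the current direction
        theta = -theta              # switch velocity at the event
        skeleton.append((t, x, theta))
    return skeleton

# Continuous-time averages over the piecewise-linear trajectory target N(0, 1).
events = zigzag_gaussian()
```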

159 citations


Journal ArticleDOI
TL;DR: This article presents the deformation space formulation for soft robots dynamics, developed using a finite element approach starting from the Cosserat rod theory formulated on a Lie group to derive a discrete model using a helicoidal shape function for the spatial discretization and a geometric scheme for the time integration of the robot shape configuration.
Abstract: Mathematical modeling of soft robots is complicated by the description of the continuously deformable three-dimensional shape that they assume when subjected to external loads. In this article we present the deformation space formulation for soft robot dynamics, developed using a finite element approach. Starting from the Cosserat rod theory formulated on a Lie group, we derive a discrete model using a helicoidal shape function for the spatial discretization and a geometric scheme for the time integration of the robot shape configuration. The main motivation behind this work is the derivation of accurate and computationally efficient models for soft robots. The model takes into account bending, torsion, shear, and axial deformations due to general external loading conditions. It is validated through analytic and experimental benchmarks. The results demonstrate that the model matches experimental positions with errors <1% of the robot length. The computer implementation of the model results in SimSOFT, a dynamic simulation environment for design, analysis, and control of soft robots.

148 citations


Journal ArticleDOI
TL;DR: In this article, a concise overview is given of numerical schemes for the sub-diffusion model with nonsmooth problem data; such schemes are important for the numerical analysis of many problems arising in optimal control, inverse problems, and stochastic analysis.

139 citations


Journal ArticleDOI
TL;DR: The theoretical analysis and high accuracy of the proposed method are verified, and comparative results indicate that the accuracy of the new discretization technique is superior to the other methods available in the literature.

130 citations


Posted Content
TL;DR: In this paper, ANODE, an adjoint-based neural ODE framework, is proposed to address the numerical instability of neural ODEs; it has a memory footprint of O(L) + O(N_t) and the same computational cost as the reverse ODE solve.
Abstract: Residual neural networks can be viewed as the forward Euler discretization of an Ordinary Differential Equation (ODE) with a unit time step. This has recently motivated researchers to explore other discretization approaches and train ODE based networks. However, an important challenge of neural ODEs is their prohibitive memory cost during gradient backpropagation. Recently, a method proposed in [8] claimed that this memory overhead can be reduced from O(LN_t), where N_t is the number of time steps, down to O(L) by solving the forward ODE backwards in time, where L is the depth of the network. However, we will show that this approach may lead to several problems: (i) it may be numerically unstable for ReLU/non-ReLU activations and general convolution operators, and (ii) the proposed optimize-then-discretize approach may lead to divergent training due to inconsistent gradients for small time step sizes. We discuss the underlying problems, and to address them we propose ANODE, an Adjoint based Neural ODE framework which avoids the numerical instability related problems noted above and provides unconditionally accurate gradients. ANODE has a memory footprint of O(L) + O(N_t), with the same computational cost as the reverse ODE solve. We furthermore discuss a memory efficient algorithm which can further reduce this footprint with a trade-off of additional computational cost. We show results on CIFAR-10/100 datasets using ResNet and SqueezeNext neural networks.
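A minimal sketch of the discretize-then-optimize viewpoint follows: the forward pass is an explicit forward-Euler unrolling with N_t steps whose intermediate states stay in the autograd graph, so backpropagation differentiates the discrete trajectory exactly instead of reconstructing it by solving the ODE backwards in time. The layer and step count are illustrative, and ANODE's memory/recomputation trade-off is only noted in a comment.

```python
import torch
import torch.nn as nn

# Forward-Euler ODE block: x_{t+1} = x_t + h * f(x_t). Keeping each step in the autograd
# graph gives gradients of exactly the discretized dynamics (discretize-then-optimize);
# ANODE additionally checkpoints states to trade memory against recomputation.
class EulerODEBlock(nn.Module):
    def __init__(self, dim, n_steps=8):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.n_steps = n_steps

    def forward(self, x):
        h = 1.0 / self.n_steps
        for _ in range(self.n_steps):
            x = x + h * self.f(x)     # explicit Euler step
        return x

block = EulerODEBlock(dim=16)
x = torch.randn(4, 16, requires_grad=True)
loss = block(x).pow(2).sum()
loss.backward()                       # exact gradients of the discrete trajectory
```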

122 citations


Journal ArticleDOI
07 Mar 2019-PLOS ONE
TL;DR: When considering inter-observer reproducible results of MRI texture radiomics features, an absolute discretization should be favored to allow the extraction of the highest number of potential candidates for new imaging biomarkers.
Abstract: Objectives: To assess the influence of gray-level discretization on inter- and intra-observer reproducibility of texture radiomics features on clinical MR images. Materials and methods: We studied two independent MRI datasets of 74 lacrymal gland tumors and 30 breast lesions from two different centers. Two pairs of readers performed three two-dimensional delineations for each dataset. Texture features were extracted using two radiomics software packages (Pyradiomics and an in-house software). Reproducible features were selected using a combination of the intra-class correlation coefficient (ICC) and the concordance correlation coefficient (CCC), with 0.8 and 0.9 as thresholds, respectively. We tested six absolute and eight relative gray-level discretization methods and analyzed the distribution and highest number of reproducible features obtained for each discretization. We also analyzed the number of reproducible features extracted from computer-simulated delineations representative of inter-observer variability. Results: The gray-level discretization method had a direct impact on texture feature reproducibility, independent of observers, software or method of delineation (simulated vs. human). The absolute discretization consistently provided statistically significantly more reproducible features than the relative discretization. Varying the bin number of relative discretization led to statistically significantly more variable results than varying the bin size of absolute discretization. Conclusions: When considering inter-observer reproducible results of MRI texture radiomics features, an absolute discretization should be favored to allow the extraction of the highest number of potential candidates for new imaging biomarkers. Whichever the chosen method, it should be systematically documented to allow replicability of results.
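For readers unfamiliar with the two families compared here, the sketch below contrasts absolute discretization (fixed bin size in intensity units, bin edges independent of the ROI) with relative discretization (fixed bin number stretched over the ROI's own range); the bin size and bin count are arbitrary example values, roughly corresponding to the binWidth and binCount settings exposed by radiomics packages such as Pyradiomics.

```python
import numpy as np

# Absolute discretization: fixed bin size in intensity units, edges independent of the ROI.
def discretize_absolute(roi, bin_size=25.0, lo=0.0):
    return np.floor((roi - lo) / bin_size).astype(int) + 1

# Relative discretization: fixed number of bins stretched over the ROI's own min/max range.
def discretize_relative(roi, n_bins=64):
    lo, hi = roi.min(), roi.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    return np.clip(np.digitize(roi, edges[1:-1]), 0, n_bins - 1) + 1

roi = np.random.default_rng(0).normal(300, 60, size=(32, 32))   # fake MR intensities
abs_levels = discretize_absolute(roi, bin_size=25.0)
rel_levels = discretize_relative(roi, n_bins=64)
```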

114 citations


Journal ArticleDOI
TL;DR: In this paper, a detailed review of existing formulations of Kirchhoff–Love and Simo–Reissner type for highly slender beams is presented, and two different rotation interpolation schemes with strong or weak Kirchhoff constraint enforcement, as well as two different choices of nodal triad parametrizations in terms of rotation or tangent vectors, are proposed.
Abstract: The present work focuses on geometrically exact finite elements for highly slender beams. It aims at the proposal of novel formulations of Kirchhoff–Love type, a detailed review of existing formulations of Kirchhoff–Love and Simo–Reissner type as well as a careful evaluation and comparison of the proposed and existing formulations. Two different rotation interpolation schemes with strong or weak Kirchhoff constraint enforcement, respectively, as well as two different choices of nodal triad parametrizations in terms of rotation or tangent vectors are proposed. The combination of these schemes leads to four novel finite element variants, all of them based on a $C^1$-continuous Hermite interpolation of the beam centerline. Essential requirements such as representability of general 3D, large deformation, dynamic problems involving slender beams with arbitrary initial curvatures and anisotropic cross-section shapes, preservation of objectivity and path-independence, consistent convergence orders, avoidance of locking effects as well as conservation of energy and momentum by the employed spatial discretization schemes, but also a range of practically relevant secondary aspects will be investigated analytically and verified numerically for the different formulations. It will be shown that the geometrically exact Kirchhoff–Love beam elements proposed in this work are the first ones of this type that fulfill all these essential requirements. By contrast, Simo–Reissner type formulations fulfilling these requirements are well established in the literature. However, it will be argued that the shear-free Kirchhoff–Love formulations can provide considerable numerical advantages such as lower spatial discretization error levels, improved performance of time integration schemes as well as linear and nonlinear solvers, and smooth geometry representation as compared to shear-deformable Simo–Reissner formulations when applied to highly slender beams. Concretely, several representative numerical test cases confirm that the proposed Kirchhoff–Love formulations exhibit a lower discretization error level as well as a considerably improved nonlinear solver performance in the range of high beam slenderness ratios as compared to two representative Simo–Reissner element formulations from the literature.

112 citations


Journal ArticleDOI
TL;DR: Devito, presented in this paper, is a domain-specific language for implementing high-performance finite-difference partial differential equation solvers for exploration of the Earth's subsurface using full-waveform inversion and reverse-time migration.
Abstract: We introduce Devito, a new domain-specific language for implementing high-performance finite-difference partial differential equation solvers. The motivating application is exploration seismology, for which methods such as full-waveform inversion and reverse-time migration are used to invert terabytes of seismic data to create images of the Earth's subsurface. Even using modern supercomputers, it can take weeks to process a single seismic survey and create a useful subsurface image. The computational cost is dominated by the numerical solution of wave equations and their corresponding adjoints. Therefore, a great deal of effort is invested in aggressively optimizing the performance of these wave-equation propagators for different computer architectures. Additionally, the actual set of partial differential equations being solved and their numerical discretization is under constant innovation as increasingly realistic representations of the physics are developed, further ratcheting up the cost of practical solvers. By embedding a domain-specific language within Python and making heavy use of SymPy, a symbolic mathematics library, we make it possible to develop finite-difference simulators quickly using a syntax that strongly resembles the mathematics. The Devito compiler reads this code and applies a wide range of analyses to generate highly optimized and parallel code. This approach can reduce the development time of a verified and optimized solver from months to days.
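As a flavour of the symbolic workflow, a minimal constant-velocity wave-equation propagator might be written as below, assuming Devito's documented Grid/TimeFunction/Eq/Operator interface as shown in its tutorials; the grid size, velocity, and time step are illustrative, and a real simulation would add a source term and absorbing boundaries.

```python
from devito import Grid, TimeFunction, Eq, Operator, solve

# Minimal sketch of a 2-D constant-velocity acoustic propagator in Devito's symbolic syntax;
# the compiler turns the symbolic update rule into optimized, parallel C code.
grid = Grid(shape=(201, 201), extent=(2000., 2000.))        # 2 km x 2 km domain
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=4)

c = 1500.0                                                   # wave speed in m/s
pde = u.dt2 - c**2 * u.laplace                               # wave-equation residual
stencil = Eq(u.forward, solve(pde, u.forward))               # explicit update for u(t + dt)

op = Operator([stencil])                                     # symbolic -> optimized C, compiled on first run
op(time=1000, dt=0.002)                                      # run 1000 time steps
```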

Journal ArticleDOI
TL;DR: A novel energy management strategy with a deep reinforcement learning Actor-Critic framework is proposed, which surpasses discretization-based strategies by optimizing directly in the continuous space, improving energy management performance while reducing computation load.

Proceedings Article
01 Jan 2019
TL;DR: It is shown that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent.
Abstract: Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.
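The special case mentioned above, stochastic rounding, is easy to state on its own: round down or up at random, with the probability of rounding up equal to the fractional position within the bin, so the quantized value is unbiased in expectation. A minimal sketch (fixed, non-learned grid step assumed):

```python
import torch

# Stochastic rounding onto a uniform grid with step size `step`: the expected value
# of the rounded output equals the input, which is the sense in which it appears as
# a special (non-learned) case of relaxed categorical quantization.
def stochastic_round(x, step=0.25):
    scaled = x / step
    low = torch.floor(scaled)
    p_up = scaled - low                         # fractional part in [0, 1)
    up = (torch.rand_like(x) < p_up).float()    # round up with probability p_up
    return (low + up) * step

x = torch.randn(5)
print(x, stochastic_round(x))
```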

Journal ArticleDOI
TL;DR: This study presents the first multi-resolution particle method for fluid–structure interaction (FSI) between an incompressible fluid and elastic structures; a set of previously developed enhanced schemes is also adopted for the fluid model.

Journal ArticleDOI
TL;DR: It is proved that ANN models are able to approximate every time-dependent model described by ODEs with any desired level of accuracy; the approach is tested on different problems, including the model reduction of two large-scale models.

Book ChapterDOI
01 Jan 2019
TL;DR: A simple approach in this direction is presented, in which the set of solutions is transformed/twisted so that the combination of the proper twist and an appropriate linear combination recovers an accurate approximation; preliminary simulations support this approach.
Abstract: The reduced basis method makes it possible to propose accurate approximations for many parameter-dependent partial differential equations, almost in real time, at least if the Kolmogorov n-width of the set of all solutions, under variation of the parameters, is small. The idea is that any solution may be well approximated by a linear combination of some well chosen solutions that are computed offline once and for all (by another, more expensive, discretization) for some well chosen parameter values. In some cases, however, such as problems with large convection effects, the linear representation is not sufficient and, as a consequence, the set of solutions needs to be transformed/twisted so that the combination of the proper twist and the appropriate linear combination recovers an accurate approximation. This paper presents a simple approach towards this direction; preliminary simulations support this approach.
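The offline/online split described above can be illustrated with a toy example: a few "truth" solutions of a smoothly parameter-dependent family are precomputed, and a solution at a new parameter value is approximated in their span (here by a plain least-squares fit standing in for the online Galerkin projection); the parametric family below is an arbitrary illustration, not a PDE solve.

```python
import numpy as np

# Offline: precompute a handful of "truth" solutions for selected parameter values.
# Online: approximate the solution at an unseen parameter as a linear combination of them.
x = np.linspace(0.0, 1.0, 200)
truth = lambda mu: np.sin(np.pi * x) / (1.0 + mu * x)        # smooth parametric family (toy)

snapshots = np.column_stack([truth(mu) for mu in (0.1, 0.5, 1.0, 2.0)])   # offline stage
u_new = truth(0.7)                                           # unseen parameter value
coeffs, *_ = np.linalg.lstsq(snapshots, u_new, rcond=None)   # online combination
print(np.max(np.abs(snapshots @ coeffs - u_new)))            # small error if the n-width is small
```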

Journal ArticleDOI
TL;DR: A formalism is proposed that describes all these methods within a common mathematical framework, and in this way allows us to draw direct links between them, and emphasizes the importance of an accurate representation of the equilibrium state, independently of the choice of moment space.
Abstract: Over the last decades, several types of collision models have been proposed to extend the validity domain of the lattice Boltzmann method (LBM), each of them being introduced in its own formalism. This article proposes a formalism that describes all these methods within a common mathematical framework, and in this way allows us to draw direct links between them. Here, the focus is put on single and multirelaxation time collision models in either their raw moment, central moment, cumulant, or regularized form. In parallel with that, several bases (nonorthogonal, orthogonal, Hermite) are considered for the polynomial expansion of populations. General relationships between moments are first derived to understand how moment spaces are related to each other. In addition, a review of collision models further sheds light on collision models that can be rewritten in a linear matrix form. More quantitative mathematical studies are then carried out by comparing explicit expressions for the post-collision populations. Thanks to this, it is possible to deduce the impact of both the polynomial basis (raw, Hermite, central, central Hermite, cumulant) and the inclusion of regularization steps on isothermal LBMs. Extensive results are provided for the D1Q3, D2Q9, and D3Q27 lattices, the latter being further extended to the D3Q19 velocity discretization. Links with the most common two and multirelaxation time collision models are also provided for the sake of completeness. This work ends by emphasizing the importance of an accurate representation of the equilibrium state, independently of the choice of moment space. As an addition to the theoretical purpose of this article, general instructions are provided to help the reader with the implementation of the most complicated collision models.
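As a reference point for the collision models discussed above, the sketch below implements the simplest member of the family, a single-relaxation-time (BGK) collision on the D2Q9 lattice with the standard second-order equilibrium; the multi-relaxation-time, central-moment, cumulant, and regularized variants reviewed in the article generalize this baseline.

```python
import numpy as np

# BGK collision on D2Q9: every population relaxes toward the second-order equilibrium
# with a single rate 1/tau (lattice sound speed cs^2 = 1/3 is absorbed in the constants).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])           # D2Q9 discrete velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                     # lattice weights

def equilibrium(rho, u):                                     # rho: (nx, ny), u: (nx, ny, 2)
    cu = np.einsum('qd,xyd->xyq', c, u)                      # c_i . u
    usq = np.sum(u**2, axis=-1, keepdims=True)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_collide(f, tau=0.8):                                 # f: (nx, ny, 9)
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    return f - (f - equilibrium(rho, u)) / tau

f = np.tile(w, (64, 64, 1))                                  # uniform rest state
f = bgk_collide(f)
```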

Journal ArticleDOI
Kyongmin Yeo, Igor Melnyk
TL;DR: The deep learning model DE-LSTM approximates the probability density function of a stochastic process via numerical discretization, with the underlying nonlinear dynamics modeled by a Long Short-Term Memory network; it makes a good prediction of the probability distribution without assuming any distributional properties of the stochastic process.

Posted Content
TL;DR: Additive Powers-of-Two (APoT) quantization, proposed in this paper, is an efficient non-uniform quantization scheme for the bell-shaped and long-tailed distribution of weights and activations in neural networks.
Abstract: We propose Additive Powers-of-Two (APoT) quantization, an efficient non-uniform quantization scheme for the bell-shaped and long-tailed distribution of weights and activations in neural networks. By constraining all quantization levels as the sum of Powers-of-Two terms, APoT quantization enjoys high computational efficiency and a good match with the distribution of weights. A simple reparameterization of the clipping function is applied to generate a better-defined gradient for learning the clipping threshold. Moreover, weight normalization is presented to refine the distribution of weights to make the training more stable and consistent. Experimental results show that our proposed method outperforms state-of-the-art methods, and is even competitive with the full-precision models, demonstrating the effectiveness of our proposed APoT quantization. For example, our 4-bit quantized ResNet-50 on ImageNet achieves 76.6% top-1 accuracy without bells and whistles; meanwhile, our model reduces 22% computational cost compared with the uniformly quantized counterpart. The code is available at this https URL
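The sketch below illustrates the additive powers-of-two principle: quantization levels built as sums of terms drawn from two small power-of-two sets, with weights clipped and projected to the nearest level. The particular level sets, normalization, and fixed clipping threshold are illustrative assumptions; the paper's exact construction and its learned clipping threshold differ.

```python
import torch

# Illustrative additive powers-of-two level set: each level is the sum of one term from
# each of two power-of-two sets, normalized to [0, 1]. Weights are clipped by a threshold
# alpha and projected to the nearest level (sign handled separately).
p1 = torch.tensor([0., 2**0, 2**-2, 2**-4])
p2 = torch.tensor([0., 2**-1, 2**-3, 2**-5])
levels = torch.unique((p1[:, None] + p2[None, :]).flatten())
levels = levels / levels.max()

def apot_quantize(w, alpha=1.0):
    s = torch.clamp(w.abs() / alpha, max=1.0)                # clip by threshold alpha
    idx = torch.argmin((s[..., None] - levels).abs(), dim=-1)
    return torch.sign(w) * alpha * levels[idx]

w = torch.randn(8)
print(w, apot_quantize(w))
```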

Journal ArticleDOI
TL;DR: In this article, the authors proposed an analytical procedure for deriving the optimal penalty constant, more precisely, its lower bound, which guarantees a sufficiently accurate enforcement of the crack phase-field irreversibility.

Journal ArticleDOI
TL;DR: This article rigorously proves the unconditional energy stability of the semi-implicit schemes and the fully discrete scheme, and compares the SAV, invariant energy quadratization (IEQ), and stabilization approaches.

Journal ArticleDOI
TL;DR: The applied discretization techniques improve the stability of the original LB models and enhance robustness for compressible flow problems by preventing the formation of oscillations.
Abstract: In this article, we propose a numerical framework based on multiple relaxation time lattice Boltzmann (LB) model and novel discretization techniques for simulating compressible flows. Highly effici...

Journal ArticleDOI
TL;DR: A localized version of the MFS (LMFS), based on the “global” boundary discretization, is proposed for the large-scale modeling of two-dimensional (2D) elasticity problems and yields a sparse and banded matrix system which makes the method very attractive for large- scale simulations.

Journal ArticleDOI
TL;DR: A framework, namely modified discretization and feature selection based on mutual information, is proposed that incorporates JBMI-based feature selection and dynamic discretization, both of which use a χ²-based searching method.

Journal ArticleDOI
TL;DR: In this study, the temporal discretization is carried out prior to the spatial discretization, so that all the degrees of freedom are valid only within the current time step and the transfer issue of degrees of freedom is resolved elegantly.
Abstract: Since the advent of finite element methods, the dynamic response analysis of solids and structures follows such a route without exception. Firstly the spatial discretization is carried out and the system of second order ordinary differential equations with the degrees of freedom as the unknown functions of time is derived, which is called the semi-discrete scheme. Then the temporal discretization is performed to the system of ordinary differential equations and the system of algebraic equations, referred to as the fully-discrete scheme, is obtained. This route has been working well for most problems, where, the meshes deform continuously and, in all the time steps, all the degrees of freedom are valid and the number of them keeps invariant. In the simulation of crack propagation, however, even the number of degrees of freedom varies with crack propagation and those degrees of freedom associated with crack tips become meaningless after the crack tips move away. While this causes no difficulties in linear static solutions, it is not readily handled in time-dependent solutions, leading to the transfer issue of degrees of freedom. Opposite to the conventional order of discretization, in this study the temporal discretization is put prior to the spatial discretization. In this way, all the degrees of freedom are valid only within the current time step. The transfer issue of degrees of freedom is accordingly resolved elegantly. The implementation of the proposed procedure is in the framework of the numerical manifold method, illustrated by some typical examples, where compressed and sheared cracks are involved with frictional contact.

Journal ArticleDOI
TL;DR: The aim of this paper is to study systematically the problem of consistent discretization of the so-called generalized homogeneous non-linear systems, where the discretized model is consistent if it preserves the stability property of the original continuous-time system.
Abstract: Algorithms of implicit discretization for generalized homogeneous systems having discontinuity only at the origin are developed. They are based on the transformation of the original system to an equivalent one which admits an implicit or a semi-implicit discretization scheme preserving the stability properties of the continuous-time system. Namely, the discretized model remains finite-time stable (in the case of negative homogeneity degree), and practically fixed-time stable (in the case of positive homogeneity degree). The theoretical results are supported with numerical examples.
1. Introduction. Discretization issues are important for a digital implementation of estimation and control algorithms. Construction of a consistent stable discretization is complex for essentially non-linear ordinary differential equations (ODEs), which do not satisfy some classical regularity assumptions. For example, sliding mode algorithms are known to be difficult in practical realization [1], [2], [3] due to their discontinuous (set-valued) nature, which may invoke chattering caused by the discretization. The mentioned papers have discovered that the implicit discretization technique is useful for practical implementation of non-smooth and discontinuous control and estimation algorithms. In particular, chattering suppression in both input and output, as well as a good closed-loop performance, has been confirmed experimentally in [1], [4], [5]. Finite-time stability is a desirable property for many control and estimation algorithms [6], [7], [8], [9], [10], [11]. It means that system trajectories reach a stable equilibrium (or a set) in finite time, in contrast to asymptotic stability, which allows this only as time tends to infinity. If the settling (reaching) time is globally bounded for all initial conditions, then the origin is fixed-time stable (see, e.g., [12]). The corresponding ODE models do not satisfy the Lipschitz condition (at least at the origin). In the general case, an application of the conventional implicit or explicit discretization schemes does not guarantee that finite-time or fixed-time stability properties will be preserved (see, e.g., [13], [14], [15]). The latter means that the discrete-time model may be inconsistent with the continuous-time one. However, the discretized systems may remain globally finite-time stable in some cases (see [1], [2], [16], [17]). The aim of this paper is to study systematically the problem of consistent discretization of the so-called generalized homogeneous non-linear systems. The discretized model is consistent if it preserves the stability property (e.g., exponential, finite-time or fixed-time stability) of the original continuous-time system. Homogeneity is a certain form of symmetry studied in systems and control theory [9], [18], [19], [20], [21], [22], [23]. The standard homogeneity (introduced originally by L. Euler in the 18th century) is the symmetry of a mathematical object f (e.g., a function, vector field, or operator) with respect to the uniform dilation of the argument x → λx, namely, f(λx) = λ^{1+ν} f(x), λ > 0.

Journal ArticleDOI
TL;DR: A penalty approach for coupling adjacent patches that requires only a single, dimensionless penalty coefficient for both displacement and rotation coupling terms, alleviating the problem-dependent nature of the penalty parameters is presented.

Posted Content
TL;DR: It is shown that Nesterov acceleration arises from discretizing an ordinary differential equation with a semi-implicit Euler integration scheme, and it is suggested that a curvature-dependent damping term lies at the heart of the phenomenon.
Abstract: We present a dynamical system framework for understanding Nesterov's accelerated gradient method. In contrast to earlier work, our derivation does not rely on a vanishing step size argument. We show that Nesterov acceleration arises from discretizing an ordinary differential equation with a semi-implicit Euler integration scheme. We analyze both the underlying differential equation as well as the discretization to obtain insights into the phenomenon of acceleration. The analysis suggests that a curvature-dependent damping term lies at the heart of the phenomenon. We further establish connections between the discretized and the continuous-time dynamics.
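A minimal numerical sketch of this viewpoint is given below, assuming the Su–Boyd–Candès form of the limiting ODE, x'' + (3/t) x' + ∇f(x) = 0, and a semi-implicit (symplectic) Euler step in which the position update uses the freshly updated velocity; the quadratic objective is a toy choice, and this is not the exact discretization analyzed in the paper.

```python
import numpy as np

# Semi-implicit Euler for the accelerated-gradient limiting ODE on a toy quadratic:
# velocity is updated first (damping + force), then the position step uses the new velocity.
def grad_f(x):
    return A @ x                                  # gradient of f(x) = 0.5 * x^T A x

A = np.diag([1.0, 10.0, 100.0])
x = np.array([1.0, 1.0, 1.0])
v = np.zeros(3)
h, t = 0.01, 1.0

for _ in range(5000):
    v = v - h * (grad_f(x) + (3.0 / t) * v)       # velocity update with curvature-like damping
    x = x + h * v                                 # position update with the updated velocity
    t += h

print(x)                                          # approaches the minimizer at the origin
```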

Journal ArticleDOI
TL;DR: In this article, a unified numerical formulation is developed in a variational framework to investigate the thermal buckling of different shapes of functionally graded carbon nanotube reinforced composite (FG-CNTRC) plates.
Abstract: In the present study, based on the higher-order shear deformation plate theory, a unified numerical formulation is developed in a variational framework to investigate the thermal buckling of different shapes of functionally graded carbon nanotube reinforced composite (FG-CNTRC) plates. Since the thermal environment has considerable effects on the material properties of carbon nanotubes (CNTs), temperature-dependent (TD) thermo-mechanical material properties are taken into account. In order to present the governing equations, the quadratic form of the energy functional of the plate structure is derived and its discretized counterparts are presented employing the variational differential quadrature (VDQ) approach. The discretized equations of motion are finally obtained based on Hamilton's principle. For convenient application of the differential quadrature numerical operators in irregular physical domains, a mapping procedure is adopted in accordance with the conventional finite element formulation. Some comparison and convergence studies are performed to show the validity and efficiency of the proposed approach. A wide range of numerical results is also reported to analyze the thermal buckling behavior of different shaped FG-CNTRC plates.

Journal ArticleDOI
TL;DR: A novel guidance algorithm based on convex optimization, pseudospectral discretization, and a model predictive control (MPC) framework is proposed to solve the highly nonlinear and c-linear guidance problems.
Abstract: In this paper, a novel guidance algorithm based on convex optimization, pseudospectral discretization, and a model predictive control (MPC) framework is proposed to solve the highly nonlinear and c...