
Showing papers on "Artificial neural network published in 2022"


Journal ArticleDOI
TL;DR: A versatile new input encoding is introduced that permits the use of a smaller network without sacrificing quality, significantly reducing the number of floating-point and memory-access operations; this enables training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.
Abstract: Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.

782 citations
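The hash-grid idea in the abstract above can be sketched compactly. The toy illustration below uses hypothetical table sizes, feature widths, and level counts, and omits the corner interpolation the real encoding performs; it is not the paper's fused CUDA implementation:

```python
# Hypothetical sizes, chosen tiny for illustration
T = 16          # hash table entries per level
F = 2           # feature dimensions per entry
LEVELS = 4      # number of resolution levels

# one trainable feature table per level (constants here instead of learned values)
tables = [[[0.01 * (l + 1)] * F for _ in range(T)] for l in range(LEVELS)]

def spatial_hash(ix, iy):
    # XOR of coordinates multiplied by primes, a common spatial-hashing scheme
    return (ix ^ (iy * 2654435761)) % T

def encode(x, y):
    """Concatenate hashed grid features across resolutions for (x, y) in [0, 1)^2."""
    out = []
    for l in range(LEVELS):
        res = 2 ** (l + 2)              # grid resolution grows per level
        ix, iy = int(x * res), int(y * res)
        out.extend(tables[l][spatial_hash(ix, iy)])
    return out

feat = encode(0.3, 0.7)
print(len(feat))  # LEVELS * F = 8 input features for the small MLP
```

Because collisions at coarse levels are disambiguated by finer levels, a small MLP on this concatenated vector can stand in for a much larger network.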


Journal ArticleDOI
TL;DR: A three-track network produces structure predictions with accuracies approaching those of DeepMind in CASP14, enables the rapid solution of challenging X-ray crystallography and cryo-EM structure modeling problems, and provides insights into the functions of proteins of currently unknown structure.
Abstract: DeepMind presented remarkably accurate predictions at the recent CASP14 protein structure prediction assessment conference. We explored network architectures incorporating related ideas and obtained the best performance with a three-track network in which information at the 1D sequence level, the 2D distance map level, and the 3D coordinate level is successively transformed and integrated. The three-track network produces structure predictions with accuracies approaching those of DeepMind in CASP14, enables the rapid solution of challenging X-ray crystallography and cryo-EM structure modeling problems, and provides insights into the functions of proteins of currently unknown structure. The network also enables rapid generation of accurate protein-protein complex models from sequence information alone, short circuiting traditional approaches which require modeling of individual subunits followed by docking. We make the method available to the scientific community to speed biological research.

607 citations


Proceedings ArticleDOI
TL;DR: Interpolation Consistency Training (ICT) as mentioned in this paper encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points.

354 citations
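The ICT objective is easy to state in code. Below is a minimal sketch for a single pair of unlabeled scalar inputs, with a hypothetical toy model standing in for the network:

```python
def f(x):
    # hypothetical toy "model": a nonlinear prediction for a scalar input
    return x * x

def ict_consistency_loss(u1, u2, lam):
    """Squared deviation of the prediction at an interpolated input from the
    interpolation of the predictions (the ICT consistency term for one pair)."""
    pred_of_mix = f(lam * u1 + (1 - lam) * u2)
    mix_of_preds = lam * f(u1) + (1 - lam) * f(u2)
    return (pred_of_mix - mix_of_preds) ** 2

loss = ict_consistency_loss(1.0, 3.0, 0.5)
print(loss)  # f(2) = 4 vs 0.5*f(1) + 0.5*f(3) = 5, so (4 - 5)^2 = 1.0
```

In the actual semi-supervised setting this term is added to the supervised loss and the interpolated targets come from a mean-teacher copy of the model.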


Journal ArticleDOI
TL;DR: In this article, an adaptive neural network (NN) output feedback optimized control design is proposed for a class of strict-feedback nonlinear systems that contain unknown internal dynamics and states that are immeasurable and constrained within some predefined compact sets.
Abstract: This article proposes an adaptive neural network (NN) output feedback optimized control design for a class of strict-feedback nonlinear systems that contain unknown internal dynamics and the states that are immeasurable and constrained within some predefined compact sets. NNs are used to approximate the unknown internal dynamics, and an adaptive NN state observer is developed to estimate the immeasurable states. By constructing a barrier type of optimal cost functions for subsystems and employing an observer and the actor-critic architecture, the virtual and actual optimal controllers are developed under the framework of backstepping technique. In addition to ensuring the boundedness of all closed-loop signals, the proposed strategy can also guarantee that system states are confined within some preselected compact sets all the time. This is achieved by means of barrier Lyapunov functions which have been successfully applied to various kinds of nonlinear systems such as strict-feedback and pure-feedback dynamics. Besides, our developed optimal controller requires less conditions on system dynamics than some existing approaches concerning optimal control. The effectiveness of the proposed optimal control approach is eventually validated by numerical as well as practical examples.

217 citations


Journal ArticleDOI
TL;DR: A comprehensive review of the literature on physics-informed neural networks can be found in this article; the primary goal of the study is to characterize these networks and their related advantages and disadvantages, and the review also incorporates publications on a broader range of collocation-based physics-informed neural networks.
Abstract: Physics-Informed Neural Networks (PINN) are neural networks (NNs) that encode model equations, like Partial Differential Equations (PDE), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integral-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs: the primary goal of the study is to characterize these networks and their related advantages and disadvantages. The review also incorporates publications on a broader range of collocation-based physics-informed neural networks, starting from the vanilla PINN as well as many other variants, such as physics-constrained neural networks (PCNN), variational hp-VPINN, and conservative PINN (CPINN). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and despite demonstrations that they can be more feasible in some contexts than classical numerical techniques like the Finite Element Method (FEM), advancements are still possible, most notably on theoretical issues that remain unresolved.

216 citations
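The multi-task loss the review describes (fit observed data while reducing a PDE residual) can be illustrated on a toy ODE. The sketch below uses central differences as a stand-in for automatic differentiation and a known exact solution, so the loss should be near zero; it illustrates the loss structure only, not any surveyed implementation:

```python
import math

# Candidate solution u(x) for the ODE u'(x) + u(x) = 0 with u(0) = 1,
# whose exact solution is exp(-x).
def u(x):
    return math.exp(-x)

def pinn_loss(points, h=1e-5):
    """Boundary data term plus mean squared PDE residual at collocation
    points, with u' approximated by central differences."""
    data = (u(0.0) - 1.0) ** 2
    residual = 0.0
    for x in points:
        du = (u(x + h) - u(x - h)) / (2 * h)   # stand-in for autodiff
        residual += (du + u(x)) ** 2
    return data + residual / len(points)

loss = pinn_loss([0.1 * i for i in range(1, 11)])
print(loss < 1e-8)  # True: the exact solution gives a near-zero loss
```

In a real PINN, u would be the network and both terms would be minimized jointly by gradient descent over its weights.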


Journal ArticleDOI
TL;DR: A comprehensive review of deep facial expression recognition (FER), including datasets and algorithms that provide insights into its intrinsic problems, can be found in this article, where the authors introduce the available datasets that are widely used in the literature and provide accepted data selection and evaluation principles for these datasets.
Abstract: With the transition of facial expression recognition (FER) from laboratory-controlled to challenging in-the-wild conditions and the recent success of deep learning techniques in various fields, deep neural networks have increasingly been leveraged to learn discriminative representations for automatic FER. Recent deep FER systems generally focus on two important issues: overfitting caused by a lack of sufficient training data and expression-unrelated variations, such as illumination, head pose, and identity bias. In this survey, we provide a comprehensive review of deep FER, including datasets and algorithms that provide insights into these intrinsic problems. First, we introduce the available datasets that are widely used in the literature and provide accepted data selection and evaluation principles for these datasets. We then describe the standard pipeline of a deep FER system with the related background knowledge and suggestions for applicable implementations for each stage. For the state-of-the-art in deep FER, we introduce existing novel deep neural networks and related training strategies that are designed for FER based on both static images and dynamic image sequences and discuss their advantages and limitations. Competitive performances and experimental comparisons on widely used benchmarks are also summarized. We then extend our survey to additional related issues and application scenarios. Finally, we review the remaining challenges and corresponding opportunities in this field as well as future directions for the design of robust deep FER systems.

209 citations


Journal ArticleDOI
TL;DR: Physical Neural Networks as discussed by the authors automatically train the functionality of any sequence of real physical systems, directly, using backpropagation, the same technique used for modern deep neural networks; the approach is demonstrated with three diverse physical systems: optical, mechanical, and electrical.
Abstract: Deep neural networks have become a pervasive tool in science and engineering. However, modern deep neural networks' growing energy requirements now increasingly limit their scaling and broader use. We propose a radical alternative for implementing deep neural network models: Physical Neural Networks. We introduce a hybrid physical-digital algorithm called Physics-Aware Training to efficiently train sequences of controllable physical systems to act as deep neural networks. This method automatically trains the functionality of any sequence of real physical systems, directly, using backpropagation, the same technique used for modern deep neural networks. To illustrate their generality, we demonstrate physical neural networks with three diverse physical systems: optical, mechanical, and electrical. Physical neural networks may facilitate unconventional machine learning hardware that is orders of magnitude faster and more energy efficient than conventional electronic processors.

167 citations
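The hybrid physical-digital training loop can be caricatured in a few lines: the loss is computed from the real (here simulated) physical forward pass, while the gradient is estimated through an imperfect differentiable digital model. All functions and constants below are hypothetical stand-ins:

```python
def physical_system(x):
    # stand-in for the real physical transformation; includes a "hardware"
    # offset that the digital model below does not capture
    return 2.0 * x + 0.05

def digital_model_slope(x):
    # derivative of the imperfect differentiable simulation y ≈ 2x
    return 2.0

def pat_gradient(w, x, target):
    """One physics-aware-training-style step for a scalar weight w applied
    before the physical layer: the loss uses the real forward pass, while
    the gradient flows through the digital model."""
    y = physical_system(w * x)                  # forward on "hardware"
    loss = (y - target) ** 2
    grad_w = 2.0 * (y - target) * digital_model_slope(w * x) * x
    return loss, grad_w

loss, g = pat_gradient(w=1.0, x=1.0, target=3.0)
print(loss > 0, g < 0)  # nonzero loss; negative gradient, so descent raises w
```

The key point the abstract makes is that only the backward pass needs a differentiable model; the forward pass is the physical system itself, mismatches included.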


Journal ArticleDOI
TL;DR: In this paper, a neural tangent kernel (NTK) was derived for physics-informed neural networks (PINNs) and a novel gradient descent algorithm was proposed to adaptively calibrate the convergence rate of the total training error.

142 citations
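For a model that is linear in its parameters, the empirical neural tangent kernel reduces to an inner product of parameter gradients, which makes the "kernel stays constant during training" behaviour easy to see. A toy sketch with a hypothetical fixed feature map:

```python
# For f(x; w) = w · phi(x), the gradient w.r.t. w is phi(x), so the
# empirical NTK entry for inputs x, x' is phi(x) · phi(x') and does not
# change as w is trained.

def phi(x):
    # hypothetical fixed feature map
    return [1.0, x, x * x]

def ntk(x1, x2):
    g1, g2 = phi(x1), phi(x2)
    return sum(a * b for a, b in zip(g1, g2))

K = [[ntk(a, b) for b in (0.0, 1.0)] for a in (0.0, 1.0)]
print(K)  # symmetric 2x2 Gram matrix: [[1.0, 1.0], [1.0, 3.0]]
```

The papers above analyse the regime where a wide nonlinear network behaves like this linear case, and use the kernel's eigenstructure to balance the convergence rates of the PINN loss terms.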


Journal ArticleDOI
TL;DR: In this paper, a far-field super-resolution ghost imaging (GI) technique was proposed that incorporates the physical model for GI image formation into a deep neural network; the resulting hybrid neural network does not need to pre-train on any dataset, and allows the reconstruction of a far-field image with resolution beyond the diffraction limit.
Abstract: Ghost imaging (GI) facilitates image acquisition under low-light conditions by single-pixel measurements and thus has great potential for applications in various fields ranging from biomedical imaging to remote sensing. However, GI usually requires a large number of single-pixel samplings in order to reconstruct a high-resolution image, imposing a practical limit on its applications. Here we propose a far-field super-resolution GI technique that incorporates the physical model for GI image formation into a deep neural network. The resulting hybrid neural network does not need to pre-train on any dataset, and allows the reconstruction of a far-field image with the resolution beyond the diffraction limit. Furthermore, the physical model imposes a constraint to the network output, making it effectively interpretable. We experimentally demonstrate the proposed GI technique by imaging a flying drone, and show that it outperforms some other widespread GI techniques in terms of both spatial resolution and sampling ratio. We believe that this study provides a new framework for GI, and paves the way for its practical applications.

139 citations


Journal ArticleDOI
TL;DR: In this article, a neural tangent kernel (NTK) was derived for physics-informed neural networks and shown to converge to a deterministic kernel that stays constant during training in the infinite-width limit.

138 citations


Journal ArticleDOI
TL;DR: NequIP as mentioned in this paper is an E(3)-equivariant neural network approach for learning interatomic potentials from ab-initio calculations for molecular dynamics simulations, which achieves state-of-the-art accuracy on a challenging and diverse set of molecules and materials while exhibiting remarkable data efficiency.
Abstract: This work presents Neural Equivariant Interatomic Potentials (NequIP), an E(3)-equivariant neural network approach for learning interatomic potentials from ab-initio calculations for molecular dynamics simulations. While most contemporary symmetry-aware models use invariant convolutions and only act on scalars, NequIP employs E(3)-equivariant convolutions for interactions of geometric tensors, resulting in a more information-rich and faithful representation of atomic environments. The method achieves state-of-the-art accuracy on a challenging and diverse set of molecules and materials while exhibiting remarkable data efficiency. NequIP outperforms existing models with up to three orders of magnitude fewer training data, challenging the widely held belief that deep neural networks require massive training sets. The high data efficiency of the method allows for the construction of accurate potentials using high-order quantum chemical level of theory as reference and enables high-fidelity molecular dynamics simulations over long time scales.

Journal ArticleDOI
TL;DR: In this paper, a programmable diffractive deep neural network based on a multi-layer digital-coding metasurface array is presented, which can handle various deep learning tasks for wave sensing, including image classification, mobile communication coding-decoding and real-time multi-beam focusing.
Abstract: The development of artificial intelligence is typically focused on computer algorithms and integrated circuits. Recently, all-optical diffractive deep neural networks have been created that are based on passive structures and can perform complicated functions designed by computer-based neural networks. However, once a passive diffractive deep neural network architecture is fabricated, its function is fixed. Here we report a programmable diffractive deep neural network that is based on a multi-layer digital-coding metasurface array. Each meta-atom on the metasurfaces is integrated with two amplifier chips and acts as an active artificial neuron, providing a dynamic modulation range of 35 dB (from −22 dB to 13 dB). We show that the system, which we term a programmable artificial intelligence machine, can handle various deep learning tasks for wave sensing, including image classification, mobile communication coding–decoding and real-time multi-beam focusing. We also develop a reinforcement learning algorithm for on-site learning and a discrete optimization algorithm for digital coding. Using a multi-layer metasurface array in which each meta-atom of the metasurface acts as an active artificial neuron, a programmable diffractive deep neural network can be created that directly processes electromagnetic waves in free space for wave sensing and wireless communications.

Journal ArticleDOI
TL;DR: In this paper, the authors provide fundamental principles for interpretable ML and dispel common misunderstandings that dilute the importance of this crucial topic; they also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem.
Abstract: Interpretability in machine learning (ML) is crucial for high stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML, and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some are recent problems that have arisen in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the “Rashomon set” of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.

Journal ArticleDOI
TL;DR: The paper proposes a two-phase EfficientNet convolutional neural network framework for identifying whether a user sample is real or spoofed; the system is trained on different datasets of spoofed and genuine iris biometric samples to discriminate the original from the spoofed ones.

Journal ArticleDOI
13 Jun 2022
TL;DR: In this paper, a review of the machine learning techniques employed in nanofluid-based renewable energy systems, as well as new developments in machine learning research, is presented.
Abstract: Nanofluids have gained significant popularity in the field of sustainable and renewable energy systems. The heat transfer capacity of the working fluid has a huge impact on the efficiency of the renewable energy system. The addition of a small amount of high thermal conductivity solid nanoparticles to a base fluid improves heat transfer. Even though a large amount of research data is available in the literature, some results are contradictory. Many influencing factors, as well as nonlinearity and refutations, make nanofluid research highly challenging and obstruct its potentially valuable uses. On the other hand, data-driven machine learning techniques would be very useful in nanofluid research for forecasting thermophysical features and heat transfer rate, identifying the most influential factors, and assessing the efficiencies of different renewable energy systems. The primary aim of this review study is to look at the features and applications of different machine learning techniques employed in the nanofluid-based renewable energy system, as well as to reveal new developments in machine learning research. A variety of modern machine learning algorithms for nanofluid-based heat transfer studies in renewable and sustainable energy systems are examined, along with their advantages and disadvantages. Artificial neural networks-based model prediction using contemporary commercial software is simple to develop and the most popular. The prognostic capacity may be further improved by combining a marine predator algorithm, genetic algorithm, swarm intelligence optimization, and other intelligent optimization approaches. In addition to the well-known neural networks and fuzzy- and gene-based machine learning techniques, newer ensemble machine learning techniques such as Boosted regression techniques, K-means, K-nearest neighbor (KNN), CatBoost, and XGBoost are gaining popularity due to their improved architectures and adaptabilities to diverse data types. 
The regularly used neural networks and fuzzy-based algorithms are mostly black-box methods, with the user having little or no understanding of how they function. This is a cause for concern, and ethical artificial intelligence practices are required.


Journal ArticleDOI
TL;DR: In this paper, a comprehensive review of state-of-the-art robust training methods is presented; the methods are categorized into five groups according to their methodological differences, followed by a systematic comparison of six properties used to evaluate their superiority.
Abstract: Deep learning has achieved remarkable success in numerous domains with help from large amounts of big data. However, the quality of data labels is a concern because of the lack of high-quality labels in many real-world scenarios. As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels (robust training) is becoming an important task in modern deep learning applications. In this survey, we first describe the problem of learning with label noise from a supervised learning perspective. Next, we provide a comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological difference, followed by a systematic comparison of six properties used to evaluate their superiority. Subsequently, we perform an in-depth analysis of noise rate estimation and summarize the typically used evaluation methodology, including public noisy datasets and evaluation metrics. Finally, we present several promising research directions that can serve as a guideline for future studies.
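One recurring ingredient of the robust training methods surveyed above is sample selection by the small-loss criterion: samples the network fits easily are treated as probably clean. A minimal sketch (illustrative only; real methods such as Co-teaching combine this with two networks and a schedule for the keep ratio):

```python
def select_small_loss(samples, losses, ratio=0.7):
    """Keep the fraction of samples with the smallest loss, treating them
    as probably clean; the rest are assumed label-noisy and dropped."""
    k = max(1, int(len(samples) * ratio))
    order = sorted(range(len(samples)), key=lambda i: losses[i])
    return [samples[i] for i in order[:k]]

# toy batch: two plausible labels (low loss) and two suspicious ones
clean = select_small_loss(["a", "b", "c", "d"], [0.1, 2.5, 0.3, 1.9], ratio=0.5)
print(clean)  # ['a', 'c']
```

The selected subset is then used for the gradient update, which is why an accurate noise-rate estimate (and hence the keep ratio) matters so much in this family of methods.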

Journal ArticleDOI
01 Nov 2022
TL;DR: In this paper, the authors present a comprehensive survey of graph neural networks for traffic forecasting problems, including graph convolutional and graph attention networks, and a comprehensive list of open data and source codes for each problem.
Abstract: Traffic forecasting is important for the success of intelligent transportation systems. Deep learning models, including convolution neural networks and recurrent neural networks, have been extensively applied in traffic forecasting problems to model spatial and temporal dependencies. In recent years, to model the graph structures in transportation systems as well as contextual information, graph neural networks have been introduced and have achieved state-of-the-art performance in a series of traffic forecasting problems. In this survey, we review the rapidly growing body of research using different graph neural networks, e.g. graph convolutional and graph attention networks, in various traffic forecasting problems, e.g. road traffic flow and speed forecasting, passenger flow forecasting in urban rail transit systems, and demand forecasting in ride-hailing platforms. We also present a comprehensive list of open data and source codes for each problem and identify future research directions. To the best of our knowledge, this paper is the first comprehensive survey that explores the application of graph neural networks for traffic forecasting problems. We have also created a public GitHub repository where the latest papers, open data, and source codes will be updated.
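The basic building block shared by the GCN-style models in this survey is a graph convolution H' = ÂHW over the road network's adjacency matrix, with self-loops and normalisation. A dependency-free sketch on a hypothetical three-segment road graph:

```python
def graph_conv(A, H, W):
    """One graph convolution step H' = A_hat @ H @ W, where A_hat is the
    adjacency matrix with self-loops, row-normalised."""
    n = len(A)
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    for i in range(n):                      # row-normalise
        deg = sum(A_hat[i])
        A_hat[i] = [v / deg for v in A_hat[i]]
    AH = [[sum(A_hat[i][k] * H[k][j] for k in range(n))
           for j in range(len(H[0]))] for i in range(n)]
    return [[sum(AH[i][k] * W[k][j] for k in range(len(W)))
             for j in range(len(W[0]))] for i in range(n)]

# 3 road segments in a line; 1 feature (current speed); identity weight
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
H = [[60.0], [30.0], [50.0]]
W = [[1.0]]
print(graph_conv(A, H, W))  # each node averages itself with its neighbours
```

Traffic models stack such layers for spatial dependencies and wrap them in a recurrent or attention module for the temporal dimension.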

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a unified dynamic deep spatio-temporal neural network model based on convolutional neural networks and long short-term memory, termed DHSTNet, to simultaneously predict crowd flows in every region of a city.

Journal ArticleDOI
TL;DR: The minimum levitation unit of the maglev vehicle system is established and an amplitude saturation controller (ASC) is proposed that can ensure the generation of only a saturated unidirectional attractive force; a neural network-based supervisor controller (NNBSC) is also designed.
Abstract: When the electromagnetic suspension (EMS) type maglev vehicle is traveling over a track, the airgap must be maintained between the electromagnet and the track to prevent contact with that track. Because of the open-loop instability of the EMS system, the current must be actively controlled to maintain the target airgap. However, the maglev system suffers from the strong nonlinearity, force saturation, track flexibility, and feedback signals with network time-delay, hence making the controller design even more difficult. In this article, the minimum levitation unit of the maglev vehicle system has been established. An amplitude saturation controller (ASC), which can ensure the generation of only a saturated unidirectional attractive force, is thus proposed. The stability and convergence of the closed-loop signals are proven based on the Lyapunov method. Subsequently, the ASC is improved based on radial basis function neural networks, and a neural network-based supervisor controller (NNBSC) is thus designed. The ASC plays the main role in the initial stage. As the neural network learns the control trend, control will gradually transition to the neural network controller. Simulation results are provided to illustrate the specific merit of the NNBSC. The hardware experimental results of a full-scale IoT EMS maglev train are included to validate the effectiveness and robustness of the presented control method with regard to time delay.

Journal ArticleDOI
21 Jan 2022-Sensors
TL;DR: A new framework for breast cancer classification from ultrasound images is proposed that employs deep learning and the fusion of the best selected features; it outperforms recent techniques.
Abstract: After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.


Journal ArticleDOI
TL;DR: In this article, an innovative biomass-based energy system is proposed for power and desalinated water production, which consists of a gasifier, a compressor, a heat exchanger, a gas turbine, a combustion chamber, and a multi-effect desalination with thermal vapor compression (MED-TVC) unit.


Journal ArticleDOI
TL;DR: In this article, the authors propose a fine-tuned artificial intelligence model to predict the thermal efficiency and water yield of the solar still; it consists of a traditional artificial neural network model optimized by a meta-heuristic optimizer called the humpback whale optimizer.

Journal ArticleDOI
TL;DR: In this article, a multi-task federated learning (MTFL) algorithm is proposed that introduces non-federated batch-normalization (BN) layers into the federated DNN.
Abstract: Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous works have shown that non-Independent and Identically Distributed (non-IID) user data harms the convergence speed of the FL algorithms. Furthermore, most existing work on FL measures global-model accuracy, but in many cases, such as user content-recommendation, improving individual User model Accuracy (UA) is the real objective. To address these issues, we propose a Multi-Task FL (MTFL) algorithm that introduces non-federated Batch-Normalization (BN) layers into the federated DNN. MTFL benefits UA and convergence speed by allowing users to train models personalised to their own data. MTFL is compatible with popular iterative FL optimisation algorithms such as Federated Averaging (FedAvg), and we show empirically that a distributed form of Adam optimisation (FedAvg-Adam) benefits convergence speed even further when used as the optimisation strategy within MTFL. Experiments using MNIST and CIFAR10 demonstrate that MTFL is able to significantly reduce the number of rounds required to reach a target UA, by up to 5× when using existing FL optimisation strategies, and with a further 3× improvement when using FedAvg-Adam. We compare MTFL to competing personalised FL algorithms, showing that it is able to achieve the best UA for MNIST and CIFAR10 in all considered scenarios. Finally, we evaluate MTFL with FedAvg-Adam on an edge-computing testbed, showing that its convergence and UA benefits outweigh its overhead.
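The aggregation rule at the heart of MTFL, averaging everything except the batch-norm parameters, can be sketched as follows (a simplification with scalar "parameters" and hypothetical key names, not the paper's implementation):

```python
def mtfl_aggregate(client_models, bn_keys):
    """FedAvg-style aggregation that skips batch-norm parameters: BN entries
    stay local to each client while all other parameters are averaged."""
    n = len(client_models)
    shared = {
        k: sum(m[k] for m in client_models) / n
        for k in client_models[0] if k not in bn_keys
    }
    # each client keeps its own BN parameters and receives the shared rest
    return [{**m, **shared} for m in client_models]

clients = [
    {"conv.w": 1.0, "bn.gamma": 0.5},
    {"conv.w": 3.0, "bn.gamma": 1.5},
]
out = mtfl_aggregate(clients, bn_keys={"bn.gamma"})
print(out)  # conv.w averaged to 2.0 everywhere; bn.gamma stays per-client
```

Keeping the BN statistics and affine parameters local is what personalises each client's model to its own (non-IID) data distribution while the rest of the network is still learned federatively.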

Journal ArticleDOI
TL;DR: In this paper , the development of a prediction model by processing the variational parameters with machine learning and studying properties such as characterization, stability, and density of rGO-Fe3O4-TiO2 hybrid nanofluids has provided an unprecedented study in the literature.

Journal ArticleDOI
TL;DR: In this article, the bilinear neural network method is introduced to solve for the explicit solution of a generalized breaking soliton equation, and some new test functions are constructed by setting generalized activation functions in different artificial network models.
Abstract: In this work, some new test functions are constructed by setting generalized activation functions in different artificial network models. The bilinear neural network method is introduced to solve for the explicit solution of a generalized breaking soliton equation. Rogue waves of the generalized breaking soliton equation are obtained by symbolic computation and displayed intuitively with the help of Maple software.

Journal ArticleDOI
TL;DR: In this article, a reinforcement learning (RL)-based control approach is presented that uses a combination of a deep Q-learning (DQL) algorithm and a metaheuristic Gravitational Search Algorithm (GSA).

Journal ArticleDOI
TL;DR: An optimization-driven approach is proposed to avoid static and dynamic obstacles present in the environment while simultaneously controlling the robots as commanded by the user; the neural network, a beetle antennae search zeroing neural network (BASZNN), is inspired by the natural behavior of beetles.