scispace - formally typeset
Author

Yurong Liu

Other affiliations: Yangzhou University
Bio: Yurong Liu is an academic researcher at King Abdulaziz University whose work focuses on exponential stability and artificial neural networks. The author has an h-index of 31 and has co-authored 78 publications receiving 4,623 citations. Previous affiliations include Yangzhou University.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: This work was supported in part by the Royal Society of the UK, the National Natural Science Foundation of China, and the Alexander von Humboldt Foundation of Germany.

2,404 citations

Journal ArticleDOI
TL;DR: A novel rice disease identification method based on deep convolutional neural network (CNN) techniques, trained to identify 10 common rice diseases with much higher accuracy than conventional machine learning models.

593 citations

Journal ArticleDOI
TL;DR: Several sufficient conditions in complex-valued linear matrix inequality form are obtained to ensure the existence, uniqueness and global exponential stability of equilibrium point for the considered neural networks.

174 citations

Journal ArticleDOI
TL;DR: The aim of this paper is to estimate the concentrations of mRNA and protein in genetic regulatory networks from the available measurement outputs, designing the desired H∞ estimator parameters in terms of the solution to a set of matrix inequalities that can be readily solved with Matlab toolboxes.

164 citations

Journal ArticleDOI
TL;DR: This paper investigates the stability problem for a class of impulsive complex-valued neural networks with both asynchronous time-varying and continuously distributed delays, employing the idea of a vector Lyapunov function, M-matrix theory, and inequality techniques to ensure the global exponential stability of the equilibrium point.

154 citations


Cited by
Book ChapterDOI
01 Jan 2015

3,828 citations

Journal ArticleDOI
01 Nov 2018-Heliyon
TL;DR: The study found that neural-network models such as feedforward and feedback propagation artificial neural networks perform better in their application to human problems, and it proposes feedforward and feedback propagation ANN models as a research focus based on data-analysis factors such as accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance.

1,471 citations

Journal ArticleDOI
TL;DR: Deep Convolutional Neural Networks (CNNs) are a special type of neural network that has shown exemplary performance in several competitions related to Computer Vision and Image Processing.
Abstract: Deep Convolutional Neural Network (CNN) is a special type of Neural Network, which has shown exemplary performance in several competitions related to Computer Vision and Image Processing. Some of the exciting application areas of CNN include Image Classification and Segmentation, Object Detection, Video Processing, Natural Language Processing, and Speech Recognition. The powerful learning ability of deep CNN is primarily due to the use of multiple feature extraction stages that can automatically learn representations from the data. The availability of a large amount of data and improvements in hardware technology have accelerated the research in CNNs, and recently interesting deep CNN architectures have been reported. Several inspiring ideas to bring advancements in CNNs have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and architectural innovations. However, the significant improvement in the representational capacity of the deep CNN is achieved through architectural innovations. Notably, the ideas of exploiting spatial and channel information, depth and width of architecture, and multi-path information processing have gained substantial attention. Similarly, the idea of using a block of layers as a structural unit is also gaining popularity. This survey thus focuses on the intrinsic taxonomy present in the recently reported deep CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, an elementary understanding of CNN components, current challenges, and applications of CNN is also provided.

1,328 citations
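The feature-extraction stage that the survey above describes reduces, at its core, to convolution followed by a nonlinear activation. A minimal sketch in plain Python (the input image, kernel weights, and sizes are invented for illustration; in a real CNN the kernel weights are learned from data):

```python
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: the core feature-extraction
    operation of a convolutional layer (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)]
            for i in range(oh)]

def relu(feature_map):
    """Element-wise ReLU activation applied to a 2-D feature map."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A 4x4 input with a vertical edge and a hand-crafted 2x2
# edge-detector kernel (both purely illustrative).
image = [[0.0, 0.0, 1.0, 1.0]] * 4
kernel = [[-1.0, 1.0],
          [-1.0, 1.0]]
feature_map = relu(conv2d(image, kernel))
# feature_map is 3x3 and responds only at the edge column:
# [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
```

Stacking several such convolution + activation stages (with pooling in between) yields the multi-stage representation learning the abstract refers to.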

01 Jan 2005
TL;DR: In this paper, a number of quantized feedback design problems for linear systems are studied, and the authors show that the classical sector bound approach is nonconservative for these design problems.
Abstract: This paper studies a number of quantized feedback design problems for linear systems. We consider the case where quantizers are static (memoryless). The common aim of these design problems is to stabilize the given system or to achieve certain performance with the coarsest quantization density. Our main discovery is that the classical sector bound approach is nonconservative for studying these design problems. Consequently, we are able to convert many quantized feedback design problems to well-known robust control problems with sector bound uncertainties. In particular, we derive the coarsest quantization densities for stabilization for multiple-input multiple-output systems in both state feedback and output feedback cases; and we also derive conditions for quantized feedback control for quadratic cost and H∞ performance.

1,292 citations
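The sector bound mentioned in the abstract above comes from the static logarithmic quantizer: with quantization density ρ ∈ (0, 1), the quantization error satisfies |q(v) − v| ≤ δ|v| with δ = (1 − ρ)/(1 + ρ), which is what lets quantized feedback be treated as a sector-bounded uncertainty. A minimal sketch in plain Python (the level scaling u0 and the test values are illustrative choices, not from the paper):

```python
import math

def log_quantizer(v, rho, u0=1.0):
    """Static (memoryless) logarithmic quantizer with density rho in (0, 1).
    Quantization levels are +/- u0 * rho**i for integer i, and the output
    satisfies the sector bound |q(v) - v| <= delta * |v|,
    where delta = (1 - rho) / (1 + rho)."""
    if v == 0:
        return 0.0
    sign = 1.0 if v > 0 else -1.0
    a = abs(v)
    # Level u_i = u0 * rho**i covers the half-open interval
    # (u_i * (1 + rho) / 2,  u_{i-1} * (1 + rho) / 2].
    t = math.log(2 * a / (u0 * (1 + rho))) / math.log(rho)
    i = math.floor(t) + 1
    return sign * u0 * rho ** i

# With rho = 0.5, delta = 1/3: inputs are rounded to the nearest
# level +/- 2**(-i), always within a relative error of 1/3.
```

A coarser density (smaller ρ) means fewer quantization levels per decade but a larger sector bound δ, which is exactly the trade-off the paper's coarsest-density results quantify.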