scispace - formally typeset
Author

José de Jesús Rubio

Other affiliations: CINVESTAV, UAM Azcapotzalco
Bio: José de Jesús Rubio is an academic researcher from Instituto Politécnico Nacional. The author has contributed to research in topics: Artificial neural network & Fuzzy logic. The author has an h-index of 28, has co-authored 141 publications receiving 2466 citations. Previous affiliations of José de Jesús Rubio include CINVESTAV & UAM Azcapotzalco.


Papers
More filters
Journal ArticleDOI
TL;DR: A theorem based on Lyapunov theory is proposed to prove that if a linearized controlled process is stable, then nonlinear process states are uniformly stable.
Abstract: In this research, a robust feedback linearization technique is studied for nonlinear process control. The main contributions are as follows: 1) Theory states that if a linearized controlled process is stable, then the nonlinear process states are asymptotically stable; this is not satisfied in applications, because some states only converge to small values. Therefore, a theorem based on Lyapunov theory is proposed to prove that if a linearized controlled process is stable, then the nonlinear process states are uniformly stable. 2) Theory states that all the main and cross state feedbacks should be considered for nonlinear process regulation, which makes it more difficult to find the controller gains; consequently, only the main state feedbacks are used, which still obtains a satisfactory result in applications. The introduced strategy is applied to a fuel cell and a manipulator.

126 citations
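The cancellation idea behind the feedback linearization described above can be sketched for a scalar process. This is a minimal illustration, assuming a toy model x' = f(x) + g(x)u; the functions f, g and the gain k below are invented for the example and are not the paper's fuel-cell or manipulator models.

```python
import math

# Hedged sketch of feedback linearization for a scalar process
# x' = f(x) + g(x) u: the input u = (v - f(x)) / g(x) cancels the
# nonlinearity, leaving the chosen linear dynamics x' = v = -k x.

def f(x):
    return math.sin(x) + x ** 2      # example nonlinear drift (illustrative)

def g(x):
    return 2.0 + math.cos(x)         # example input gain, bounded away from 0

def control(x, k=5.0):
    v = -k * x                       # desired stable linear dynamics
    return (v - f(x)) / g(x)         # feedback-linearizing input

# Euler simulation: the closed-loop state decays toward the origin.
x, dt = 1.0, 1e-3
for _ in range(5000):
    x += dt * (f(x) + g(x) * control(x))
```

With the nonlinearity cancelled, the closed loop behaves as x' = -5x, so the state decays essentially to zero over the simulated 5 seconds.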

Journal ArticleDOI
TL;DR: An extended Kalman filter is applied to train state-space recurrent neural networks for nonlinear system identification, and the Lyapunov method is used to prove that the Kalman filter training is stable.

108 citations

Journal ArticleDOI
TL;DR: A modified Levenberg–Marquardt algorithm is proposed for artificial neural network learning, covering the training and testing stages; error stability and weight boundedness are assured based on the Lyapunov technique.
Abstract: The Levenberg–Marquardt and Newton algorithms both use the Hessian for artificial neural network learning. In this article, we propose a modified Levenberg–Marquardt algorithm for artificial neural network learning that covers the training and testing stages. The modified algorithm is based on the Levenberg–Marquardt and Newton algorithms, but differs in two ways that assure error stability and weight boundedness: 1) the learning rates of the Levenberg–Marquardt and Newton algorithms have a singularity point, while the learning rate of the modified Levenberg–Marquardt algorithm does not; and 2) the Levenberg–Marquardt and Newton algorithms have three different learning rates, while the modified Levenberg–Marquardt algorithm has only one. The error stability and weight boundedness of the modified Levenberg–Marquardt algorithm are assured based on the Lyapunov technique. We compare artificial neural network learning with the modified Levenberg–Marquardt, Levenberg–Marquardt, Newton, and stable gradient algorithms on electric and brain signal data sets.

99 citations
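The singularity issue mentioned in the abstract can be illustrated with a generic damped Gauss–Newton (Levenberg–Marquardt-style) step: a fixed positive damping term keeps the scaled Hessian invertible at every iteration. This is a hedged sketch for a plain least-squares fit, not the paper's specific algorithm or learning rate.

```python
import numpy as np

def lm_step(w, X, y, mu=1e-2):
    """One damped Gauss-Newton step for the linear model y ~ X @ w.

    The constant mu > 0 makes J^T J + mu*I positive definite, so the
    solve below never hits a singularity, regardless of the data.
    """
    r = X @ w - y                      # residuals
    J = X                              # Jacobian of residuals w.r.t. w
    H = J.T @ J + mu * np.eye(len(w))  # damped Gauss-Newton Hessian
    g = J.T @ r                        # gradient of 0.5 * ||r||^2
    return w - np.linalg.solve(H, g)   # always well-posed since mu > 0

# Illustrative fit: recover known weights from noiseless data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(100):
    w = lm_step(w, X, y)
```

For a neural network, `J` would be the Jacobian of the residuals with respect to the weights rather than the data matrix; the damping plays the same role.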

Journal ArticleDOI
José de Jesús Rubio1, Wen Yu1
TL;DR: The gradient algorithm for updating the weights of the delayed neural network is shown to be stable under any bounded uncertainties, and conditions for passivity, asymptotic stability, and uniform stability are established.
Abstract: In this brief, the identification problem for time-delay nonlinear systems is discussed. We use a delayed dynamic neural network for online identification. This neural network has a dynamic series-parallel structure. The stability conditions for online identification are derived by the Lyapunov-Krasovskii approach and are described by linear matrix inequalities. Conditions for passivity, asymptotic stability, and uniform stability are established. We conclude that the gradient algorithm for updating the weights of the delayed neural network is stable under any bounded uncertainties.

94 citations

Journal ArticleDOI
TL;DR: In this study, the trajectory tracking problem of robotic arms is considered, and two novel modified optimal controllers based on neural networks are proposed; their stability is guaranteed by means of a Lyapunov-like analysis.
Abstract: In this study, the trajectory tracking problem of robotic arms is considered. To solve this problem, two novel modified optimal controllers based on neural networks are proposed. The uniform stability of both the tracking error and approximation error for the aforementioned controllers is guaranteed by means of a Lyapunov-like analysis. The effectiveness of the proposed controllers is verified by simulations.

85 citations


Cited by
More filters
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 Nov 1981
TL;DR: In this paper, the authors studied the effect of local derivatives on the detection of intensity edges in images, where the local difference of intensities is computed for each pixel in the image.
Abstract: Most of the signal processing that we will study in this course involves local operations on a signal, namely transforming the signal by applying linear combinations of values in the neighborhood of each sample point. You are familiar with such operations from calculus, namely taking derivatives, and from optics, namely blurring a signal. We will be looking at sampled signals only. Let's start with a few basic examples. Local difference: Suppose we have a 1D image and we take the local difference of intensities, DI(x) = (1/2)(I(x+1) − I(x−1)), which gives a discrete approximation to a partial derivative. (We compute this for each x in the image.) What is the effect of such a transformation? One key idea is that such a derivative is useful for marking positions where the intensity changes. Such a change is called an edge. It is important to detect edges in images because they often mark locations at which object properties change. These can include changes in illumination along a surface due to a shadow boundary, a material (pigment) change, or a change in depth, as when one object ends and another begins. The computational problem of finding intensity edges in images is called edge detection. We could look for positions at which DI(x) has a large negative or positive value: large positive values indicate an edge that goes from low to high intensity, and large negative values indicate an edge that goes from high to low intensity. Example: Suppose the image consists of a single (slightly sloped) edge:

1,829 citations
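The local difference DI(x) = (1/2)(I(x+1) − I(x−1)) from the abstract above is a one-liner on a sampled 1D image. A minimal sketch, assuming the image is a NumPy array; the variable names are illustrative.

```python
import numpy as np

def local_difference(I):
    """Central difference DI(x) = 0.5 * (I(x+1) - I(x-1)), interior samples only."""
    I = np.asarray(I, dtype=float)
    return 0.5 * (I[2:] - I[:-2])

# A single low-to-high step edge: |DI| peaks at the intensity transition.
image = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
D = local_difference(image)
edge_at = int(np.argmax(np.abs(D))) + 1  # +1 maps back to original indices
```

The sign of the peak distinguishes low-to-high edges (positive) from high-to-low edges (negative), exactly as the abstract describes.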

01 Jan 2005
TL;DR: In this paper, the authors study a number of quantized feedback design problems for linear systems and show that the classical sector bound approach is nonconservative for these design problems.
Abstract: This paper studies a number of quantized feedback design problems for linear systems. We consider the case where quantizers are static (memoryless). The common aim of these design problems is to stabilize the given system or to achieve certain performance with the coarsest quantization density. Our main discovery is that the classical sector bound approach is nonconservative for studying these design problems. Consequently, we are able to convert many quantized feedback design problems to well-known robust control problems with sector bound uncertainties. In particular, we derive the coarsest quantization densities for stabilization for multiple-input multiple-output systems in both state feedback and output feedback cases, and we also derive conditions for quantized feedback control for quadratic cost and H∞ performances.

1,292 citations
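The static logarithmic quantizer at the heart of the sector bound analysis above can be sketched directly. With density rho in (0, 1), the levels are ±rho^k and the quantization error satisfies |q(v) − v| ≤ delta·|v| with delta = (1 − rho)/(1 + rho), i.e. q(v) lies in the sector [(1 − delta)v, (1 + delta)v]. This is an illustrative construction under those standard definitions, not code from the paper.

```python
import math

def log_quantize(v, rho=0.5):
    """Map v to the nearest logarithmic level sign(v) * rho**k.

    Static (memoryless) logarithmic quantizer: each level rho**k covers the
    cell (rho**k / (1 + delta), rho**k / (1 - delta)], which tiles the
    positive axis exactly once, giving the sector bound |q(v) - v| <= delta*|v|.
    """
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    # Pick the unique integer k whose cell contains |v|.
    k = math.floor(math.log(abs(v) * (1.0 + delta)) / math.log(rho)) + 1
    return math.copysign(rho ** k, v)
```

A coarser density (smaller rho) means fewer levels but a larger sector bound delta; the paper's result characterizes the coarsest density that still achieves stabilization, by treating delta as a sector bound uncertainty.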