
Showing papers on "Artificial neural network published in 1997"


Journal ArticleDOI
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Abstract: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
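The paper itself gives no code here; as a rough illustration of the mechanism the abstract describes, the sketch below implements a single 1997-style LSTM memory block in numpy (a constant error carousel guarded by multiplicative input and output gates; the later forget gate is omitted). All weight names and sizes are assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x, h, c, params):
    """One step of a 1997-style LSTM block: a constant error carousel (cell state c)
    guarded by multiplicative input and output gates (no forget gate yet).
    Weight names and shapes are illustrative, not the authors' implementation."""
    Wi, Ui, Wo, Uo, Wc, Uc = params
    i = sigmoid(Wi @ x + Ui @ h)      # input gate: controls write access to the cell
    o = sigmoid(Wo @ x + Uo @ h)      # output gate: controls read access to the cell
    g = np.tanh(Wc @ x + Uc @ h)      # candidate cell input
    c = c + i * g                     # carousel: the cell accumulates with weight 1,
                                      # so error can flow back over long lags unattenuated
    h = o * np.tanh(c)                # gated cell output
    return h, c

# toy usage: 4 inputs, 3 memory cells, random weights
rng = np.random.default_rng(0)
params = [rng.normal(scale=0.1, size=(3, 4)) if k % 2 == 0
          else rng.normal(scale=0.1, size=(3, 3)) for k in range(6)]
h, c = np.zeros(3), np.zeros(3)
for t in range(5):
    h, c = lstm_cell_step(rng.normal(size=4), h, c, params)
print(h.round(3), c.round(3))
```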

72,897 citations


Journal ArticleDOI
TL;DR: A hybrid neural network for human face recognition that compares favourably with other methods; the paper analyzes the computational complexity and discusses how new classes could be added to the trained recognizer.
Abstract: We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
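As a hedged illustration of the SOM stage described above (the convolutional stage is omitted), the following numpy sketch trains a small self-organizing map on flattened image patches; grid size, learning-rate schedule, and patch data are all illustrative assumptions.

```python
import numpy as np

def train_som(samples, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Self-organizing map: winner-take-all plus neighborhood update, so nearby
    inputs map to nearby grid nodes (the dimensionality-reducing codebook above)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = samples.shape[1]
    weights = rng.normal(scale=0.1, size=(h * w, dim))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(samples)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(samples):
            lr = lr0 * (1.0 - step / n_steps)
            sigma = sigma0 * (1.0 - step / n_steps) + 1e-3
            winner = np.argmin(np.sum((weights - x) ** 2, axis=1))
            d2 = np.sum((coords - coords[winner]) ** 2, axis=1)
            nb = np.exp(-d2 / (2.0 * sigma ** 2))           # neighborhood function
            weights += lr * nb[:, None] * (x - weights)     # pull neighborhood toward x
            step += 1
    return weights

# toy usage: 500 random 5x5 "image patches" flattened to 25-d vectors
patches = np.random.default_rng(1).random((500, 25))
codebook = train_som(patches)
print(codebook.shape)
```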

2,954 citations


Proceedings ArticleDOI
17 Jun 1997
TL;DR: A decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets is presented, and the feasibility of the approach is demonstrated on a face detection problem that involves a data set of 50,000 data points.
Abstract: We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs., 1985) that can be seen as a new method for training polynomial, neural network, or radial basis function classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality, and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterative values and to establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVM, and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.
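The decomposition bookkeeping itself is not reproduced here; the sketch below only shows the optimality-condition check the abstract refers to: the Karush-Kuhn-Tucker (KKT) conditions of the C-SVM dual identify violating points that seed the next working set, and an empty violator list is the stopping criterion. Function and variable names are assumptions.

```python
import numpy as np

def kkt_violators(alpha, y, K, b, C, tol=1e-3):
    """Return indices violating the KKT conditions of the C-SVM dual.
    alpha: dual variables, y: labels in {-1,+1}, K: kernel matrix, b: bias.
    An empty result is the stopping criterion; otherwise the worst violators
    seed the next working set (sub-problem)."""
    f = K @ (alpha * y) + b                  # decision values f(x_i)
    margins = y * f
    viol = np.zeros_like(alpha, dtype=bool)
    viol |= (alpha < tol) & (margins < 1 - tol)          # alpha_i = 0  requires y_i f(x_i) >= 1
    viol |= (alpha > C - tol) & (margins > 1 + tol)      # alpha_i = C  requires y_i f(x_i) <= 1
    viol |= ((alpha > tol) & (alpha < C - tol)
             & (np.abs(margins - 1) > tol))              # 0 < alpha_i < C requires y_i f(x_i) = 1
    return np.flatnonzero(viol)
```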

2,764 citations


Journal ArticleDOI
TL;DR: It is shown that a new unsupervised learning algorithm based on information maximization, a nonlinear "infomax" network, when applied to an ensemble of natural scenes produces sets of visual filters that are localized and oriented.
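Only the TL;DR is available here, so the sketch below shows a generic natural-gradient "infomax" update (the Bell-Sejnowski rule in Amari's form) rather than the authors' exact procedure; the toy mixing problem and the whitening shortcut are assumptions.

```python
import numpy as np

def infomax_step(W, X, lr=0.01):
    """One natural-gradient infomax update (Bell-Sejnowski rule, Amari form).
    X: (n_samples, n_inputs) batch of roughly whitened inputs; W: square unmixing matrix."""
    U = X @ W.T                          # linear outputs u = W x
    Y = 1.0 / (1.0 + np.exp(-U))         # logistic nonlinearity
    n = X.shape[0]
    grad = np.eye(W.shape[0]) + (1.0 - 2.0 * Y).T @ U / n   # I + E[(1 - 2y) u^T]
    return W + lr * grad @ W

# toy usage: unmix a 2-source mixture
rng = np.random.default_rng(0)
S = rng.laplace(size=(5000, 2))                  # super-Gaussian sources
X = S @ rng.normal(size=(2, 2)).T                # mixed observations
X = (X - X.mean(0)) / X.std(0)                   # crude standardization (proper whitening assumed)
W = np.eye(2)
for _ in range(200):
    W = infomax_step(W, X, lr=0.05)
print(W.round(3))
```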

2,354 citations


Book
01 Jan 1997
TL;DR: An Introduction to Neural Networks will be warmly welcomed by a wide readership seeking an authoritative treatment of this key subject without an intimidating level of mathematics in the presentation.
Abstract: From the Publisher: An Introduction to Neural Networks will be warmly welcomed by a wide readership seeking an authoritative treatment of this key subject without an intimidating level of mathematics in the presentation.

2,135 citations


Journal ArticleDOI
TL;DR: It is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than other neural network models based on McCulloch-Pitts neurons and sigmoidal gates.

1,731 citations


Journal ArticleDOI
TL;DR: The results show that on the United States postal service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system, and the SV approach is thus not only theoretically well-founded but also superior in a practical application.
Abstract: The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering, and the weights are computed using error backpropagation. We consider three machines, namely, a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the United States postal service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically well-founded but also superior in a practical application.
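A minimal sketch of the "classical" baseline described above: centers by k-means, Gaussian basis activations, and output weights fitted to the labels (here by regularized least squares rather than error backpropagation). Cluster count, kernel width, and the toy data are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means to pick the RBF centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, gamma):
    """Gaussian basis activations for every (sample, center) pair."""
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# toy two-class problem
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

centers = kmeans(X, k=10)                       # step 1: centers by k-means
Phi = rbf_design(X, centers, gamma=0.5)         # step 2: Gaussian basis activations
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(10), Phi.T @ y)   # step 3: output weights
pred = np.sign(Phi @ w)
print("training accuracy:", (pred == y).mean())
```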

1,385 citations


Book
01 Jan 1997
TL;DR: Fuzzy and Neural Approaches in Engineering presents a detailed examination of the fundamentals of fuzzy systems and neural networks and then joins them synergistically - combining the feature extraction and modeling capabilities of the neural network with the representation capabilities of fuzzy systems.
Abstract: From the Publisher: Fuzzy and Neural Approaches in Engineering presents a detailed examination of the fundamentals of fuzzy systems and neural networks and then joins them synergistically - combining the feature extraction and modeling capabilities of the neural network with the representation capabilities of fuzzy systems. Exploring the value of relating genetic algorithms and expert systems to fuzzy and neural technologies, this forward-thinking text highlights an entire range of dynamic possibilities within soft computing. With examples specifically designed to illuminate key concepts and overcome the obstacles of notation and overly mathematical presentations often encountered in other sources, plus tables, figures, and an up-to-date bibliography, this unique work is both an important reference and a practical guide to neural networks and fuzzy systems.

1,349 citations


Journal ArticleDOI
TL;DR: A new approach to shape recognition based on a virtually infinite family of binary features (queries) of the image data, designed to accommodate prior information about shape invariance and regularity, and a comparison with artificial neural network methods is presented.
Abstract: We explore a new approach to shape recognition based on a virtually infinite family of binary features (queries) of the image data, designed to accommodate prior information about shape invariance and regularity. Each query corresponds to a spatial arrangement of several local topographic codes (or tags), which are in themselves too primitive and common to be informative about shape. All the discriminating power derives from relative angles and distances among the tags. The important attributes of the queries are a natural partial ordering corresponding to increasing structure and complexity; semi-invariance, meaning that most shapes of a given class will answer the same way to two queries that are successive in the ordering; and stability, since the queries are not based on distinguished points and substructures. No classifier based on the full feature set can be evaluated, and it is impossible to determine a priori which arrangements are informative. Our approach is to select informative features and build tree classifiers at the same time by inductive learning. In effect, each tree provides an approximation to the full posterior where the features chosen depend on the branch that is traversed. Due to the number and nature of the queries, standard decision tree construction based on a fixed-length feature vector is not feasible. Instead we entertain only a small random sample of queries at each node, constrain their complexity to increase with tree depth, and grow multiple trees. The terminal nodes are labeled by estimates of the corresponding posterior distribution over shape classes. An image is classified by sending it down every tree and aggregating the resulting distributions. The method is applied to classifying handwritten digits and synthetic linear and nonlinear deformations of three hundred LaTeX symbols. State-of-the-art error rates are achieved on the National Institute of Standards and Technology database of digits. The principal goal of the experiments on LaTeX symbols is to analyze invariance, generalization error and related issues, and a comparison with artificial neural network methods is presented in this context.
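The query-based trees themselves are not reproduced here; as a loose modern analog of the same two ingredients (trees grown on small random feature subsets, terminal-node class posteriors averaged over many trees), the sketch uses scikit-learn's random forest, whose predict_proba averages per-tree posteriors. The synthetic binary "queries" and all parameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for binary shape features (the paper's spatial-arrangement queries)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 200)).astype(float)   # 200 binary "queries" per image
y = (X[:, :10].sum(axis=1) > 5).astype(int)              # toy class rule

# many shallow trees, each considering a small random subset of features at every split
forest = RandomForestClassifier(
    n_estimators=50, max_depth=8, max_features=10, random_state=0
).fit(X, y)

posterior = forest.predict_proba(X[:5])    # averaged terminal-node class distributions
print(posterior)
```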

1,214 citations


Journal ArticleDOI
TL;DR: The feed-forward back-propagation multi-layer perceptron (MLP) is the type of neural network most commonly encountered in remote sensing and is used in many of the papers in this special issue.
Abstract: Over the past decade there have been considerable increases in both the quantity of remotely sensed data available and the use of neural networks. These increases have largely taken place in parallel, and it is only recently that several researchers have begun to apply neural networks to remotely sensed data. This paper introduces this special issue which is concerned specifically with the use of neural networks in remote sensing. The feed-forward back-propagation multi-layer perceptron (MLP) is the type of neural network most commonly encountered in remote sensing and is used in many of the papers in this special issue. The basic structure of the MLP algorithm is described in some detail while some other types of neural network are mentioned. The most common applications of neural networks in remote sensing are considered, particularly those concerned with the classification of land and clouds, and recent developments in these areas are described. Finally, the application of neural networks to m...
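Since the MLP is the workhorse singled out above, here is a minimal one-hidden-layer feed-forward network trained by error backpropagation; it is a generic sketch on a toy problem, not tied to any remote-sensing data, and all sizes and rates are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(A):
    return np.hstack([A, np.ones((len(A), 1))])

def train_mlp(X, Y, hidden=8, lr=1.0, epochs=5000, seed=0):
    """One-hidden-layer feed-forward MLP trained by backpropagation of the squared error.
    X: (n, d) inputs, Y: (n, k) one-hot targets."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1] + 1, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden + 1, Y.shape[1]))
    Xb = add_bias(X)
    for _ in range(epochs):
        H = sigmoid(Xb @ W1)                     # forward pass, hidden layer
        O = sigmoid(add_bias(H) @ W2)            # forward pass, output layer
        dO = (O - Y) * O * (1 - O)               # output delta
        dH = (dO @ W2[:-1].T) * H * (1 - H)      # backpropagated hidden delta (bias row dropped)
        W2 -= lr * add_bias(H).T @ dO / len(X)
        W1 -= lr * Xb.T @ dH / len(X)
    return W1, W2

# toy usage: an XOR-style two-class problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], dtype=float)
W1, W2 = train_mlp(X, Y)
out = sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2)
print(out.round(2))
```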

910 citations


Journal ArticleDOI
TL;DR: The experimental results show that EPNet can produce very compact ANNs with good generalization ability in comparison with other algorithms; EPNet has been tested on a number of benchmark problems in machine learning and ANNs.
Abstract: This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP). Unlike most previous studies on evolving ANNs, this paper puts its emphasis on evolving ANNs' behaviors. Five mutation operators proposed in EPNet reflect such an emphasis on evolving behaviors. Close behavioral links between parents and their offspring are maintained by various mutations, such as partial training and node splitting. EPNet evolves ANNs' architectures and connection weights (including biases) simultaneously in order to reduce the noise in fitness evaluation. The parsimony of evolved ANNs is encouraged by preferring node/connection deletion to addition. EPNet has been tested on a number of benchmark problems in machine learning and ANNs, such as the parity problem, the medical diagnosis problems, the Australian credit card assessment problem, and the Mackey-Glass time series prediction problem. The experimental results show that EPNet can produce very compact ANNs with good generalization ability in comparison with other algorithms.
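A heavily simplified sketch of the evolutionary-programming idea: a population of single-hidden-layer networks, mutation-only variation (weight perturbation standing in for partial training, plus node addition/deletion for architectural change), and selection by fitness. It does not implement EPNet's five operators; all rates, sizes, and the toy task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 64)[:, None]
Y = np.sin(np.pi * X)                              # toy regression target

def predict(net, X):
    W1, W2 = net
    return np.tanh(X @ W1) @ W2

def fitness(net):
    return -np.mean((predict(net, X) - Y) ** 2)    # higher is better

def new_net(hidden):
    return [rng.normal(scale=0.5, size=(1, hidden)),
            rng.normal(scale=0.5, size=(hidden, 1))]

def mutate(net):
    W1, W2 = [w.copy() for w in net]
    r = rng.random()
    if r < 0.6:                                    # "partial training" proxy: perturb weights
        W1 += rng.normal(scale=0.05, size=W1.shape)
        W2 += rng.normal(scale=0.05, size=W2.shape)
    elif r < 0.8 and W1.shape[1] > 1:              # delete a hidden node (parsimony pressure)
        k = rng.integers(W1.shape[1])
        W1, W2 = np.delete(W1, k, axis=1), np.delete(W2, k, axis=0)
    else:                                          # add a hidden node
        W1 = np.hstack([W1, rng.normal(scale=0.5, size=(1, 1))])
        W2 = np.vstack([W2, rng.normal(scale=0.5, size=(1, 1))])
    return [W1, W2]

pop = [new_net(rng.integers(2, 6)) for _ in range(20)]
for gen in range(200):
    pop += [mutate(p) for p in pop]                # mutation-only variation (no crossover)
    pop = sorted(pop, key=fitness, reverse=True)[:20]
print("best MSE:", -fitness(pop[0]), "hidden nodes:", pop[0][0].shape[1])
```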

Book
18 Dec 1997
TL;DR: Pattern association memory; autoassociation memory; competitive networks, including self-organizing maps; error-correcting networks - perceptrons, backpropagation of error in multilayer networks, and reinforcement learning algorithms; hippocampus and memory; pattern association in the brain - amygdala and orbitofrontal cortex; cortical networks for invariant pattern recognition; motor systems - cerebellum and basal ganglia; cerebral neocortex.
Abstract: Pattern association memory; autoassociation memory; competitive networks, including self-organizing maps; error-correcting networks - perceptrons, backpropagation of error in multilayer networks, and reinforcement learning algorithms; hippocampus and memory; pattern association in the brain - amygdala and orbitofrontal cortex; cortical networks for invariant pattern recognition; motor systems - cerebellum and basal ganglia; cerebral neocortex. Appendix 1: introduction to linear algebra for neural networks. Appendix 2: Information theory. Appendix 3: Pattern associators. Appendix 4: Autoassociators. Appendix 5: Recurrent dynamics.

Book
01 Jan 1997
TL;DR: The authors' informed analysis of practical neuro-fuzzy applications will be an asset to industrial practitioners using fuzzy technology and neural networks for control systems, data analysis and optimization tasks.
Abstract: From the Publisher: Foundations of Neuro-Fuzzy Systems reflects the current trend in intelligent systems research towards the integration of neural networks and fuzzy technology. The authors demonstrate how a combination of both techniques enhances the performance of control, decision-making and data analysis systems. Smarter and more applicable structures result from marrying the learning capability of the neural network with the transparency and interpretability of the rule-based fuzzy system. Foundations of Neuro-Fuzzy Systems highlights the advantages of integration making it a valuable resource for graduate students and researchers in control engineering, computer science and applied mathematics. The authors' informed analysis of practical neuro-fuzzy applications will be an asset to industrial practitioners using fuzzy technology and neural networks for control systems, data analysis and optimization tasks.

Journal ArticleDOI
TL;DR: Algorithms for wavelet network construction are proposed for the purpose of nonparametric regression estimation; particular attention is paid to sparse training data so that problems of large dimension can be better handled.
Abstract: Wavelet networks are a class of neural networks consisting of wavelets. In this paper, algorithms for wavelet network construction are proposed for the purpose of nonparametric regression estimation. Particular attention is paid to sparse training data so that problems of large dimension can be better handled. A numerical example on nonlinear system identification is presented for illustration.
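A minimal sketch of a wavelet network for 1-D regression: a fixed dictionary of dilated and translated Mexican-hat wavelets with output weights fitted by least squares. The paper's construction and selection algorithms for sparse, high-dimensional data are not reproduced; the grid and the toy target are assumptions.

```python
import numpy as np

def mexican_hat(t):
    """Second-derivative-of-Gaussian ("Mexican hat") mother wavelet."""
    return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

def wavelet_design(x, scales=(1.0, 0.5, 0.25)):
    """Design matrix of dilated/translated wavelets on [0, 1]; the grid is illustrative."""
    cols = [np.ones_like(x)]                      # constant term
    for s in scales:
        for c in np.arange(0.0, 1.0 + 1e-9, s):   # translations spaced by the scale
            cols.append(mexican_hat((x - c) / s))
    return np.column_stack(cols)

# toy regression: noisy samples of a bumpy function
rng = np.random.default_rng(0)
x = np.sort(rng.random(200))
y = np.sin(6 * np.pi * x) * np.exp(-x) + 0.05 * rng.normal(size=x.size)

Phi = wavelet_design(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # output weights by least squares
y_hat = Phi @ w
print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```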

Journal ArticleDOI
TL;DR: A self-organized neural network performing two tasks: vector quantization of the submanifold in the data set (input space) and nonlinear projection of these quantizing vectors toward an output space, providing a revealing unfolding of theSub manifold.
Abstract: We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.
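A hedged sketch of the projection step only (the vector-quantization step is skipped): output points are adapted so that small output-space distances come to match the corresponding input-space distances, using the stochastic freeze-one-point update commonly associated with CCA. The schedules and the toy curve are assumptions, not the authors' implementation.

```python
import numpy as np

def cca_project(X, out_dim=2, epochs=30, lr0=0.5, lam0=2.0, seed=0):
    """Curvilinear-component-style projection (simplified sketch): one point is frozen
    per step and every other output point moves so its distance to the frozen point
    approaches the input-space distance, weighted by a shrinking neighborhood."""
    rng = np.random.default_rng(seed)
    n = len(X)
    Y = rng.normal(scale=0.1, size=(n, out_dim))
    DX = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))       # input-space distances
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)
        lam = lam0 * (1.0 - e / epochs) + 1e-2                # neighborhood radius
        for i in rng.permutation(n):
            diff = Y - Y[i]
            dy = np.sqrt((diff ** 2).sum(-1)) + 1e-9          # output-space distances to point i
            F = (dy <= lam).astype(float)                     # only act on close output pairs
            step = lr * F * (DX[i] - dy) / dy
            step[i] = 0.0
            Y += step[:, None] * diff                         # move the others, keep Y[i] fixed
    return Y

# toy usage: unfold a noisy 3-D curve into 2-D
t = np.linspace(0, 3 * np.pi, 200)
X = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
Y = cca_project(X)
print(Y.shape)
```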

Journal ArticleDOI
TL;DR: A local linear approach to dimension reduction that provides accurate representations and is fast to compute is developed and it is shown that the local linear techniques outperform neural network implementations.
Abstract: Reducing or eliminating statistical redundancy between the components of high-dimensional vector data enables a lower-dimensional representation without significant loss of information. Recognizing the limitations of principal component analysis (PCA), researchers in the statistics and neural network communities have developed nonlinear extensions of PCA. This article develops a local linear approach to dimension reduction that provides accurate representations and is fast to compute. We exercise the algorithms on speech and image data, and compare performance with PCA and with neural network implementations of nonlinear PCA. We find that both nonlinear techniques can provide more accurate representations than PCA and show that the local linear techniques outperform neural network implementations.
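A minimal sketch of the local linear idea: partition the data by vector quantization (plain k-means), run PCA within each cell, and reconstruct each point from its cell's leading components. The paper's specific algorithms and error criteria are not reproduced; cluster and component counts are assumptions.

```python
import numpy as np

def local_pca_reconstruct(X, n_clusters=5, n_components=1, iters=50, seed=0):
    """VQ + local PCA: cluster with k-means, project each cluster onto its own
    leading principal components, and reconstruct (a simplified sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)].copy()
    for _ in range(iters):                                     # plain k-means
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(n_clusters):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    X_hat = np.empty_like(X)
    for j in range(n_clusters):                                # PCA per cluster
        idx = labels == j
        Z = X[idx] - centers[j]
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        P = Vt[:n_components]                                  # local principal directions
        X_hat[idx] = centers[j] + (Z @ P.T) @ P                # project and reconstruct
    return X_hat

# toy usage: reconstruct points on a noisy circle from 1-D local coordinates
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.02 * np.random.default_rng(1).normal(size=(400, 2))
err = np.mean(np.sum((X - local_pca_reconstruct(X)) ** 2, axis=1))
print("mean reconstruction error:", err)
```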

Book
01 Jan 1997
TL;DR: This new text has been designed to present the concepts of artificial neural networks in a concise and logical manner for computer engineering students.
Abstract: From the Publisher: This new text has been designed to present the concepts of artificial neural networks in a concise and logical manner for your computer engineering students.

Journal ArticleDOI
TL;DR: The paper demonstrates a successful application of PDBNN to face recognition on two public (FERET and ORL) and one in-house (SCR) databases; experimental results on the three databases, including recognition accuracies as well as false rejection and false acceptance rates, are elaborated.
Abstract: This paper proposes a face recognition system based on probabilistic decision-based neural networks (PDBNN). With technological advances in microelectronics and vision systems, high-performance automatic techniques for biometric recognition are now becoming economically feasible. Among all the biometric identification methods, face recognition has attracted much attention in recent years because it has the potential to be the most nonintrusive and user-friendly. The PDBNN face recognition system consists of three modules: first, a face detector finds the location of a human face in an image; then an eye localizer determines the positions of both eyes in order to generate meaningful feature vectors. The facial region proposed contains eyebrows, eyes, and nose, but excludes the mouth (eyeglasses are allowed). Lastly, the third module is a face recognizer. The PDBNN can be effectively applied to all three modules. It adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. The paper demonstrates a successful application of PDBNN to face recognition on two public (FERET and ORL) and one in-house (SCR) databases. Regarding performance, experimental results on the three databases, including recognition accuracies as well as false rejection and false acceptance rates, are elaborated. As to processing speed, the whole recognition process (including PDBNN processing for eye localization, feature extraction, and classification) takes approximately one second on a Sparc 10, without using a hardware accelerator or co-processor.

Journal ArticleDOI
TL;DR: It is shown how data normalization affects the performance error of parameter estimators trained to predict the value of several variables of a PWR nuclear power plant.
Abstract: Recent advances in artificial intelligence have allowed the application of such technologies to real industrial problems. We have studied the application of backpropagation neural networks to several problems of estimation and identification in nuclear power plants. These problems have often been reported to be very time-consuming in the training phase. Among the different approaches suggested to ease the backpropagation training process, input data pretreatment has been pointed out, although no specific procedure has been proposed. We have found that input data normalization with certain criteria, prior to the training process, is crucial to obtain good results as well as to speed up the calculations significantly. This paper shows how data normalization affects the performance error of parameter estimators trained to predict the value of several variables of a PWR nuclear power plant. The criteria needed to accomplish such data normalization are also described.
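The paper's exact normalization criteria are not reproduced here; the sketch shows the two usual flavors of input normalization applied before backpropagation training (z-score, and min-max scaled into the active range of a sigmoid). The "plant-like" variables and their ranges are purely illustrative.

```python
import numpy as np

def zscore_normalize(X, eps=1e-12):
    """Normalize each input column to zero mean and unit variance."""
    mean, std = X.mean(axis=0), X.std(axis=0)
    return (X - mean) / (std + eps), (mean, std)

def minmax_normalize(X, lo=0.1, hi=0.9, eps=1e-12):
    """Map each input column into [lo, hi], away from the flat ends of a sigmoid."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    return lo + (hi - lo) * (X - xmin) / (xmax - xmin + eps), (xmin, xmax)

# toy usage: three plant-like variables with wildly different ranges (illustrative only)
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(280, 330, 100),      # a temperature-like signal
                     rng.uniform(0, 1, 100),          # an already-normalized flow
                     rng.uniform(1e5, 2e7, 100)])     # a pressure-like signal
Xz, stats = zscore_normalize(X)
Xm, ranges = minmax_normalize(X)
print(Xz.mean(axis=0).round(3), Xm.min(axis=0).round(3), Xm.max(axis=0).round(3))
```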

Journal ArticleDOI
TL;DR: It is shown that neural networks can, in fact, represent and classify structured patterns and all the supervised networks developed for the classification of sequences can, on the whole, be generalized to structures.
Abstract: Standard neural networks and statistical methods are usually believed to be inadequate when dealing with complex structures because of their feature-based approach. In fact, feature-based approaches usually fail to give satisfactory solutions because of the sensitivity of the approach to the a priori selection of the features, and the incapacity to represent any specific information on the relationships among the components of the structures. However, we show that neural networks can, in fact, represent and classify structured patterns. The key idea underpinning our approach is the use of the so called "generalized recursive neuron", which is essentially a generalization to structures of a recurrent neuron. By using generalized recursive neurons, all the supervised networks developed for the classification of sequences, such as backpropagation through time networks, real-time recurrent networks, simple recurrent networks, recurrent cascade correlation networks, and neural trees can, on the whole, be generalized to structures. The results obtained by some of the above networks (with generalized recursive neurons) on the classification of logic terms are presented.
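A minimal forward-pass sketch of a generalized recursive neuron on a tree-structured term: each node's state is computed from its label and its children's states, and the root state feeds a classifier. Training by backpropagation through structure is omitted; the Node class, alphabet, and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LABELS = {"f": 0, "g": 1, "a": 2, "b": 3}          # toy term alphabet
STATE, MAX_CHILDREN = 6, 2

Wx = rng.normal(scale=0.3, size=(STATE, len(LABELS)))           # label -> state
Wc = rng.normal(scale=0.3, size=(MAX_CHILDREN, STATE, STATE))   # one matrix per child slot
v = rng.normal(scale=0.3, size=STATE)                           # root-state classifier

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def encode(node):
    """Recursive neuron: state = tanh(Wx * onehot(label) + sum_k Wc[k] * state(child_k))."""
    x = np.zeros(len(LABELS))
    x[LABELS[node.label]] = 1.0
    s = Wx @ x
    for k, child in enumerate(node.children):
        s = s + Wc[k] @ encode(child)
    return np.tanh(s)

def classify(term):
    return 1.0 / (1.0 + np.exp(-v @ encode(term)))   # score for the whole structure

# toy logic-term-like structures: f(a, g(b)) and g(b, a)
t1 = Node("f", [Node("a"), Node("g", [Node("b")])])
t2 = Node("g", [Node("b"), Node("a")])
print(classify(t1), classify(t2))
```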

Book
01 Jan 1997
TL;DR: Detailed case studies for each of the major neural network approaches and architectures with the theories are presented, accompanied with complete computer codes and the corresponding computed results.
Abstract: The book should serve as a text for a university graduate course or for an advanced undergraduate course on neural networks in engineering and computer science departments. It should also serve as a self-study course for engineers and computer scientists in the industry. Covering major neural network approaches and architectures with the theories, this text presents detailed case studies for each of the approaches, accompanied with complete computer codes and the corresponding computed results. The case studies are designed to allow easy comparison of network performance to illustrate strengths and weaknesses of the different networks.

Proceedings ArticleDOI
24 Sep 1997
TL;DR: The SVM is implemented and tested on a database of chaotic time series previously used to compare the performance of different approximation techniques, including polynomial and rational approximation, local polynomial techniques, radial basis functions, and neural networks; the SVM performs better than the other approaches.
Abstract: A novel method for regression has been recently proposed by Vapnik et al. (1995, 1996). The technique, called support vector machine (SVM), is very well founded from the mathematical point of view and seems to provide new insight into function approximation. We implemented the SVM and tested it on a database of chaotic time series previously used to compare the performance of different approximation techniques, including polynomial and rational approximation, local polynomial techniques, radial basis functions, and neural networks. The SVM performs better than the other approaches. We also study, for a particular time series, the variability in performance with respect to the few free parameters of SVM.
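A minimal sketch of the setup: embed a chaotic series into (past window, next value) pairs and fit support vector regression with an RBF kernel, with scikit-learn's SVR standing in for the authors' implementation. The logistic-map series and all hyperparameters are assumptions, not the benchmark database used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

# a simple chaotic series (logistic map) as a stand-in for the benchmark data
x = np.empty(1200)
x[0] = 0.3
for t in range(1199):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# embed: predict x[t] from the previous `order` values
order = 5
X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
y = x[order:]
X_train, y_train, X_test, y_test = X[:800], y[:800], X[800:], y[800:]

svr = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=1.0).fit(X_train, y_train)
pred = svr.predict(X_test)
print("test RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))
```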

Journal ArticleDOI
TL;DR: This work presents a new Volterra-based predistorter, which utilizes the indirect learning architecture to circumvent a classical problem associated with predistorters, namely that the desired output is not known in advance.
Abstract: Nonlinear compensation techniques are becoming increasingly important. We present a new Volterra-based predistorter, which utilizes the indirect learning architecture to circumvent a classical problem associated with predistorters, namely that the desired output is not known in advance. We utilize the indirect learning architecture and the recursive least square (RLS) algorithm. Specifically, we propose an indirect Volterra series model predistorter which is independent of a specific nonlinear model for the system to be compensated. Both 16-phase shift keying (PSK) and 16-quadrature amplitude modulation (QAM) are used to demonstrate the efficacy of the new approach.
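A hedged sketch of the indirect learning idea with a recursive least squares (RLS) update, using a simple memoryless odd-order polynomial in place of a full Volterra series and real-valued signals in place of 16-PSK/16-QAM baseband: a postdistorter is identified on (amplifier output, amplifier input) pairs, so the desired predistorter output never needs to be known, and is then copied in front of the amplifier. The amplifier model and basis are assumptions.

```python
import numpy as np

def pa(x):
    """Toy memoryless nonlinear amplifier (illustrative, not the paper's model)."""
    return x + 0.2 * x ** 3 - 0.05 * x ** 5

def features(y):
    """Odd-order polynomial basis of the amplifier OUTPUT (a memoryless cut-down of a Volterra series)."""
    return np.array([y, y ** 3, y ** 5])

# Indirect learning: identify a POSTdistorter mapping PA output back to PA input, via RLS.
rng = np.random.default_rng(0)
w = np.zeros(3)
P = 1e3 * np.eye(3)
lam = 0.99                                            # forgetting factor
for _ in range(5000):
    x = rng.uniform(-1.0, 1.0)                        # stand-in for modulated symbols (real-valued here)
    y = pa(x)
    phi = features(y)
    k = P @ phi / (lam + phi @ P @ phi)               # RLS gain
    e = x - w @ phi                                   # postdistorter prediction error
    w = w + k * e
    P = (P - np.outer(k, phi) @ P) / lam

# copy the postdistorter in front of the PA as the predistorter
x_test = rng.uniform(-1.0, 1.0, 200)
y_lin = pa(np.array([w @ features(xi) for xi in x_test]))   # predistort, then amplify
print("residual distortion (MSE):", np.mean((y_lin - x_test) ** 2))
```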

Journal ArticleDOI
TL;DR: The algorithm combines the growth criterion of the resource-allocating network of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output to lead toward a minimal topology for the RBFNN.
Abstract: This article presents a sequential learning algorithm for function approximation and time-series prediction using a minimal radial basis function neural network (RBFNN). The algorithm combines the growth criterion of the resource-allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RBFNN. The performance of the algorithm is compared with RAN and the enhanced RAN algorithm of Kadirkamanathan and Niranjan (1993) for the following benchmark problems: (1) hearta from the benchmark problems database PROBEN1, (2) the Hermite polynomial, and (3) the Mackey-Glass chaotic time series. For these problems, the proposed algorithm is shown to realize RBFNNs with far fewer hidden neurons and better or the same accuracy.
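A compact, hedged sketch of the two mechanisms named above: a resource-allocating growth criterion (add a Gaussian unit when the input is far from existing centers and the error is large) and pruning of units whose relative contribution to the output stays small for a while. The thresholds, the LMS weight update, and the toy target are assumptions, not the paper's exact algorithm.

```python
import numpy as np

class MinimalRBF:
    """Sequential RBF learner with RAN-style growth and contribution-based pruning (a sketch)."""
    def __init__(self, eps=0.3, emin=0.05, delta=0.01, prune_window=50, lr=0.05):
        self.centers, self.widths, self.weights, self.low_count = [], [], [], []
        self.eps, self.emin, self.delta, self.M, self.lr = eps, emin, delta, prune_window, lr

    def _phi(self, x):
        return np.array([np.exp(-((x - c) ** 2) / (2 * s ** 2))
                         for c, s in zip(self.centers, self.widths)])

    def predict(self, x):
        return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

    def observe(self, x, y):
        err = y - self.predict(x)
        dist = min((abs(x - c) for c in self.centers), default=np.inf)
        if dist > self.eps and abs(err) > self.emin:           # growth criterion (RAN-style)
            width = self.eps if not np.isfinite(dist) else 0.5 * dist
            self.centers.append(x); self.widths.append(width)
            self.weights.append(err); self.low_count.append(0)
            return
        if not self.centers:
            return
        phi = self._phi(x)
        for k in range(len(self.weights)):                     # LMS update of the output weights
            self.weights[k] += self.lr * err * phi[k]
        contrib = np.abs(np.array(self.weights) * phi)
        rel = contrib / (contrib.max() + 1e-12)                # relative contribution per hidden unit
        for k in range(len(self.weights)):
            self.low_count[k] = self.low_count[k] + 1 if rel[k] < self.delta else 0
        for k in reversed(range(len(self.weights))):           # prune persistently negligible units
            if self.low_count[k] >= self.M:
                for lst in (self.centers, self.widths, self.weights, self.low_count):
                    del lst[k]

# toy online usage: learn a Hermite-like bump function (illustrative, not the paper's benchmark)
rng = np.random.default_rng(0)
net = MinimalRBF()
for _ in range(3000):
    x = rng.uniform(-4, 4)
    net.observe(x, (1 + x + 2 * x ** 2) * np.exp(-x ** 2))
print("hidden units:", len(net.centers))
```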

Journal ArticleDOI
Sherif Hashem1
TL;DR: This paper extends the idea of optimal linear combinations (OLCs) of neural networks and presents two algorithms for selecting the component networks for the combination to improve the generalization ability of OLCs, and demonstrates significant improvements in model accuracy.
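Only the TL;DR is shown above, so the sketch below is limited to the basic optimal-linear-combination step: given component-network outputs on held-out data, solve least squares for the combination weights. The component-selection algorithms the paper proposes are not reproduced; the toy "networks" are assumptions.

```python
import numpy as np

def optimal_linear_combination(component_preds, target):
    """MSE-optimal combination weights for component-network outputs.
    component_preds: (n_samples, n_networks) matrix of predictions on held-out data."""
    w, *_ = np.linalg.lstsq(component_preds, target, rcond=None)
    return w

# toy usage: three imperfect "networks" approximating sin(x)
rng = np.random.default_rng(0)
x = np.linspace(0, np.pi, 100)
target = np.sin(x)
preds = np.column_stack([np.sin(x) + 0.1 * rng.normal(size=x.size),
                         0.8 * np.sin(x) + 0.05,
                         np.sin(x) * (1 + 0.05 * x)])
w = optimal_linear_combination(preds, target)
combined = preds @ w
print("weights:", w.round(3), "combined MSE:", np.mean((combined - target) ** 2).round(5))
```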

Proceedings Article
01 Dec 1997
TL;DR: A backpropagation neural network called NNID (Neural Network Intrusion Detector) was trained on the identification task and tested experimentally on a system of 10 users, suggesting that learning user profiles is an effective way of detecting intrusions.
Abstract: With the rapid expansion of computer networks during the past few years, security has become a crucial issue for modern computer systems. A good way to detect illegitimate use is through monitoring unusual user activity. Methods of intrusion detection based on hand-coded rule sets or on-line command prediction are laborious to build or not very reliable. This paper proposes a new way of applying neural networks to detect intrusions. We believe that a user leaves a 'print' when using the system; a neural network can be used to learn this print and identify each user much like detectives use thumbprints to place people at crime scenes. If a user's behavior does not match his/her print, the system administrator can be alerted to a possible security breach. A backpropagation neural network called NNID (Neural Network Intrusion Detector) was trained on the identification task and tested experimentally on a system of 10 users. The system was 96% accurate in detecting unusual activity, with a 7% false alarm rate. These results suggest that learning user profiles is an effective way of detecting intrusions.
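A minimal sketch of the setup described above: each user is represented by a command-usage frequency vector, a backpropagation network (scikit-learn's MLPClassifier standing in for NNID) is trained to identify the user, and a session is flagged when the network does not attribute it to the claimed user. The synthetic profiles and the threshold are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_users, n_commands = 10, 100

# synthetic "prints": each user has a characteristic command-frequency profile
profiles = rng.dirichlet(np.full(n_commands, 0.3), size=n_users)

def sample_session(user, n_cmds=200):
    counts = rng.multinomial(n_cmds, profiles[user])
    return counts / n_cmds                       # command-frequency vector for one session

X = np.array([sample_session(u) for u in range(n_users) for _ in range(30)])
y = np.repeat(np.arange(n_users), 30)

clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0).fit(X, y)

def is_intrusion(session_vec, claimed_user, threshold=0.5):
    """Flag the session if the network does not attribute it to the claimed user."""
    p = clf.predict_proba([session_vec])[0]
    return p[claimed_user] < threshold

print(is_intrusion(sample_session(3), claimed_user=3),
      is_intrusion(sample_session(7), claimed_user=3))
```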

Journal ArticleDOI
TL;DR: This interpretation of neural networks is built with fuzzy rules using a new fuzzy logic operator which is defined after introducing the concept of f-duality and offers an automated knowledge acquisition procedure.
Abstract: Artificial neural networks are efficient computing models which have shown their strengths in solving hard problems in artificial intelligence. They have also been shown to be universal approximators. However, one of the major criticisms is that they are black boxes, since no satisfactory explanation of their behavior has been offered. In this paper, we provide such an interpretation of neural networks so that they will no longer be seen as black boxes. This is achieved by establishing the equality between a certain class of neural nets and fuzzy rule-based systems. The interpretation is built with fuzzy rules using a new fuzzy logic operator which is defined after introducing the concept of f-duality. In addition, this interpretation offers an automated knowledge acquisition procedure.

Journal ArticleDOI
TL;DR: In this paper, a neural network technique is applied to rainfall-runoff modelling. The results suggest that the neural network shows considerable promise in this context but, like all such models, has variable results.

Journal ArticleDOI
TL;DR: A case is made in this paper that such approximate input-output models warrant a detailed study in their own right in view of their mathematical tractability as well as their success in simulation studies.
Abstract: The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
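The defining property of the approximate models is linearity in the control input, which makes the control computation closed-form; the sketch below shows that computation with hypothetical trained approximators f_hat and g_hat standing in for the two neural networks.

```python
def approximate_narma_control(f_hat, g_hat, past_y, past_u, reference, eps=1e-6):
    """For a model of the form  y(k+d) = f(past_y, past_u) + g(past_y, past_u) * u(k),
    the control driving the output to the reference follows in closed form.
    f_hat and g_hat are hypothetical trained neural approximators (assumptions)."""
    f = f_hat(past_y, past_u)
    g = g_hat(past_y, past_u)
    denom = g if abs(g) > eps else eps        # guard against division by a near-zero gain
    return (reference - f) / denom

# illustrative usage with toy stand-ins for the trained networks
f_hat = lambda ys, us: 0.6 * ys[-1] + 0.2 * ys[-2] + 0.1 * us[-1]
g_hat = lambda ys, us: 1.0 + 0.1 * ys[-1] ** 2
u = approximate_narma_control(f_hat, g_hat, past_y=[0.2, 0.5], past_u=[0.1], reference=1.0)
print("control input:", round(u, 4))
```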

Journal ArticleDOI
01 Apr 1997
TL;DR: It is constructively proved that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines, raising the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power.
Abstract: Recently, fully connected recurrent neural networks have been proven to be computationally rich - at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Ψ(u(t-n_u), ..., u(t-1), u(t), y(t-n_y), ..., y(t-1)), where u(t) and y(t) represent the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Ψ is the mapping performed by a Multilayer Perceptron. We constructively prove that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use the NARX models, rather than conventional recurrent networks, without any computational loss even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power.
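A small sketch of the wiring the formula describes: tapped delay lines of past inputs and past outputs feed a multilayer perceptron, and only the output is fed back. The MLP weights are random placeholders, so this shows structure only, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_u, n_y, hidden = 3, 3, 8                     # input order, output order, hidden units

# random placeholder MLP for the mapping psi (untrained; for structure only)
W1 = rng.normal(scale=0.3, size=(hidden, n_u + 1 + n_y))
W2 = rng.normal(scale=0.3, size=(1, hidden))

def psi(z):
    return float(W2 @ np.tanh(W1 @ z))

# NARX recursion: y(t) = psi(u(t-n_u), ..., u(t-1), u(t), y(t-n_y), ..., y(t-1))
u = rng.normal(size=50)                        # an arbitrary input sequence
y = np.zeros(50)
for t in range(max(n_u, n_y), 50):
    z = np.concatenate([u[t - n_u:t + 1], y[t - n_y:t]])   # limited feedback: past outputs only
    y[t] = psi(z)
print(y[-5:].round(3))
```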