Author

Magdy M. A. Salama

Bio: Magdy M. A. Salama is an academic researcher from the University of Waterloo. The author has contributed to research in topics including distributed generation and AC power. The author has an h-index of 67, has co-authored 517 publications, and has received 20,313 citations. Previous affiliations of Magdy M. A. Salama include the University of Toronto and the National University of Singapore.


Papers
Journal ArticleDOI
TL;DR: This paper presents a novel algorithm, opposition-based differential evolution (ODE), that accelerates differential evolution (DE) by employing opposition-based learning (OBL) for population initialization and for generation jumping; results confirm that ODE outperforms the original DE and FADE in terms of convergence speed and solution accuracy.
Abstract: Evolutionary algorithms (EAs) are well-known optimization approaches to deal with nonlinear and complex problems. However, these population-based algorithms are computationally expensive due to the slow nature of the evolutionary process. This paper presents a novel algorithm to accelerate the differential evolution (DE). The proposed opposition-based DE (ODE) employs opposition-based learning (OBL) for population initialization and also for generation jumping. In this work, opposite numbers have been utilized to improve the convergence rate of DE. A comprehensive set of 58 complex benchmark functions including a wide range of dimensions is employed for experimental verification. The influence of dimensionality, population size, jumping rate, and various mutation strategies are also investigated. Additionally, the contribution of opposite numbers is empirically verified. We also provide a comparison of ODE to fuzzy adaptive DE (FADE). Experimental results confirm that the ODE outperforms the original DE and FADE in terms of convergence speed and solution accuracy.
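The two opposition-based steps described above can be sketched as follows. This is a minimal illustration in my own notation; the function names, the 0.3 jumping rate, and the NumPy-based setup are assumptions for illustration, not the paper's code, and a minimization objective mapping a vector to a scalar is assumed.

import numpy as np

def opposite(pop, lo, hi):
    # Opposition-based learning: the opposite of x in [lo, hi] is lo + hi - x.
    return lo + hi - pop

def obl_initialize(objective, n_pop, lo, hi, rng):
    # Opposition-based population initialization: evaluate a random population and
    # its opposite population, then keep the n_pop fittest points from the union.
    pop = rng.uniform(lo, hi, size=(n_pop, lo.size))
    union = np.vstack([pop, opposite(pop, lo, hi)])
    fitness = np.apply_along_axis(objective, 1, union)
    return union[np.argsort(fitness)[:n_pop]]

def generation_jump(objective, pop, rng, jump_rate=0.3):
    # Opposition-based generation jumping: with probability jump_rate, reflect the
    # current population inside its own dynamic bounds and keep the fittest half.
    if rng.random() >= jump_rate:
        return pop
    lo, hi = pop.min(axis=0), pop.max(axis=0)
    union = np.vstack([pop, opposite(pop, lo, hi)])
    fitness = np.apply_along_axis(objective, 1, union)
    return union[np.argsort(fitness)[:pop.shape[0]]]

These two steps are slotted around an otherwise standard DE loop of mutation, crossover, and greedy selection.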

1,419 citations

Journal ArticleDOI
TL;DR: In this article, a methodology has been proposed for optimally allocating different types of renewable distributed generation (DG) units in the distribution system so as to minimize annual energy loss.
Abstract: It is widely accepted that renewable energy sources are the key to a sustainable energy supply infrastructure since they are both inexhaustible and nonpolluting. A number of renewable energy technologies are now commercially available, the most notable being wind power, photovoltaic, solar thermal systems, biomass, and various forms of hydraulic power. In this paper, a methodology has been proposed for optimally allocating different types of renewable distributed generation (DG) units in the distribution system so as to minimize annual energy loss. The methodology is based on generating a probabilistic generation-load model that combines all possible operating conditions of the renewable DG units with their probabilities, hence accommodating this model in a deterministic planning problem. The planning problem is formulated as mixed integer nonlinear programming (MINLP), with an objective function for minimizing the system's annual energy losses. The constraints include the voltage limits, the feeders' capacity, the maximum penetration limit, and the discrete size of the available DG units. This proposed technique has been applied to a typical rural distribution system with different scenarios, including all possible combinations of the renewable DG units. The results show that a significant reduction in annual energy losses is achieved for all the proposed scenarios.
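The structure of the planning problem described above can be summarized as follows. The notation is mine, a sketch consistent with the abstract rather than the paper's exact formulation: \pi_s and T_s are the probability and duration of combined generation-load state s, R_{ij} and I_{ij,s} are the resistance and current of feeder section (i,j), V_{i,s} is the bus voltage, and n_i is the integer number of available-size DG units placed at bus i.

\min_{n_i} \; \sum_{s} \pi_s \, T_s \sum_{(i,j)} R_{ij} \, |I_{ij,s}|^{2}

\text{subject to} \quad V_{\min} \le V_{i,s} \le V_{\max}, \qquad |I_{ij,s}| \le I_{ij}^{\max}, \qquad \sum_{i} n_i \, P^{\mathrm{unit}} \le P^{\mathrm{pen}}_{\max}, \qquad n_i \in \{0, 1, 2, \dots\}

The integer variables n_i together with the nonlinear power-flow relations behind V_{i,s} and I_{ij,s} are what make the problem an MINLP.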

1,243 citations

Journal ArticleDOI
TL;DR: In this article, the authors survey distributed generation (DG), a revolutionary approach that will change the way electric power systems operate, together with DG types and operating technologies, and review the operational and economic benefits of implementing DG in the distribution network.

966 citations

Journal ArticleDOI
TL;DR: In this paper, a multiresolution signal decomposition technique is used to detect and localize transient events, to classify different power quality disturbances, and to distinguish among similar disturbances.
Abstract: The wavelet transform is introduced as a powerful tool for monitoring power quality problems generated due to the dynamic performance of industrial plants. The paper presents a multiresolution signal decomposition technique as an efficient method in analyzing transient events. The multiresolution signal decomposition has the ability to detect and localize transient events and furthermore classify different power quality disturbances. It can also be used to distinguish among similar disturbances.
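A minimal sketch of the multiresolution decomposition step, written with PyWavelets rather than any code from the paper; the db4 wavelet, the five decomposition levels, and the energy-based summary are assumptions for illustration.

import numpy as np
import pywt

def detail_energies(waveform, wavelet="db4", level=5):
    # Multiresolution signal decomposition: split the monitored waveform into one
    # approximation band and several detail coefficient bands at dyadic scales.
    coeffs = pywt.wavedec(waveform, wavelet, level=level)
    # Energy of the detail coefficients at each scale; a transient event appears as
    # a localized burst of energy in the finer-scale details, and the pattern of
    # energy across scales helps separate otherwise similar disturbances.
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]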

603 citations

Journal ArticleDOI
TL;DR: In this article, a new heuristic approach is proposed for distributed generation (DG) capacity investment planning from the perspective of a distribution company (disco); optimal sizing and siting decisions are obtained through a cost-benefit analysis based on a new optimization model.
Abstract: This paper proposes a new heuristic approach for distributed generation (DG) capacity investment planning from the perspective of a distribution company (disco). Optimal sizing and siting decisions for DG capacity are obtained through a cost-benefit analysis approach based on a new optimization model. The model aims to minimize the disco's investment and operating costs as well as payment toward loss compensation. Bus-wise cost-benefit analysis is carried out on an hourly basis for different forecasted peak demand and market price scenarios. This approach arrives at the optimal feasible DG capacity investment plan under competitive electricity market auction as well as fixed bilateral contract scenarios. The proposed heuristic method reduces the need for binary variables in the optimization model, thus substantially easing the computational burden.
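As an illustration only, the bus-wise screening described above might look like the ranking loop below; the function arguments and the net-benefit expression are my assumptions, not the paper's model.

def rank_dg_plans(candidate_buses, dg_sizes, loss_savings, invest_cost, oper_cost):
    # Hypothetical bus-wise cost-benefit screening: for each (bus, DG size) option,
    # net benefit = value of avoided loss compensation over the studied demand and
    # price scenarios minus investment and operating costs; rank options by it.
    plans = []
    for bus in candidate_buses:
        for size in dg_sizes:
            benefit = loss_savings(bus, size) - invest_cost(size) - oper_cost(bus, size)
            plans.append((benefit, bus, size))
    return sorted(plans, reverse=True)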

557 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
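As a toy illustration of the mail-filter example above; the library choice, the tiny data set, and the model are mine, not the paper's.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Learn a user's accept/reject decisions from labeled messages instead of
# hand-coding filtering rules.
messages = ["cheap meds online now", "meeting moved to 3 pm",
            "win a free prize today", "lunch tomorrow?"]
labels = ["reject", "accept", "reject", "accept"]

mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
mail_filter.fit(messages, labels)
print(mail_filter.predict(["free meds prize now"]))  # expected: ['reject']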

13,246 citations

Journal ArticleDOI
TL;DR: A detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far are presented.
Abstract: Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.
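A minimal sketch of the characteristic DE perturbation the survey refers to, i.e., the classic DE/rand/1/bin mutation-plus-crossover step; the F and CR defaults here are illustrative, not prescribed values.

import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    # Mutation: perturb a randomly chosen member with the scaled difference of two
    # other distinct members; crossover: mix mutant and target component-wise.
    rng = np.random.default_rng() if rng is None else rng
    n_pop, dim = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(n_pop) if k != i], size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True  # guarantee at least one mutant component survives
    return np.where(cross, mutant, pop[i])

The trial vector returned here then competes with the target vector pop[i] in the usual greedy selection step.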

4,321 citations

Journal ArticleDOI
TL;DR: A novel feature similarity (FSIM) index for full-reference IQA is proposed, based on the fact that the human visual system (HVS) understands an image mainly according to its low-level features.
Abstract: Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural similarity index brings IQA from pixel- to structure-based stage. In this paper, a novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect HVS' perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.
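The combination the abstract describes can be written schematically as follows. This is a sketch of the structure, with the weighting exponents on the two similarity terms dropped and T_1, T_2 denoting small stabilizing constants, not a verbatim reproduction of the paper's definition.

S_{PC}(x) = \frac{2\,PC_1(x)\,PC_2(x) + T_1}{PC_1^2(x) + PC_2^2(x) + T_1}, \qquad S_{G}(x) = \frac{2\,G_1(x)\,G_2(x) + T_2}{G_1^2(x) + G_2^2(x) + T_2}

\mathrm{FSIM} = \frac{\sum_{x} S_{PC}(x)\, S_{G}(x)\, PC_m(x)}{\sum_{x} PC_m(x)}, \qquad PC_m(x) = \max\bigl(PC_1(x), PC_2(x)\bigr)

Here PC_1, PC_2 and G_1, G_2 are the phase congruency and gradient magnitude maps of the reference and distorted images, and PC_m serves as the pooling weight.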

4,028 citations

Journal ArticleDOI
TL;DR: This work derives a blind IQA model that makes use only of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images.
Abstract: An important aim of research on the blind image quality assessment (IQA) problem is to devise perceptual models that can predict the quality of distorted images with as little prior knowledge of the images or their distortions as possible. Current state-of-the-art “general purpose” no-reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores. However, we have recently derived a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images. Thus, it is “completely blind.” The new IQA model, which we call the Natural Image Quality Evaluator (NIQE), is based on the construction of a “quality aware” collection of statistical features based on a simple and successful space domain natural scene statistic (NSS) model. These features are derived from a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to top-performing NR IQA models that require training on large databases of human opinions of distorted images. A software release is available at http://live.ece.utexas.edu/research/quality/niqe_release.zip.
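The "space domain NSS model" referred to above is built on locally mean-subtracted, contrast-normalized (MSCN) luminance coefficients; a rough sketch of that normalization is below. The Gaussian window scale and the stabilizing constant are assumptions, and NIQE's actual features and quality distance are defined in the paper and the linked release.

import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, window_sigma=1.17, c=1.0):
    # Local mean and local standard deviation estimated with a Gaussian window,
    # then used to normalize the luminance: (I - mu) / (sigma + c).
    image = image.astype(np.float64)
    mu = gaussian_filter(image, window_sigma)
    var = gaussian_filter(image ** 2, window_sigma) - mu ** 2
    sigma = np.sqrt(np.clip(var, 0.0, None))
    return (image - mu) / (sigma + c)

NIQE then scores a test image by how far the statistics of such features deviate from a model fitted to features computed from pristine natural images.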

3,722 citations