
Showing papers in "Journal of Applied Mathematics in 2014"


Journal ArticleDOI
TL;DR: This paper examines the forecasting performance of ARIMA and artificial neural network models on published stock data obtained from the New York Stock Exchange, and the results reveal the superiority of the neural network model over the ARIMA model.
Abstract: This paper examines the forecasting performance of ARIMA and artificial neural network models using published stock data obtained from the New York Stock Exchange. The empirical results reveal the superiority of the neural network model over the ARIMA model. The findings further resolve and clarify contradictory opinions reported in the literature concerning the relative superiority of neural network and ARIMA models.
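
A minimal sketch of the kind of comparison the paper describes, assuming a univariate closing-price series; the ARIMA order, window size, and train/test split below are illustrative choices, not those of the paper.

```python
# Sketch: compare one-step/horizon forecasts from ARIMA and a feedforward
# neural network on a univariate price series (illustrative settings).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def lagged_matrix(series, window):
    """Build (X, y) pairs where X holds the previous `window` observations."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.asarray(series[window:])
    return X, y

def compare_models(prices, window=5, test_size=50):
    prices = np.asarray(prices, dtype=float)
    train, test = prices[:-test_size], prices[-test_size:]

    # ARIMA: fit on the training segment, forecast the test horizon.
    arima = ARIMA(train, order=(2, 1, 2)).fit()
    arima_pred = arima.forecast(steps=test_size)

    # Neural network: regress the next value on a sliding window of lags.
    X_train, y_train = lagged_matrix(train, window)
    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    ann.fit(X_train, y_train)
    X_test, y_test = lagged_matrix(prices[-(test_size + window):], window)
    ann_pred = ann.predict(X_test)

    return (mean_squared_error(test, arima_pred),
            mean_squared_error(y_test, ann_pred))
```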

381 citations


Journal ArticleDOI
TL;DR: This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.
Abstract: Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

157 citations


Journal ArticleDOI
TL;DR: This paper investigates and reports the use of the random forest machine learning algorithm in the classification of phishing attacks, with the major objective of developing an improved phishing email classifier with better prediction accuracy and fewer features.
Abstract: Phishing is one of the major challenges faced by the world of e-commerce today. Owing to phishing attacks, billions of dollars have been lost by many companies and individuals. In 2012, an online report put the loss due to phishing attacks at about $1.5 billion. The global impact of phishing attacks will continue to increase and thus requires more efficient phishing detection techniques to curb the menace. This paper investigates and reports the use of the random forest machine learning algorithm in the classification of phishing attacks, with the major objective of developing an improved phishing email classifier with better prediction accuracy and fewer features. From a dataset consisting of 2000 phishing and ham emails, a set of prominent phishing email features (identified from the literature) was extracted and used by the machine learning algorithm, with a resulting classification accuracy of 99.7% and low false negative (FN) and false positive (FP) rates.
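
As a hedged illustration of the classification stage (not the authors' code), a random forest over an already-extracted feature matrix might look like the following; the dataset split and model settings are placeholders.

```python
# Sketch: train a random forest on extracted phishing-email features and
# report accuracy plus false positive / false negative rates (illustrative).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

def evaluate_random_forest(X, y):
    """X: feature matrix of extracted email features; y: 1 = phishing, 0 = ham."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)

    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_te, y_pred),
        "fp_rate": fp / (fp + tn),
        "fn_rate": fn / (fn + tp),
        # Feature importances suggest which extracted features matter most.
        "importances": clf.feature_importances_,
    }
```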

135 citations


Journal ArticleDOI
TL;DR: The classical soft sets are extended to hesitant fuzzy soft sets, which combine soft sets with hesitant fuzzy sets, and the basic properties such as De Morgan's laws and the other relevant laws of hesitant fuzzy soft sets are proved.
Abstract: Molodtsov's soft set theory is a newly emerging mathematical tool to handle uncertainty. However, the classical soft sets are not appropriate for dealing with imprecise and fuzzy parameters. This paper aims to extend the classical soft sets to hesitant fuzzy soft sets, which combine soft sets with hesitant fuzzy sets. Then, the complement, "AND", "OR", union, and intersection operations are defined on hesitant fuzzy soft sets. The basic properties, such as De Morgan's laws and the other relevant laws of hesitant fuzzy soft sets, are proved. Finally, with the help of the level soft set, the hesitant fuzzy soft sets are applied to a decision making problem and the effectiveness is demonstrated by a numerical example.

101 citations


Journal ArticleDOI
TL;DR: This paper applies a linear support vector machine (SVM) to detect Android malware and compares its malware detection performance with that of other machine learning classifiers, showing that the SVM outperforms them.
Abstract: Currently, many Internet of Things (IoT) services are monitored and controlled through smartphone applications. By combining IoT with smartphones, many convenient IoT services have been provided to users. However, there are adverse underlying effects in such services, including invasion of privacy and information leakage. In most cases, mobile devices have become cluttered with important personal user information as various services and contents are provided through them. Accordingly, attackers are expanding the scope of their attacks beyond the existing PC and Internet environment into mobile devices. In this paper, we apply a linear support vector machine (SVM) to detect Android malware and compare the malware detection performance of the SVM with that of other machine learning classifiers. Through experimental validation, we show that the SVM outperforms the other machine learning classifiers.
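
A sketch of the kind of comparison described, assuming Android features (e.g., permissions or API calls) have already been vectorized; the classifier set and cross-validation setup are illustrative, not the paper's exact protocol.

```python
# Sketch: compare a linear SVM with other common classifiers on a
# precomputed Android feature matrix using cross-validated accuracy.
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, folds=5):
    models = {
        "linear_svm": LinearSVC(C=1.0, max_iter=10000),
        "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "naive_bayes": GaussianNB(),
        "decision_tree": DecisionTreeClassifier(random_state=0),
    }
    return {name: cross_val_score(model, X, y, cv=folds).mean()
            for name, model in models.items()}
```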

100 citations


Journal ArticleDOI
TL;DR: To better deal with imprecise and uncertain information in decision making, the definition of linguistic intuitionistic fuzzy sets (LIFSs) is introduced, which is characterized by a linguistic membership degree and a linguistic nonmembership degree.
Abstract: To better deal with imprecise and uncertain information in decision making, the definition of linguistic intuitionistic fuzzy sets (LIFSs) is introduced, which are characterized by a linguistic membership degree and a linguistic nonmembership degree, respectively. To compare any two linguistic intuitionistic fuzzy values (LIFVs), the score function and accuracy function are defined. Then, based on the t-norm and t-conorm, several aggregation operators are proposed to aggregate linguistic intuitionistic fuzzy information, which avoid the limitations of existing linguistic operations. In addition, the desired properties of these linguistic intuitionistic fuzzy aggregation operators are discussed. Finally, a numerical example is provided to illustrate the efficiency of the proposed method in multiple attribute group decision making (MAGDM).

96 citations


Journal ArticleDOI
TL;DR: The concepts of M-fuzzifying convexity preserving functions, substructures, disjoint sums, bases, subbases, joins, products, and quotient structures are presented and their fundamental properties are obtained.
Abstract: A new approach to the fuzzification of convex structures is introduced. It is also called an M-fuzzifying convex structure. In the definition of an M-fuzzifying convex structure, each subset can be regarded as a convex set to some degree. An M-fuzzifying convex structure can be characterized by means of its M-fuzzifying closure operator. An M-fuzzifying convex structure and its M-fuzzifying closure operator are in one-to-one correspondence. The concepts of M-fuzzifying convexity preserving functions, substructures, disjoint sums, bases, subbases, joins, products, and quotient structures are presented and their fundamental properties are obtained in M-fuzzifying convex structures.

89 citations


Journal ArticleDOI
TL;DR: This paper reviews fair optimization models and methods applied to systems that are based on some kind of network of connections and dependencies, especially, fair optimization methods for the location problems and for the resource allocation problems in communication networks.
Abstract: Optimization models related to designing and operating complex systems are mainly focused on efficiency metrics such as response time, queue length, throughput, and cost. However, in systems which serve many entities there is also a need to respect fairness: each system entity ought to be provided with an adequate share of the system's services. Still, due to system operations-dependent constraints, fair treatment of the entities does not directly imply that each of them is assigned an equal amount of the services. That leads to concepts of fair optimization expressed by equitable models that represent inequality-averse optimization rather than strict inequality minimization; a particular widely applied example of that concept is the so-called lexicographic maximin optimization (max-min fairness). The fair optimization methodology delivers a variety of techniques to generate fair and efficient solutions. This paper reviews fair optimization models and methods applied to systems that are based on some kind of network of connections and dependencies, especially fair optimization methods for location problems and for resource allocation problems in communication networks.
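
A small illustration of the max-min fairness concept the review centers on: progressive filling of a single shared capacity among demands, a textbook special case rather than any specific model from the paper.

```python
# Sketch: max-min fair (progressive filling) allocation of one shared
# capacity among entities with given demands.
def max_min_fair(capacity, demands):
    allocation = [0.0] * len(demands)
    remaining = list(range(len(demands)))
    while remaining and capacity > 1e-12:
        share = capacity / len(remaining)            # equal split of what is left
        satisfied = [i for i in remaining if demands[i] - allocation[i] <= share]
        if not satisfied:
            for i in remaining:                      # nobody can be fully satisfied:
                allocation[i] += share               # everyone gets the equal share
            capacity = 0.0
        else:
            for i in satisfied:                      # cap satisfied users at demand,
                capacity -= demands[i] - allocation[i]
                allocation[i] = demands[i]
                remaining.remove(i)                  # then redistribute the leftover
    return allocation

# Example: max_min_fair(10, [2, 8, 10]) -> [2.0, 4.0, 4.0]
```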

87 citations


Journal ArticleDOI
TL;DR: A decision tree model is proposed for specifying the importance of 21 factors causing landslides in a wide area of Penang Island, Malaysia; the analysis identified slope angle, distance from drainage, surface area, slope aspect, and cross curvature as the most important factors.
Abstract: This paper proposes a decision tree model for specifying the importance of 21 factors causing the landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitude curvature, rugosity, plan curvature, elevation, rain precipitation, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance and are usually represented by an easy-to-interpret tree-like structure. Four models were created using the Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST). The twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137570 samples was selected for each variable in the analysis, where 68786 samples represent landslides and 68786 samples represent no landslides. 10-fold cross-validation was employed for testing the models. The highest accuracy was achieved using the Exhaustive CHAID model (82.0%) compared to the CHAID (81.9%), CRT (75.6%), and QUEST (74.0%) models. Across the four models, the five most important factors were identified as slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
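
The trees in the paper are CHAID/CRT/QUEST variants; as a hedged stand-in, scikit-learn's CART implementation can illustrate the same workflow of 10-fold validation and factor-importance ranking (factor names, data, and tree depth are placeholders).

```python
# Sketch: rank landslide conditioning factors with a CART decision tree and
# 10-fold cross-validation; CHAID and QUEST are not available in scikit-learn.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def rank_factors(X, y, factor_names, folds=10):
    tree = DecisionTreeClassifier(max_depth=8, random_state=0)  # depth is illustrative
    accuracy = cross_val_score(tree, X, y, cv=folds).mean()
    tree.fit(X, y)
    ranking = sorted(zip(factor_names, tree.feature_importances_),
                     key=lambda item: item[1], reverse=True)
    return accuracy, ranking  # e.g., slope angle, distance from drainage, ...
```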

73 citations


Journal ArticleDOI
TL;DR: A fall-detection algorithm that combines a simple threshold method and a hidden Markov model (HMM) using 3-axis acceleration is proposed; the combination of the simple threshold method and HMM reduced the complexity of the hardware, and the proposed algorithm exhibited higher accuracy than the simple threshold method alone.
Abstract: Falls are a serious medical and social problem among the elderly. This has led to the development of automatic fall-detection systems. To detect falls, a fall-detection algorithm that combines a simple threshold method and hidden Markov model (HMM) using 3-axis acceleration is proposed. To apply the proposed fall-detection algorithm and detect falls, a wearable fall-detection device has been designed and produced. Several fall-feature parameters of 3-axis acceleration are introduced and applied to a simple threshold method. Possible falls are chosen through the simple threshold and are applied to two types of HMM to distinguish between a fall and an activity of daily living (ADL). The results using the simple threshold, HMM, and combination of the simple method and HMM were compared and analyzed. The combination of the simple threshold method and HMM reduced the complexity of the hardware and the proposed algorithm exhibited higher accuracy than that of the simple threshold method.
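
A condensed sketch of the two-stage idea, assuming a 3-axis acceleration stream; the magnitude threshold, window length, and the use of hmmlearn Gaussian HMMs are illustrative assumptions, not the paper's parameters.

```python
# Sketch: flag candidate falls with a simple magnitude threshold, then let
# two HMMs (fall vs. activity of daily living) decide by log-likelihood.
import numpy as np
from hmmlearn import hmm  # assumption: hmmlearn is available

def candidate_windows(acc_xyz, threshold=2.5, window=50):
    """acc_xyz: (n, 3) array in g. Return windows whose magnitude exceeds the threshold."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    peaks = np.where(magnitude > threshold)[0]
    return [acc_xyz[max(0, p - window):p + window] for p in peaks]

def classify(segment, fall_hmm, adl_hmm):
    """Compare log-likelihoods of the candidate segment under the two trained models."""
    return "fall" if fall_hmm.score(segment) > adl_hmm.score(segment) else "ADL"

# Offline training on labelled segments (illustrative):
# fall_hmm = hmm.GaussianHMM(n_components=3).fit(np.vstack(fall_segments),
#                                                [len(s) for s in fall_segments])
# adl_hmm  = hmm.GaussianHMM(n_components=3).fit(np.vstack(adl_segments),
#                                                [len(s) for s in adl_segments])
```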

72 citations


Journal ArticleDOI
TL;DR: This work proposes a simple, yet effective method to achieve the muscle artifact removal from single-channel EEG, by combining ensemble empirical mode decomposition (EEMD) with multiset canonical correlation analysis (MCCA).
Abstract: Electroencephalogram (EEG) recordings are often contaminated with muscle artifacts. This disturbing muscular activity strongly affects the visual analysis of EEG and impairs the results of EEG signal processing such as brain connectivity analysis. If multichannel EEG recordings are available, then there exists a considerable range of methods which can remove or to some extent suppress the distorting effect of such artifacts. Yet to our knowledge, there is no existing means to remove muscle artifacts from single-channel EEG recordings. Moreover, considering the recently increasing need for biomedical signal processing in ambulatory situations, it is crucially important to develop single-channel techniques. In this work, we propose a simple yet effective method to remove muscle artifacts from single-channel EEG, by combining ensemble empirical mode decomposition (EEMD) with multiset canonical correlation analysis (MCCA). We demonstrate the performance of the proposed method through numerical simulations and application to real EEG recordings contaminated with muscle artifacts. The proposed method can successfully remove muscle artifacts without altering the recorded underlying EEG activity. It is a promising tool for real-world biomedical signal processing applications.

Journal ArticleDOI
TL;DR: In this paper, a technique called counterexample-preserving reduction (CePRe) is proposed to reduce the length of the LTL formula under model checking.
Abstract: The cost of LTL model checking is highly sensitive to the length of the formula under verification. We observe that, under some specific conditions, the input LTL formula can be reduced to an easier-to-handle one before model checking. In such a reduction, the two formulae need not be logically equivalent, but they share the same counterexample set w.r.t. the model. In the case that the model is symbolically represented, the condition enabling such a reduction can be detected with a lightweight effort (e.g., with SAT solving). In this paper, we tentatively name this technique "counterexample-preserving reduction" (CePRe, for short), and the proposed technique is evaluated by conducting comparative experiments with BDD-based model checking, bounded model checking, and property directed reachability (IC3) based model checking.

Journal ArticleDOI
TL;DR: The simulation results showed that the new update strategy in GPSO assists in realizing a better optimum solution with the smallest standard deviation value compared to other techniques, indicating that the proposed GPSO method is a superior technique for solving high dimensional numerical function optimization problems.
Abstract: The Particle Swarm Optimization (PSO) algorithm is a popular optimization method that is widely used in various applications due to its simplicity and capability of obtaining optimal results. However, ordinary PSO may become trapped at a local optimum, especially in high dimensional problems. To overcome this problem, an efficient Global Particle Swarm Optimization (GPSO) algorithm is proposed in this paper, based on a new update strategy for the particle position. This is done by sharing information about particle positions between the dimensions (variables) at any iteration. The strategy can enhance the exploration capability of the GPSO algorithm to determine the global optimum solution and avoid being trapped at local optima. The proposed GPSO algorithm is validated on 12 benchmark mathematical functions and compared with three different types of PSO techniques. The performance of this algorithm is measured based on solution quality, convergence characteristics, and robustness over 50 trials. The simulation results showed that the new update strategy in GPSO assists in realizing a better optimum solution with the smallest standard deviation value compared to the other techniques. It can be concluded that the proposed GPSO method is a superior technique for solving high dimensional numerical function optimization problems.
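
For context, a compact canonical global-best PSO in NumPy is sketched below; the paper's GPSO modifies the position-update step by sharing information across dimensions, which is not reproduced here.

```python
# Sketch: canonical global-best PSO (the baseline that GPSO modifies).
import numpy as np

def pso(objective, dim, n_particles=30, iters=500, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()                                       # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()                   # global best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, objective(g)

# Example: pso(lambda z: np.sum(z**2), dim=30)
```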

Journal ArticleDOI
TL;DR: A Fejer type inequality for harmonically convex functions is established and some properties of the mappings in connection with Hermite-Hadamard and Fejertype inequalities for harmonica convex function are considered.
Abstract: We establish a Fejer type inequality for harmonically convex functions. Our results are the generalizations of some known results. Moreover, some properties of the mappings in connection with Hermite-Hadamard and Fejer type inequalities for harmonically convex functions are also considered.

Journal ArticleDOI
TL;DR: This paper proposes an energy-efficient probabilistic routing (EEPR) algorithm, which stochastically controls the transmission of routing request packets in order to increase the network lifetime and decrease the packet loss under the flooding algorithm.
Abstract: In the future network with the Internet of Things (IoT), each of the things communicates with the others and acquires information by itself. In distributed networks for the IoT, the energy efficiency of the nodes is a key factor in the network performance. In this paper, we propose an energy-efficient probabilistic routing (EEPR) algorithm, which stochastically controls the transmission of routing request packets in order to increase the network lifetime and decrease the packet loss under the flooding algorithm. The proposed EEPR algorithm adopts energy-efficient probabilistic control by simultaneously using the residual energy of each node and the ETX metric in the context of the typical AODV protocol. In the simulations, we verify that the proposed algorithm has a longer network lifetime and consumes the residual energy of each node more evenly when compared with the typical AODV protocol.
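
A hedged sketch of the forwarding decision the abstract describes: the probability of rebroadcasting a route request grows with a node's residual energy and shrinks with the link's ETX. The specific weighting below is an assumption for illustration, not the EEPR paper's exact formula.

```python
# Sketch: probabilistic RREQ forwarding from residual energy and ETX
# (illustrative weighting; not the paper's exact expression).
import random

def forward_probability(residual, capacity, etx, etx_max, p_min=0.3, p_max=1.0):
    energy_term = residual / capacity              # in [0, 1], higher is better
    etx_term = 1.0 - min(etx, etx_max) / etx_max   # in [0, 1], lower ETX is better
    score = 0.5 * energy_term + 0.5 * etx_term     # assumed equal weighting
    return p_min + (p_max - p_min) * score

def should_forward(residual, capacity, etx, etx_max):
    return random.random() < forward_probability(residual, capacity, etx, etx_max)
```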

Journal ArticleDOI
TL;DR: This work introduces a new continuous distribution, the so-called beta-Lindley distribution, that extends the Lindley distribution, and provides a comprehensive mathematical treatment of this distribution, deriving the moment generating function and the rth moment and thus generalizing some results in the literature.
Abstract: We introduce a new continuous distribution, the so-called beta-Lindley distribution, that extends the Lindley distribution. We provide a comprehensive mathematical treatment of this distribution. We derive the moment generating function and the rth moment, thus generalizing some results in the literature. Expressions for the density, moment generating function, and rth moment of the order statistics are also obtained. Further, we also discuss estimation of the unknown model parameters in both the classical and Bayesian setups. The usefulness of the new model is illustrated by means of two real data sets. We hope that the new distribution proposed here will serve as an alternative to other models available in the literature for modelling positive real data in many areas.
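
A worked sketch of the construction, assuming the standard beta-G recipe applied to the Lindley baseline; parameter names follow common usage rather than the paper's notation.

```python
# Sketch: density of the beta-Lindley distribution from the beta-G
# construction with a Lindley(theta) baseline (standard forms).
import numpy as np
from scipy.special import beta as beta_fn

def lindley_pdf(x, theta):
    return theta**2 / (1.0 + theta) * (1.0 + x) * np.exp(-theta * x)

def lindley_cdf(x, theta):
    return 1.0 - (1.0 + theta * x / (1.0 + theta)) * np.exp(-theta * x)

def beta_lindley_pdf(x, a, b, theta):
    G = lindley_cdf(x, theta)
    return lindley_pdf(x, theta) / beta_fn(a, b) * G**(a - 1) * (1.0 - G)**(b - 1)

# Setting a = b = 1 recovers the ordinary Lindley density:
# np.allclose(beta_lindley_pdf(2.0, 1, 1, 0.5), lindley_pdf(2.0, 0.5)) -> True
```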

Journal ArticleDOI
TL;DR: This iterative technique is based on the use of the reproducing kernel Hilbert space method in which every function satisfies the periodic boundary conditions and enables us to approximate the solutions and their derivatives at every point of the range of integration.
Abstract: The objective of this paper is to present a numerical iterative method for solving systems of first-order ordinary differential equations subject to periodic boundary conditions. This iterative technique is based on the use of the reproducing kernel Hilbert space method in which every function satisfies the periodic boundary conditions. The present method is accurate, needs less effort to achieve the results, and is especially developed for the nonlinear case. Furthermore, the present method enables us to approximate the solutions and their derivatives at every point of the range of integration. Indeed, three numerical examples are provided to illustrate the effectiveness of the present method. Results obtained show that the numerical scheme is very effective and convenient for solving systems of first-order ordinary differential equations with periodic boundary conditions.

Journal ArticleDOI
TL;DR: A survey of mathematical models and algorithms used to solve different types of transportation modes (ship, plane, train, bus, truck, motorcycle, car, and others) by air, water, space, cables, tubes, and road is presented.
Abstract: This paper aims at being a guide to understanding the different types of transportation problems by presenting a survey of mathematical models and algorithms used to solve different types of transportation modes (ship, plane, train, bus, truck, motorcycle, car, and others) by air, water, space, cables, tubes, and road. Some problems are as follows: bus scheduling problem, delivery problem, combining truck trip problem, open vehicle routing problem, helicopter routing problem, truck loading problem, truck dispatching problem, truck routing problem, truck transportation problem, vehicle routing problem and variants, convoy routing problem, railroad blocking problem (RBP), inventory routing problem (IRP), air traffic flow management problem (TFMP), cash transportation vehicle routing problem, and so forth.

Journal ArticleDOI
TL;DR: A novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), is proposed to tackle complex high-dimensional problems and demonstrates remarkable performance when compared with six other evolutionary algorithms.
Abstract: This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), to tackle complex high-dimensional problems. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from the lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 20 continuous and discrete benchmark problems. The experimental results demonstrate remarkable performance of the HABC algorithm when compared with six other evolutionary algorithms.

Journal ArticleDOI
TL;DR: This work analyzes and constructs the equations for each parameter of the services in the data center, and proposes a synthesis optimization mode, function, and strategy that can optimize the average wait time, average queue length, and number of customers.
Abstract: Successful development of cloud computing has attracted more and more people and enterprises to use it. On one hand, using cloud computing reduces the cost; on the other hand, using cloud computing improves the efficiency. As the users are largely concerned about the Quality of Services (QoS), performance optimization of cloud computing has become critical to its successful application. In order to optimize the performance of multiple requesters and services in cloud computing, by means of queueing theory, we analyze and construct the equations for each parameter of the services in the data center. Then, through analyzing the performance parameters of the queueing system, we propose the synthesis optimization mode, function, and strategy. Lastly, we set up a simulation based on the synthesis optimization mode; we also compare and analyze the simulation results against the classical optimization methods (shortest service time first and first-in, first-out), which shows that the proposed model can optimize the average wait time, average queue length, and number of customers.
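
As a minimal illustration of the queueing quantities the paper optimizes (average wait time, average queue length), the classical M/M/1 formulas can be coded directly; the paper's data-center model is richer than this single-server case.

```python
# Sketch: basic M/M/1 performance measures (arrival rate lam, service rate mu).
def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable: require lam < mu")
    rho = lam / mu                      # utilisation
    L = rho / (1.0 - rho)               # mean number in system
    Lq = rho**2 / (1.0 - rho)           # mean queue length
    W = 1.0 / (mu - lam)                # mean time in system
    Wq = rho / (mu - lam)               # mean waiting time in queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Example: mm1_metrics(lam=8.0, mu=10.0) -> rho=0.8, Lq=3.2, Wq=0.4
```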

Journal Article
TL;DR: The ability of an anatomic representation of the knee ligaments (3D geometry along with anisotropic hyperelastic material) to predict human knee motion more physiologically, with strong correlation to experimental findings, is demonstrated.
Abstract: Finite element (FE) analysis has become an increasingly popular technique in the study of human joint biomechanics, as it allows for detailed analysis of the joint/tissue behavior under complex, clinically relevant loading conditions. A wide variety of modeling techniques have been utilized to model knee joint ligaments. However, the effect of a selected constitutive model to simulate the ligaments on knee kinematics remains unclear. The purpose of the current study was to determine the effect of the two most common techniques utilized to model knee ligaments on joint kinematics under functional loading conditions. We hypothesized that anatomic representations of the knee ligaments with anisotropic hyperelastic properties will result in more realistic kinematics. A previously developed, extensively validated anatomic FE model of the knee developed from a healthy, young female athlete was used. FE models with 3D anatomic and simplified uniaxial representations of main knee ligaments were used to simulate four functional loading conditions. Model predictions of tibiofemoral joint kinematics were compared to experimental measures. Results demonstrated the ability of the anatomic representation of the knee ligaments (3D geometry along with anisotropic hyperelastic material) in more physiologic prediction of the human knee motion with strong correlation (r ≥ 0.9 for all comparisons) and minimum deviation (0.9° ≤ RMSE ≤ 2.29°) from experimental findings. In contrast, non-physiologic uniaxial elastic representation of the ligaments resulted in lower correlations (r ≤ 0.6 for all comparisons) and substantially higher deviation (2.6° ≤ RMSE ≤ 4.2°) from experimental results. Findings of the current study support our hypothesis and highlight the critical role of soft tissue modeling technique on the resultant FE predicted joint kinematics.

Journal ArticleDOI
TL;DR: The experimental results indicated that the method can be used for identifying the essential botnet features and that the performance of the proposed method was superior to that of genetic algorithms.
Abstract: Because of the advances in Internet technology, the applications of the Internet of Things have become a crucial topic. The number of mobile devices used globally increases substantially daily; therefore, information security concerns are increasingly vital. The botnet virus is a major threat to both personal computers and mobile devices; therefore, a method of botnet feature characterization is proposed in this study. The proposed method is a classification model in which an artificial fish swarm algorithm and a support vector machine are combined. A LAN environment with several computers that had been infected by the botnet virus was simulated to test this model, and the packet data of the network flow were also collected. The proposed method was used to identify the critical features that determine the pattern of a botnet. The experimental results indicated that the method can be used for identifying the essential botnet features and that the performance of the proposed method was superior to that of genetic algorithms.

Journal ArticleDOI
TL;DR: Two classifiers, namely, the asymmetric kernel partial least squares classifier (AKPLSC) and the asymmetric kernel principal component analysis classifier (AKPCAC), are proposed for solving the class imbalance problem by applying a kernel function to the asymmetric partial least squares classifier and the asymmetric principal component analysis classifier, respectively.
Abstract: This paper mainly deals with how kernel methods can be used for software defect prediction, since class imbalance can greatly reduce the performance of defect prediction. In this paper, two classifiers, namely, the asymmetric kernel partial least squares classifier (AKPLSC) and the asymmetric kernel principal component analysis classifier (AKPCAC), are proposed for solving the class imbalance problem. This is achieved by applying a kernel function to the asymmetric partial least squares classifier and the asymmetric principal component analysis classifier, respectively. The kernel function used for the two classifiers is the Gaussian function. Experiments conducted on NASA and SOFTLAB data sets using the F-measure, Friedman's test, and Tukey's test confirm the validity of our methods.
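
The abstract states that the Gaussian kernel is used for both classifiers; as a small hedged sketch, the corresponding kernel matrix can be computed as follows (the bandwidth is a free parameter, not a value from the paper).

```python
# Sketch: Gaussian (RBF) kernel matrix, the kernel named in the abstract.
import numpy as np

def gaussian_kernel_matrix(X, Y, sigma=1.0):
    """K[i, j] = exp(-||X[i] - Y[j]||^2 / (2 * sigma^2))."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma**2))
```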

Journal ArticleDOI
TL;DR: A solution to the problem of resource integration and optimal scheduling in cloud manufacturing is presented from the perspective of global optimization, based on the consideration of sharing and correlation among virtual resources.
Abstract: To deal with the problem of resource integration and optimal scheduling in cloud manufacturing, and based on an analysis of the existing literature, a multitask-oriented virtual resource integration and optimal scheduling problem is presented from the perspective of global optimization, taking into account sharing and correlation among virtual resources. The correlation models of virtual resources within a task and among tasks are established. According to the correlation model and the characteristics of resource sharing, a formulation in which a resource time-sharing scheduling strategy is employed is put forward, and the formulation is then simplified so that the problem can be solved easily. A genetic algorithm based on real-number matrix encoding is proposed, and crossover and mutation operation rules are designed for the real-number matrix. Meanwhile, an evaluation function with a punishment mechanism and a selection strategy with a pressure factor are adopted so as to approach the optimal solution more quickly. The experimental results show that the proposed model and method are feasible and effective both in situations with sufficient resources and with limited resources in the case of a large number of tasks.
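
A hedged sketch of the encoding-level operators described (real-number matrix chromosomes with crossover and mutation); the operators below are generic illustrations, and the paper's exact rules, penalties, and repair steps are not reproduced.

```python
# Sketch: crossover and mutation on real-number matrix chromosomes
# (illustrative operators; the paper defines its own rules and penalties).
import numpy as np

rng = np.random.default_rng(0)

def crossover(parent_a, parent_b, alpha=None):
    """Arithmetic crossover: convex combinations of the two parent matrices."""
    if alpha is None:
        alpha = rng.random()
    child_a = alpha * parent_a + (1.0 - alpha) * parent_b
    child_b = (1.0 - alpha) * parent_a + alpha * parent_b
    return child_a, child_b

def mutate(matrix, rate=0.05, low=0.0, high=1.0):
    """Reset a random subset of entries to fresh values in [low, high]."""
    mask = rng.random(matrix.shape) < rate
    mutant = matrix.copy()
    mutant[mask] = rng.uniform(low, high, size=mask.sum())
    return mutant
```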

Journal ArticleDOI
TL;DR: This paper investigates an approach to multiple attribute group decision-making (MAGDM) problems, in which the individual assessments are in the form of triangle interval type-2 fuzzy numbers (TIT2FNs), and develops an approach based on the TIT2FFWA (or TIT2FFWG) operator to solve MAGDM problems.
Abstract: This paper investigates an approach to multiple attribute group decision-making (MAGDM) problems, in which the individual assessments are in the form of triangle interval type-2 fuzzy numbers (TIT2FNs). Firstly, some Frank operation laws of triangle interval type-2 fuzzy sets (TIT2FSs) are defined. Secondly, some Frank aggregation operators such as the triangle interval type-2 fuzzy Frank weighted averaging (TIT2FFWA) operator and the triangle interval type-2 fuzzy Frank weighted geometric (TIT2FFWG) operator are developed for aggregating TIT2FNs. Furthermore, some desirable properties of the two aggregation operators are analyzed in detail. Finally, an approach based on the TIT2FFWA (or TIT2FFWG) operator to solve MAGDM is developed. An illustrative example about supplier selection is provided to illustrate the developed procedures. The results demonstrate the practicality and effectiveness of our new method.
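
For reference, the Frank t-norm and t-conorm that underlie Frank aggregation operators can be written down directly; this is the standard textbook form (parameter λ > 0, λ ≠ 1), not code from the paper.

```python
# Sketch: Frank t-norm and t-conorm for membership degrees x, y in [0, 1],
# with parameter lam > 0, lam != 1 (standard definitions).
import math

def frank_t_norm(x, y, lam):
    return math.log(1.0 + (lam**x - 1.0) * (lam**y - 1.0) / (lam - 1.0), lam)

def frank_t_conorm(x, y, lam):
    return 1.0 - frank_t_norm(1.0 - x, 1.0 - y, lam)

# As lam -> 1 these approach the product t-norm and the probabilistic sum.
```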

Journal ArticleDOI
TL;DR: This research focuses on long-term distribution system maintenance scheduling aided by available operation information, which is a prominent advantage of smart grid over conventional distribution systems.
Abstract: Asset management of distribution systems is an important issue for smart grid. Maintenance scheduling, as an important part of asset management, affects the reliability of distribution equipment and power supply. This research focuses on long-term distribution system maintenance scheduling aided by available operation information, which is a prominent advantage of smart grid over conventional distribution systems. In this paper, the historical and future operation information in smart grid is taken into account through a decoupled time-varying reliability model of equipment. Based on distribution system reliability assessment, a maintenance scheduling model is proposed to determine the optimal implementation time of maintenance activities to minimize distribution systems’ total cost, while satisfying reliability requirements. A combined algorithm that consists of particle swarm optimization and tabu search is designed and applied to the optimization problem. Numerical result verifies that the proposed method can schedule long-term maintenance of distribution systems in smart grid economically and effectively.

Journal ArticleDOI
TL;DR: Simulations show that the performance of estimation is improved by the AUKF approach compared with both the conventional AKF and UKF, and the nonlinearity of the system can be restrained.
Abstract: MEMS/GPS integrated navigation systems have been widely used for land-vehicle navigation. Such a system exhibits large errors because of its nonlinear model and uncertain noise statistic characteristics. Based on the principles of the adaptive Kalman filtering (AKF) and unscented Kalman filtering (UKF) algorithms, an adaptive unscented Kalman filtering (AUKF) algorithm is proposed. By using a noise statistic estimator, the uncertain noise characteristics can be estimated online to adaptively compensate for the time-varying noise characteristics. By employing the adaptive filtering principle in the UKF, the nonlinearity of the system can be restrained. Simulations are conducted for a MEMS/GPS integrated navigation system. The results show that the performance of estimation is improved by the AUKF approach compared with both the conventional AKF and UKF.

Journal ArticleDOI
TL;DR: A computational code adopting immersed boundary methods for compressible gas-particle multiphase turbulent flows is developed and validated through two-dimensional numerical experiments, and the present scheme is successfully applied to moving two-cylinder problems.
Abstract: A computational code adopting immersed boundary methods for compressible gas-particle multiphase turbulent flows is developed and validated through two-dimensional numerical experiments. The turbulent flow region is modeled by a second-order pseudo skew-symmetric form with minimum dissipation, while the monotone upstream-centered scheme for conservation laws (MUSCL) scheme is employed in the shock region. The present scheme is applied to the flow around a two-dimensional cylinder under various freestream Mach numbers. Compared with the original MUSCL scheme, the minimum dissipation enabled by the pseudo skew-symmetric form significantly improves the resolution of the vortex generated in the wake while retaining the shock capturing ability. In addition, the resulting aerodynamic force is significantly improved. Also, the present scheme is successfully applied to moving two-cylinder problems.
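
As a pocket-sized illustration of the MUSCL ingredient mentioned for the shock region, a minmod-limited reconstruction of interface states is sketched below; the paper's full scheme (pseudo skew-symmetric form, immersed boundaries, gas-particle coupling) is far more involved.

```python
# Sketch: second-order MUSCL reconstruction with the minmod limiter
# (1-D, uniform grid; illustrative only).
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_interface_states(u):
    """For each interior cell i of a 1-D array u, return the extrapolated states."""
    du_minus = u[1:-1] - u[:-2]          # backward differences at interior cells
    du_plus = u[2:] - u[1:-1]            # forward differences at interior cells
    slope = minmod(du_minus, du_plus)    # limited slope per interior cell
    u_left = u[1:-1] + 0.5 * slope       # left state at interface i+1/2
    u_right = u[1:-1] - 0.5 * slope      # right state at interface i-1/2
    return u_left, u_right
```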

Journal ArticleDOI
TL;DR: A more accurate distribution system security region (DSSR) model is proposed, based on detailed feeder-interconnected topology, and both substation transformer and feeder N-1 contingencies are considered.
Abstract: As an important tool of transmission system dispatching, the region-based method has just been introduced into the distribution area with the ongoing smart distribution grid initiatives. First, a more accurate distribution system security region (DSSR) model is proposed. The proposed model is based on detailed feeder-interconnected topology, and both substation transformer and feeder N-1 contingencies are considered. Second, generic characteristics of the DSSR are discussed and mathematically proved; that is, the DSSR is a dense set whose boundary has no suspension and can be expressed by several union subsurfaces. Finally, the results from both a test case and a practical case demonstrate the effectiveness of the proposed modeling approach; the shape of the DSSR is also illustrated by means of 2- and 3-dimensional visualization. Moreover, DSSR-based assessment and control are preliminarily illustrated to show the application of the DSSR. The research in this paper is fundamental work toward developing a new security region theory for future distribution systems.

Journal ArticleDOI
TL;DR: Novel definitions of score function and distance measure for HFSs are developed and the MULTIMOORA method is extended, which provides the means for multiple criteria decision making (MCDM) regarding uncertain assessments.
Abstract: In order to determine the membership of an element to a set owing to ambiguity between a few different values, the hesitant fuzzy set (HFS) has been proposed and widely diffused to deal with vagueness and uncertainty involved in the process of multiple criteria group decision making (MCGDM) problems. In this paper, we develop novel definitions of score function and distance measure for HFSs. Some examples are given to illustrate that the proposed definitions are more reasonable than the traditional ones. Furthermore, our study extends the MULTIMOORA (Multiple Objective Optimization on the basis of Ratio Analysis plus Full Multiplicative Form) method with HFSs. The proposed method thus provides the means for multiple criteria decision making (MCDM) regarding uncertain assessments. Utilization of hesitant fuzzy power aggregation operators also enables facilitating the process of MCGDM. A numerical example of software selection demonstrates the possibilities of application of the proposed method.