
Showing papers in "Turkish Journal of Electrical Engineering and Computer Sciences in 2017"


Journal ArticleDOI
TL;DR: A new robust direct torque control strategy based on second order continuous sliding mode and space vector modulation of a doubly fed induction generator integrated in a wind energy conversion system is presented.
Abstract: In this paper, a new robust direct torque control strategy based on second-order continuous sliding mode and space vector modulation of a doubly fed induction generator integrated in a wind energy conversion system is presented. Conventional direct torque control (C-DTC) with hysteresis regulators exhibits significant flux and torque ripples at steady-state operation, and its switching frequency varies over a wide range. The proposed DTC technique based on second-order continuous sliding mode control reduces flux, current, and torque ripples, and it also narrows the switching frequency variations in induction machine control. Two sliding surfaces, one for flux and one for torque, are used to control these quantities: the errors between the reference and actual values are driven onto their respective sliding surfaces, where they are forced to zero. Simulation results show the effectiveness of the proposed direct torque control strategy compared with C-DTC.
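A second-order continuous sliding mode law of the kind named here is commonly realized with a super-twisting structure. The sketch below is a generic super-twisting update for one sliding surface (torque or flux error); the gains k1 and k2, the time step, and the sample error value are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def super_twisting_step(s, v, k1, k2, dt):
    """One step of a generic super-twisting (second-order continuous sliding
    mode) law: u = -k1*|s|^0.5*sign(s) + v, with dv/dt = -k2*sign(s).
    The continuous square-root term is what suppresses ripple/chattering."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v = v + dt * (-k2 * np.sign(s))
    return u, v

# usage sketch: drive a torque error s toward zero (illustrative values)
u, v = super_twisting_step(s=0.3, v=0.0, k1=2.0, k2=1.5, dt=1e-4)
```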

62 citations


Journal ArticleDOI
TL;DR: The numerical approximation to the set of the partial differential equations governing a typical magnetostatic problem is presented by using SEM for the first time to the best of the authors' knowledge.
Abstract: Recently, we have seen good progress in our capability to simulate complex electromagnetic systems. However, many challenges remain to be tackled in order to push back the limits of computational electromagnetics, one of which is the limited availability of computational resources. Over several decades, traditional computational methods such as the finite difference, finite element, and finite volume methods have been extensively applied in the field of electromagnetics. More recently, the spectral element method (SEM) has been utilized in some branches of electromagnetics, such as waveguides and photonic structures, for the sake of accuracy. In this paper, the numerical approximation of the set of partial differential equations governing a typical magnetostatic problem is presented using SEM for the first time, to the best of our knowledge. Legendre polynomials and Gauss–Legendre–Lobatto grids are employed in the current study as test functions and for meshing of the elements, respectively. We also simulate a magnetostatic problem in order to verify the SEM formulation adopted in the current study.
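Since the abstract names Legendre polynomials on Gauss–Legendre–Lobatto (GLL) grids, a minimal NumPy sketch of computing GLL nodes and quadrature weights may help. The formulas are the standard ones (nodes are ±1 plus the roots of P_N'; weights are 2/(N(N+1)P_N(x)²)), not code from the paper.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def gll_nodes_weights(N):
    """Gauss-Legendre-Lobatto nodes and weights for polynomial order N (N+1 points).
    Nodes: -1, +1, and the roots of P_N'; weights: 2 / (N*(N+1)*P_N(x)^2)."""
    PN = Legendre.basis(N)
    interior = np.sort(PN.deriv().roots().real)
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (N * (N + 1) * PN(x) ** 2)
    return x, w

x, w = gll_nodes_weights(4)
assert np.isclose(w.sum(), 2.0)  # weights integrate the constant 1 over [-1, 1]
```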

41 citations


Journal ArticleDOI
TL;DR: In this paper, the spectral element method (SEM) was used to solve near and far electromagnetic fields without requiring substantial computational resources; it is demonstrated that near-field and far-field zones in which millions of unknowns are encountered can be solved accurately using a typical personal computer.
Abstract: One of the challenges in electromagnetics is to solve the electromagnetic fields originating from a radiating source or a scattering object at distances that greatly exceed the dimensions of the source or scatterer. In this paper, domain decomposition based on the perfectly matched layer (PML) is studied with the spectral element method (SEM) for the first time, in order to solve near and far electromagnetic fields without requiring substantial computational resources. Scattering from a loss-free dielectric cylinder and the fields radiated by a line source are the two-dimensional problems used for numerical demonstration, verifying the successful application of SEM and its accuracy under domain decomposition. It is demonstrated that near-field and far-field zones in which millions of unknowns are encountered can be solved accurately using a typical personal computer.

37 citations


Journal ArticleDOI
TL;DR: It is shown that the proposed controller provides a powerful framework to control the PMBLDC motor and improvement in the overall performance of the system is observed using the proposed FOPID controller.
Abstract: This paper deals with the speed control of a permanent-magnet brushless direct current (PMBLDC) motor. A fractional-order PID (FOPID) controller is used in place of the conventional PID controller. The FOPID controller is a generalized form of the PID controller in which the orders of integration and differentiation may be any real numbers. It is shown that the proposed controller provides a powerful framework for controlling the PMBLDC motor. The parameters of the controller are found using a novel dynamic particle swarm optimization (dPSO) method, and the frequency-domain pole-zero (p-z) interlacing method is used to approximate the fractional-order operator. A three-phase inverter with four switches is used in place of the conventional six-switch inverter to provide a cost-effective control scheme. The digital controller has been implemented on a field programmable gate array (FPGA), and the control scheme is verified using the FPGA-in-the-loop (FIL) wizard of MATLAB/Simulink. Improvement in the overall performance of the system is observed with the proposed FOPID controller, and its energy-efficient nature is also demonstrated.
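For context, the FOPID structure generalizes PID to C(s) = Kp + Ki/s^λ + Kd·s^μ with real orders λ and μ. The snippet below evaluates this frequency response directly; the gains and orders are placeholders, not the dPSO-tuned values from the paper.

```python
import numpy as np

def fopid_response(w, Kp, Ki, Kd, lam, mu):
    """Frequency response of a fractional-order PID controller:
    C(jw) = Kp + Ki/(jw)**lam + Kd*(jw)**mu, with real orders lam and mu."""
    s = 1j * w
    return Kp + Ki / s**lam + Kd * s**mu

w = np.logspace(-2, 3, 500)  # rad/s
C = fopid_response(w, Kp=1.2, Ki=0.8, Kd=0.3, lam=0.9, mu=0.7)  # placeholder tuning
gain_db = 20.0 * np.log10(np.abs(C))
```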

37 citations


Journal ArticleDOI
TL;DR: This study introduces a PRNG that generates bit sequences by sampling two Arnold cat map outputs; the generated bit streams passed all of the applied statistical tests and can be safely used in many applications requiring randomness.
Abstract: Pseudorandom number generators (PRNGs) generate random bit streams based on deterministic algorithms. Any bit stream generated with a PRNG will repeat itself at a certain point, and the bit streams will become correlated; as a result, all bit streams generated in this manner are statistically weak. Such weakness leads to a strong connection between PRNGs and chaos, which is characterized by ergodicity, confusion, complexity, sensitivity to initial conditions, and dependence on control parameters. In this study, we introduce a PRNG that generates bit sequences by sampling two Arnold cat map outputs. The statistical randomness of bit streams obtained using this PRNG was verified by statistical analyses such as the NIST test suite, the scale index method, statistical complexity measures, and autocorrelation. The generated bit streams successfully passed all the analytical tests and can be safely used in many applications requiring randomness.
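To make the construction concrete, here is a minimal sketch of extracting bits from the Arnold cat map. The paper samples two cat map outputs, whereas this simplified illustration thresholds a single trajectory; the seed values and warm-up length are arbitrary assumptions.

```python
import numpy as np

def cat_map_bits(n_bits, x=0.1234, y=0.5678, warmup=100):
    """Bits from the Arnold cat map on the unit torus:
    x' = (x + y) mod 1,  y' = (x + 2y) mod 1.
    Thresholding one state variable is a simplified sampling choice;
    the paper combines two cat map outputs."""
    bits = np.empty(n_bits, dtype=np.uint8)
    for _ in range(warmup):                          # discard the transient
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
    for i in range(n_bits):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
        bits[i] = 1 if x >= 0.5 else 0
    return bits

stream = cat_map_bits(1024)
```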

33 citations


Journal ArticleDOI
TL;DR: The promising properties of the SiR nanocomposites among all the investigated samples suggest that they are the most suitable for outdoor electrical insulation.
Abstract: In this paper, the influence of micro-sized (3 μm) and nano-sized (10 nm) silica (SiO2) on the mechanical, thermal, and electrical properties of silicone rubber (SiR), epoxy, and ethylene propylene diene monomer (EPDM) composites is presented. The micro- and nano-sized SiO2 particle-filled SiR, epoxy, and EPDM composites were formulated with 20% microsilica and 5% nanosilica by weight, respectively. Among these composites, the SiR-SiO2 amalgamation was performed by mixing with an ultrasonication procedure; the epoxy-SiO2 composite was compounded in two steps, i.e. dispersion of fillers and mixing, whereas EPDM-SiO2 compounding was performed with a two-roll mill technique. With the addition of micro/nano-SiO2, the composites showed enhanced tensile strength of ∼2.7 MPa, improved hardness, and reduced elongation at break. Incorporation of the micro/nano-sized particles also resulted in higher thermal stability for the SiR nanocomposites (SNCs) compared to the epoxy and EPDM composites. Compared to the SiR and EPDM composites, the epoxy nanocomposites showed the highest dielectric strength, i.e. 38.8 kV/mm, while the volume and surface resistivities of the SNCs were found to be higher than those of the other investigated samples. The promising properties of the SiR nanocomposites among all the investigated samples suggest that they are the most suitable for outdoor electrical insulation.

33 citations


Journal ArticleDOI
TL;DR: Simulation results on the cross utilization of a photovoltaic/wind/battery/fuel-cell hybrid-energy source to power the off-grid living space are presented and simultaneous exploitation of different renewable energy sources to power off- grid applications together with battery and hydrogen energy storage options is demonstrated.
Abstract: Remote areas in the UAE are still fully powered by diesel generators. The rapid rise in the prices of petroleum products and environmental concerns have led to demand for hybrid renewable energy generators, and advances in renewable energy technologies impart further impetus to techno-economic power production. Protecting the landscape and demography of the desert by optimizing the power supply cost via an efficient hybrid power system is the key issue. Simulation results on the combined utilization of a photovoltaic/wind/battery/fuel-cell hybrid energy source to power an off-grid living space are presented, demonstrating the simultaneous exploitation of different renewable energy sources together with battery and hydrogen energy storage options. HOMER software from the National Renewable Energy Laboratory (NREL) is used to perform detailed techno-economic analyses; it simulates all possible system configurations that fulfil the specified load for the selected sites under the given renewable resource conditions.

32 citations


Journal ArticleDOI
TL;DR: The simulation results show that coordination among protection devices can be regained using fast operation of the recloser to design a fuse saving scheme in the scenario of temporary fault occurrence, and the designed scheme also works satisfactorily for the isolation of a permanently faulted section of the feeder.
Abstract: This paper proposes an effective strategy to overcome the impacts of distributed generator (DG) integration on coordination among protection devices. Increased fault current magnitude and changes in power flow direction are the major impacts imposed by DGs on a typical distribution system, and recloser-fuse coordination is strongly affected by the integration of DGs. The proposed approach restores recloser-fuse coordination after DG integration using the directional properties of a recloser. The simulation results show that coordination among protection devices can be regained using fast operation of the recloser to design a fuse saving scheme in the scenario of temporary fault occurrence. The designed scheme also works satisfactorily for the isolation of a permanently faulted section of the feeder. The technique is verified by simulations performed on a real 11-kV radial distribution feeder for different fault locations and DG sizes.

32 citations


Journal ArticleDOI
TL;DR: This paper develops a neural network approach equipped with statistical dimension reduction techniques to perform accurate and fast robot navigation and obstacle avoidance, using two feedforward neural networks based on function approximation with a backpropagation learning algorithm.
Abstract: Mobile robot navigation and obstacle avoidance in dynamic and unknown environments is one of the most challenging problems in the field of robotics. Considering that a robot must be able to interact with the surrounding environment and respond to it in real time, and given the limited sensing range, inaccurate data, and noisy sensor readings, this problem becomes even more acute. In this paper, we attempt to develop a neural network approach equipped with statistical dimension reduction techniques to perform accurate and fast robot navigation as well as obstacle avoidance under such conditions. In order to increase the speed and precision of network learning and to reduce noise, kernel principal component analysis is applied to the training patterns of the network. The proposed method uses two feed-forward neural networks based on function approximation with a back-propagation learning algorithm, and two different data sets are used for training the networks. In order to perceive the robot environment, 180° laser range sensor (SICK) readings are employed. The method is tested on real-world data, and experimental results are included to verify the effectiveness of the proposed method.
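A minimal sketch of the described pipeline (kernel PCA for denoising and dimension reduction, followed by a feed-forward network trained with backpropagation) is shown below using scikit-learn. The array shapes, kernel, and layer sizes are illustrative assumptions, with synthetic data standing in for the SICK laser scans.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 181))          # synthetic stand-in for 180-degree laser scans
y = rng.random(500)                 # synthetic steering target

# KPCA compresses/denoises the scans before the backpropagation-trained network.
model = make_pipeline(
    KernelPCA(n_components=30, kernel="rbf"),
    MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000, random_state=0),
)
model.fit(X, y)
steering = model.predict(X[:1])
```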

29 citations


Journal ArticleDOI
TL;DR: The main goal of this study is to measure the r1 and r2 relaxivities of three common paramagnetic agents (CuSO4, MnCl2, and NiCl2) at room temperature at 3 T, and to serve as a practical reference for designing phantoms of target T1 and T2 values at 3 T, in particular phantoms with relaxation times equivalent to those of specific human tissues.
Abstract: Phantoms with known T1 and T2 values, prepared using solutions of easily accessible paramagnetic agents, are commonly used in MRI centers, especially for validating the accuracy of quantitative imaging protocols. The relaxivity parameters of several agents have been comprehensively examined at lower B0 field strengths, but studies at 3 T remain limited. The main goal of this study is to measure the r1 and r2 relaxivities of three common paramagnetic agents (CuSO4, MnCl2, and NiCl2) at room temperature at 3 T. Separate phantoms were prepared at concentrations of 0.05–0.5 mM for MnCl2 and 1–6 mM for CuSO4 and NiCl2. For assessment of T1 relaxation times, inversion recovery turbo spin echo images were acquired at 15 inversion times ranging between 24 and 2500 ms. For assessment of T2 relaxation times, spin echo images were acquired at 15 echo times ranging between 8.5 and 255 ms. Voxel-wise T1 and T2 relaxation times at each concentration were determined separately from the respective signal curves (inversion recovery for T1 and spin echo decay for T2). The relaxivities r1 and r2 derived from these relaxation time measurements are: r1 = 0.602 mM−1 s−1 and r2 = 0.730 mM−1 s−1 for CuSO4; r1 = 6.397 mM−1 s−1 and r2 = 108.266 mM−1 s−1 for MnCl2; and r1 = 0.620 mM−1 s−1 and r2 = 0.848 mM−1 s−1 for NiCl2. These results will serve as a practical reference for designing phantoms with target T1 and T2 values at 3 T, in particular phantoms with relaxation times equivalent to those of specific human tissues.
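The relaxivities here follow the standard linear model 1/T1 = 1/T1,0 + r1·C. The sketch below fits that line with NumPy on synthetic rates generated from the reported MnCl2 r1; the intercept R1_0 is an assumed illustrative value, not from the paper.

```python
import numpy as np

# Standard relaxivity model: R1(C) = R1_0 + r1 * C, with R1 = 1/T1 in s^-1, C in mM.
r1_reported, R1_0 = 6.397, 0.25   # MnCl2 r1 from the study; R1_0 assumed for illustration
conc = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5])   # MnCl2 concentrations (mM)
R1 = R1_0 + r1_reported * conc                      # synthetic stand-in for measured 1/T1

r1_fit, intercept = np.polyfit(conc, R1, 1)         # slope = relaxivity
print(f"fitted r1 = {r1_fit:.3f} mM^-1 s^-1")       # recovers 6.397
```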

29 citations


Journal ArticleDOI
TL;DR: An improved dynamic window approach is proposed, which takes into account the relation between the size of the mobile robot and the free space between obstacles, to improve the ability of sensing and prediction of the environment.
Abstract: The dynamic window approach has the drawback that it may result in local minima and nonoptimal motion decisions for obstacle avoidance, because it does not consider the size constraint of a mobile robot. Thus, an improved dynamic window approach is proposed that takes into account the relation between the size of the mobile robot and the free space between obstacles. A laser range finder is employed to improve the ability to sense and predict the environment, in order to avoid being trapped in a U-shaped obstacle such as a box canyon. By applying the proposed method, the local minima problem can be solved and the optimal path can be obtained. The effectiveness and superiority of the method are demonstrated by theoretical analysis and simulations.
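To illustrate the size-aware idea in the abstract, here is a toy evaluation of candidate motions in dynamic-window style, where a passage is only considered traversable if the free gap exceeds the robot footprint plus a margin. The objective weights and the gap check are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dwa_score(heading_err, clearance, speed, gap_width, robot_radius,
              alpha=0.8, beta=0.1, gamma=0.1, margin=0.1):
    """Dynamic-window-style objective G = alpha*heading + beta*clearance + gamma*speed.
    A candidate motion through a gap narrower than the robot footprint (plus a
    safety margin) is rejected outright -- a toy version of the size constraint."""
    if gap_width <= 2.0 * robot_radius + margin:
        return -np.inf                        # untraversable passage
    heading = 1.0 - abs(heading_err) / np.pi  # prefer headings toward the goal
    return alpha * heading + beta * clearance + gamma * speed

# pick the best candidate (heading_err, clearance, speed, gap_width) tuple
best = max([(0.2, 1.0, 0.5, 0.9), (0.0, 0.4, 0.6, 0.3)],
           key=lambda c: dwa_score(*c, robot_radius=0.2))
```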

Journal ArticleDOI
TL;DR: In this paper, a dual broadband antenna is proposed for WLAN communication modules supported by the IEEE 802.11 ac/b/g/n standard, which is based on two interspaced electrically small split-ring resonators, each of which is directly fed through the stepped impedance microstrip line with optimized electromagnetic coupling distance in between.
Abstract: In this paper, a dual broadband antenna is proposed for WLAN communication modules supporting the IEEE 802.11 ac/b/g/n standard. The antenna design is based on two interspaced electrically small split-ring resonators, each of which is directly fed through a stepped impedance microstrip line with an optimized electromagnetic coupling distance in between. The proposed dual-band antenna operates in the lower frequency band from 2.3 GHz up to 3 GHz with a 2.65 GHz center frequency (26.4% bandwidth) and in the higher frequency band from 4.7 GHz up to 6 GHz with a 5.35 GHz center frequency (24.3% bandwidth). The radiating section of the antenna is λ0/5.8 × λ0/10.2 at 2.4 GHz in the lower WLAN frequency band. The return loss is numerically calculated and experimentally measured, with good agreement between the two. The antenna gain values are 4.77 dBi, 2.9 dBi, and 2.45 dBi at 2.45 GHz, 5.2 GHz, and 5.8 GHz, respectively, with omnidirectional radiation patterns in the horizontal plane. The omnidirectional radiation patterns in both frequency bands allow the proposed WLAN antenna to be utilized for modern mobile broadband wireless network applications.
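The quoted fractional bandwidths follow directly from BW% = (f_high − f_low)/f_center × 100; this quick check reproduces the paper's numbers.

```python
# Fractional bandwidth: BW% = (f_high - f_low) / f_center * 100
low_band  = (3.0 - 2.3) / 2.65 * 100   # 26.4% (2.3-3 GHz band, 2.65 GHz center)
high_band = (6.0 - 4.7) / 5.35 * 100   # 24.3% (4.7-6 GHz band, 5.35 GHz center)
print(f"{low_band:.1f}%  {high_band:.1f}%")
```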

Journal ArticleDOI
TL;DR: This study presents an exact, closed-form expression for series resistance ($R_{s}$) that is solvable numerically while increasing the value of the ideality factor ($a)$ in small increments, so that four equations are formulated to compute the unknown parameters.
Abstract: A simple and accurate technique to compute the essential parameters needed for electrical characterization of photovoltaic (PV) modules is proposed. The single-diode model of a PV module, including series and shunt resistances, is considered both accurate and simple. However, the datasheets provided by manufacturers give the current-voltage ($I$-$V$) characteristics and the values of selected parameters only at standard test conditions (STC), i.e. solar radiation of 1000 W/m$^{2}$, air temperature of 25 $^{\circ}$C, and air mass AM = 1.5. Consequently, important parameters such as the series resistance ($R_{s}$), shunt resistance ($R_{sh}$), photocurrent ($I_{ph}$), and diode reverse-saturation current ($I_{o}$) are not provided by most manufacturers. Since these parameters are crucial for PV module characterization, our study presents an exact, closed-form expression for $R_{s}$ that is solved numerically while increasing the value of the ideality factor ($a$) in small increments, so that four equations are formulated to compute the unknown parameters. To validate the proposed approach, a set of $I$-$V$ curves was computed for different values of $a$, and these results were compared against corresponding reference data for the BP SX150 and MSX60 PV modules. Average RMS errors of 0.035 for the BP SX150 and 0.014 for the MSX60 between the reference data and the computed data suggest that the proposed approach can be used as an alternative method to quantify important missing parameters required for the characterization of PV modules.
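For reference, the single-diode model discussed here has the implicit form I = Iph − Io[exp((V + I·Rs)/(a·Ns·Vt)) − 1] − (V + I·Rs)/Rsh. The sketch below solves it for I at one voltage with SciPy root bracketing; all parameter values are placeholders chosen for illustration, not the values extracted in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def pv_current(V, Iph=5.0, Io=1e-9, Rs=0.3, Rsh=150.0, a=1.3, Ns=72, T=298.15):
    """Solve the implicit single-diode equation for the module current I:
    I = Iph - Io*(exp((V + I*Rs)/(a*Ns*Vt)) - 1) - (V + I*Rs)/Rsh.
    All parameter values are placeholders, not the paper's extracted ones."""
    Vt = 1.380649e-23 * T / 1.602176634e-19        # thermal voltage kT/q
    f = lambda I: Iph - Io * (np.exp((V + I * Rs) / (a * Ns * Vt)) - 1) \
                  - (V + I * Rs) / Rsh - I
    return brentq(f, -1.0, Iph + 1.0)              # bracketed root find

I_at_30V = pv_current(V=30.0)
```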


Journal ArticleDOI
TL;DR: The proposed radiation hardened SRAM cells are capable of fully tolerating single event upsets and show a high degree of robustness against single event multiple upsets (SEMUs), and compared with previous SRAM cells, RATF1 and RATF2 offer lower area and power overhead.
Abstract: In this article, two soft error tolerant SRAM cells, called RATF1 and RATF2, are proposed and evaluated. The proposed radiation hardened SRAM cells are capable of fully tolerating single event upsets (SEUs). Moreover, they show a high degree of robustness against single event multiple upsets (SEMUs). Compared with previous SRAM cells, RATF1 and RATF2 offer lower area and power overhead. HSPICE simulation results, through comparison with some prominent and state-of-the-art soft error tolerant SRAM cells, show that our proposed robust SRAM cells have smaller area overhead (RATF1 offers 58% smaller area than DICE), lower power-delay product (RATF1 offers 231.33% and RATF2 offers 74.75% lower PDP compared with DICE), greater soft error robustness, and larger noise margins.

Journal ArticleDOI
TL;DR: The outcome of this research is the identification of different combinations of traits for achieving high yield in the paddy crop, and the final rules extracted are useful for farmers to make proactive and knowledge-driven decisions before harvest.
Abstract: Agriculture has a great impact on the economy of developing countries. To provide food security for people, there is a need to improve the productivity of major crops. Rapidly changing climatic conditions and the cost of investment in agriculture are major barriers for smallholder farmers. The proposed research aims to develop a predictive model that provides a cultivation plan for farmers to obtain high yields of paddy crops using data mining techniques. Unlike statistical approaches, data mining techniques extract hidden knowledge through data analysis. The data set used for the mining process in this research is real data collected from farmers cultivating paddy along the Thamirabarani river basin. K-means clustering and various decision tree classifiers are applied to meteorological and agronomic data for the paddy crop, and the performance of the various classifiers is validated and compared. Based on experimentation and evaluation, it is concluded that the random forest classifier outperforms the other classification methods; moreover, classification of the clustered data provides good classification accuracy. The outcome of this research is the identification of different combinations of traits for achieving high yield in the paddy crop. The final rules extracted in this research are useful for farmers to make proactive and knowledge-driven decisions before harvest.
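One plausible reading of the described pipeline (cluster the records, then classify the clustered data for yield) can be sketched with scikit-learn as below; the feature dimensions, cluster count, and synthetic data are illustrative assumptions, since the Thamirabarani data set is not available here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 6))                  # synthetic meteorological/agronomic features
y = rng.integers(0, 2, 300)               # synthetic high/low yield labels

# Step 1: cluster the records; the cluster id becomes an extra input feature.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, clusters])

# Step 2: random forest classification on the clustered data.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_aug, y)
print(clf.score(X_aug, y))
```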

Journal ArticleDOI
TL;DR: This work focuses on the evaluation of the randomness of data to give a unified result that considers all statistical information obtained from different tests in the suite, and an efficient subsuite that contains five statistical randomness tests is proposed.
Abstract: Random numbers and random sequences are used to produce vital parts of cryptographic algorithms, such as encryption keys, and therefore the generation and evaluation of random sequences in terms of randomness are vital. Test suites consisting of a number of statistical randomness tests are used to detect the nonrandom characteristics of sequences. Construction of a test suite is not an easy task. On one hand, the coverage of a suite should be wide; that is, it should compare the sequence under consideration with true random sequences from many different points of view. On the other hand, an overpopulated suite is expensive in terms of running time and computing power. Unfortunately, this trade-off is not addressed in detail in most of the suites in use. An efficient suite should avoid similar tests while still containing sufficiently many. A single statistical test gives one measure of the randomness of the data; a collection of tests in a suite gives a collection of measures. Obtaining a single value from this collection of measures is a difficult task, and so far there is no conventional or strongly recommended method for this purpose. This work focuses on evaluating the randomness of data to give a unified result that considers all the statistical information obtained from the different tests in the suite. A natural starting point for research in this direction is to investigate correlations between test results and to study the independence of each test from the others. Since it is complicated enough to work even with one test function, theoretical investigation of the dependence among many of them in terms of conditional probabilities is a much more difficult task. With this motivation, this work seeks experimental results that may lead to theoretical results in future work. As experimental results may reflect properties of the data set under consideration, various types of large data sets are studied in the hope of obtaining results that give clues about the theoretical behavior. For the collection of statistical randomness tests, the tests in the NIST test suite are considered; those that can be applied to sequences shorter than 38,912 bits are analyzed. Based on the correlation of the tests at extreme values, the dependencies of the tests are found. A new concept, the coverage efficiency of a test suite, is defined, and using this concept, the most efficient, the least efficient, and the optimal subsuites of the NIST suite are determined. Moreover, the marginal benefit of each test, which also helps one to understand the contribution of each individual test to the coverage efficiency of the NIST suite, is found. Furthermore, an efficient subsuite that contains five statistical randomness tests is proposed.
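To make the correlation idea concrete, the toy sketch below applies two simple statistics to many sequences and correlates their outcomes at extreme values. The monobit p-value follows the standard NIST formula, while the runs statistic is a crude illustrative stand-in rather than the NIST runs test.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
seqs = rng.integers(0, 2, (1000, 4096))          # 1000 candidate bit sequences

def monobit_p(bits):
    """NIST frequency (monobit) test p-value: erfc(|S|/sqrt(2)), S = |sum(+-1)|/sqrt(n)."""
    s = abs(2 * int(bits.sum()) - bits.size) / sqrt(bits.size)
    return erfc(s / sqrt(2))

def runs_fraction(bits):
    """Crude runs statistic (fraction of bit transitions) -- illustrative only."""
    return np.count_nonzero(np.diff(bits)) / (bits.size - 1)

p1 = np.array([monobit_p(b) for b in seqs])
p2 = np.array([runs_fraction(b) for b in seqs])
extreme = (p1 < 0.05) | (p1 > 0.95)              # restrict to extreme outcomes
print(np.corrcoef(p1[extreme], p2[extreme])[0, 1])
```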

Journal ArticleDOI
TL;DR: Simulation results show that the proposed method is more robust to fault resistance compared to previous studies and determines fault occurrence, faulted phase, and fault section under different fault conditions such as fault type, fault location, fault resistance, fault inception angle, source impedance, reverse power flow, different levels of compensation, and different locations of the compensator in the line.
Abstract: In this paper, a novel protection method based on the time-time (TT) transform for thyristor-controlled series-compensated lines is presented. First, current signals at the sending and receiving ends are retrieved and processed through the time-time domain transform (TT-transform), and a TT-matrix is produced. A proposed index is then compared with a defined threshold (THD) in order to determine fault occurrence and faulted phases. Within less than three cycles of the fault inception, a tripping signal can be sent, which is acceptable for the speed of digital relays. After faulted phase selection, considering the TT-matrices of the faulted phases at both the sending and receiving ends, another index is introduced for estimation of the fault section. Simulation results show that this approach determines fault occurrence, faulted phase, and fault section under different fault conditions such as fault type, fault location, fault resistance, fault inception angle, source impedance, reverse power flow, different levels of compensation, and different locations of the compensator in the line. The test results in the presence of high noise (with SNR down to 15 dB) confirm the effectiveness of the proposed method. The results also indicate that the proposed method is more robust to fault resistance compared to previous studies.

Journal ArticleDOI
TL;DR: A novel feature selection approach, based on extreme learning machines (ELMs) and the coefficient of variation (CV), where the most relevant features are identified by ranking each feature with the coefficient obtained through ELM divided by CV.
Abstract: Feature selection is the method of reducing the size of data without degrading accuracy. In this study, we propose a novel feature selection approach based on extreme learning machines (ELMs) and the coefficient of variation (CV). In the proposed approach, the most relevant features are identified by ranking each feature with the coefficient obtained through the ELM divided by the CV. The accuracies and computational costs obtained with the features selected via the proposed approach on 9 classification and 26 regression benchmark data sets were compared to those obtained with all features, as well as with the features selected by a wrapper and a filtering method. The accuracies obtained with the proposed approach were generally higher than when using all features. Furthermore, high feature reduction ratios were obtained with the proposed approach, including on the epilepsy, liver, EMG, shuttle, abalone, and stock data sets, where the feature reduction ratios were 90.48%, 90%, 70.59%, 66.67%, 75%, and 77.78%, respectively. The approach is an extremely fast process that is independent of the employed machine-learning methods.
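Since the abstract defines the ranking score as an ELM-derived coefficient divided by the coefficient of variation, here is one possible reading sketched in NumPy. The back-projection of the ELM output weights onto input features is our assumption about how a per-feature coefficient is formed, not the paper's exact recipe.

```python
import numpy as np

def elm_cv_rank(X, y, n_hidden=50, seed=0):
    """Rank features by |ELM coefficient| / CV (one possible reading).
    ELM: fixed random hidden layer + least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    H = np.tanh(X @ W)                                # hidden activations
    beta = np.linalg.pinv(H) @ y                      # output weights (least squares)
    feat_coef = np.abs(W @ beta)                      # per-feature coefficient (assumption)
    cv = X.std(axis=0) / np.abs(X.mean(axis=0))       # coefficient of variation
    return np.argsort(-(feat_coef / cv))              # most relevant first

X = np.random.default_rng(1).random((200, 10)) + 0.5  # keep feature means away from zero
y = X[:, 0] * 2.0 + X[:, 3]                           # synthetic target
print(elm_cv_rank(X, y)[:3])                          # top-ranked feature indices
```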

Journal ArticleDOI
TL;DR: An adaptive neuro-fuzzy inference system with fuzzy c-means clustering (FCM-ANFIS) for Android malware classification and results show that the proposed approach achieves the highest classification accuracy of 91%, with lowest false positive and false negative rates.
Abstract: Mobile phones have become an essential part of our lives because we depend on them to perform many tasks, and they contain personal and important information. The continuous growth in the number of Android mobile applications has resulted in an increase in the number of malware applications, which are real threats and can cause great losses; there is thus an urgent need for efficient and effective Android malware detection techniques. In this paper, we present an adaptive neuro-fuzzy inference system with fuzzy c-means clustering (FCM-ANFIS) for Android malware classification. The proposed approach utilizes the FCM clustering method to determine the optimum number of clusters and cluster centers, which improves the classification accuracy of the ANFIS. The most significant permissions used in Android applications, selected by the information gain algorithm, are used as input to the proposed approach (FCM-ANFIS) to classify applications as either malware or benign. The experimental results show that the proposed approach achieves the highest classification accuracy of 91%, with the lowest false positive and false negative rates of 0.5% and 0.4%, respectively.

Journal ArticleDOI
TL;DR: This table-like structure-based greedy view selection (TSGV) method is evaluated using the queries of an analytical database, and the query-processing and view maintenance costs of the selected subset are both considered in this evaluation.
Abstract: Since a data warehouse deals with huge amounts of data and complex analytical queries, online processing and answering of users' queries in data warehouses can be a serious challenge. Materialized views are used in on-line analytical processing to speed up query processing rather than accessing the database directly. Since the large number and high volume of views prevent all of the views from being stored, selecting a proper subset of views for materialization is inevitable, and an appropriate method for selecting the optimal subset plays an essential role in increasing the efficiency of responding to data warehouse queries. In this paper, a greedy materialized view selection algorithm is presented, which selects a proper set of views for materialization from a novel table-like structure. The information in this table-like structure is extracted from a multiple view processing plan. This table-like structure-based greedy view selection (TSGV) method is evaluated using the queries of an analytical database, and both the query-processing and view maintenance costs of the selected subset are considered in this evaluation. The experimental results show that TSGV performs better than previously presented methods in terms of time.
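The greedy pattern behind such selection algorithms (pick, at each step, the view with the best net benefit, i.e. query-cost saving minus maintenance cost) can be sketched generically as below. This is the classic greedy template, not the TSGV algorithm itself, and the cost functions are placeholders.

```python
def greedy_view_selection(views, k, saving, maint_cost):
    """Generic greedy materialized view selection: repeatedly add the view with
    the highest net benefit = query-cost saving (given views already chosen)
    minus its maintenance cost. Placeholder template, not TSGV itself."""
    chosen = []
    for _ in range(k):
        candidates = [v for v in views if v not in chosen]
        if not candidates:
            break
        best = max(candidates, key=lambda v: saving(v, chosen) - maint_cost(v))
        if saving(best, chosen) - maint_cost(best) <= 0:
            break                                  # no remaining view still pays off
        chosen.append(best)
    return chosen

# usage sketch with toy cost functions
sel = greedy_view_selection(["v1", "v2", "v3"], 2,
                            saving=lambda v, c: {"v1": 10, "v2": 7, "v3": 2}[v],
                            maint_cost=lambda v: 3)
```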

Journal ArticleDOI
TL;DR: It was found that using BIST100 stock features boosts classification results for all stocks in terms of accuracy; since the expanded feature space may contain noisy or irrelevant features, feature selection methods are utilized to select the most informative ones.
Abstract: Stock market prediction is a very noisy problem, and the use of any additional information to increase accuracy is necessary. In this paper, for the stock daily return prediction problem, the set of features is expanded to include indicators not only for the stock to be predicted itself but also for a set of other stocks and currencies. Afterwards, different feature selection and classification methods are utilized for prediction. The daily close returns of the 3 most traded stocks (GARAN, THYAO, and ISCTR) in Borsa Istanbul (BIST) are predicted using indicators computed on those stocks, indicators for all the other stocks listed in the BIST100 index, and indicators on dollar-gold prices. Twenty-five different indicators on daily stock prices are computed to form feature vectors for each trading day, and these feature vectors are assigned class labels according to the daily close returns. Expanding the feature space with BIST100 stock features results in a high-dimensional feature space with possibly noisy or irrelevant features; therefore, feature selection methods are utilized to select the most informative features. In order to determine the relevance scores of features, fast filter-based methods, gain ratio and relief, are used. Experiments are performed based on individual stock features, dollar-gold features (DG), BIST100 stock features (BIST100), and a combination of BIST100 and DG, with and without feature selection. Using gain ratio feature selection with a gradient boosting machine (GBM), the movements of GARAN stock were predicted with an accuracy of 0.599 and an F-measure of 0.614. For THYAO, relief feature selection with the GBM gave an accuracy of 0.558, and for ISCTR, gain ratio feature selection with logistic regression achieved an accuracy of 0.581. It was found that using BIST100 stock features boosts classification results for all stocks in terms of accuracy.

Journal ArticleDOI
TL;DR: The solution in the form of safe fonts is a universal method that protects processed information against electromagnetic penetration and can be used for the protection of the analog VGA standard, the digital DVI standard, and printers with one-diode and two-diode laser systems.
Abstract: Due to the widespread use of computer equipment, electromagnetic protection of processed data is still an issue. Structurally modified commercial equipment is used to protect devices against this phenomenon, but the acquisition costs of such modified devices are enormous. However, the market offers information devices with very low susceptibility to electromagnetic infiltration: computer printers that, during the photoconductor exposure process, use a slat with hundreds of LEDs arranged in several rows. Safe fonts are a new solution for protecting sensitive information against electromagnetic infiltration; their use not only increases resistance to electromagnetic eavesdropping but can make it impossible. Safe fonts are a universal method that protects processed information against electromagnetic penetration, and they are effective not only for printers with LED slats: the solution can also be used for the protection of the analog VGA standard, the digital DVI standard, and printers with one-diode and two-diode laser systems.

Journal ArticleDOI
TL;DR: The automated diagnosis of iris nevus is described using neural network-based systems for the classification of eye images as “nevus affected” and “unaffected”, which can be used satisfactorily for diagnosis or to reinforce the confidence in manual-visual diagnosis by medical experts.
Abstract: This work presents the diagnosis of iris nevus using a convolutional neural network (CNN) and a deep belief network (DBN). Iris nevus is a pigmented growth (tumor) found in the front of the eye or around the pupil. Racial and environmental factors affect the iris color (e.g., blue, hazel, brown) of patients; hence, pigmented growths may be masked by the eye background or iris. In this work, some image processing techniques are applied to the images to reinforce areas of interest, after which the considered classifiers are trained. We describe the automated diagnosis of iris nevus using neural network-based systems for the classification of eye images as "nevus affected" or "unaffected". Recognition rates of 93.35% and 93.67% were achieved for the CNN and DBN, respectively. Hence, the systems described in this work can be used satisfactorily for diagnosis or to reinforce confidence in manual-visual diagnosis by medical experts.

Journal ArticleDOI
TL;DR: The proposed MVO fuzzy-PIDF controller exhibits the best performance under different operating conditions in terms of settling times, maximum overshoot, and values of cost function, i.e. integral time absolute error.
Abstract: In this paper, a fuzzy PID controller with a derivative filter (fuzzy-PIDF), tuned by the multiverse optimizer (MVO), is proposed for the load frequency control (LFC) of a two-area multisource hydrothermal power system. The superiority of the MVO algorithm is demonstrated by comparing the system's LFC performance with integral and fuzzy-PIDF controllers, both optimized using MVO, as well as with some recent heuristic optimization techniques such as the ant lion optimizer, gray wolf optimizer, differential evolution, bacterial foraging optimization algorithm, and particle swarm optimization. To the best of the authors' knowledge, the MVO technique has not yet been reported for LFC studies. Among the many controllers implemented here for comparison, the proposed MVO fuzzy-PIDF controller exhibits the best performance under different operating conditions in terms of settling times, maximum overshoot, and values of the cost function, i.e. the integral time absolute error (ITAE). Furthermore, the robustness of the proposed control scheme is investigated against variations of system parameters within ±10%, along with random step load disturbances; the scheme is not very sensitive to parametric variations and keeps providing effective performance even under ±10% variations in system parameters. System modeling and simulations are carried out using MATLAB/Simulink.
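The cost function named here, the integral time absolute error, is ITAE = ∫ t·|e(t)| dt. The snippet below evaluates it numerically for a hypothetical frequency-deviation error signal; the signal itself is invented for illustration.

```python
import numpy as np

# ITAE = integral of t * |e(t)| dt, the cost minimized when tuning the controller.
t = np.linspace(0.0, 20.0, 2001)                  # simulation horizon (s)
e = 0.02 * np.exp(-0.5 * t) * np.cos(3.0 * t)     # hypothetical frequency deviation (Hz)
itae = np.trapz(t * np.abs(e), t)
print(f"ITAE = {itae:.5f}")
```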

Journal ArticleDOI
TL;DR: The proposed floating capacitance multiplier can realize high capacitor values with two small-valued resistors and is more suitable for integrated circuit technology because it has only grounded passive components without needing any critical passive component matching conditions.
Abstract: In this paper, a floating capacitance multiplier including two multioutput differential voltage current conveyors, two grounded resistors, and a grounded capacitor is proposed. The proposed floating capacitance multiplier can realize high capacitance values with two small-valued resistors. It is well suited to integrated circuit technology because it uses only grounded passive components and needs no critical passive component matching conditions. Its performance is examined through several simulations using the SPICE program. As an application example, a third-order notch filter using three resistors and three capacitors is given.

Journal ArticleDOI
TL;DR: This paper shows that sliding mode (SM) control can give better performance due to its robustness, but it has an unwanted chattering problem that is harmful to the plant; hence, a hybrid controller is presented in which a fuzzy controller is combined with the SM controller through a fuzzy supervisory system.
Abstract: In many applications of DC motor speed control systems, PID controllers are mostly used. Such control schemes do not perform well when input and load torque disturbances are applied. This paper shows that sliding mode (SM) control can give better performance due to its robustness, but it has an unwanted chattering problem associated with it, which is harmful to the plant. To limit this problem, a hybrid controller is presented in which a fuzzy controller is combined with the SM controller by a fuzzy supervisory system, in such a way that the SM controller works in the transient state and the fuzzy controller in the steady state. The simulation results show that the SM controller is more robust than PID and fuzzy control, and that the hybrid control gives better overall system performance and reduced chattering compared to SM control. Simulations are done using MATLAB software.
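Chattering comes from the discontinuous switching term of the SM law; a common illustration is to compare u = −k·sign(s) with a smoothed boundary-layer version, which is the flavor of smoothing a fuzzy supervisor can provide. The sketch below is generic, not the paper's hybrid controller.

```python
import numpy as np

def smc_switch(s, k=5.0, phi=None):
    """Switching term of a sliding mode controller.
    phi=None  -> u = -k*sign(s): discontinuous, causes chattering.
    phi=0.05  -> boundary-layer saturation: continuous near s=0, less chattering."""
    if phi is None:
        return -k * np.sign(s)
    return -k * np.clip(s / phi, -1.0, 1.0)

s = np.linspace(-0.2, 0.2, 5)
print(smc_switch(s))             # hard switching
print(smc_switch(s, phi=0.05))   # smoothed switching
```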

Journal ArticleDOI
TL;DR: Different reactive power optimization algorithms are applied and compared on standard test systems; the comparison shows that one of the algorithms outperforms the others, thereby providing high stability to the system.
Abstract: Reactive power optimization (RPO) in a power system is a fundamental requirement for reducing power losses. By enforcing the requirement of a near-unity power factor in the RPO problem, the reduction of system losses is ensured. The pivotal requirements of a power system include a proper compensation technique and a methodology for stable reactive power compensation. The work proposed in this paper applies different reactive power optimization algorithms and compares them, using the IEEE 6-bus, 14-bus, and 30-bus systems to test the optimization techniques. The results indicate that one of the compared optimization algorithms outperforms the others, thereby providing high stability to the system, and the algorithm ensures that the voltage profile of the system is confined within the permissible limits.

Journal ArticleDOI
TL;DR: A novel framework based on model-based clustering is introduced to fight against phishing websites and it is revealed that the proposed algorithm has high accuracy.
Abstract: Phishing websites are fake websites developed by ill-intentioned people to imitate real and legal websites. Most of these web pages have high visual similarity to the originals in order to deceive victims, who may give their bank account details, passwords, credit card numbers, and other important information to the designers and owners of the phishing websites. The increasing number of phishing websites has become a great challenge in e-business in general and in electronic banking in particular. In the present study, a novel framework based on model-based clustering is introduced to fight phishing websites. First, a model is built from websites that have already been identified as phishing websites, as well as from real websites belonging to the original owners. Each new website is then compared with the model and assigned to one of the model clusters with a probability. The analyses reveal that the proposed algorithm has high accuracy.
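Model-based clustering typically means fitting a finite mixture model and assigning new points by posterior probability. A minimal scikit-learn sketch along those lines is shown below, with random vectors standing in for website features; the paper's actual features and model family are not specified here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.random((400, 8))                 # stand-in feature vectors for known websites

# Fit a two-component mixture (e.g., phishing-like vs. legitimate-like clusters).
gm = GaussianMixture(n_components=2, random_state=0).fit(X)

new_site = rng.random((1, 8))
posterior = gm.predict_proba(new_site)   # probability of membership in each cluster
label = gm.predict(new_site)
```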

Journal ArticleDOI
TL;DR: Experiments are presented to show that there is a substantial increase in accuracy of the recommendations produced by DBSCAN based on proximity analysis of in-text citations compared to traditional bibliographic coupling and content-based approaches.
Abstract: Research paper recommendation has been a hot research area for the last few decades, and numerous paper recommendation approaches have been proposed, including methods based on metadata, content similarity, collaborative filtering, and citation analysis. Citation analysis methods include bibliographic coupling and co-citation analysis. Much research has been done in the area of co-citation analysis; researchers have also performed experiments using the proximity of in-text citations in co-citation analysis and have found that it improves the accuracy of paper recommendation. In co-citation analysis, similarity is discovered based on the frequency with which papers are co-cited in different research papers, and those citing papers may belong to different areas; however, when proximity is used to calculate co-citation, the accuracy of recommendations improves significantly. Bibliographic coupling measures coupling strength based on the references common to two papers: a large number of common references means that the two papers belong to the same area, unlike in co-citation analysis, where the citing papers may belong to different areas. Based on the observation that proximity analysis improves accuracy in co-citation analysis, this paper investigates whether the accuracy of paper recommendation can be further improved by using proximity analysis in bibliographic coupling. The proposed approach extends traditional bibliographic coupling by exploiting the proximity of the in-text citations of bibliographically coupled articles, clustering the in-text citations with a density-based algorithm called DBSCAN. Experiments on a data set of research papers show a substantial increase in the accuracy of the recommendations produced by the DBSCAN-based proximity analysis of in-text citations, compared to traditional bibliographic coupling and content-based approaches.
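As a concrete picture of the clustering step, the sketch below runs scikit-learn's DBSCAN over in-text citation positions (character offsets), so citations that appear close together in the text fall into the same cluster. The offsets, eps, and min_samples are invented for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical character offsets of in-text citations within one article.
offsets = np.array([[120], [135], [150], [2048], [2060], [5300]])

labels = DBSCAN(eps=50, min_samples=2).fit_predict(offsets)
# -> [0 0 0 1 1 -1]: the first three citations form one proximity cluster,
#    the next two another, and the isolated citation is noise (-1).
print(labels)
```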