
Showing papers in "Bulletin of Electrical Engineering and Informatics in 2023"


Journal ArticleDOI
TL;DR: In this paper, the authors discuss various machine learning (ML) and deep learning (DL) techniques for identifying and analyzing DDoS attacks, and compare the significant distinctions between ML and DL techniques to aid in determining when each should be used.
Abstract: The security of the internet is seriously threatened by distributed denial of service (DDoS) attacks. The purpose of a DDoS attack is to disrupt service and prevent legitimate users from using it by flooding the central server with a large number of messages or requests that cause it to reach its capacity and shut down. This attack is dangerous because it is carried out by numerous bots that are managed (infected) by a single botmaster using spoofed IP addresses, and it does not require a lot of work or special tools. For the purpose of identifying and analyzing DDoS attacks, this paper discusses various machine learning (ML) and deep learning (DL) techniques. Additionally, this study analyzes and compares the significant distinctions between ML and DL techniques to aid in determining when one of these techniques should be used.

9 citations


Journal ArticleDOI
TL;DR: In this paper, a family of video anomaly detection approaches based on deep learning techniques is compared in terms of algorithms and models; state-of-the-art methods are grouped into categories based on the approach adopted to differentiate between normal and abnormal events and on the underlying assumptions.
Abstract: Detecting anomalous events in videos is one of the most popular computer vision topics. It is considered a challenging task in video analysis because its definition is subjective or context-dependent. Various approaches have been proposed to address anomaly detection problems, ranging from hand-crafted to deep learning methods. Much research has gone into determining the best approach for effectively detecting anomalies in video streams while maintaining a low false alarm rate. The results prove that approaches based on deep learning offer very interesting results in this field. In this paper, we review a family of video anomaly detection approaches based on deep learning techniques, which are compared in terms of their algorithms and models. Moreover, we group state-of-the-art methods into different categories based on the approach adopted to differentiate between normal and abnormal events and on the underlying assumptions. Furthermore, we also present publicly available datasets and evaluation metrics used in existing works. Finally, we provide a comparison and discussion of the results of various approaches on different datasets. This paper can be a good starting point for researchers to understand this field and review existing work related to this topic.

7 citations


Journal ArticleDOI
TL;DR: In this article, the authors used several machine learning algorithms to detect man-in-the-middle (MTM) and denial of service (DoS) attacks and protect connected devices, using related datasets obtained from the Kaggle website.
Abstract: Network attacks (i.e., man-in-the-middle (MTM) and denial of service (DoS) attacks) allow attackers to obtain and steal important data from physically connected devices in any network. This research used several machine learning algorithms to prevent these attacks and protect the devices, using related datasets obtained from the Kaggle website for MTM and DoS attacks. After obtaining the datasets, this research applied preprocessing techniques such as filling in missing values, because the data contain many null values. Then we used four machine learning algorithms to detect these attacks: random forest (RF), eXtreme gradient boosting (XGBoost), gradient boosting (GB), and decision tree (DT). To assess the performance of the algorithms, several classification metrics are used: precision, accuracy, recall, and F1-score. The research achieved the following results on both datasets: i) all algorithms detect the MTM attack with similar performance, greater than 99% on all metrics; and ii) all algorithms detect the DoS attack with similar performance, greater than 97% on all metrics. The results show that these algorithms detect MTM and DoS attacks very well, which prompts us to use them for protecting devices from these attacks.
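The workflow described in the abstract — impute missing values, train the four tree-based learners, and score them on the same metrics — can be sketched roughly as below. This is only an illustrative Python sketch: the file name, label column, and train/test split are assumptions, not details from the paper.

```python
# Hypothetical sketch of the described pipeline: fill missing values,
# train RF/XGBoost/GB/DT, report the four classification metrics.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

df = pd.read_csv("mitm_dataset.csv")            # placeholder path
df = df.fillna(df.median(numeric_only=True))    # simple missing-value handling
X, y = df.drop(columns=["label"]), df["label"]  # binary "label" column is assumed
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "RF": RandomForestClassifier(random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "GB": GradientBoostingClassifier(random_state=42),
    "DT": DecisionTreeClassifier(random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          accuracy_score(y_te, pred),
          precision_score(y_te, pred),
          recall_score(y_te, pred),
          f1_score(y_te, pred))
```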

6 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a review of various effective predictive analytics methods for diverse diseases such as heart disease, blood pressure, and diabetes, in the context of big data analytics in the healthcare industry.
Abstract: Healthcare organizations have adopted information technology in their management systems, and a huge volume of data is gathered by healthcare systems. Analytics offers tools and approaches for mining information from this complicated and huge data. The extracted information is converted into a form that assists decision-making in healthcare. The use of big data analytics helps achieve improved service quality and reduced cost. Both data mining and big data analytics are applied to pharmacovigilance and methodological perspectives. Using effective load balancing and as few resources as possible, the obtained data is made accessible to improve analysis. Data prediction analysis is performed throughout the patient data extraction procedure to obtain prospective outcomes. Data aggregation from huge datasets is used for patient information prediction. Most current studies attempt to improve the accuracy of patient risk prediction by using a commercial model facilitated by big data analytics. Privacy concerns, security risks, limited resources, and the difficulty of dealing with massive amounts of data have all slowed the adoption of big data analytics in the healthcare industry. This paper reviews various effective predictive analytics methods for diverse diseases such as heart disease, blood pressure, and diabetes.

5 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on stock market analysis, along with the methodologies and algorithms used to understand the trends and the corresponding results of those studies, and summarize and analyze the parameters that highly influence the understanding of stock market trends.
Abstract: In this work we focus on listing various works on the understanding of different parameters and contexts to obtain an overview of stock market analysis using machine learning (ML) and deep learning (DL) models. The work focuses on stock market analysis along with the methodologies and algorithms used to understand the trends and the corresponding results reported in those studies. The importance of this work is to summarize and analyze the parameters that highly influence the understanding of stock market trends. The outcome of the work is an understanding of the important factors that directly and indirectly influence stock value rises and drops. The work highlights the methodologies and algorithms used for stock market data analysis and for efficient and effective recommendation of stable stocks to customers. Further, we list the research gaps and future enhancements left open by earlier works. The work also points out the limitations of some existing studies, along with the significance of hyperparameter techniques for clearly identifying the features through which better analysis of the data becomes possible.

4 citations


Journal ArticleDOI
TL;DR: A review of the most recent findings on the development of carbon-based nanomaterials for use in biosensing, drug delivery, and cancer therapy, among other applications, is presented in this paper.
Abstract: The development of new technologies has helped tremendously in delivering timely, appropriate, acceptable, and reasonably priced medical treatment. Because of developments in nanoscience, a new class of nanostructures has emerged. Nanomaterials, because of their small size, display exceptional physicochemical properties such as enhanced absorption and reactivity, increased surface area, molar extinction coefficients, tunable characteristics, quantum effects, and magnetic and optical properties. Researchers are interested in carbon-based nanomaterials due to their unique chemical and physical properties, which vary in thermodynamic, biomechanical, electrical, optical, and structural aspects. Due to their inherent properties, carbon nanomaterials, including fullerenes, graphene, carbon nanotubes (CNTs), and carbon nanofibers (CNFs), have been intensively studied for biomedical applications. This article is a review of the most recent findings on the development of carbon-based nanomaterials for use in biosensing, drug delivery, and cancer therapy, among other applications.

4 citations


Journal ArticleDOI
TL;DR: In this article, a portable healthcare system in an IoT environment controllable via a smartphone application is proposed; it can track physiological indicators of a patient's body as well as the environmental conditions where the patient lives in real time, and auto-manage databases.
Abstract: In the last decade, healthcare systems have played an effective role in improving medical services by monitoring and diagnosing patients' health remotely. These systems, either in hospitals or in other health centers, have experienced significant growth with emerging technologies, and they are becoming of great interest to many countries worldwide. Portable healthcare monitoring systems (HMS) depend on internet of things (IoT) technology due to its effectiveness and reliability in several sectors, including telemedicine. This paper proposes a portable healthcare system in an IoT environment controllable via a smartphone application that aims to facilitate utilization. The proposed system can track physiological indicators of a patient's body as well as the environmental conditions where the patient lives in real time, and auto-manage databases. Moreover, this paper presents a comparison between three servers with respect to data transfer speed from the proposed system to the servers.

3 citations


Journal ArticleDOI
TL;DR: In this paper, an IoT-based system for real-time monitoring, control, and management of fish farms is presented. The design of such a system is based on measuring different types of variables and using the information to control fish growth and increase productivity.
Abstract: Fish farming is still controlled and managed in the traditional way where water quality and fish feeding are manually controlled. There is a need to use computer and communication technology in fish farms for remote monitoring and control. This paper deals with the design and implementation of an internet of things (IoT) based system for real-time monitoring, control and management of fish farming. The design of such a system is based on measuring different types of variables and using the information to control fish growth and increase productivity. Each fish pond is a node in a wireless sensor network. The node contains an embedded microcontroller connected to a set of sensors and actuators and a wireless communication module. Two fuzzy controllers are designed to control the water quality in the ponds as well as the environment using five sensors in each pond plus three environmental sensors. Practical results indicate the accuracy of the measurement system compared to the results obtained from commercial devices used on the farm. These results also showed that the proposed approach achieves the best performance of the real-time monitoring and control system in fish ponds.
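As a rough illustration of the kind of rule-based control the abstract mentions (fuzzy controllers driven by pond and environmental sensors), the sketch below implements a tiny hand-rolled fuzzy controller with triangular memberships and centroid defuzzification. The variable ranges, linguistic terms, and rules are invented for illustration; the paper's actual two controllers are not specified at this level of detail.

```python
import numpy as np

# Triangular membership function defined by points (a, b, c).
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def aeration_controller(temp_c, dissolved_o2):
    """Toy fuzzy controller: maps water temperature (deg C) and dissolved
    oxygen (mg/L) to an aerator duty cycle in [0, 100] %. Illustrative only."""
    # Fuzzify inputs (membership degrees for invented linguistic terms).
    temp_high = tri(temp_c, 26, 32, 38)
    o2_low = tri(dissolved_o2, 0, 2, 5)
    o2_ok = tri(dissolved_o2, 4, 7, 10)

    # Rule base: min for AND, each rule pointing to an output singleton.
    rules = [
        (min(temp_high, o2_low), 90.0),  # hot water AND low O2 -> strong aeration
        (o2_low, 70.0),                  # low O2 alone -> moderate aeration
        (o2_ok, 20.0),                   # O2 fine -> light aeration
    ]
    # Centroid (weighted average) defuzzification over the output singletons.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den

print(round(aeration_controller(30.0, 3.0), 1), "% duty cycle")
```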

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the fuzzy assignment problem with trapezoidal fuzzy parameters and solve it using the Dhouib-Matrix-AP1 heuristic, which is composed of three simple steps repeated only once in n iterations.
Abstract: The assignment problem is a famous problem in combinatorial optimization where several objects (tasks) are assigned to different entities (workers) with the goal of minimizing the total assignment cost. In real life, this problem often arises in practical applications with uncertain data; hence, this data (the assignment cost) is usually represented by fuzzy numbers. In this paper, the assignment problem is considered with trapezoidal fuzzy parameters and solved using the novel Dhouib-Matrix-AP1 (DM-AP1) heuristic. In fact, this research work presents the first application of the DM-AP1 heuristic to the fuzzy assignment problem, and a step-by-step application of DM-AP1 is detailed for clarity. DM-AP1 is composed of three simple steps repeated only once in n iterations. Moreover, DM-AP1 is enhanced with two techniques: a ranking function to order the trapezoidal fuzzy numbers and the min descriptive statistical metric to navigate through the search space. DM-AP1 is developed in the Python programming language and generates a user-friendly assignment network diagram.
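The two ingredients the abstract names — ranking trapezoidal fuzzy costs into crisp values and then navigating the cost matrix by minima — can be illustrated with the sketch below. This is not the authors' DM-AP1 (whose exact three steps are not given in the abstract): the average-of-four-points ranking function and the greedy min-based assignment are common textbook choices used here only as an assumption.

```python
import numpy as np

# A trapezoidal fuzzy number is (a, b, c, d); a common ranking function is the
# average of its four defining points (an assumption for this sketch).
def rank(tfn):
    a, b, c, d = tfn
    return (a + b + c + d) / 4.0

# 3x3 fuzzy cost matrix with invented values.
fuzzy_costs = [
    [(1, 2, 3, 4), (4, 5, 6, 7), (2, 3, 4, 5)],
    [(3, 4, 5, 6), (1, 1, 2, 3), (5, 6, 7, 8)],
    [(2, 2, 3, 3), (3, 3, 4, 4), (1, 2, 2, 3)],
]
crisp = np.array([[rank(c) for c in row] for row in fuzzy_costs])

# Greedy min-based assignment (illustrative only, not DM-AP1 itself): repeatedly
# pick the smallest remaining ranked cost and fix that worker/task pair.
assignment = {}
remaining_rows = set(range(crisp.shape[0]))
remaining_cols = set(range(crisp.shape[1]))
while remaining_rows:
    i, j = min(((r, c) for r in remaining_rows for c in remaining_cols),
               key=lambda rc: crisp[rc])
    assignment[i] = j
    remaining_rows.remove(i)
    remaining_cols.remove(j)

print(assignment)  # task assigned to each worker
```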

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose two integrated techniques, the internet of things (IoT) and short message service (SMS), to monitor the home for hazards and take the necessary actions, using a Raspberry Pi 4 Model B as the controller and a phone app for monitoring.
Abstract: Security and safety of homes remain critical issues in all countries. The majority of individuals have to deal with significant issues like fire and theft at some point in their lives, particularly families that spend most of their time and engage in most of their activities outside the house. There is a pressing need to use cutting-edge technology in order to upgrade and strengthen the security system, as well as to remotely monitor the living environment for potential mishaps. In this paper, we propose two integrated techniques, the internet of things (IoT) and short message service (SMS), to monitor the home for hazards and take the necessary actions, using a Raspberry Pi 4 Model B as the controller and a phone app for monitoring. A global system for mobile communications (GSM) module sends SMS alerts to users, and the Blynk application monitors the sensor data. The outcome demonstrates that the proposed system has the capability and high efficiency to monitor and detect undesirable situations in real time before disasters occur.
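A minimal sketch of the SMS-alert path is shown below, using pyserial to drive a serial GSM modem with standard AT commands from the Raspberry Pi. The serial port, phone number, and trigger condition are placeholders; the paper's actual sensor wiring and Blynk integration are not reproduced here.

```python
import time
import serial  # pyserial

PORT = "/dev/serial0"          # placeholder: depends on how the GSM module is wired
ALERT_NUMBER = "+10000000000"  # placeholder phone number

def send_sms_alert(message: str) -> None:
    """Send an SMS through a serial GSM modem using standard AT commands."""
    with serial.Serial(PORT, 9600, timeout=2) as modem:
        modem.write(b"AT+CMGF=1\r")                        # switch to text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{ALERT_NUMBER}"\r'.encode())
        time.sleep(0.5)
        modem.write(message.encode() + b"\x1a")            # Ctrl+Z ends the message
        time.sleep(2)

# Example trigger; the real system would read gas/flame/motion sensors on the Pi's GPIO.
if True:  # replace with an actual sensor condition
    send_sms_alert("Alert: hazard detected at home.")
```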

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors use MATLAB toolboxes to simulate 2D radiation patterns in the E-plane and H-plane of a bowtie dipole antenna with an adaptive finite impulse response (FIR) filter.
Abstract: As technology has evolved over the years, antennas used in various wireless systems have been in demand. Antennas play a great role in transmitting and receiving signals. As their applications are heavily used in many day-to-day activities, it is important to create a cost-efficient and quick way to analyze their performance, characteristics, and relationship to different variables. As many radiation pattern acquisition devices are expensive, this work proposes a quick, reliable, and cost-friendly way to simulate 2D patterns in the E-plane and H-plane of a bowtie dipole antenna with an adaptive finite impulse response (FIR) filter. In this study, the MATLAB software is utilized to simulate the radiation patterns of antennas with varying lengths. With the use of MATLAB toolboxes, the researchers aim to compare different antenna lengths and determine their relationship to and effect on the obtained 2D radiation pattern. If this method is successful, various antenna applications may be implemented in the future using the 2D radiation pattern results.
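For intuition about how element length reshapes the E-plane pattern, the sketch below plots the classic thin-dipole far-field pattern for a few lengths in Python. This is only the textbook thin-dipole approximation, not the bowtie/FIR workflow the authors implement in MATLAB.

```python
import numpy as np
import matplotlib.pyplot as plt

# Normalized E-plane far-field magnitude of a center-fed thin dipole of length L:
# |F(theta)| = |cos(kL/2 * cos(theta)) - cos(kL/2)| / |sin(theta)|, with k = 2*pi/lambda.
def dipole_pattern(theta, length_in_wavelengths):
    kL2 = np.pi * length_in_wavelengths  # k * L / 2
    num = np.abs(np.cos(kL2 * np.cos(theta)) - np.cos(kL2))
    return num / (np.abs(np.sin(theta)) + 1e-9)

theta = np.linspace(1e-3, np.pi - 1e-3, 721)
ax = plt.subplot(projection="polar")
for L in (0.5, 1.0, 1.5):  # element lengths in wavelengths
    F = dipole_pattern(theta, L)
    ax.plot(theta, F / F.max(), label=f"L = {L} lambda")
ax.legend()
plt.show()
```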

Journal ArticleDOI
TL;DR: In this article, AI's various and diverse applications in the epidemic are documented to help shape the future development and usage of these technologies, whether in the present or in future health crises.
Abstract: Machine learning algorithms immediately became critical in the battle against the COVID-19 outbreak. Diagnosis, medicine research, illness-spread prediction, and population surveillance all required the use of artificial intelligence (AI) methods as the epidemic grew in scope. To combat COVID-19, screening procedures that are both effective and rapid are required. During COVID-19, AI developers took the chance to show how AI can benefit all of mankind, and this became evident only after the employment of AI in the battle against COVID-19. AI's various and diverse applications in the epidemic are documented in this study. The purpose of this study is to help shape the future development and usage of these technologies, whether in the present or in future health crises.

Journal ArticleDOI
TL;DR: In this article, the authors propose two metrics for determining the connectedness of a disconnected graph of sensor nodes and the optimum deployment method for relay nodes in a network with the highest connectedness while staying within a budget restriction.
Abstract: Relay nodes are necessary to maintain scalability and increase longevity as the number of manufacturing industrial sensors grows. In a fixed-budget circumstance, however, the cost of purchasing the bare minimum of relay nodes to connect the network may exceed the budget. Although it is then hard to establish a network that connects all sensor nodes, a network with a high level of connectedness is still desirable in this case. This paper proposes two metrics for determining the connectedness of a disconnected graph of sensor nodes and for determining the optimum deployment method for relay nodes in a network with the highest connectedness while staying within a budget restriction. The metrics are the number of connected graph components and the size of the largest connected graph component. Prim's algorithm and an approximate minimum spanning tree algorithm are applied to construct the disconnected graph and discover the best relay node placement under these two criteria. Simulation findings suggest that, compared to the other metric, prioritizing the largest connected component in the disconnected graph can yield superior outcomes by deploying the fewest relay nodes while retaining the connectedness of the graph.
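Both proposed metrics, and a Prim-based spanning structure, are straightforward to evaluate for any candidate relay placement; a rough sketch using networkx is below. The graph and its edges are invented for illustration, and the paper's actual placement search is not reproduced.

```python
import networkx as nx

# Invented connectivity graph: nodes are sensors/relays, edges are feasible radio links.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (3, 4)])  # two disconnected clusters
G.add_node(5)                               # plus an isolated sensor

# Metric 1: number of connected components (lower is better).
num_components = nx.number_connected_components(G)

# Metric 2: size of the largest connected component (higher is better).
largest_size = len(max(nx.connected_components(G), key=len))
print(num_components, largest_size)         # 3 components, largest has 3 nodes

# Prim's algorithm yields a minimum spanning forest: the cheapest backbone
# inside each component of the (possibly disconnected) graph.
forest = nx.minimum_spanning_tree(G, algorithm="prim")
print(sorted(forest.edges()))
```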

Journal ArticleDOI
TL;DR: In this paper, the proposed developed cluster-based load-balanced protocol (DCLP) accounts for the ideal number of cluster heads (CHs) and prevents nodes near the base station (BS) from joining clusters, in order to achieve sufficient performance in reducing the energy consumed by the sensors.
Abstract: One of the most pressing issues in wireless sensor networks (WSNs) is energy efficiency. Sensor nodes (SNs) are used by WSNs to gather and send data. Cluster-based hierarchical routing techniques are widely considered for lowering a WSN's energy consumption. Because SNs are battery-powered, they face significant energy constraints, which makes designing an energy-efficient protocol challenging. Clustering algorithms drastically reduce each SN's energy consumption. The low-energy adaptive clustering hierarchy (LEACH) is considered a promising application-specific protocol architecture for WSNs. To extend the network's lifetime, the SNs must save as much energy as feasible. The proposed developed cluster-based load-balanced protocol (DCLP) accounts for the ideal number of cluster heads (CHs) and prevents nodes near the base station (BS) from joining clusters, in order to achieve sufficient performance in reducing the energy consumed by the sensors. The protocol is analyzed in MATLAB and compared with LEACH, a well-known cluster-based protocol, and its modified variant, distributed energy-efficient clustering (DEEC). The simulation results demonstrate that network performance, energy usage, and network longevity all improve significantly. They also demonstrate that employing cluster-based routing protocols can successfully reduce sensor network energy consumption while increasing the amount of data transferred over the network, hence achieving the goal of extending network lifetime.
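For reference, classic LEACH elects cluster heads stochastically: in round r, a node n becomes a CH if a random number in [0, 1] falls below the threshold below, where p is the desired fraction of CHs and G is the set of nodes that have not served as CH in the last 1/p rounds. This is the standard LEACH rule; the specific modifications DCLP makes to it are not spelled out in the abstract.

```latex
T(n) =
\begin{cases}
\dfrac{p}{1 - p\left(r \bmod \frac{1}{p}\right)}, & n \in G,\\[2mm]
0, & \text{otherwise.}
\end{cases}
```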

Journal ArticleDOI
TL;DR: In this article , a dual stage cascade controller PI-(1+PD) is adopted to maintain and control temperature in greenhouse environment based on a smart and intelligent gorilla troops optimization (GTO) method for evaluating the controller gains to enhance the system response by reducing the error value and minimizing the integral time absolute error (ITAE) fitness functions during simulation.
Abstract: In this paper, a dual-stage cascade controller PI-(1+PD) is adopted to maintain and control the temperature in a greenhouse environment, based on the smart and intelligent gorilla troops optimization (GTO) method for evaluating the controller gains to enhance the system response by reducing the error value and minimizing the integral time absolute error (ITAE) fitness function during simulation. The simulation results are obtained using MATLAB 2019 and then compared with two conventional controllers, proportional-integral (PI) and proportional-integral-derivative (PID), based on evaluation parameters for all controllers in terms of peak time, rise time, settling time, and overshoot, to show the efficiency of its response compared with the other controllers used.
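The ITAE criterion minimized by the optimizer is the standard time-weighted absolute-error integral, where e(t) is the temperature error and T the simulation horizon:

```latex
\mathrm{ITAE} = \int_{0}^{T} t\,\lvert e(t) \rvert \, dt
```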

Journal ArticleDOI
TL;DR: In this article, the authors propose an optical ring network-on-chip (ORNoC) architecture that is contention free, where a communication matrix is used to assign a single waveguide/wavelength pair to implement simultaneous communications.
Abstract: Network-on-chip (NoC) technology has now achieved a mature stage of development as a result of its use as a key component in many successful commercial devices. As multiprocessors continue to scale, these chip-based electronic networks find it increasingly challenging to meet their communication requirements within the power budget. Innovative technology is emerging with the aim of offering shorter latencies and greater bandwidth with lower power consumption. The ring topology provides superior results among all wavelength-routed topologies in on-chip optical networks. In this paper, we propose an optical ring network-on-chip (ORNoC) architecture that is contention free. A communication matrix is used to assign a single waveguide/wavelength pair to implement simultaneous communications. The design constraint for the proposed architecture is wavelength reuse on a single waveguide for multiple communications. We employ automatic wavelength/waveguide assignment for effective design and show that the proposed architecture can connect more nodes with fewer wavelengths per waveguide.

Journal ArticleDOI
TL;DR: Wang et al. conducted a bibliometric examination of privacy using the Scopus dataset and found that 1,226 articles on privacy and COVID-19 were published by authors from 69 countries.
Abstract: 1,226 articles on privacy and COVID-19 were published by authors from 69 countries in this year's issue. Privacy in the context of COVID-19 is now the focus of many researchers' attention. The present body of knowledge on privacy for COVID-19 digital technologies has been thoroughly analyzed, and a concise overview of the research status and future developments can be gleaned. This paper conducted a bibliometric examination of privacy using the Scopus dataset. Utilizing VOSviewer software, the relevant literature published on this topic was examined to determine the field's development history, research hotspots, and future directions. Over time, there has been a rise in the number of studies published on privacy for COVID-19, particularly after 2020, and the growth rate has been steadily increasing. Regarding published research, the United States and China lead the pack. These articles appeared primarily in English-language journals and conference proceedings. Privacy and COVID-19 research was mostly conducted within computer science. The most used terms in privacy and COVID-19 research were data privacy and humans. This paper examines the evolution of privacy and COVID-19 research and indicates current research priorities and future research goals. Furthermore, privacy and COVID-19 research seems to be a promising sphere, as this study identifies 26 domains.

Journal ArticleDOI
TL;DR: In this article, an EEG-based HMI system is proposed to assist patients with tetraplegia/quadriplegia in mentally controlling a motorized wheelchair so they can move freely and independently.
Abstract: Human machine interaction (HMI) allows persons to control and interact with devices, ranging from elementary apparatus that acquires input bio-signals to the control of various applications. Medical applications are amongst the most important applications of HMI. One of these medical applications is assisting fully/partially paralyzed patients to restore movement or move freely using exoskeletons or motorized wheelchairs. Helping patients with spinal cord injury or serious neurological diseases to restore their movements is a key objective for most researchers in this field. In this paper, an EEG-based HMI system is proposed to assist patients with tetraplegia/quadriplegia in mentally controlling a motorized wheelchair so they can move freely and independently. The EEG power spectrum (α, β, δ, θ, and γ bands) from the frontal lobe of the brain is recorded, filtered, and wirelessly sent to the wheelchair to control the directions and engine status. Four different experiments were conducted using the proposed system in order to validate its performance. Two different GUI scenarios (cross-shaped and horizontal bar) were used in the experiments. Results showed that the horizontal bar scenario was considered more user-friendly, while the cross-shaped one was more suitable for navigation. The implemented system can be equipped with modules and sensors such as GPS, ultrasound, and an accelerometer to improve the system's performance and reliability.
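Band powers of the kind the system relies on are commonly estimated from the EEG signal's power spectral density; the sketch below uses Welch's method from SciPy. The sampling rate and band limits are typical values assumed for illustration, not the paper's acquisition settings.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=FS):
    """Estimate power in each EEG band by integrating the Welch PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].sum() * df
    return powers

# Example on synthetic data standing in for one 10 s frontal-lobe channel.
rng = np.random.default_rng(0)
signal = rng.standard_normal(FS * 10)
print(band_powers(signal))
```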

Journal ArticleDOI
TL;DR: In this paper, selected propagation models suitable for 2.5 GHz, such as the Friis free space propagation model (FSPL), Stanford University Interim (SUI), Ericsson, Okumura, and COST-231 Hata models, are utilized for evaluation and compared with empirical data collected from long-term evolution (LTE) networks in urban areas.
Abstract: As demand for mobile wireless network services continues to rise, network planning and optimization significantly affect development. One of the critical elements in network planning is predicting pathloss; thus, propagation models are used to predict pathloss in indoor and outdoor environments. Choosing the appropriate propagation model for an area out of the existing models is essential for network planning. Selected propagation models suitable for 2.5 GHz, such as the Friis free space propagation model (FSPL), Stanford University Interim (SUI), Ericsson, Okumura, and COST-231 Hata models, are utilized for evaluation and compared with empirical data collected from long-term evolution (LTE) networks in urban areas. The best acceptable model is chosen based on statistical results such as the mean, standard deviation, and root mean square error (RMSE). The analytical results show that the COST-231 Hata model fits the empirical pathloss with a minimum RMSE of 5.27 dB.
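For reference, the urban COST-231 Hata pathloss (in dB) has the standard form below, where f is the carrier frequency in MHz, d the link distance in km, h_b the base-station antenna height in m, a(h_m) the mobile-antenna correction term, and C_m is 0 dB for medium cities/suburbs or 3 dB for metropolitan centres. The model is nominally specified for 1500-2000 MHz, so applying it at 2.5 GHz, as here, is an extrapolation.

```latex
PL_{\text{COST-231}} = 46.3 + 33.9\log_{10}(f) - 13.82\log_{10}(h_b) - a(h_m)
 + \bigl(44.9 - 6.55\log_{10}(h_b)\bigr)\log_{10}(d) + C_m
```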

Journal ArticleDOI
TL;DR: In this article, the authors propose a long short-term memory (LSTM) neural network model for signature verification, with input data from the USTig and CEDAR datasets.
Abstract: The signing process is a critical step that organizations take to ensure the confidentiality of their data and to safeguard it against unauthorized penetration or access. Within the last decade, offline handwritten signature research has grown in popularity as a common method for human authentication via biometric features. It is not an easy task, despite the importance of this method; the struggle in such a system stems from the inability of any individual to sign the same signature each and every time. Additionally, we are interested in the dataset's features that could affect the model's performance; thus, features are extracted from the signature images using the histogram of oriented gradients (HOG) technique. In this paper, we suggest a long short-term memory (LSTM) neural network model for signature verification, with input data from the USTig and CEDAR datasets. Our model's predictive ability is quite outstanding: the classification accuracy of the LSTM was 92.4% for USTig with a run-time of 1.67 seconds and 87.7% for CEDAR with a run-time of 2.98 seconds. Our proposed method outperforms other offline signature verification approaches such as K-nearest neighbour (KNN), support vector machine (SVM), convolutional neural network (CNN), speeded-up robust features (SURF), and Harris in terms of accuracy.
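A rough sketch of the HOG-then-LSTM idea is shown below: HOG descriptors are extracted from each signature image and fed to a small LSTM classifier. How the HOG vector is split into a time sequence for the LSTM, as well as the image size and layer sizes, are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
import tensorflow as tf

def hog_sequence(image, steps=16):
    """Resize a grayscale signature, extract HOG features, reshape into a sequence."""
    img = resize(image, (128, 256))
    feats = hog(img, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2), feature_vector=True)
    return feats[: (len(feats) // steps) * steps].reshape(steps, -1)

# Toy data standing in for genuine (1) vs. forged (0) signature images.
X = np.stack([hog_sequence(np.random.rand(150, 300)) for _ in range(8)])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
```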

Journal ArticleDOI
TL;DR: In this article, the authors present a web-based water quality monitoring and forecasting system for aquaculture, in which an Arduino- and Raspberry Pi-based water quality data acquisition tool is built.
Abstract: Water quality in fish tanks is essential to reduce fish mortality. Many factors affect the water quality, such as pH, dissolved oxygen, and temperature in fish tanks. Existing work has presented water quality monitoring systems for aquaculture, which are useful for automatic monitoring and for notifying of any incidence of decline in water quality. This enables fish farms to make interventions to reduce fish mortality. However, advanced monitoring through forecasting is necessary to ensure consistently optimum water quality. This paper presents a web-based water quality monitoring and forecasting system for aquaculture. First, a water quality forecasting model based on long short-term memory (LSTM) is designed and developed. The model is evaluated and fine-tuned using an existing public dataset. Second, the prototype of the water quality monitoring and forecasting system is developed. An Arduino- and Raspberry Pi-based water quality data acquisition tool is built. A web-based application is then developed to present the monitoring data and forecasts. A notification module is included to send an alert message to the fish farmers when necessary. The system is tested and evaluated at the fish hatchery in Universiti Malaysia Sabah. The findings show that the proposed system provides better water quality management for fish farms.

Journal ArticleDOI
TL;DR: In this paper, a photovoltaic (PV) system-based fish dryer that can assist the fish drying process was developed; the system can generate 402.78 Wh of electrical energy per day.
Abstract: The abundance of fish catches in Indonesia represents excellent potential. Still, if the abundant catch cannot be adequately managed and is simply wasted, it will eventually lead to losses. This problem was also found in Seraya village, Karangasem, Bali, where fish yields are large but fish processing was still constrained by weather and environmental conditions, so that the expected results were not achieved. To overcome this, a photovoltaic (PV) system-based fish dryer was developed to assist the fish drying process. Utilization of this system is also supported by good solar energy potential. The system can generate 402.78 Wh of electrical energy per day, covering 104.89% of the electrical energy demand of the fish dryer. The results of statistical tests, using the Mann-Whitney test for fish weight and the unpaired t-test for fish moisture content, showed no significant differences (p > 0.05). This means that there is no difference between the results of drying fish with the PV system and with the traditional method. From this, we can conclude that fish drying using a solar power system works similarly to conventional fish drying methods.

Journal ArticleDOI
TL;DR: In this article, an acoustic study of Arabic vowels was conducted in order to determine the most relevant characteristics that allow recognizing these vowels, and the obtained results were exploited to develop algorithms that allow the classification of vowels and the distinction of long vowels from short ones.
Abstract: The main objective of this work is to conduct an acoustic study of Arabic vowels (/a/, /a:/, /u/, /u:/, /i/ and /i:/) in order to determine the most relevant characteristics that allow recognizing these vowels. The analysis of vowel spectrograms reveals that the energy distribution as a function of time and frequency clearly differs according to the considered vowel. Thus, we used the normalized energy in frequency bands to classify these vowels. Thereafter, we exploited the obtained results to develop algorithms that allow the classification of vowels and the distinction of long vowels from short ones. The efficiency of these algorithms was evaluated by testing their performance on our Arabic corpus.
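The normalized band-energy feature described above can be computed from a short-time spectrogram roughly as follows; the sampling rate and band edges below are placeholders for illustration, not the values used in the paper.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 16000  # assumed sampling rate in Hz
# Placeholder frequency bands (Hz); the paper's actual bands are not specified here.
BANDS = [(0, 500), (500, 1000), (1000, 2000), (2000, 4000)]

def normalized_band_energies(audio, fs=FS):
    """Energy per frequency band, normalized by the total spectral energy."""
    freqs, _, Sxx = spectrogram(audio, fs=fs, nperseg=512)
    total = Sxx.sum() + 1e-12
    return [Sxx[(freqs >= lo) & (freqs < hi)].sum() / total for lo, hi in BANDS]

# Example on a synthetic vowel-like signal (200 Hz fundamental plus a formant).
t = np.arange(0, 0.3, 1 / FS)
vowel = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 700 * t)
print(normalized_band_energies(vowel))
```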

Journal ArticleDOI
TL;DR: In this paper, support vector machine (SVM), naïve Bayes (NB), K-nearest neighbor (KNN), random forest (RF), logistic regression (LR), and decision tree (DT) algorithms, chosen according to the studied work, are compared for predicting diabetes.
Abstract: Diabetes mellitus (DM) is a serious worldwide health issue, and its prevalence is rapidly growing. It is a spectrum of metabolic illnesses defined by perpetually increased blood glucose levels. Undiagnosed diabetes can lead to a variety of problems, including retinopathy, nephropathy, neuropathy, and other vascular abnormalities. In this context, machine learning (ML) technologies may be particularly useful for early disease identification, diagnosis, and therapy monitoring. The core idea of this study is to identify the strongest ML algorithm for predicting it. For this, several ML algorithms were chosen according to the studied work, i.e., support vector machine (SVM), naïve Bayes (NB), K-nearest neighbor (KNN), random forest (RF), logistic regression (LR), and decision tree (DT). Two datasets, the Pima Indian diabetes (PID) and Germany diabetes datasets, were used, and the experiments were performed using the Waikato environment for knowledge analysis (WEKA) 3.8.6 tool. This article discusses the performance metrics and error rates of the classifiers for both datasets. The results showed that for the PID database (PIDD), SVM works better, with an accuracy of 74%, whereas for the Germany dataset, KNN and RF work better, with 98.7% accuracy. This study can aid healthcare facilities and researchers in comprehending the value and application of ML algorithms in predicting diabetes at an early stage.
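The paper runs its comparison in WEKA; a rough Python analogue of the same experiment (cross-validated accuracy of the six classifiers on a tabular diabetes dataset) might look like the sketch below. The file name and label column are placeholders, not details from the paper.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("diabetes.csv")                    # placeholder path
X, y = df.drop(columns=["Outcome"]), df["Outcome"]  # "Outcome" label column is assumed

classifiers = {
    "SVM": SVC(),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=42),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=42),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```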

Journal ArticleDOI
TL;DR: In this article, the authors summarize the current research status of fault-tolerant operation of existing offshore wind hybrid generators in terms of software fault tolerance and hardware fault tolerance.
Abstract: Due to poor accessibility, poor operating conditions, a high failure rate, long maintenance times, and difficult maintenance of wind hybrid generators, the economic loss is huge once a failure causes a shutdown. To this end, fault-adaptive fault-tolerant control of distributed wind hybrid generators is studied; the historical operation data of offshore and onshore wind hybrid generators are compiled and compared, and the fault characteristics of key components of offshore wind hybrid generators are analyzed. The generator sets are summarized, and the common electrical faults of wind turbines and their impacts on the system are analyzed. This paper summarizes the current research status of fault-tolerant operation of existing offshore wind hybrid generators in terms of software fault tolerance and hardware fault tolerance, summarizes the current fault tolerance schemes for offshore wind hybrid generators, and analyzes the application feasibility of the existing schemes. In addition, the main problems of fault-tolerant offshore wind and solar complementary generator sets are pointed out, and future research hotspots are foreseen.

Journal ArticleDOI
TL;DR: In this paper, a tree-based ensemble model known as XGBoost is proposed to detect application layer DDoS attacks, and the performance of the resulting XGB-DDoS model shows that it is good at detecting application layer attacks.
Abstract: The increasing advancement of technologies and communication infrastructures has been posing threats to internet services. One of the most powerful attack weapons for disrupting web-based services is the distributed denial of service (DDoS) attack. The sophisticated nature of the attack tools being created and used for launching attacks on target systems makes it difficult to distinguish between normal and attack traffic. Consequently, there is a need to detect application layer DDoS attacks from network traffic efficiently. This paper proposes a detection system coined eXtreme gradient boosting DDoS (XGB-DDoS), using a tree-based ensemble model known as XGBoost to detect application layer DDoS attacks. The Canadian Institute for Cybersecurity intrusion detection systems (CIC IDS) 2017 dataset, consisting of both benign and malicious traffic, was used in training and testing the proposed model. The performance results indicate that the accuracy rate, recall, precision rate, and F1-score of XGB-DDoS are 0.999, 0.997, 0.995, and 0.996, respectively, as against those of k-nearest neighbor (KNN), support vector machine (SVM), principal component analysis (PCA) hybridized with XGBoost, and KNN with SVM. So the XGB-DDoS detection model did better than the baseline models that were chosen, which shows that it is good at detecting application layer DDoS attacks.
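For reference, the four reported scores are the standard binary-classification metrics, defined from true/false positives and negatives as:

```latex
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \quad
\text{Precision} = \frac{TP}{TP + FP}, \quad
\text{Recall} = \frac{TP}{TP + FN}, \quad
F_1 = \frac{2\,\text{Precision}\cdot\text{Recall}}{\text{Precision} + \text{Recall}}
```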

Journal ArticleDOI
TL;DR: In this paper, the authors attempted to create a tool or indicator that can gather tweets in real time using tweepy and the Twitter application programming interface (API) and report the sentiment at the time.
Abstract: As opposed to fiat currencies, bitcoin has no relationship with banks. Its price fluctuation is largely influenced by fresh blocks, news, mining information, support or resistance levels, and public opinion. Therefore, a machine-learning model would be valuable if it learned from data and indicated whether to buy or sell over a short period. In this study, we attempted to create a tool or indicator that can gather tweets in real time using tweepy and the Twitter application programming interface (API) and report the sentiment at the time. Using the well-known Python module "FBProphet," we developed a model in the second phase that can gather historical price data for the bitcoin to US dollar (BTCUSD) pair and project the price of bitcoin. In order to provide guidance for an intelligent forex trader, we finally merged all of the models into one tool. We traded with the various models for a small number of days to validate our bitcoin trading indicator (BTI), and we discovered that the combined version of this tool is more profitable. With the combined version of the instrument, we quickly generated a profit of $1,000.71 USD with little error (root mean square error, RMSE: 1,480.58).
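The forecasting half of the pipeline follows Prophet's usual two-column convention (a ds timestamp column and a y value column); a minimal sketch is below. The price-loading step is a placeholder, and recent releases import the library as prophet rather than fbprophet.

```python
import pandas as pd
from prophet import Prophet  # published as "fbprophet" in older releases

# Placeholder: historical BTCUSD daily closes with columns "ds" (date) and "y" (price).
history = pd.read_csv("btcusd_daily.csv")[["ds", "y"]]

model = Prophet(daily_seasonality=True)
model.fit(history)

# Forecast the next 7 days and inspect the predicted price with its interval.
future = model.make_future_dataframe(periods=7)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(7))
```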

Journal ArticleDOI
TL;DR: In this article, the authors propose a DL-based surveillance system that can detect the presence of tracked objects, such as handheld firearms and bladed weapons, and can alert authorities to potential threats before an incident occurs.
Abstract: In recent years, the use of artificial intelligence (AI) for image and video-based crime detection has gained significant attention from law enforcement agencies and security experts. Indeed, deep learning (DL) models can learn complex patterns from data and help law enforcement agencies save time and resources by automatically identifying and tracking potential criminals. This contributes to deeper investigations and better targeting of searches. Among others, handheld firearms and bladed weapons are the objects most frequently encountered at crime scenes. In this paper, we propose a DL-based surveillance system that can detect the presence of tracked objects, such as handheld firearms and bladed weapons, and can alert authorities to potential threats before an incident occurs. After comparing different DL-based object detection techniques, such as you only look once (YOLO), single shot multibox detector (SSD), and faster region-based convolutional neural networks (R-CNN), YOLO achieves the optimal balance of mean average precision (mAP) and inference speed for real-time prediction. Thus, we retain YOLOv5 for the implementation of our solution.
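Loading a YOLOv5 model for inference is typically done through torch.hub, as in the sketch below. The standard pretrained yolov5s weights used here do not include weapon classes, so a system like the one described would instead load custom-trained weights; the weights file named in the comment is hypothetical.

```python
import torch

# Pretrained COCO weights for illustration; a weapon detector would load custom weights,
# e.g. torch.hub.load("ultralytics/yolov5", "custom", path="weapons_yolov5.pt")  # hypothetical
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on a frame (a file path, URL, or numpy image all work).
results = model("surveillance_frame.jpg")  # placeholder image
results.print()                            # summary of detections
detections = results.pandas().xyxy[0]      # boxes, confidences, class names

# A real system would raise an alert when a tracked class exceeds a confidence threshold.
suspicious = detections[detections["confidence"] > 0.5]
print(suspicious[["name", "confidence"]])
```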

Journal ArticleDOI
TL;DR: In this paper, a sliding mode control (SMC) for flexible-joint manipulators based on serial invariant manifolds is proposed in order to increase the control quality of the system.
Abstract: This paper focuses on synthesizing sliding mode control (SMC) for flexible-joint manipulators (FJM) based on serial invariant manifolds in order to increase the control quality of the system. An SMC based on serial invariant manifolds is proposed. The control law is derived based on synergetic control theory (SCT) and the analytical design of aggregated regulators (ADAR) method. In order to improve the control quality despite the effect of the stiffness between the two links of the system, a mechanism for constructing the manifolds is built. The time response of the outer-loop manifolds close to the actuator is made larger at each successive stage. The control quality of the system can be pre-evaluated through the parameters of the designed manifolds. Global stability is demonstrated by using a Lyapunov function in the design process. Finally, the effectiveness of the proposed controller based on SCT is demonstrated by numerical simulation results and compared with traditional SMC.
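In the ADAR/SCT framework mentioned above, each designed manifold is defined by a macro-variable ψ(x) that is forced to obey a first-order functional equation of the form below, whose solution decays onto the invariant manifold ψ(x) = 0 with time constant T. This is the standard SCT construction; the paper's specific serial manifolds are not reproduced here.

```latex
T\,\dot{\psi}(x) + \psi(x) = 0, \qquad T > 0
```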

Journal ArticleDOI
TL;DR: In this paper , a multimodal music emotion recognition (MER) system using Indonesian song and lyrics data is proposed, where the audio data will use the mel spectrogram feature, and the lyrics feature will be extracted by going through the tokenizing process from XLNet.
Abstract: Music carries emotional information and allows the listener to feel the emotions contained in the music. This study proposes a multimodal music emotion recognition (MER) system using Indonesian song and lyrics data. In the proposed multimodal system, the audio data use the mel spectrogram feature, and the lyrics features are extracted through the tokenizing process of XLNet. A convolutional long short-term memory network (CNN-LSTM) performs the audio classification task, while an XLNet transformer performs the lyrics classification task. The outputs of the two classification tasks are probability weights and actual predictions for positive, neutral, and negative emotions, which are then combined using the stacking ensemble method. The combined output is trained into an artificial neural network (ANN) model to obtain the best probability-weighted output. The multimodal system achieves its best performance with an accuracy of 80.56%. The results show that the multimodal method of recognizing musical emotions gives better performance than the single-modal method. In addition, hyperparameter tuning can affect the performance of multimodal systems.
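The audio-side feature described above (a mel spectrogram) is commonly computed with librosa, as in the short sketch below; the file name and the 128-band setting are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import librosa

# Load an audio clip (placeholder file name) and compute a mel spectrogram in dB,
# the kind of feature the CNN-LSTM branch would consume.
y, sr = librosa.load("song.wav", sr=22050, mono=True)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

print(mel_db.shape)  # (n_mels, time_frames), ready to be fed to a CNN-LSTM
```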