
Showing papers in "Concurrency and Computation: Practice and Experience in 2022"


Journal ArticleDOI
TL;DR: There is still no comprehensive and systematic survey of the available energy-efficient data fusion techniques in the IoT, and this article aims to fill that gap using a systematic method.
Abstract: Nowadays, with the rapid progress of Internet-based and distributed systems such as cloud computing, peer-to-peer networking, and the Internet of Things (IoT), significant improvements have been made in almost every engineering and commercial field. On the basis of IoT, smart cities are formed utilizing intelligent information processing, universal connectivity, ubiquitous sensing, and real-time monitoring. Energy conservation is one of the significant issues in current IoT development due to the poor battery endurance of IoT objects. Over the last years, with the explosive growth of smart cities, a large number of studies on energy efficiency have been conducted. IoT scenarios produce sparse, diverse, multi-source data, and data fusion plays an important role in using these data efficiently to improve IoT services: it saves network resources, improves data transmission efficiency, and extracts useful information from raw data. To the best of our knowledge, there is still no comprehensive and systematic study surveying and analyzing the available energy-efficient data fusion techniques in the IoT. Thus, this article aims to address this gap using a systematic method.

19 citations


Journal ArticleDOI
TL;DR: In this paper, a computationally efficient and numerically stable recurrence algorithm is proposed for high moment orders, based on combining two recurrence algorithms: the recurrence relations in the n and x directions.
Abstract: Discrete Tchebichef polynomials (DTPs) and their moments are effectively utilized in different fields such as video and image coding, pattern recognition, and computer vision due to their remarkable performance. However, when the moment order becomes large (high), DTPs are prone to numerical instabilities. In this article, a computationally efficient and numerically stable recurrence algorithm is proposed for high moment orders. The proposed algorithm is based on combining two recurrence algorithms: the recurrence relations in the n and x directions. In addition, an adaptive threshold is used to stabilize the generation of the DTP coefficients. The designed algorithm can generate the DTP coefficients for high moment orders and large signal sizes, that is, discrete signals with many samples. To evaluate the performance of the proposed algorithm, a comparison study is performed with state-of-the-art algorithms in terms of computational cost and the capability of generating DTPs with large polynomial sizes and high moment orders. The results show that the proposed algorithm has a remarkably low computation cost and is numerically stable: it is 27 times faster than the state-of-the-art algorithm.
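As an illustration of the kind of recurrence involved, the sketch below generates scaled discrete Tchebichef polynomials using only the classical three-term recurrence in the n direction; the paper's hybrid n/x-direction algorithm with an adaptive threshold is not reproduced here, and the particular scaled form is an assumption.

```python
def tchebichef_scaled(order, N):
    """Scaled discrete Tchebichef polynomials t_n(x) for x = 0..N-1, n = 0..order.

    Uses a classical three-term recurrence in n (Mukundan-style scaling),
    NOT the paper's hybrid n/x-direction algorithm with adaptive threshold.
    Returns a list of lists: T[n][x].
    """
    T = [[1.0] * N]                                        # t_0(x) = 1
    if order >= 1:
        T.append([(2 * x + 1 - N) / N for x in range(N)])  # t_1(x)
    for n in range(2, order + 1):
        prev, prev2 = T[n - 1], T[n - 2]
        row = []
        for x in range(N):
            a = (2 * n - 1) * T[1][x] * prev[x]
            b = (n - 1) * (1 - ((n - 1) / N) ** 2) * prev2[x]
            row.append((a - b) / n)
        T.append(row)
    return T
```

For large n and N this pure recurrence is exactly where numerical instability appears, which is the problem the hybrid algorithm is designed to avoid.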

18 citations


Journal ArticleDOI
TL;DR: In this paper, the negative effect of land-use and climate changes on water resources is investigated with the SWAT and SWAT-DEEP/LMSFO models, and the results show that climate change and land-use change can affect annual runoff changes in the coming years.
Abstract: This article investigates the negative effect of land-use and climate changes on water resources with the SWAT and SWAT-DEEP/LMSFO models. Due to the importance of runoff's impact on water resources, hybrid hydrological-deep neural networks optimized by the improved SFO based on a logistic map (LMSFO) have been used to provide more accurate runoff estimates. This method improves runoff simulation. First, runoff under the influence of land use and climate change is estimated by the SWAT model. Runoff is then estimated again by the SWAT-DEEP/LMSFO model: the primary runoff is estimated by an uncalibrated SWAT model, the simulated primary runoff is fed as input into the DEEP model, and finally the runoff is estimated by a test-training method. The results of the SWAT and DEEP/LMSFO models show an inverse correlation between land cover and runoff, so that reducing land cover increases runoff. The results show that climate change and land-use change can affect annual runoff changes in the coming years.

16 citations


Journal ArticleDOI
TL;DR: The authors of this research describe an Energy Aware Clustering and Multihop Routing Protocol with mobile sink (EACMRP‐MS) technique for IoT supported WSN, whose purpose is to efficiently reduce the energy consumption of IoT sensor nodes, consequently increasing the network efficiency of the IoT system.
Abstract: Because of recent breakthroughs in information technology, the Internet of Things (IoT) is becoming increasingly popular in a variety of application areas. Wireless sensor networks (WSN) are a critical component of IoT systems; they consist of collections of affordable and compact sensors that are utilized for data collection. WSNs are used in a variety of IoT applications, such as surveillance, detection, and tracking systems, to sense the surroundings and transmit the information to the user's device. Smart gadgets, on the other hand, are limited in resources such as electricity, bandwidth, memory, and computation. A fundamental issue in IoT-based WSNs is to achieve energy efficiency while also extending the network's lifetime. Energy-efficient clustering and routing algorithms are therefore frequently employed in IoT systems. Motivated by this, the authors describe an Energy Aware Clustering and Multihop Routing Protocol with mobile sink (EACMRP-MS) technique for IoT-supported WSNs. The EACMRP-MS technique's purpose is to efficiently reduce the energy consumption of IoT sensor nodes, consequently increasing the network efficiency of the IoT system. The suggested EACMRP-MS technique initially relies on the Tunicate Swarm Algorithm (TSA) for cluster head (CH) selection and cluster assembly. Furthermore, the type-II fuzzy logic (T2FL) technique is used for the optimal selection of multi-hop routes, with multiple input parameters. Finally, a mobile sink with a route adjustment scheme is presented: routes are adjusted based on the trajectory of the mobile sink, which further improves the energy efficiency of the system. A detailed experimental analysis and simulation results show that the EACMRP-MS technique outperforms recent state-of-the-art methods on a variety of evaluation metrics, indicating that it is a promising alternative.

13 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose two robust ridge estimators to deal with outliers and multicollinearity simultaneously in the Poisson regression model (PRM): the robust Poisson ridge regression (RPRR) estimator and the robust Poisson almost unbiased ridge (RPAUR) estimator.
Abstract: The Poisson regression model (PRM) is the standard statistical method of analyzing count data, and it is estimated by a Poisson maximum likelihood (PML) estimator. Such an estimator is affected by outliers, and some robust Poisson regression estimators have been proposed to solve this problem. PML estimators are also influenced by multicollinearity. Biased Poisson regression estimators have been developed to address this problem, including Poisson ridge regression and Poisson almost unbiased ridge estimators. However, the above mentioned estimators do not deal with outliers and multicollinearity problems together in a PRM. Therefore, we propose two robust ridge estimators to deal with the two problems simultaneously in the PRM, namely, the robust Poisson ridge regression (RPRR) estimator and the robust Poisson almost unbiased ridge (RPAUR) estimator. Theoretical comparisons and Monte‐Carlo simulations are conducted to investigate the performance of the proposed estimators relative to the performance of other approaches to PRM parameter estimation. The simulation results indicate that the RPAUR estimator outperforms the other estimators in all situations where both problems exist. Finally, real data are used to confirm the results of this paper.
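To make the ridge idea concrete, here is a minimal sketch of ridge-penalized Poisson regression with a single covariate, fitted by iteratively reweighted least squares (IRLS). It only illustrates how a ridge constant k enters the Poisson likelihood; it is not the authors' robust RPRR or RPAUR estimator (there is no outlier downweighting), and the one-covariate setup is an assumption made for brevity.

```python
import math

def poisson_ridge_irls(x, y, k=0.1, iters=50):
    """Ridge-penalized Poisson regression with one covariate (IRLS sketch).

    Model: y_i ~ Poisson(exp(b0 + b1 * x_i)); the ridge constant k shrinks b1.
    Illustrative only -- NOT the robust RPRR/RPAUR estimators of the paper.
    """
    b0 = math.log(sum(y) / len(y) + 1e-9)  # start from the null model
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # IRLS working response and weights for the log link
        z = [b0 + b1 * xi + (yi - mi) / mi for xi, yi, mi in zip(x, y, mu)]
        w = mu
        # Weighted normal equations with a ridge penalty on b1 (2x2 solve)
        s_w = sum(w)
        s_wx = sum(wi * xi for wi, xi in zip(w, x))
        s_wxx = sum(wi * xi * xi for wi, xi in zip(w, x)) + k
        s_wz = sum(wi * zi for wi, zi in zip(w, z))
        s_wxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
        det = s_w * s_wxx - s_wx ** 2
        b0 = (s_wxx * s_wz - s_wx * s_wxz) / det
        b1 = (s_w * s_wxz - s_wx * s_wz) / det
    return b0, b1
```

Increasing k shrinks the slope toward zero, which is the mechanism the ridge-type PRM estimators exploit to tame multicollinearity.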

13 citations


Journal ArticleDOI
TL;DR: This model combines the three most effective feature selection techniques (gain‐ratio, chi‐squared, and information gain) to offer a qualifying result and four top classifiers (SVM, LR, NB, and DT) using enhanced weighted majority voting to provide efficient and accurate intrusion detection.
Abstract: Cloud computing security is the most critical factor for providers, cloud users, and organizations. Various novel approaches apply host-based or network-based methods to increase cloud security performance and detection rates. However, due to the virtual and distributed environment of the cloud, conventional network intrusion detection systems (NIDS) have been unreliable in handling these security attacks. Therefore, we design a methodology that incorporates feature selection and classification using ensemble techniques to provide efficient and accurate intrusion detection. The proposed model combines the three most effective feature selection techniques (gain-ratio, chi-squared, and information gain) to offer a qualifying result and four top classifiers (SVM, LR, NB, and DT) using enhanced weighted majority voting. Moreover, we propose an experimental technique using a new dataset called Honeypot. All experiments utilized three datasets: Honeypot, Kyoto, and NSL-KDD. The results of this experimental study were compared with other approaches, and a statistical significance analysis was performed. Finally, the results reveal that the proposed intrusion detection based on the Honeypot dataset was better and more efficient than other methods, achieving an accuracy of 98.29%, FAR of 0.012%, DR of 97.9%, and AUC of 0.9921.
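As a sketch of the two ensemble steps described above, fusing the three feature-selection rankings and combining the four classifiers by weighted majority voting, the snippet below uses Borda-count rank fusion and validation-accuracy weights. Both choices are illustrative assumptions; the abstract does not specify the exact fusion or weighting formulas.

```python
def fuse_feature_ranks(rankings, top_k):
    """Fuse ranked feature lists (e.g., from gain-ratio, chi-squared, and
    information gain) by Borda count and keep the top_k features."""
    score = {}
    for ranking in rankings:
        for pos, feat in enumerate(ranking):
            score[feat] = score.get(feat, 0) + (len(ranking) - pos)
    return sorted(score, key=score.get, reverse=True)[:top_k]

def weighted_majority_vote(predictions, weights):
    """Combine per-classifier predictions by weighted voting.

    `predictions` maps classifier name -> predicted label; `weights` maps
    classifier name -> weight (here assumed to be validation accuracy).
    """
    scores = {}
    for clf, label in predictions.items():
        scores[label] = scores.get(label, 0.0) + weights.get(clf, 1.0)
    return max(scores, key=scores.get)
```

For example, if SVM and NB (weights 0.9 and 0.7) vote "attack" while LR and DT (0.8 and 0.6) vote "normal", the ensemble returns "attack" because 1.6 > 1.4.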

12 citations


Journal ArticleDOI
TL;DR: The proposed BiLSTM model showed significant improvement over traditional ML classifiers and applied the UCI datasets, and the results showed optimal performance while classifying SMS spam messages based on some metrics: accuracy, precision, recall, and F‐measure.
Abstract: SMS, one of the most popular and fast-growing GSM value-added services worldwide, has attracted unwanted SMS, also known as SMS spam. The effects of SMS spam are significant, as it affects both users and service providers, causing a massive gap in trust between both parties. This article presents a deep learning model based on BiLSTM and compares our results with state-of-the-art machine learning (ML) algorithms on two datasets: our newly collected dataset (ExAIS_SMS) and the popular UCI SMS dataset. This study evaluates the performance of diverse learning models using the following metrics: true positives (TP), false positives (FP), F-measure, recall, precision, and overall accuracy. The average accuracy of the BiLSTM model was moderately better than that of some of the ML classifiers, and the experimental results improved significantly over the ground-truth results after effective fine-tuning of some parameters. The BiLSTM model attained an accuracy of 93.4% on the ExAIS_SMS dataset and 98.6% on the UCI dataset. On the ExAIS_SMS dataset, the state-of-the-art ML classifiers Naive Bayes, BayesNet, SOM, decision tree, C4.5, and J48 achieved accuracies of 89.64%, 91.11%, 88.24%, 75.76%, 80.24%, and 79.2%, respectively. In conclusion, our proposed BiLSTM model showed significant improvement over traditional ML classifiers. To further validate the robustness of our model, we applied it to the UCI dataset, and the results showed optimal performance in classifying SMS spam messages on the metrics of accuracy, precision, recall, and F-measure.
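The evaluation metrics listed above follow directly from the prediction counts. A small helper (the labels and the positive class "spam" are illustrative, not taken from the paper):

```python
def spam_metrics(y_true, y_pred, positive="spam"):
    """Compute TP/FP counts and accuracy, precision, recall, and F-measure,
    i.e., the metrics used to compare models on the SMS datasets."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"tp": tp, "fp": fp, "accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall, "f_measure": f1}
```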

11 citations


Journal ArticleDOI
TL;DR: This research designed an anti-corona virus-Henry gas solubility optimization-based deep maxout network (ACV-HGSO based deep maxout network) for lung cancer detection with medical data in a smart IoT environment by incorporating anti-corona virus optimization (ACVO) and Henry gas solubility optimization (HGSO).
Abstract: The Internet of Things (IoT) has appreciably influenced the technology world in the context of interconnectivity, interoperability, and connectivity using smart objects, connected sensors, devices, data, and appliances. IoT technology has strongly impacted the global economy, and it extends from industry to different application scenarios, such as the healthcare system. This research designs an anti-corona virus-Henry gas solubility optimization-based deep maxout network (ACV-HGSO based deep maxout network) for lung cancer detection with medical data in a smart IoT environment. The proposed ACV-HGSO algorithm is designed by incorporating anti-corona virus optimization (ACVO) and Henry gas solubility optimization (HGSO). The nodes simulated in the smart IoT framework transfer the patient medical information to the sink through optimal routing, in which the best path is selected by a multi-objective fractional artificial bee colony algorithm with the help of a fitness measure. The routing process transfers the medical data collected from the nodes to the sink, where disease detection is done using the proposed method. The noise existing in the medical data is removed, and the data are processed effectively to increase detection performance. The dimension-reduced features also help reduce complexity issues. The created approach achieves improved testing accuracy, sensitivity, and specificity of 0.910, 0.914, and 0.912, respectively.

11 citations


Journal ArticleDOI
TL;DR: A dynamic topology control algorithm for node deployment (DTCND) in mobile UWSN is proposed to monitor the node mobility to predict nodes' location for ensuring coverage and connectivity.
Abstract: Sensors in underwater wireless sensor networks (UWSNs) can drift up to 3 m/s due to ocean currents, marine organisms, or passing vessels. Existing node deployment and localization techniques do not take node mobility and network disconnections into account. This article proposes a dynamic topology control algorithm for node deployment (DTCND) in mobile UWSNs. This work aims to monitor node mobility and predict node locations to ensure coverage and connectivity. The sensor nodes are deployed randomly at different depths. The anchor nodes observe the signal quality index, energy drain rate, and node density at every time interval and detect node disconnections based on the variations in these observed metrics. After receiving the beacon messages from the anchor nodes, the courier nodes move toward the target region to satisfy the coverage and connectivity constraints. Simulation results show that the proposed algorithm attains 5% higher connectivity when compared to the energy-efficient localization algorithm (EELA) and 9% higher connectivity when compared to the adaptive triangular deployment algorithm (ATDA). The residual energy is also higher by 3% and 18% when compared to EELA and ATDA, respectively. The deployment cost and delay of DTCND also decrease when compared to EELA and ATDA, which results in efficient data collection.

10 citations


Journal ArticleDOI
TL;DR: A systematic review of the two most used hyperspectral classification techniques, SVM and CNN, is given, in which the methodologies used by different authors, the datasets used, the results acquired, the contributions, and the shortcomings are put forth.
Abstract: Various machine learning and deep learning techniques have been proposed for classification in hyperspectral imaging. Among machine learning techniques, the support vector machine (SVM) has been a promising classification algorithm for remote sensing applications, particularly in hyperspectral imaging. Among deep learning techniques, the convolutional neural network (CNN) is gaining much attention for classification, has given reliable results in this field, and is thus widely used by researchers. In this article, a systematic review of the two most used hyperspectral classification techniques, that is, SVM and CNN, is given. A total of 86 papers from four well-known journals in the field of remote sensing have been reviewed, and the methodologies used by different authors, the datasets used, the results acquired, the contributions, and the shortcomings are put forth. This meta-analysis generally focuses on recent SVM and CNN methodologies proposed by various authors. A summary of the best classification accuracies obtained with SVM and CNN is also provided to help researchers working in this area achieve better results.

10 citations


Journal ArticleDOI
TL;DR: The article proposes the integration of a permissioned blockchain within an honest‐but‐curious (i.e., not trusted) IoT distributed middleware layer, which aims to guarantee the correct management of access to resources by the interested parties, and produces a robust and lightweight system.
Abstract: Security and privacy of information transmitted among the devices involved in an Internet of Things (IoT) network represent relevant issues in IoT contexts. Guaranteeing effective control and supervising access permissions to IoT applications is a complex task, mainly due to resources' heterogeneity and scalability requirements. The design and development of highly customizable access control policies, along with an efficient mechanism for ensuring that the rules applied by the IoT platform are not tampered with or violated, will undoubtedly have a significant impact on the diffusion of IoT‐based solutions. In such a direction, the article proposes the integration of a permissioned blockchain within an honest‐but‐curious (i.e., not trusted) IoT distributed middleware layer, which aims to guarantee the correct management of access to resources by the interested parties. The result is a robust and lightweight system, able to manage the data produced by IoT devices, support relevant security features, such as integrity and confidentiality, and resist different kinds of attacks. The use of blockchain will ensure the tamper‐resistance and synchronization of the distributed system, where various stakeholders own applications and IoT platforms. The methodology and the proposed architecture are validated employing a test‐bed.

Journal ArticleDOI
TL;DR: In this paper , a multi-scale feature fusion meter target detection algorithm is proposed to address the problems of low efficiency and susceptibility to surrounding environmental factors by the traditional manual meter reading method.
Abstract: With the promotion of smart grid construction, the use of high-precision and high-efficiency substation inspection robots has become the development trend of substation inspection. A multi-scale feature fusion meter target detection algorithm is proposed to address the low efficiency and susceptibility to surrounding environmental factors of the traditional manual meter reading method. A Kinect sensor is used to acquire color images of substation meters with different backgrounds, light intensities, and angles to build a substation meter dataset. Based on the complementarity and correlation of multi-scale features, an SSD target detection model with multi-scale feature fusion is established, the performance of the algorithm is tested on the constructed dataset, and comparative experiments are conducted to verify the algorithm's improvement in target detection accuracy.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that analyzing similar friendships could make recommendations more accurate and the suggested model for recommending a sequence of top‐k POIs outperforms state‐of‐the‐art approaches.
Abstract: Today, millions of active users spend a percentage of their time on location-based social networks (LBSNs) like Yelp and Gowalla and share their rich information. They can easily learn about their friends' behaviors and where they are visiting and be influenced by their style. As a result, given the challenges of rich content and data sparsity, personalized recommendation and the investigation of meaningful features of users and Points of Interest (POIs) are substantial tasks for accurately recommending POIs and users' interests in LBSNs. This work proposes a novel POI recommendation pipeline named DeePOF based on deep learning and convolutional neural networks. This approach takes into consideration only the influence of the most similar friendship pattern instead of the friendships of all users. The mean-shift clustering technique is used to detect similarity. The spatial and temporal features of the most similar friends are fed into our deep CNN technique. The output of several proposed layers predicts the latitude, longitude, and ID of subsequent appropriate places, and then, using the friendship interval of a similar pattern, the lowest-distance venues are chosen. This combination method is evaluated on two popular LBSN datasets. Experimental results demonstrate that analyzing similar friendships makes recommendations more accurate and that the suggested model for recommending a sequence of top-k POIs outperforms state-of-the-art approaches.
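A toy version of the mean-shift step, with a flat kernel over 2D coordinates, can look like the following. The feature space (check-in coordinates) and the mode-merging tolerance are assumptions, since the abstract does not give these details.

```python
def mean_shift(points, bandwidth, iters=30):
    """Flat-kernel mean-shift over 2D points (e.g., toy check-in coordinates).

    Illustrates the clustering step DeePOF uses to group users with similar
    friendship patterns; the feature space here is an assumption.
    Returns (labels, centers).
    """
    modes = [list(p) for p in points]
    for _ in range(iters):
        for i, m in enumerate(modes):
            # Shift each mode to the mean of the points within the bandwidth
            near = [p for p in points
                    if (p[0] - m[0]) ** 2 + (p[1] - m[1]) ** 2 <= bandwidth ** 2]
            modes[i] = [sum(c[0] for c in near) / len(near),
                        sum(c[1] for c in near) / len(near)]
    # Merge modes that converged to (almost) the same location
    labels, centers = [], []
    for m in modes:
        for j, c in enumerate(centers):
            if (c[0] - m[0]) ** 2 + (c[1] - m[1]) ** 2 < (bandwidth / 10) ** 2:
                labels.append(j)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers
```

Unlike K-means, mean-shift needs no preset cluster count, which suits discovering however many "similar friend" groups exist.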

Journal ArticleDOI
TL;DR: In this paper , the authors advocate how the co-creation driven approach promoted by D4Science has proven to be effective and discuss how diverse communities of practice have exploited these options and give some usage indicators on the created VREs.
Abstract: Virtual research environments are systems called to serve the needs of their designated communities of practice. Every community of practice is a group of people dynamically aggregated by the willingness to collaborate to address a given research question. The virtual research environment provides its users with seamless access to the resources of interest (namely, data and services) no matter what and where they are. Developing a virtual research environment so as to guarantee its uptake by the community of practice is thus a challenging task. In this article, we show how the co-creation-driven approach promoted by D4Science has proven to be effective. In particular, we present the supported co-creation options, discuss how diverse communities of practice have exploited them, and give some usage indicators for the created VREs.

Journal ArticleDOI
TL;DR: In this article , the link prediction problem for the same user in a two-layer social network is examined, where they consider Twitter and Foursquare networks, and they use information related to the two layer communication is used to predict links in the four-square network.
Abstract: Online social networks are an integral element of modern societies and significantly influence the formation and consolidation of social relationships. In fact, these networks are multi-layered so that there may be multiple links between a user' on different social networks. In this article, the link prediction problem for the same user in a two-layer social network is examined, where we consider Twitter and Foursquare networks. Here, information related to the two-layer communication is used to predict links in the Foursquare network. Link prediction aims to discover spurious links or predict the emergence of future links from the current network structure. There are many algorithms for link prediction in unweighted networks, however only a few have been developed for weighted networks. Based on the extraction of topological features from the network structure and the use of reliable paths between users, we developed a novel similarity measure for link prediction. Reliable paths have been proposed to develop unweight local similarity measures to weighted measures. Using these measures, both the existence of links and their weight can be predicted. Empirical analysis shows that the proposed similarity measure achieves superior performance to existing approaches and can more accurately predict future relationships. In addition, the proposed method has better results compared to single-layer networks. Experiments show that the proposed similarity measure has an advantage precision of 1.8% over the Katz and FriendLink measures.
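One way to picture a weighted, path-based similarity is a Katz-style score that sums damped "reliabilities" (products of edge weights) over short paths between two users. The exact measure in the paper is not specified in the abstract, so this product-of-weights form is an illustrative assumption.

```python
def reliable_path_similarity(graph, u, v, max_len=3, beta=0.5):
    """Katz-style similarity summing damped path reliabilities from u to v.

    graph: dict node -> dict neighbor -> edge weight in (0, 1].
    Path reliability = product of edge weights; beta**length damps longer
    paths. Illustrative sketch, not the paper's exact measure.
    """
    total = 0.0
    stack = [(u, 1.0, {u})]          # (current node, reliability so far, visited)
    while stack:
        node, rel, seen = stack.pop()
        for nbr, w in graph.get(node, {}).items():
            if nbr == v:
                length = len(seen)   # number of edges on this path
                total += (beta ** length) * rel * w
            elif nbr not in seen and len(seen) < max_len:
                stack.append((nbr, rel * w, seen | {nbr}))
    return total
```

For graph u-(0.8)-a-(0.6)-v with a direct u-(0.5)-v edge and beta = 0.5, the score is 0.5 * 0.5 + 0.25 * 0.8 * 0.6 = 0.37; both the direct and the two-hop path contribute, weighted by their reliability.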

Journal ArticleDOI
TL;DR: A new feature selection method, namely the class-index corpus-index measure (CiCi), is presented for unbalanced text classification: a probabilistic method calculated from the feature's distribution in both the class and the corpus.
Abstract: In the field of text classification, some datasets are unbalanced. In these datasets, the feature selection stage is important for increasing performance, and there are many studies in this area. However, existing methods have been developed based on the document frequency of only the intra-class. In this study, a new method is proposed that considers the situation of the feature in both the class and the corpus. This new feature selection method, namely the class-index corpus-index measure (CiCi), is presented for unbalanced text classification. The CiCi is a probabilistic method calculated from the feature's distribution in both the class and the corpus. It shows higher performance compared to successful methods in the literature. Multinomial Naïve Bayes and support vector machines were used as classifiers in the experiments, on three different unbalanced benchmark datasets: Reuters-21578, Ohsumed, and Enron1. Experimental results show that the proposed method performs better in terms of three different success measures.
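To illustrate the general idea of scoring a feature by its distribution in both the class and the corpus, here is a simple stand-in score. The actual CiCi formula is not given in the abstract, so this ratio-of-probabilities form is only an assumption that captures the intuition: reward features that are frequent inside a class relative to the whole corpus.

```python
def class_corpus_score(df_in_class, docs_in_class, df_in_corpus, docs_in_corpus):
    """Illustrative class-vs-corpus feature score for unbalanced text data.

    NOT the actual CiCi formula (which the abstract does not state); only a
    stand-in showing how class and corpus document frequencies can combine.
    """
    p_class = df_in_class / docs_in_class     # P(feature | class)
    p_corpus = df_in_corpus / docs_in_corpus  # P(feature | corpus)
    return p_class * (p_class / p_corpus)     # frequent in class, rare overall
```

A feature in 8 of 10 class documents but only 10 of 100 corpus documents scores far higher than one that is common everywhere, which is exactly the behavior that helps minority classes in unbalanced datasets.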

Journal ArticleDOI
TL;DR: An innovative image retrieval framework is proposed that concatenates deep learning features from GoogleNet with low-level features from the HSI and RGB color spaces, using an improved form of dot-diffused block truncation coding to extract RGB handcrafted features.
Abstract: An innovative image retrieval framework that concatenates deep learning features from GoogleNet with low-level features from the HSI and RGB color spaces is proposed in this article. Most CNN features suffer from loss of information due to image resizing as a pre-processing stage. To reduce this information loss, a super-resolution technique is used for resizing images. An improved form of dot-diffused block truncation coding is used for extracting RGB handcrafted features. To discover the interdependencies between the color and intensity components of an image, interchannel voting between the hue, saturation, and intensity components is calculated as a color feature in HSI space. The histogram of oriented gradients (HOG) feature is used as a shape feature. Five standard performance parameters (average precision rate, average recall rate, F-measure, Average Normalized Modified Retrieval Rank, and Total Minimum Retrieval Epoch) are applied on nine image datasets (Corel-1K, Corel-5K, Corel-10K, VisTex, STex, ColorBrodatz, and three subsets of the ImageNet dataset) to evaluate the proposed method. For all datasets, the best performance with respect to all performance parameters is achieved by the proposed method.

Journal ArticleDOI
TL;DR: A framework based on Lambda architecture for recommendation systems that run on a big data processing platform that proves to be useful and has negligible processing overheads is proposed.
Abstract: The rapid growth in the airline industry, which started in 2009, continued until the COVID-19 era, with the annual number of passengers almost doubling in 10 years. This situation has led to increased competition between airline companies, whose profitability has decreased considerably. They have aimed to increase their profitability by making services like seat selection, excess baggage, and Wi-Fi access optional under the name of ancillary services. To the best of our knowledge, there is no recommendation system for recommending ancillary services for airline companies, nor is there a testing framework to compare recommendation algorithms with respect to their scalability and running times. In this paper, we propose a framework based on the Lambda architecture for recommendation systems that run on a big data processing platform. The proposed method utilizes association rule and sequential pattern mining algorithms that are designed for big data processing platforms. To facilitate testing of the proposed method, we implement a prototype application and conduct an experimental study on it to investigate the performance of the proposed methodology using accuracy-, scalability-, and latency-related performance metrics. The results indicate that the proposed method proves to be useful and has negligible processing overheads.
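The association-rule step can be pictured with a tiny single-machine miner over sets of purchased ancillary services; the item names and thresholds are illustrative, and the paper's actual algorithms run distributed on a big data platform under the Lambda architecture.

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Tiny association-rule miner over purchase sets (pairs of single items).

    Single-machine illustration of the rule-mining idea only; the paper uses
    distributed algorithms on a big data processing platform.
    Returns tuples (antecedent, consequent, support, confidence).
    """
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for a, b in combinations(items, 2):
        for ant, con in ((a, b), (b, a)):
            s = support({ant, con})
            if s >= min_support and support({ant}) > 0:
                conf = s / support({ant})
                if conf >= min_confidence:
                    rules.append((ant, con, s, conf))
    return rules
```

A rule like ("seat", "wifi", 0.5, 0.67) reads: half of all passengers bought both, and two thirds of seat-selection buyers also bought Wi-Fi, so Wi-Fi is a candidate recommendation at seat-selection time.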

Journal ArticleDOI
TL;DR: The proposed CFTEERP uses the nearest secure node costs to increase the network lifetime without simply selecting the nearest nodes for routing, and provides 90% PDR and an energy consumption rate 25% lower than existing systems against different malicious attacks.
Abstract: The open communication medium of the Internet of Things (IoT) is more vulnerable to security attacks. As the IoT environment consists of distributed power-limited units, the protocol used for distributed routing should be lightweight compared to other centralized networks. In this situation, complex security algorithms and routing mechanisms affect generic data communications in IoT platforms. To handle this problem, the proposed system develops a cooperative and feedback-based trustable energy-efficient routing protocol (CFTEERP). This protocol calculates the local trust value (LTV) and global trust value (GTV) of each node using node attributes and K-means-based feedback evaluation procedures. The K-means clustering algorithm filters out the distorted node routing metrics and misbehaving node metrics for all channels. The proposed CFTEERP uses the nearest secure node costs to increase the network lifetime without simply selecting the nearest nodes for routing the data. In this work, secure routing is initiated using a multipath routing strategy that analyzes the LTV, GTV, next trustable node, average throughput, energy consumption, average packet delivery ratio (PDR), and various traffic metrics of the entire IoT communication. The proposed system is implemented to address the limitations of different existing techniques. In the comparative experiment, the proposed method provides 90% PDR and an energy consumption rate 25% lower than existing systems against different malicious attacks.

Journal ArticleDOI
TL;DR: A new deep one‐dimensional Convolutional Neural Network (1D‐CNN) architecture is proposed to increase the detection accuracy and alleviate the workload of experts in the classification of sound signals used in the diagnosis of heart valve diseases.
Abstract: Heart sounds have been widely used for years to monitor and classify heart diseases. Experts manually examine these sounds, which is arduous and time‐consuming. In addition, since interpreting these sounds requires experience, experts who do not have enough experience may misinterpret these sounds. For this reason, a new deep one‐dimensional Convolutional Neural Network (1D‐CNN) architecture has been proposed to increase the detection accuracy and alleviate the workload of experts in the classification of sound signals used in the diagnosis of heart valve diseases. In the developed model, first, feature maps were obtained from heart sounds by using the MFCC method. High performance was achieved when the feature maps obtained later were classified in the developed deep architecture. Furthermore, the feature maps generated by the MFCC approach were classified using traditional machine learning classifiers. When the obtained results were compared, it was observed that the suggested deep model was more successful. In the developed architecture, an accuracy rate of 99.5% was obtained. The accuracy rate obtained shows that the developed architecture can be used to classify heart sounds and diagnose heart valve diseases.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed GLM-based moving average (MA) and double moving average (DMA) schemes formed on standardized residuals derived from a fitted Poisson regression model to monitor industrial operations with non-normal response variables.
Abstract: Control charts are widely used tools that provide quality inspectors with sensitive information for maintaining manufacturing process productivity. Numerous model-based techniques in the literature monitor industrial operations under the assumption of a normally distributed response variable. However, quality control operations can yield non-normal responses. In such cases, an approach based on the generalized linear model (GLM), which offers multiple distribution options for the response variable, is required to achieve better results. Therefore, this study proposes GLM-based moving average (MA) and double moving average (DMA) schemes built on standardized residuals derived from a fitted Poisson regression model. The performance of the suggested methods and of the existing exponentially weighted moving average (EWMA) scheme is explored in terms of run-length attributes. The simulation outcomes revealed that the moving average schemes based on standardized residuals (i.e., SR-MA and SR-DMA) outperform their predecessor (i.e., SR-EWMA). Moreover, the SR-DMA chart with small values of the span w proved more effective at detecting minor to moderate shifts in the process mean. Finally, a case study of a 3D manufacturing operation is presented to emphasize the importance of the proposed approaches.
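The MA and DMA statistics themselves are standard; a minimal sketch on standardized residuals, with an illustrative injected shift and without the article's Poisson fit or control-limit constants, might look like:

```python
# Hedged sketch of the MA and DMA monitoring statistics on standardized
# residuals. The span w, the residual series, and the shift location are
# illustrative only; control limits from the article are omitted.

def moving_average(res, w):
    """MA_t: mean of the last w residuals (fewer during start-up)."""
    return [sum(res[max(0, t - w + 1): t + 1]) / min(t + 1, w)
            for t in range(len(res))]

def double_moving_average(res, w):
    """DMA_t: the moving average applied to the MA series itself."""
    return moving_average(moving_average(res, w), w)

residuals = [0.1, -0.2, 0.0, 0.3, 2.8, 3.1, 2.9]   # upward shift injected at t = 4
ma = moving_average(residuals, 3)
dma = double_moving_average(residuals, 3)
# A chart would signal when a statistic crosses a control limit, e.g. |DMA_t| > L.
```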

Journal ArticleDOI
TL;DR: In this article, a Deep Learning Enabled Cross-lingual Search with Metaheuristic based Query Optimization (DLCLS-MQO) model is introduced to generate a summary of several documents in which the summary language is different from the source language.
Abstract: Due to the exponential increase in the generation of digital documents and in the diversity of online search users, multilingual information is widely available on the Internet. However, this huge amount of multilingual data cannot be analyzed manually. Therefore, the cross-lingual multi-document summarization (CLMDS) model generates a summary of several documents in which the summary language differs from the source-document language. This paper presents a Deep Learning Enabled Cross-lingual Search with Metaheuristic based Query Optimization (DLCLS-MQO) model for multi-document summarization. The DLCLS-MQO model allows a user to pose a query in Tamil, summarizes several English documents, and finally translates the summary into Tamil. The model encompasses four stages of operation: multilingual search, query optimization, automatic semantic lexicon building, and document summarization. First, a bidirectional long short-term memory (BiLSTM) model performs the multilingual search. Next, a sunflower optimization (SFO) algorithm carries out query optimization. Moreover, the global vectors (GloVe) method is used to construct domain-oriented sentiment lexicons. Finally, an extreme gradient boosting (XGBoost) model is applied for the CLMDS. A detailed simulation analysis highlights the benefits of the DLCLS-MQO model, and the resultant experimental values portray its superior performance over the compared methods.
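A hypothetical sketch of the lexicon-builder idea: starting from seed sentiment words, nearby words in an embedding space (GloVe in the paper; tiny made-up 2-D vectors here) are pulled into the domain lexicon via cosine similarity. The vectors, words, and threshold are invented for illustration.

```python
# Toy domain-lexicon expansion by cosine similarity over word embeddings.
# The 2-D "GloVe" vectors and the 0.9 threshold are illustrative assumptions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

embeddings = {
    "good":      (0.9, 0.1),
    "excellent": (0.8, 0.2),
    "bad":       (-0.9, 0.1),
    "table":     (0.0, 1.0),
}

def expand_lexicon(seeds, threshold=0.9):
    """Add every vocabulary word close enough to any seed word."""
    lexicon = set(seeds)
    for word, vec in embeddings.items():
        if any(cosine(vec, embeddings[s]) >= threshold for s in seeds):
            lexicon.add(word)
    return lexicon

positive = expand_lexicon(["good"])   # pulls in "excellent", rejects "bad", "table"
```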

Journal ArticleDOI
TL;DR: In this paper, machine learning based malicious signal detection is employed for cognitive radio networks: the cognitive radio users and network environment are simulated with Riverbed simulation software, and each received signal is checked to determine whether it is a malicious signal or merely a secure sensing signal.
Abstract: In cognitive radio networks, empty spectrum, also known as a spectrum hole, is detected with the help of spectrum sensing techniques. Energy detection is the most widely used spectrum sensing technique owing to its low complexity; in it, a spectrum hole is detected with a predefined threshold. In this article, machine learning based malicious signal detection is employed for cognitive radio networks. The cognitive radio users and network environment are simulated with Riverbed simulation software, and each received signal is checked to determine whether it is malicious or merely a secure sensing signal. A fuzzy logic based system categorizes the security of spectrum sensing signals as malicious, suspicious, or secure. The fuzzy logic parameters are taken from the machine learning features, chosen as the three most effective of all 49 features. The security of primary users is enhanced compared with other schemes in the literature. The results of the proposed machine learning based malicious signal detection system are validated against those of the fuzzy logic based approach, and the random forest method gives the best signal-detection results among all machine learning methods.
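A toy version of the fuzzy categorization step, with a single made-up anomaly score standing in for the paper's three selected features and an invented triangular rule base:

```python
# Illustrative fuzzy categorization of a sensing signal as secure,
# suspicious, or malicious. The real system uses three features and a rule
# base not given in the abstract; one score in [0, 1] stands in for them.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def categorize(score):
    memberships = {
        "secure":     tri(score, -0.5, 0.0, 0.5),
        "suspicious": tri(score,  0.2, 0.5, 0.8),
        "malicious":  tri(score,  0.5, 1.0, 1.5),
    }
    return max(memberships, key=memberships.get)   # winner-takes-all defuzzification
```

For example, `categorize(0.1)` yields `"secure"` and `categorize(0.9)` yields `"malicious"`.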

Journal ArticleDOI
TL;DR: This paper presents a view of the background on the implications of explainability applied to recommender systems and contributes to a better understanding of the concept of explainable recommendation.
Abstract: Explainable recommendations become essential when we need to improve the performance of recommendations and increase user confidence. Explanations are effective when end users can build a complete and correct mental representation of the inferential process of a recommender system. This paper presents our view of the background on the implications of explainability applied to recommender systems. Our work contributes to a better understanding of the concept of explainable recommendation and offers a broader picture for the development of further research in this field. Additionally, we provide a better understanding of the concept of human-centered evaluation of explainable recommender systems.

Journal ArticleDOI
TL;DR: This research automatically identifies and distinguishes examinees in live videos who cheat through their activities in the examination hall during exams, leveraging an ensemble of deep learning-based algorithms.
Abstract: Automated behavior analysis during examinations is a significant problem. Even in the presence of invigilators and examiners, examinees use various unfair means to cheat, such as moving their head or eyes and using suspicious objects (mobile phone, calculator, and so forth). This research automatically identifies and distinguishes examinees in live videos who cheat through their activities in the examination hall during exams. To do so, we leverage an ensemble of deep learning-based algorithms: suspicious head movements and prohibited objects are monitored using a fine-tuned Faster RCNN algorithm, while interactive usage of prohibited objects is detected using the OpenPose architecture. For this purpose, the intersection over union (IOU) is calculated between the region of interest (ROI) of each detected object and the pose points of the hands and face. To establish the identity of a specific examinee, we store the features of the facial ROI after the last convolution layer and match them at test time for statistical analysis. We achieve state-of-the-art results using standard evaluation measures. Moreover, since no dataset for automatic examinee invigilation was previously available, this research also contributes the generation and annotation of such a dataset.
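The IOU test between a detected object's ROI and a hand/face region can be sketched as below; the box format and the 0.5 interaction threshold are assumptions for illustration, not values from the paper.

```python
# Sketch of the IOU check between a detected object's ROI and a region
# derived from pose points. Boxes are (x1, y1, x2, y2); the threshold is
# an illustrative assumption.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

phone_roi = (10, 10, 30, 30)                   # detected prohibited object
hand_roi = (20, 20, 40, 40)                    # region around hand pose points
interacting = iou(phone_roi, hand_roi) > 0.5   # flag interactive usage
```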

Journal ArticleDOI
TL;DR: In this article , a hybrid ML intrusion detection system (IDS) model is proposed, which employs a 10-fold cross-validation technique to perform feature selection, reducing data dimensions on the publicly available benchmark NSL-KDD dataset.
Abstract: Cloud computing offers convenient services to business sectors, letting them concentrate on their products. Over the internet, however, cloud computing is exposed to various security threats and attacks, a primary obstacle to the growth of cloud computing services. Distributed denial of service (DDoS) is one such attack that exploits cloud computing services using compromised machines; hence, its detection is a significant field of research. Several DDoS detection schemes have been proposed in the past, but they fail to detect real-time active DDoS attacks, which have grown in severity and volume. Machine learning (ML) techniques are efficient at making predictions; hence, this study proposes a hybrid ML intrusion detection system (IDS) model. The performance of the proposed IDS model is improved by employing a 10-fold cross-validation technique for feature selection, reducing the data dimensions of the publicly available benchmark NSL-KDD dataset. Performance validation of the proposed hybrid IDS model is done using the confusion matrix. Support vector machine (SVM) parameters are fine-tuned using hybrid Harris Hawks optimization (HHO) and particle swarm optimization (PSO) algorithms. The performance of these hybrid algorithms is compared with classical algorithms such as C4.5, K-nearest neighbor, and SVM using metrics such as precision, sensitivity, specificity, F1 score, and accuracy. From these comparisons, it can be inferred that the proposed SVM with hybrid HHO-PSO optimization performs better DDoS detection, with good values on the performance metrics.
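A minimal sketch of the PSO half of the hybrid HHO-PSO tuner, with a toy convex function standing in for the cross-validated SVM error; the HHO half is omitted, and all coefficient values here are illustrative assumptions.

```python
# Hedged sketch of PSO tuning a single SVM-like parameter. The objective
# is a toy stand-in for cross-validated error, not the paper's pipeline.
import random

random.seed(0)

def validation_error(c):               # stand-in for CV error of SVM with parameter C
    return (c - 3.0) ** 2 + 0.1

def pso(obj, lo, hi, particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(lo, hi) for _ in range(particles)]
    vel = [0.0] * particles
    pbest = pos[:]                     # each particle's best position so far
    gbest = min(pos, key=obj)          # swarm-wide best position
    for _ in range(iters):
        for i in range(particles):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))   # clamp to bounds
            if obj(pos[i]) < obj(pbest[i]):
                pbest[i] = pos[i]
            if obj(pos[i]) < obj(gbest):
                gbest = pos[i]
    return gbest

best_c = pso(validation_error, 0.0, 10.0)   # converges near the minimum at 3.0
```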

Journal ArticleDOI
TL;DR: AMD-CNN, an Android malware detection tool that uses graphical representations to detect malicious APKs, is proposed and shown to have advantages over previous studies.
Abstract: Android malware has become a serious threat to mobile device users, and effective detection and defence architectures are needed to solve this problem. Recently, machine learning techniques have been widely used to deal with malicious Android apps, but these methods rely on simple feature sets and have difficulty detecting up-to-date malware; more robust and efficient classification methodologies are therefore needed. In this article, AMD-CNN, an Android malware detection tool, is proposed; it uses graphical representations to detect malicious APKs. In the first step, the features related to the androidmanifest.xml file are extracted and converted into a binary vector of ones and zeros. The feature vector is then converted into 2D-code images used to train the CNN. The model's low resource consumption lets it run on mobile devices and analyze applications in real time. Experiments with 1920 malicious and benign APKs show a malware detection rate (accuracy) of 96.2%, with precision, recall, and F-score values of 97.9%, 98.2%, and 98.1%, respectively. The average time and memory required to analyze each application are 0.035 s and 3.38 MB. AMD-CNN is an efficient and robust tool with advantages over previous studies.
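The manifest-vector-to-image step could, for example, look like the sketch below; the real feature set, image size, and encoding used by AMD-CNN are not specified in the abstract.

```python
# Illustrative version of the feature-vector-to-image step: a binary vector
# is zero-padded to a square and reshaped into a 2-D grid suitable for a CNN.
import math

def vector_to_image(bits):
    side = math.isqrt(len(bits))
    if side * side < len(bits):
        side += 1                                  # smallest square that fits
    padded = bits + [0] * (side * side - len(bits))
    return [padded[r * side:(r + 1) * side] for r in range(side)]

features = [1, 0, 1, 1, 0, 0, 1]   # e.g. requested-permission flags (hypothetical)
image = vector_to_image(features)  # a 3x3 grid: [[1,0,1],[1,0,0],[1,0,0]]
```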

Journal ArticleDOI
TL;DR: In this paper , a modified Gaussian convolutional deep belief network based dwarf mongoose optimization algorithm was proposed for effective extraction and classification of retinal images, which achieved 97% accuracy.
Abstract: Diabetic retinopathy (DR) is a common ocular disease in which blood leakage from the vessels damages the retina. Early detection of DR is a complicated task, yet it is necessary to prevent complete blindness. Various physical examinations are employed in DR detection, but manual diagnosis leads to misclassification. Therefore, this article proposes a novel technique to predict and classify DR effectively. Its main objective is the classification of fundus retinal images into two classes: normal (absence of DR) and abnormal (presence of DR). The proposed DR detection comprises four vital phases: data preprocessing, image augmentation, feature extraction, and classification. Initially, image preprocessing removes unwanted noise and enhances the images; the preprocessed images are then augmented to increase the size and quality of the training set. This article proposes a novel modified Gaussian convolutional deep belief network based dwarf mongoose optimization algorithm for the effective extraction and classification of retinal images. The ODIR-2019 dataset is employed for detecting and classifying DR. Finally, the experiments show that the proposed approach achieves 97% accuracy, implying that it effectively classifies fundus retinal images.

Journal ArticleDOI
TL;DR: A computer vision based system to classify seven different categories of diseases, namely, bacterial spot, early blight, late blight, leaf mold, septoria leaf spot, spider mites, and target spots using optimized MobileNetV2 architecture is reported.
Abstract: Tomato is a widely consumed fruit across the world due to its high nutritional value. Leaf diseases in tomato are very common and incur huge damage, but early detection can help avoid it. The existing practice of detecting diseases through human experts is costly, time-consuming, and subjective. Computer vision plays an important role in the early detection of tomato leaf diseases; however, implementing computationally inexpensive models with improved detection performance remains an open problem. This article reports a computer vision based system that classifies seven categories of disease, namely bacterial spot, early blight, late blight, leaf mold, septoria leaf spot, spider mites, and target spots, using an optimized MobileNetV2 architecture. A modified gray wolf optimization approach has been adopted to tune the MobileNetV2 hyperparameters for improved performance. The model has been validated using standard internal and external validation methods and provides classification accuracy of around 98%. The results reflect the promising potential of the presented framework for early detection of tomato leaf diseases, which can help avoid substantial agricultural losses.
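A bare-bones sketch of (unmodified) gray wolf optimization on a toy stand-in for validation loss; the authors' modification and their actual hyperparameter search space are not reproduced here.

```python
# Rough sketch of gray wolf optimization (GWO), the metaheuristic family the
# authors modify to tune MobileNetV2 hyperparameters. The 1-D objective is a
# toy stand-in for validation loss (e.g. x = learning rate, hypothetical).
import random

random.seed(1)

def loss(x):
    return (x - 0.01) ** 2          # toy minimum at x = 0.01

def gwo(obj, lo, hi, wolves=8, iters=60):
    pack = [random.uniform(lo, hi) for _ in range(wolves)]
    for t in range(iters):
        alpha, beta, delta = sorted(pack, key=obj)[:3]   # the three leaders
        a = 2.0 * (1 - t / iters)                        # shrinks from 2 to 0
        new_pack = []
        for x in pack:
            est = 0.0
            for leader in (alpha, beta, delta):
                A = a * (2 * random.random() - 1)
                C = 2 * random.random()
                est += leader - A * abs(C * leader - x)  # move toward each leader
            new_pack.append(min(hi, max(lo, est / 3)))   # average, clamp to bounds
        pack = new_pack
    return min(pack, key=obj)

best_lr = gwo(loss, 0.0001, 0.1)
```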

Journal ArticleDOI
TL;DR: A fast chaos-RSA-based hybrid cryptosystem is proposed to secure and authenticate secret images, and a comparative study against numerous recent encryption algorithms demonstrates that the proposed algorithm provides good results.
Abstract: This article puts forward a fast chaos-RSA-based hybrid cryptosystem to secure and authenticate secret images. SHA-512 is used to generate a 512-bit initial key, and the RSA system encrypts the initial secret key and generates signatures for both sender and image authentication. A powerful block-cipher algorithm is developed to encrypt and decrypt images with a high level of security; at this stage, a strong PRNG based on four chaotic systems is proposed to generate high-quality keys. An improved architecture is then suggested that performs confusion and diffusion of images with low computational complexity. In the final step, the encrypted secret key, the signature, and the encrypted image are combined to obtain an encrypted signed image. The block-cipher algorithm is evaluated in depth on several ordinary and medical images of different types, content, and sizes. The simulation results demonstrate that the system enables high-level security: the entropy reached 7.9998, very close to the ideal value of 8 for 8-bit images, which is the most important indicator of randomness. A comparative study against numerous recent encryption algorithms demonstrates that the proposed algorithm provides good results.
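A toy single-map illustration of the chaos-based keystream idea: the article's PRNG combines four chaotic systems and adds RSA key wrapping, signatures, and confusion/diffusion stages, none of which is reproduced here, and the seed value below is invented.

```python
# Toy chaos-based stream cipher: a logistic map seeded from a key drives a
# byte keystream XORed with the image bytes. Not the article's cryptosystem,
# and not cryptographically secure; for illustration only.

def logistic_keystream(x0, n, r=3.99):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_cipher(data, x0):
    """XOR data with the chaotic keystream; applying it twice decrypts."""
    ks = logistic_keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

pixels = bytes([10, 200, 33, 33, 33, 90])   # a tiny "image"
secret_key = 0.362001                       # hypothetical seed (SHA-512-derived in the paper)
cipher = xor_cipher(pixels, secret_key)
```

Because XOR is an involution, `xor_cipher(cipher, secret_key)` restores the original bytes.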