
Showing papers in "International Journal of Machine Learning and Cybernetics in 2021"


Journal ArticleDOI
TL;DR: In this algorithm, a dynamic archive concept, grid mechanism, leader selection, and genetic operators are employed with the capability to cache the solutions from the non-dominated Pareto front and to find the appropriate archived solutions.
Abstract: This study introduces the evolutionary multi-objective version of the seagull optimization algorithm (SOA), entitled Evolutionary Multi-objective Seagull Optimization Algorithm (EMoSOA). In this algorithm, a dynamic archive concept, grid mechanism, leader selection, and genetic operators are employed with the capability to cache the solutions from the non-dominated Pareto front. The roulette-wheel method is employed to find the appropriate archived solutions. The proposed algorithm is tested and compared with state-of-the-art metaheuristic algorithms over twenty-four standard benchmark test functions. Four real-world engineering design problems are used to validate the proposed EMoSOA algorithm and determine its adequacy. The findings of the empirical research indicate that the proposed algorithm is better than the other algorithms. It also takes into account the Pareto-optimal solutions, which show high convergence.

90 citations
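As a rough illustration of the archive-plus-roulette-wheel idea described above (not the authors' implementation; the occupancy-based weighting and all names below are assumptions for the example), a minimal Python sketch:

```python
import numpy as np

def roulette_wheel_pick(cell_occupancy, rng=np.random.default_rng()):
    """Pick one archived solution, favouring sparsely populated grid cells.

    cell_occupancy: 1-D array where larger values mean a more crowded
    (less desirable) grid cell, as in typical archive-based MO algorithms.
    """
    weights = 1.0 / (cell_occupancy + 1e-12)   # sparse cells get higher weight
    probs = weights / weights.sum()
    return rng.choice(len(cell_occupancy), p=probs)

# Example: four archived non-dominated solutions with grid-cell counts 1..4.
occupancy = np.array([1.0, 2.0, 4.0, 3.0])
print(roulette_wheel_pick(occupancy))
```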


Journal ArticleDOI
TL;DR: The background of the 5G wireless networks is described and a deep insight is given into a set of 5G challenges and research opportunities for machine learning (ML) techniques to manage these challenges.
Abstract: 5G cellular networks are expected to be the key infrastructure to deliver emerging services. These services bring new requirements and challenges that obstruct the desired goals of forthcoming networks. Mobile operators are rethinking their network design to provide more flexible, dynamic, cost-effective and intelligent solutions. This paper starts by describing the background of 5G wireless networks and then gives a deep insight into a set of 5G challenges and research opportunities for machine learning (ML) techniques to manage these challenges. The first part of the paper is devoted to an overview of the fifth generation of cellular networks, explaining its requirements, its key technologies and their challenges, and its forthcoming architecture. The second part presents a basic overview of ML techniques that are nowadays applied to cellular networks. The last part discusses the most important related works which propose ML solutions to overcome 5G challenges.

89 citations


Journal ArticleDOI
TL;DR: In this article, a logistic regression method enhanced by the concept of supervised machine learning (logitboost) was used for developing a classification model using 13 network traffic features generated by IoT devices.
Abstract: The emergence of the Internet of Things (IoT) concept as a new direction of technological development raises new problems, such as valid and timely identification of such devices, security vulnerabilities that can be exploited for malicious activities, and management of such devices. The communication of IoT devices generates traffic that has specific features and differences with respect to conventional devices. This research analyzes the possibilities of applying such features for classifying devices, regardless of their functionality or purpose. This kind of classification is necessary for a dynamic and heterogeneous environment, such as a smart home where the number and types of devices grow daily. This research uses a total of 41 IoT devices. The logistic regression method enhanced by the concept of supervised machine learning (logitboost) was used to develop a classification model. A multiclass classification model was developed using 13 network traffic features generated by IoT devices. The research has shown that it is possible to classify devices into four previously defined classes with high performance and accuracy (99.79%) based on the traffic flow features of such devices. Model performance measures such as precision, F-measure, True Positive Ratio, False Positive Ratio and Kappa coefficient all show high results (0.997–0.999, 0.997–0.999, 0.997–0.999, 0–0.001 and 0.9973, respectively). Such a model can serve as a foundation for monitoring and management solutions in large, heterogeneous IoT environments such as the Industrial IoT, smart homes, and similar settings.

87 citations


Journal ArticleDOI
TL;DR: In this paper, a depthwise separable convolution neural network (DWS-CNN) with deep support vector machine (DSVM) was proposed to detect both binary and multiple classes of COVID-19 by incorporating a set of processes namely data acquisition, Gaussian filtering (GF) based preprocessing, feature extraction and classification.
Abstract: At present, the drastic advancements in 5G cellular and Internet of Things (IoT) technologies are finding use in different applications of the healthcare sector. At the same time, COVID-19, which originally spread from animals to humans, is now transmitted among people as the virus adapts its structure. It is a severe virus and has resulted in a global pandemic. Radiologists utilize X-ray or computed tomography (CT) images to diagnose COVID-19 disease. It is essential to identify and classify the disease through the use of image processing techniques, so a new intelligent disease diagnosis model is needed to identify COVID-19. In this view, this paper presents a novel IoT-enabled depthwise separable convolution neural network (DWS-CNN) with a deep support vector machine (DSVM) for COVID-19 diagnosis and classification. The proposed DWS-CNN model aims to detect both binary and multiple classes of COVID-19 by incorporating a set of processes, namely data acquisition, Gaussian filtering (GF) based preprocessing, feature extraction, and classification. Initially, patient data are collected in the data acquisition stage using IoT devices and sent to the cloud server. Besides, the GF technique is applied to remove the noise that exists in the image. Then, the DWS-CNN model, which replaces default convolutions, is employed for automatic feature extraction. Finally, the DSVM model is applied to determine the binary and multiple class labels of COVID-19. The diagnostic outcome of the DWS-CNN model is tested against a chest X-ray (CXR) image dataset, and the results are investigated in terms of distinct performance measures. The experimental results confirm the superiority of the DWS-CNN model, which attains maximum classification performance with accuracies of 98.54% and 99.06% on binary and multiclass classification, respectively.

70 citations
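The depthwise separable convolution mentioned above factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 pointwise projection. A minimal PyTorch sketch of such a block, with channel sizes and input shape chosen only for illustration:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 224, 224)                 # e.g. a filtered CXR feature map
print(DepthwiseSeparableConv(32, 64)(x).shape)   # torch.Size([1, 64, 224, 224])
```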


Journal ArticleDOI
TL;DR: A novel randomized particle swarm optimizer (RPSO) is proposed where the Gaussian white noise with adjustable intensity is utilized to randomly perturb the acceleration coefficients in order for the problem space to be explored more thoroughly.
Abstract: The particle swarm optimization (PSO) algorithm is a popular evolutionary computation approach that has received an ever-increasing interest in the past decade owing to its wide application potential. Despite the many variants of the PSO algorithm with improved search ability by means of both the convergence rate and the population diversity, the local optima problem remains a major obstacle that hinders the global optima from being found. In this paper, a novel randomized particle swarm optimizer (RPSO) is proposed where the Gaussian white noise with adjustable intensity is utilized to randomly perturb the acceleration coefficients in order for the problem space to be explored more thoroughly. With this new strategy, the RPSO algorithm not only maintains the population diversity but also enhances the possibility of escaping the local optima trap. Experimental results demonstrate that the proposed RPSO algorithm outperforms some existing popular variants of PSO algorithms on a series of widely used optimization benchmark functions.

63 citations
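The core RPSO idea above, perturbing the acceleration coefficients with adjustable-intensity Gaussian white noise, can be sketched as a single velocity update; the coefficient values and noise intensity below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rpso_velocity(v, x, pbest, gbest, w=0.729, c1=2.0, c2=2.0,
                  noise_std=0.5, rng=np.random.default_rng()):
    """One velocity update with Gaussian-perturbed acceleration coefficients."""
    # Perturb c1, c2 with zero-mean Gaussian white noise of adjustable intensity.
    c1_r = c1 + noise_std * rng.standard_normal()
    c2_r = c2 + noise_std * rng.standard_normal()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1_r * r1 * (pbest - x) + c2_r * r2 * (gbest - x)

# Example on a 5-dimensional particle.
x = np.zeros(5); v = np.zeros(5)
print(rpso_velocity(v, x, pbest=np.ones(5), gbest=np.full(5, 2.0)))
```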


Journal ArticleDOI
TL;DR: An improved version of HHO, called IHHO, is proposed, which enhances the performance of HHO by combining it with opposition-based learning (OBL), chaotic local search (CLS), and a self-adaptive technique; the numerical results and analysis show the superiority of IHHO in solving real-world problems.
Abstract: Harris Hawks Optimization (HHO) is a recently proposed algorithm inspired by the cooperative manner and chasing behavior of Harris hawks. However, from experimental results, it can be noticed that HHO may fall into local optima or have a slow convergence curve on some complex optimization tasks. In this paper, an improved version of HHO called IHHO is proposed, which enhances the performance of HHO by combining it with opposition-based learning (OBL), chaotic local search (CLS), and a self-adaptive technique. In order to show the performance of the proposed algorithm, several experiments are conducted using the standard IEEE CEC 2017 benchmark. IHHO is compared with the classical HHO and 10 other state-of-the-art algorithms. Moreover, IHHO is used to solve 5 constrained engineering problems. IHHO has also been applied to solve the feature selection problem on 7 UCI datasets. The numerical results and analysis show the superiority of IHHO in solving real-world problems.

63 citations
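Two of the ingredients named above, opposition-based learning and chaotic local search, can be sketched as follows; this is a generic illustration (logistic-map chaos, simple bound mirroring), not the authors' IHHO code, and the search radius and step counts are assumptions:

```python
import numpy as np

def opposition(x, lb, ub):
    """Opposition-based learning: mirror a candidate inside the search bounds."""
    return lb + ub - x

def chaotic_local_search(x, lb, ub, f, steps=20, radius=0.1, z=0.7):
    """Logistic-map chaotic local search around the current best solution."""
    best, best_val = x.copy(), f(x)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                          # logistic chaotic map in (0, 1)
        cand = np.clip(best + radius * (ub - lb) * (2.0 * z - 1.0), lb, ub)
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

# Example on the sphere function: keep the fitter of a point and its opposite,
# then refine it chaotically.
f = lambda v: np.sum(v ** 2)
lb, ub = np.full(5, -10.0), np.full(5, 10.0)
x = np.random.uniform(lb, ub)
x_opp = opposition(x, lb, ub)
start = x if f(x) < f(x_opp) else x_opp
print(chaotic_local_search(start, lb, ub, f)[1])
```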


Journal ArticleDOI
TL;DR: The intuitionistic fuzzy CPT-TodIM (IF-CPT-TODIM) method is proposed for MAGDM issue and it is concluded that this improved approach is acceptable.
Abstract: Stock investment selection can be regarded as a classical multiple attribute group decision making (MAGDM) issue. Intuitionistic fuzzy sets (IFSs) can fully describe the uncertain information in stock investment selection. Furthermore, the classical TODIM method based on cumulative prospect theory (CPT-TODIM) is a suitable method for reflecting the decision makers' (DMs') psychological behavior. Thus, in this paper, the intuitionistic fuzzy CPT-TODIM (IF-CPT-TODIM) method is proposed for the MAGDM issue. At the same time, the CRITIC method is used under IFSs to obtain the attribute weight information more rationally. Focusing on hot issues in contemporary society, this article applies the discussed method to stock investment selection and demonstrates it with an example. Finally, through a comparative analysis, we conclude that this improved approach is acceptable.

51 citations


Journal ArticleDOI
TL;DR: This research mainly focuses on presenting an innovative study of a multi-stage multi-objective fixed-charge solid transportation problem with a green supply chain network system under an intuitionistic fuzzy environment and incorporates an application example connected with a real-life industrial problem to display the feasibility and potentiality of the proposed model.
Abstract: This research mainly focuses on presenting an innovative study of a multi-stage multi-objective fixed-charge solid transportation problem (MMFSTP) with a green supply chain network system under an intuitionistic fuzzy environment. One of the most controversial issues in recent years is that greenhouse gas emissions such as carbon dioxide, methane, etc. induce air pollution and global warming, which motivates the proposed research. In real-world situations, the parameters of MMFSTP in a green supply chain network system are usually unknown quantities, so we use trapezoidal intuitionistic fuzzy numbers to accommodate them and then employ the expected value operator to convert the intuitionistic fuzzy MMFSTP into a deterministic MMFSTP. Next, methodologies are constructed to solve the deterministic MMFSTP by weighted Tchebycheff metric programming and min-max goal programming, which provide Pareto-optimal solutions. A comparison is then drawn between the Pareto-optimal solutions extracted from the two programming approaches, and thereafter a sensitivity analysis of the target values in the min-max goal programming is performed. Finally, we incorporate an application example connected with a real-life industrial problem to display the feasibility and potentiality of the proposed model. Conclusions about the findings and future study directions are also offered.

50 citations


Journal ArticleDOI
TL;DR: The fully cooperative multi-agent reinforcement learning (MARL) uses kinematic learning to avoid function approximators and a large learning space, and the experimental results show that the MARL performs much better than classic methods such as Jacobian-based methods and neural networks.
Abstract: Task-space control needs the inverse kinematics solution or Jacobian matrix for the transformation from task space to joint space. However, they are not always available for redundant robots because there are more joint degrees-of-freedom than Cartesian degrees-of-freedom. Intelligent learning methods, such as neural networks (NN) and reinforcement learning (RL), can learn the inverse kinematics solution. However, NN needs big data, and classical RL is not suitable for multi-link robots controlled in task space. In this paper, we propose a fully cooperative multi-agent reinforcement learning (MARL) approach to solve the kinematic problem of redundant robots. Each joint of the robot is regarded as one agent. The fully cooperative MARL uses kinematic learning to avoid function approximators and a large learning space. The convergence property of the proposed MARL is analyzed. The experimental results show that our MARL performs much better than classic methods such as Jacobian-based methods and neural networks.

47 citations


Journal ArticleDOI
TL;DR: In this paper, an attention-based context aggregation network (ACAN) is proposed to adaptively learn the task-specific similarities between different pixels to model the continuous context information.
Abstract: Depth estimation is a traditional computer vision task, which plays a crucial role in understanding 3D scene geometry. Recently, algorithms that combine the multi-scale features extracted by the dilated convolution based block (atrous spatial pyramid pooling, ASPP) have gained significant improvements in depth estimation. However, the discretized and predefined dilation kernels cannot capture the continuous context information that differs in diverse scenes and easily introduce grid artifacts. This paper proposes a novel algorithm, called attention-based context aggregation network (ACAN), for depth estimation. A supervised self-attention model is designed and utilized to adaptively learn the task-specific similarities between different pixels to model the continuous context information. Moreover, a soft ordinal inference is proposed to transform the predicted probabilities to continuous depth values, which reduces the discretization error (about 1% decrease in RMSE). ACAN achieves state-of-the-art performance on public monocular depth-estimation benchmark datasets. The source code of ACAN can be found at https://github.com/miraiaroha/ACAN .

41 citations
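The soft ordinal inference mentioned above replaces a hard arg-max over depth bins with a probability-weighted average of bin centres. A minimal sketch under the assumption of uniformly spaced bins (the paper's exact discretization and depth range may differ):

```python
import torch

def soft_ordinal_depth(logits, depth_min=0.5, depth_max=10.0):
    """Turn per-pixel probabilities over K discrete depth bins into a
    continuous depth map by taking the probability-weighted bin centres.

    logits: tensor of shape (B, K, H, W).
    """
    k = logits.shape[1]
    centres = torch.linspace(depth_min, depth_max, k, device=logits.device)
    probs = torch.softmax(logits, dim=1)
    return (probs * centres.view(1, k, 1, 1)).sum(dim=1)   # (B, H, W)

print(soft_ordinal_depth(torch.randn(1, 80, 12, 16)).shape)  # torch.Size([1, 12, 16])
```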


Journal ArticleDOI
TL;DR: In this paper, the q-rung orthopair fuzzy linguistic family of point aggregation operators was proposed for linguistic Q-ROFSs, and a novel multi attribute group decision-making (MAGDM) methodology was designed to process the linguistic q-Rung Orthopair Fuzzy information.
Abstract: The q-rung orthopair fuzzy sets (q-ROFSs), originally proposed by Yager, can express uncertain data and give decision-makers more space. The q-ROFS is a useful tool for describing imprecision, ambiguity, and inaccuracy, and the point operator is a useful aggregation operator which can manage the uncertainty and thus obtain intensive information within the decision-making process. In the latest realization, the linguistic q-rung orthopair fuzzy number (Lq-ROFN) was suggested, in which the membership and non-membership of the Lq-ROFN are expressed as linguistic variables. In this article, we propose the q-rung orthopair fuzzy linguistic family of point aggregation operators for linguistic q-rung orthopair fuzzy sets (Lq-ROFSs). Firstly, with the arithmetic and geometric operators, we introduce a new class of point-weighted aggregation operators to aggregate linguistic q-rung orthopair fuzzy information, namely the linguistic q-rung orthopair fuzzy point weighted averaging (Lq-ROFPWA) operator, the linguistic q-rung orthopair fuzzy point weighted geometric (Lq-ROFPWG) operator, the linguistic q-rung orthopair fuzzy generalized point weighted averaging (Lq-ROFGPWA) operator and the linguistic q-rung orthopair fuzzy generalized point weighted geometric (Lq-ROFGPWG) operator. Then, we discuss some special cases and study the properties of these proposed operators. Based on the Lq-ROFPWA and Lq-ROFPWG operators, a novel multi-attribute group decision-making (MAGDM) methodology is designed to process linguistic q-rung orthopair fuzzy information. Finally, we provide an example to demonstrate the applicability of the MAGDM methodology. Consequently, the outstanding superiority of the developed methodology is demonstrated through parameter exploration and a thorough comparative analysis.

Journal ArticleDOI
TL;DR: Experimental results reveal that transfer learning improves the overall performance, detection accuracy, and reduces false positives of the detection model.
Abstract: Nowadays, 5G profoundly impacts video surveillance and monitoring services by processing video streams at high speed with high reliability, high bandwidth, and secure network connectivity. It also enhances artificial intelligence, machine learning, and deep learning techniques, which require intense processing to deliver near-real-time solutions. In video surveillance, person tracking is a crucial task due to the deformable nature of the human body and various environmental factors such as occlusion, illumination, and background conditions, specifically from a top-view perspective where the person's visual appearance is significantly different from a frontal or side view. In this work, a multiple-people tracking framework that uses 5G infrastructure is presented. A top-view perspective is used, which offers broad coverage of the scene or field of view. To perform person tracking, a deep learning-based tracking-by-detection framework is proposed, which includes detection by YOLOv3 and tracking by the Deep SORT algorithm. Although the model is pre-trained using frontal-view images, it still gives good detection results. In order to further enhance the accuracy of the detection model, a transfer learning approach is adopted. In this way, the detection model takes advantage of a pre-trained model appended with an additional layer trained on a top-view data set. To evaluate the performance, experiments are carried out on different top-view video sequences. Experimental results reveal that transfer learning improves the overall performance and detection accuracy and reduces false positives. The deep learning detection model YOLOv3 achieves a detection accuracy of 92% with a pre-trained model without transfer learning and 95% with transfer learning. The tracking algorithm Deep SORT also achieves excellent results, with a tracking accuracy of 96%.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed RODDPSO-based DBN outperforms the standard DBN and the modified DBN in terms of the classification accuracy.
Abstract: In this paper, a deep belief network (DBN) is employed to deal with the problem of patient attendance disposal in accident & emergency (A&E) departments. The selection of the hyperparameters of the employed DBN is automated by using the particle swarm optimization (PSO) algorithm, which is known for its simplicity, easy implementation and relatively fast convergence to a satisfactory solution. Specifically, a recently developed randomly occurring distributedly delayed PSO (RODDPSO) algorithm, which is capable of seeking the optimal solution and alleviating premature convergence, is exploited with the aim of optimizing the hyperparameters of the DBN. The developed RODDPSO-based DBN is successfully applied to analyze the A&E data for classifying the patient attendance disposal in the A&E department of a hospital in west London. Experimental results show that the proposed RODDPSO-based DBN outperforms the standard DBN and the modified DBN in terms of classification accuracy.
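The hyperparameter search described above can be illustrated with a plain PSO loop over a box-constrained space; note this omits the randomly occurring distributed delay terms that distinguish RODDPSO, and the objective below is a hypothetical stand-in for a DBN's validation error:

```python
import numpy as np

def pso_search(objective, lb, ub, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5,
               rng=np.random.default_rng(0)):
    """Minimal PSO over a box-constrained hyperparameter space (minimization)."""
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Hypothetical objective: validation error of a DBN as a function of
# (learning rate, number of hidden units); here a stand-in quadratic.
obj = lambda h: (h[0] - 0.01) ** 2 + (h[1] - 256) ** 2 / 1e6
print(pso_search(obj, lb=np.array([1e-4, 32]), ub=np.array([0.1, 1024])))
```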

Journal ArticleDOI
TL;DR: A comprehensive overview of adversarial attacks and defenses in the real physical world can be found in this article, where the authors reviewed the works that can successfully generate adversarial examples in the digital world and analyzed the challenges faced by applications in real environments.
Abstract: Deep learning technology has become an important branch of artificial intelligence. However, researchers have found that deep neural networks, as the core algorithm of deep learning technology, are vulnerable to adversarial examples. Adversarial examples are special inputs to which small-magnitude, carefully crafted perturbations have been added so that the model yields erroneous results with extremely high confidence. Hence, they bring serious security risks to deep-learning-based systems. Furthermore, adversarial examples exist not only in the digital world but also in the physical world. This paper presents a comprehensive overview of adversarial attacks and defenses in the real physical world. First, we review the works that can successfully generate adversarial examples in the digital world and analyze the challenges faced by applications in real environments. Then, we compare and summarize the work on adversarial examples in image classification tasks, target detection tasks, and speech recognition tasks. In addition, the relevant feasible defense strategies are summarized. Finally, relying on the reviewed work, we propose potential research directions for the attack and defense of adversarial examples in the physical world.
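As a concrete instance of the digital-world attacks this survey covers, the classic Fast Gradient Sign Method (FGSM) adds a small perturbation along the sign of the loss gradient; this is a generic illustration of the attack family, not a method proposed in the paper:

```python
import torch

def fgsm(model, x, y, eps=8 / 255, loss_fn=torch.nn.CrossEntropyLoss()):
    """Fast Gradient Sign Method: one-step perturbation bounded by eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()   # step in the loss-increasing direction
        return x_adv.clamp(0.0, 1.0)              # keep a valid image range
```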

Journal ArticleDOI
TL;DR: The proposed method, EFS-MCDM, first obtains a decision matrix using the ranks of every feature according to various rankers, and the VIKOR approach is used to assign a score to each feature based on the decision matrix.
Abstract: For the first time, ensemble feature selection is modeled as a Multi-Criteria Decision-Making (MCDM) process in this paper. For this purpose, we used the VIKOR method, a well-known MCDM algorithm, to rank the features based on the evaluation of several feature selection methods as different decision-making criteria. Our proposed method, EFS-MCDM, first obtains a decision matrix using the ranks of every feature according to various rankers. The VIKOR approach is then used to assign a score to each feature based on the decision matrix. Finally, a rank vector for the features is generated as output, from which the user can select a desired number of features. We have compared our approach with some ensemble feature selection methods that use a feature ranking strategy, as well as basic feature selection algorithms, to illustrate the proposed method's optimality and efficiency. The results show that our approach is superior to other similar methods in terms of accuracy, F-score, and run time.
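A minimal sketch of the rank-matrix-plus-VIKOR idea described above; the equal criterion weights, normalization details and the toy rank matrix are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def vikor_scores(decision, weights=None, v=0.5):
    """VIKOR over a decision matrix whose rows are alternatives (features)
    and whose columns are criteria (ranks from different selectors).
    Lower rank = better, so the 'ideal' value per criterion is the minimum."""
    m, n = decision.shape
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    best, worst = decision.min(axis=0), decision.max(axis=0)
    norm = w * (decision - best) / np.where(worst == best, 1.0, worst - best)
    S, R = norm.sum(axis=1), norm.max(axis=1)          # group utility / individual regret
    Q = (v * (S - S.min()) / max(S.max() - S.min(), 1e-12)
         + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12))
    return Q   # smaller Q = better-ranked feature

# Example: four features ranked by three different selectors.
ranks = np.array([[1, 2, 1], [3, 1, 2], [2, 4, 4], [4, 3, 3]], dtype=float)
print(np.argsort(vikor_scores(ranks)))   # features ordered best to worst
```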

Journal ArticleDOI
TL;DR: This work designs an improved VGG convolutional neural network that has significantly superior performance compared with existing schemes, significantly improving recognition accuracy while maintaining good real-time performance.
Abstract: The rapid development and application of AI in intelligent transportation systems has widely impacted daily life. The application of an intelligent visual aid for traffic sign information recognition can provide assistance and even control vehicles to ensure safe driving. The field of autonomous driving is booming, and great progress has been made. Many traffic sign recognition algorithms based on convolutional neural networks (CNNs) have been proposed because of the fast execution and high recognition rate of CNNs. However, this work addresses a challenging question in the autonomous driving field: how can traffic signs be recognized in real time and accurately? The proposed method designs an improved VGG convolutional neural network and has significantly superior performance compared with existing schemes. First, some redundant convolutional layers are removed efficiently from the VGG-16 network, and the number of parameters is greatly reduced to further optimize the overall architecture and accelerate calculation. Furthermore, a BN (batch normalization) layer and a GAP (global average pooling) layer are added to the network to improve the accuracy without increasing the number of parameters. The proposed method needs only 115 M when using the improved VGG-16 network. Finally, extensive experiments on the German Traffic Sign Recognition Benchmark (GTSRB) dataset are performed to evaluate our proposed scheme. Compared with traditional methods, our scheme significantly improves recognition accuracy while maintaining good real-time performance.
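A rough PyTorch sketch of the kind of trimmed VGG-style network described above, with batch normalization after each convolution and global average pooling in place of large fully connected layers; the layer counts and channel widths are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch):
    """3x3 convolution + batch normalization + ReLU, VGG-style."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class SlimVGG(nn.Module):
    """Trimmed VGG-style network: fewer conv blocks than VGG-16, BN after each
    convolution, and global average pooling instead of large FC layers."""
    def __init__(self, num_classes=43):   # GTSRB has 43 traffic sign classes
        super().__init__()
        self.features = nn.Sequential(
            conv_bn(3, 32), conv_bn(32, 32), nn.MaxPool2d(2),
            conv_bn(32, 64), conv_bn(64, 64), nn.MaxPool2d(2),
            conv_bn(64, 128), conv_bn(128, 128), nn.MaxPool2d(2),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.classifier(x)

print(SlimVGG()(torch.randn(1, 3, 48, 48)).shape)     # torch.Size([1, 43])
```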

Journal ArticleDOI
TL;DR: A novel parallel membrane-inspired framework is proposed to enhance the performance of the krill herd algorithm combined with the swap mutation strategy (MHKHA) and the results revealed that the proposed MHKHA produced superior results compared to other optimization methods.
Abstract: In this paper, a novel feature selection method is introduced to tackle the problem of high-dimensional features in the text clustering application. Text clustering is a prevailing direction in big text mining; in this manner, documents are grouped into cohesive groups by using neatly selected informative features. Swarm-based optimization techniques have been widely used to select relevant text features and have shown promising results on multi-sized datasets. However, the performance of traditional optimization algorithms tends to fail miserably on large-scale datasets. A novel parallel membrane-inspired framework is proposed to enhance the performance of the krill herd algorithm combined with the swap mutation strategy (MHKHA), in which the krill herd algorithm is hybridized with the swap mutation strategy and incorporated within the parallel membrane framework. Finally, the k-means technique is employed, based on the features selected by the krill herd algorithm, to cluster the documents. Seven benchmark datasets with various characteristics are used. The results reveal that the proposed MHKHA produces superior results compared to other optimization methods. This paper presents an alternative method for the text mining community based on cohesive and informative features.

Journal ArticleDOI
TL;DR: A new NMF clustering method with manifold regularization for multi-view data that achieves better clustering performance than some state-of-the-art algorithms.
Abstract: Nowadays, non-negative matrix factorization (NMF) based cluster analysis for multi-view data shows impressive behavior in machine learning. Usually, multi-view data have complementary information from various views. The main concern behind NMF is how to factorize the data to achieve a significant clustering solution from these complementary views. However, NMF does not focus on conserving the geometrical structure of the data space. In this article, we address the above issue and develop a new NMF clustering method with manifold regularization for multi-view data. The manifold regularization factor is exploited to retain the locally geometrical structure of the data space and gives an extensively common clustering solution from multiple views. A weight control term is adopted to handle the distribution of each view's weight. An iterative optimization strategy based on the multiplicative update rule is applied to the objective function to achieve optimization. Experimental analysis on real-world datasets shows that the proposed approach achieves better clustering performance than some state-of-the-art algorithms.
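The building block of manifold-regularized NMF can be illustrated with the standard single-view graph-regularized multiplicative updates; the multi-view weighting described above is omitted, so this is only a sketch of the core idea under those simplifying assumptions:

```python
import numpy as np

def gnmf(X, W, k=5, lam=1.0, iters=200, eps=1e-9, rng=np.random.default_rng(0)):
    """Graph-regularized NMF via multiplicative updates, single view.

    X: (m, n) non-negative data (columns are samples), W: (n, n) affinity matrix."""
    m, n = X.shape
    U, V = rng.random((m, k)), rng.random((n, k))
    D = np.diag(W.sum(axis=1))                    # degree matrix of the affinity graph
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V                                    # cluster samples, e.g. by argmax over rows of V

X = np.abs(np.random.rand(20, 50))
W = (np.random.rand(50, 50) > 0.9).astype(float)
W = (W + W.T) / 2                                  # symmetric toy affinity graph
U, V = gnmf(X, W, k=3)
print(V.argmax(axis=1)[:10])                       # cluster labels of the first 10 samples
```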

Journal ArticleDOI
TL;DR: For the first time, the problem of multi-label feature selection is modeled as a bipartite graph matching process, and the results indicate the superiority of the proposed method against other methods in classification measures.
Abstract: Many real-world data have multiple class labels and are known as multi-label data, where the labels are correlated with each other and, as such, are not independent. Since these data are usually high-dimensional, and current multi-label feature selection methods have not been precise enough, a new feature selection method is necessary. In this paper, for the first time, we have modeled the problem of multi-label feature selection as a bipartite graph matching process. The proposed method constructs a bipartite graph of features (as the left vertices) and labels (as the right vertices), called the Feature-Label Graph (FLG), in which each feature is connected to the set of labels and the weight of the edge between each feature and label is equal to their correlation. Then, the Hungarian algorithm estimates the best matching in the FLG. The selected features in each matching are sorted by weighted correlation distance and added to the ranking vector. To select the discriminative features, the proposed method considers both the redundancy of features and the relevancy of each feature to the class labels. The results indicate the superiority of the proposed method against other methods in classification measures.
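The feature-label bipartite matching step can be sketched with SciPy's Hungarian solver; the absolute Pearson correlation used as the edge weight here is an assumption standing in for the paper's correlation measure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_features_to_labels(X, Y):
    """Build a feature-label weight matrix from absolute Pearson correlations
    and find the maximum-weight one-to-one matching (Hungarian algorithm).

    X: (n_samples, n_features), Y: (n_samples, n_labels) binary label matrix."""
    n_feat, n_lab = X.shape[1], Y.shape[1]
    corr = np.zeros((n_feat, n_lab))
    for i in range(n_feat):
        for j in range(n_lab):
            corr[i, j] = abs(np.corrcoef(X[:, i], Y[:, j])[0, 1])
    rows, cols = linear_sum_assignment(-corr)      # negate to maximize total weight
    return list(zip(rows, cols, corr[rows, cols]))

X = np.random.rand(100, 8)
Y = (np.random.rand(100, 3) > 0.5).astype(float)
for feat, lab, w in match_features_to_labels(X, Y):
    print(f"feature {feat} -> label {lab} (|corr|={w:.2f})")
```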

Journal ArticleDOI
TL;DR: A new multi-view low-rank sparse representation method based on three-way clustering is proposed to tackle the challenges of dimensionality reduction and learning discriminative features from multi-view data, and to further obtain the relationship between the data items and clusters.
Abstract: During the past years, multi-view clustering algorithms have demonstrated satisfactory clustering results by fusing the multiple views of a dataset. Nowadays, research on dimensionality reduction and learning discriminative features from multi-view data has soared in the literature. As for clustering, generating a suitable subspace of the high-dimensional multi-view data is crucial to boost the clustering performance. In addition, the relationship between the original data and the clusters still remains uncovered. In this article, we design a new multi-view low-rank sparse representation method based on three-way clustering to tackle these challenges, which derives the common consensus low-dimensional representation from the multi-view data and further obtains the relationship between the data items and clusters. Specifically, we accomplish this goal by taking advantage of the low-rank and sparse factors on the data representation matrix. The L2,1 norm is imposed on the error matrix to reduce the impact of noise contained in the data. Finally, a new objective function is constructed to preserve the consistency between the views by using the low-rank sparse representation technique. A weighted low-rank matrix is utilized to build the consensus low-rank matrix. Then, the whole objective function is optimized by using the Augmented Lagrange Multiplier algorithm. Further, to find the uncertain relationship between the data items and the clusters, we pursue a neighborhood-based three-way clustering technique to assign the data items to core and fringe regions. Experiments conducted on real-world datasets show the superior performance of the proposed method compared with state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: A density peaks clustering algorithm that merges microclusters based on k-nearest neighbors (kNN) and self-recommendation, called DPC-MC for short, is proposed.
Abstract: The density peaks clustering (DPC) model focuses on searching for density peaks and clustering data with arbitrary shapes for machine learning. However, it is difficult for DPC to select a cut-off distance in the calculation of the local density of points, and DPC easily ignores cluster centers with lower density in datasets with variable densities. In addition, for clusters with complex shapes, DPC selects only one cluster center per cluster, meaning that the structure of the whole cluster is not fully reflected. To overcome these drawbacks, this paper presents a novel DPC model that merges microclusters based on k-nearest neighbors (kNN) and self-recommendation, called DPC-MC for short. First, the kNN-based neighbourhood of a point is defined and the mutual neighbour degree of a point is presented in this neighbourhood, and then a new local density based on the mutual neighbour degree is proposed. This local density does not need the cut-off distance to be set manually. Second, to address the artificial setting of cluster centers, a self-recommendation strategy for local centers is provided. Third, after the selection of multiple local centers, the binding degree of microclusters is developed to quantify the degree of combination between a microcluster and its neighbour clusters. After that, homogeneous clusters are found according to the binding degree of microclusters during the process of deleting boundary points layer by layer. The homologous clusters are merged, the points in the abnormal clusters are reallocated, and then the clustering process ends. Finally, the DPC-MC algorithm is designed, and nine synthetic datasets and twenty-seven real-world datasets are used to verify the effectiveness of our algorithm. The experimental results demonstrate that the presented algorithm outperforms other compared algorithms in terms of several clustering evaluation metrics.
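A cut-off-free, kNN-based local density of the kind motivated above can be sketched as follows; this uses a simple mean-distance form rather than the paper's mutual-neighbour-degree definition, so it is only an illustrative stand-in:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_local_density(X, k=10):
    """Cut-off-free local density: larger when a point's k nearest
    neighbours are close, as in kNN-based variants of density peaks."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)                 # column 0 is the point itself
    return np.exp(-dist[:, 1:].mean(axis=1))   # one density value per point

X = np.random.rand(200, 2)
rho = knn_local_density(X)
print(rho.argsort()[::-1][:5])   # indices of the five densest points
```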

Journal ArticleDOI
TL;DR: Empirical results reveal that the model built by considering rough set analysis as a feature selection approach and farthest first as a machine learning algorithm achieved the highest detection rate of 98.8% for detecting malware in real-world apps.
Abstract: With the exponential growth in Android apps, Android-based devices are becoming victims of target attackers in the "silent battle" of cybernetics. Protecting Android-based devices from malware has become more complex and crucial for academicians and researchers. The main vulnerability lies in the underlying permission model of Android apps. Android apps demand permissions or permission sets at the time of their installation. In this study, we consider permissions and API calls as features that help in developing a model for malware detection. To select appropriate features or feature sets from thirty different categories of Android apps, we implemented ten distinct feature selection approaches. With the help of the selected feature sets, we developed distinct models by using five different unsupervised machine learning algorithms. We conducted an experiment on 500,000 distinct Android apps belonging to thirty distinct categories. Empirical results reveal that the model built by considering rough set analysis as a feature selection approach and farthest first as a machine learning algorithm achieved the highest detection rate of 98.8% for detecting malware in real-world apps.

Journal ArticleDOI
TL;DR: An innovative decision algorithm is proposed which takes the prioritized relations and correlation of the ascertained attributes into account, based upon generalized picture fuzzy Archimedean copula prioritized operators and a novel score function, to resolve MADM problems.
Abstract: Emergency scheme assessment (ESA) is a momentous activity for a country or government to improve emergency management, which can effectively reduce casualties and economic losses as much as possible. The choice of emergency scheme involves many quantitative or qualitative attributes; thus, it can be viewed as a complicated multiple attribute decision making (MADM) issue. Owing to the suddenness and unpredictability of emergency events, nondeterminacy, ambiguity and impreciseness always arise in ESA. The picture fuzzy set is deemed an efficacious technique to capture the ambiguity and indeterminacy of preference information. However, the extant picture fuzzy aggregation operators cannot consider the priority and correlation of attributes when coping with decision issues. Hence, the goal of this paper is to propose an innovative decision algorithm which takes the prioritized relations and correlation of the ascertained attributes into account, based upon generalized picture fuzzy Archimedean copula prioritized operators and a novel score function. Firstly, we develop a novel score function to compare picture fuzzy numbers more reasonably. Then, by synthesizing the picture fuzzy number, the Archimedean copula and the prioritized operator, we design the picture fuzzy Archimedean copula prioritized weighted averaging operator, the picture fuzzy Archimedean copula prioritized weighted geometric operator and their ordered weighted forms to fuse picture fuzzy assessment data, and we study several remarkable properties and particular cases of these operators. Moreover, we design a novel decision methodology on the basis of the proposed generalized operators and score function to resolve MADM problems. Furthermore, we employ it to address the problem of assessing emergency management schemes in a real-life situation, in which the evaluation information is provided by specialists in the form of voting. Ultimately, the outstanding superiority and efficiency of the designed method is justified through the aforementioned numerical example and a detailed comparative analysis.

Journal ArticleDOI
TL;DR: A new IoT-enabled Optimal Deep Learning based Convolutional Neural Network (ODL-CNN) for FSS to assist in suspect identification process is proposed and a comprehensive qualitative and quantitative examination was conducted to assess the effectiveness.
Abstract: The rapid developments in 5G cellular and IoT technologies are expected to be deployed widely in the next few years. At the same time, crime rates are also increasing to a greater extent, while investigation officers are held responsible for dealing with a broad range of cyber and internet issues in investigations. Therefore, advanced IT technologies and IoT devices can be deployed to ease the investigation process, especially the identification of suspects. At present, only a few research works have been conducted on deep learning-based Face Sketch Synthesis (FSS) models, despite the success of deep learning in diverse application domains including conventional face recognition. This paper proposes a new IoT-enabled Optimal Deep Learning based Convolutional Neural Network (ODL-CNN) for FSS to assist in the suspect identification process. The hyperparameter optimization of the DL-CNN model is performed using the Improved Elephant Herd Optimization (IEHO) algorithm. In the beginning, the proposed method captures surveillance videos using IoT-based cameras, which are then fed into the proposed ODL-CNN model. The method initially involves preprocessing, in which contrast enhancement is carried out using the gamma correction method. Then, the ODL-CNN model draws sketches of the input images, which subsequently undergo similarity assessment against a professional sketch drawn according to the directions of eyewitnesses. When the similarity between the two sketches is high, the suspect is identified. A comprehensive qualitative and quantitative examination was conducted to assess the effectiveness of the presented ODL-CNN model. A detailed simulation analysis pointed out the effective performance of the ODL-CNN model, with a maximum average Peak Signal to Noise Ratio (PSNR) of 20.11 dB, average Structural Similarity (SSIM) of 0.64 and average accuracy of 90.10%.

Journal ArticleDOI
Qiang Gao, Yi Yang, Qiaoju Kang, Zekun Tian, Yu Song
TL;DR: A novel multi-feature fusion network is proposed, which consists of spatial and temporal neural network structures to learn discriminative spatio-temporal emotional information to recognize emotion.
Abstract: With the rapid development of human-computer interaction, automatic emotion recognition based on multichannel electroencephalography (EEG) signals has attracted much attention in recent years. However, many existing studies on EEG-based emotion recognition ignore the correlation information between different EEG channels and cannot fully capture the contextual information of EEG signals. In this paper, a novel multi-feature fusion network is proposed, which consists of spatial and temporal neural network structures that learn discriminative spatio-temporal emotional information to recognize emotion. In the experiments, two common types of features are extracted: time-domain features (Hjorth parameters, differential entropy, sample entropy) and frequency-domain features (power spectral density). Then, to learn the spatial and contextual information, a convolutional neural network inspired by GoogLeNet's inception structure is adopted to capture the intrinsic spatial relationships among EEG electrodes and the contextual information. Fully connected layers are used for feature fusion, and instead of the softmax function, an SVM is selected to classify the high-level emotion features. Finally, to evaluate the proposed method, we conduct leave-one-subject-out EEG emotion recognition experiments on the DEAP dataset. The experimental results show that the proposed method achieves excellent performance, with average emotion recognition accuracies of 80.52% and 75.22% in the valence and arousal classification tasks of the DEAP database, respectively.
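Two of the time-domain features listed above, the Hjorth parameters and a Gaussian-assumption differential entropy, can be computed as follows; the segment length and sampling rate in the example are assumptions, not the paper's exact settings:

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility and complexity of a 1-D EEG channel segment."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def differential_entropy(x):
    """Differential entropy of a segment under a Gaussian assumption."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

eeg = np.random.randn(128 * 4)     # a hypothetical 4-second segment at 128 Hz
print(hjorth(eeg), differential_entropy(eeg))
```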

Journal ArticleDOI
TL;DR: This paper extends the CBoW (continuous bag-of-words) word vector model and proposes a cross-domain sentiment aware word embedding learning model, which can capture the sentiment information and domain relevance of a word at the same time.
Abstract: Learning low-dimensional vector representations of words from a large corpus is one of the basic tasks in natural language processing (NLP). Existing universal word embedding models learn word vectors mainly through the grammar and semantic information of the context, while ignoring the sentiment information contained in the words. Some approaches, although they model sentiment information in reviews, do not consider that the sentiment of certain words changes across domains. In such a case, if a general word vector is directly applied to a review sentiment analysis task, it will inevitably affect the performance of the sentiment classification. To solve this problem, this paper extends the CBoW (continuous bag-of-words) word vector model and proposes a cross-domain sentiment-aware word embedding learning model, which can capture the sentiment information and domain relevance of a word at the same time. This paper conducts several experiments on Amazon user review data in different domains to evaluate the performance of the model. The experimental results show that the proposed model can obtain a nearly 2% accuracy improvement compared with the general word vector when modeling only the sentiment information of the context. At the same time, when the domain information and the sentiment information are both included, the accuracy and Macro-F1 value of the sentiment classification tasks are significantly improved compared with existing sentiment word embeddings.

Journal ArticleDOI
TL;DR: The proposed ESAUC protocol improves fairness by achieving residual energy balance among the sensor nodes and enhances the network lifetime by reducing the overall energy consumption and improves the stability of the cluster by optimally adjusting the number of common channels.
Abstract: The problem of energy efficiency in cognitive radio sensor networks (CRSN) is mainly caused by the limited energy of sensor nodes and other channel-related operations for data transmission. The unequal clustering method should be considered for balancing the energy consumption among the cluster heads (CHs) and prolonging the network lifetime. The CH selection should consider the number of accessible free channels for efficient channel assignment. To improve fairness, the channel assignment problem should consider energy consumption among the cluster members. Furthermore, the relay metric for the selection of the best next-hop should consider the stability of the link for improving the transmission time. The CH rotation for cluster maintenance should be energy and spectrum aware. With regard to the above objectives, this paper proposes an energy and spectrum aware unequal clustering (ESAUC) protocol that jointly overcomes the limitations of energy and spectrum for maximizing the lifetime of CRSN. Our proposed ESAUC protocol improves fairness by achieving residual energy balance among the sensor nodes and enhances the network lifetime by reducing the overall energy consumption. A Deep Belief Networks algorithm is exploited to predict the spectrum holes. ESAUC improves the stability of the cluster by optimally adjusting the number of common channels. ESAUC uses a CogAODV based routing mechanism to perform inter-cluster forwarding. Simulation results show that the proposed scheme outperforms the existing CRSN clustering algorithms in terms of residual energy, network lifetime, secondary user-primary user interference ratio, route discovery frequency, throughput, packet delivery ratio, and end-to-end delay.

Journal ArticleDOI
TL;DR: Experimental results show that this method can effectively improve the clustering accuracy of incomplete sampled data; at the same time, it reduces the sensitivity of the anomaly detection model to the selection of traffic features and has a better tolerance for poor-quality traffic sampled data.
Abstract: The 5G network provides higher bandwidth and lower latency for edge IoT devices to access the core business network. But at the same time, it also expands the attack surface of the core network, which makes the enterprise network face greater security threats. To protect the security of the core business, the network infrastructure must be able to recognize not only known abnormal traffic, but also new emerging threats. Intrusion Detection Systems (IDSs) are widely used to protect the core network against external intrusions. Most of the existing research works design anomaly detection models for a specific set of traffic attributes. In fact, it is difficult to find a specific correspondence between traffic attributes and attack behaviors. Worse, some traffic attributes will be missing in the IoT environment, which further increases the difficulty of anomaly analysis. In traditional solutions, the missing attributes are usually filled with zero or mean values; sometimes, the attributes are directly discarded. Both of these methods may result in lower detection accuracy. To solve this problem, we propose an intrusion detection method based on multiple-kernel clustering (MKC) algorithms. Different from zero-value filling and mean-value filling, the proposed method completes the absent traffic attributes through similarity calculation. Experimental results show that this method can effectively improve the clustering accuracy of incomplete sampled data; at the same time, it reduces the sensitivity of the anomaly detection model to the selection of traffic features and has a better tolerance for poor-quality traffic sampled data.
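The similarity-based completion of absent traffic attributes can be illustrated with a k-nearest-neighbour imputer; note this is an analogous stand-in, not the paper's multiple-kernel formulation, and the toy flow features below are invented for the example:

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy flow records with absent attributes (np.nan): duration, bytes, packets.
flows = np.array([[0.2, 1200.0, 10.0],
                  [0.3, np.nan, 12.0],
                  [5.0, 90000.0, np.nan],
                  [4.8, 88000.0, 640.0]])

# Fill each missing attribute from the most similar flows instead of zeros/means.
completed = KNNImputer(n_neighbors=2, weights="distance").fit_transform(flows)
print(completed)
```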

Journal ArticleDOI
TL;DR: A framework of parallel multi-objective Non-dominated Sorting Genetic Algorithms (NSGA-II) for exploring a Pareto set of non-dominated solutions and verifying that the algorithms presented in this new framework outperform the state-of-the-art algorithms.
Abstract: There are few studies in the literature that address multi-objective multi-label feature selection for the classification of video data using evolutionary algorithms. Selecting the most appropriate subset of features is a significant problem while maintaining/improving the accuracy of the prediction results. This study proposes a framework of parallel multi-objective Non-dominated Sorting Genetic Algorithms (NSGA-II) for exploring a Pareto set of non-dominated solutions. The subsets of non-dominated features are extracted and validated by multi-label classification techniques: Binary Relevance (BR), Classifier Chains (CC), Pruned Sets (PS), and Random k-Labelsets (RAkEL). Base classifiers such as Support Vector Machines (SVM), J48 Decision Tree (J48), and Logistic Regression (LR) are used in the classification phase of the algorithms. Comprehensive experiments are carried out with local feature descriptors extracted from two multi-label data sets, the well-known MIR-Flickr dataset and a Wireless Multimedia Sensor (WMS) dataset that we have generated from our video recordings. The prediction accuracy levels are improved by 6.36% and 2.57% for the MIR-Flickr and WMS datasets, respectively, while the number of features is significantly reduced. The results verify that the algorithms presented in this new framework outperform the state-of-the-art algorithms.
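The core of NSGA-II used in this framework is fast non-dominated sorting of candidate feature subsets under competing objectives; a self-contained sketch (with a toy error-vs-feature-count example, chosen only for illustration) follows:

```python
import numpy as np

def non_dominated_sort(F):
    """Fast non-dominated sorting of objective vectors F (n_points, n_objectives),
    all objectives to be minimized. Returns a list of fronts (index lists)."""
    n = len(F)
    dominated_by = [[] for _ in range(n)]
    dominate_count = np.zeros(n, dtype=int)
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if np.all(F[p] <= F[q]) and np.any(F[p] < F[q]):
                dominated_by[p].append(q)       # p dominates q
            elif np.all(F[q] <= F[p]) and np.any(F[q] < F[p]):
                dominate_count[p] += 1          # p is dominated by q
        if dominate_count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dominate_count[q] -= 1
                if dominate_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]

# Two objectives per candidate subset: classification error and feature count.
F = np.array([[0.10, 40], [0.12, 25], [0.09, 60], [0.15, 15], [0.11, 30]])
print(non_dominated_sort(F))
```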

Journal ArticleDOI
TL;DR: In this article, a three-way-based forward greedy algorithm was proposed to find multiple reducts from a family of ordered granularities, where the reduct related to the previous granularity may offer the guidance for computing a reduct relevant to the current granularity.
Abstract: Most existing results about attribute reduction are reported by considering one and only one granularity, especially for the strategies of searching for reducts. Nevertheless, how to derive reducts from multiple granularities has rarely been taken into account. One of the most important advantages of multi-granularity based attribute reduction is that it is useful in investigating the variation of the performance of reducts with respect to different granularities. From this point of view, the concept of Sequential Granularity Attribute Reduction (SGAR) is systematically studied in this paper. Different from previous attribute reductions, the aim of SGAR is to find multiple reducts which are derived from a family of ordered granularities. Assuming that a reduct related to the previous granularity may offer guidance for computing a reduct related to the current granularity, the idea of three-way decisions is introduced into the search for sequential granularity reducts. The three different ways in such a process are: (1) the reduct related to the previous granularity is precisely the reduct related to the current granularity; (2) the reduct related to the previous granularity is not the reduct related to the current granularity; (3) the reduct related to the previous granularity is possibly the reduct related to the current granularity. Therefore, a three-way based forward greedy search is designed to calculate the sequential granularity reducts. The main advantage of our strategy is that the number of times the candidate attributes are evaluated can be reduced. Experimental results over 12 UCI data sets demonstrate the following: (1) the three-way based search is superior to some state-of-the-art acceleration algorithms in the time consumption of deriving reducts; (2) the sequential granularity reducts obtained by the proposed three-way based search provide well-matched classification performance. This study suggests new trends concerning the problem of attribute selection.