
Showing papers in "Cluster Computing in 2019"


Journal ArticleDOI
TL;DR: An overview of deep learning methodologies, including the restricted Boltzmann machine-based deep belief network, deep neural network, and recurrent neural network, as well as the machine learning techniques relevant to network anomaly detection, is presented.
Abstract: A great deal of attention has been given to deep learning over the past several years, and new deep learning techniques are emerging with improved functionality. Many computer and network applications actively utilize such deep learning algorithms and report enhanced performance through them. In this study, we present an overview of deep learning methodologies, including restricted Boltzmann machine-based deep belief networks, deep neural networks, and recurrent neural networks, as well as the machine learning techniques relevant to network anomaly detection. In addition, this article introduces the latest work that employs deep learning techniques, with a focus on network anomaly detection, through an extensive literature survey. We also discuss our local experiments showing the feasibility of the deep learning approach to network traffic analysis.
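
As a minimal illustration of the deep-learning approach surveyed here (not the authors' exact setup), the sketch below trains a small autoencoder on normal traffic and flags flows whose reconstruction error is unusually high; the data, layer sizes, and threshold are placeholder assumptions.

```python
import numpy as np
import tensorflow as tf

# Hypothetical feature matrix: rows are flow records, columns are
# normalized traffic features (packet counts, durations, flags, ...).
X_normal = np.random.rand(1000, 20).astype("float32")  # placeholder data

# A small autoencoder trained to reconstruct normal traffic only.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(20, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_normal, X_normal, epochs=10, batch_size=32, verbose=0)

def anomaly_scores(X):
    """Reconstruction error; anomalous flows reconstruct poorly."""
    recon = autoencoder.predict(X, verbose=0)
    return np.mean((X - recon) ** 2, axis=1)

# Flag flows whose error exceeds a high percentile of the training errors.
threshold = np.percentile(anomaly_scores(X_normal), 99)
```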

538 citations


Journal ArticleDOI
TL;DR: An analysis of what Blockchain is, including a SWOT analysis, the types of Blockchain, how it works, and its advantages and disadvantages.
Abstract: Any online transaction that involves digital money is a bit of a challenge these days, with the rising threat of hackers trying to steal bank details posted online. This has led to the invention of various kinds of crypto-currency, Bitcoin being one of them. The technology behind Bitcoin is popularly known as Blockchain. Blockchain is a digitized, de-centralized, public ledger of all crypto-currency transactions. Blockchain creates and shares all online transactions, stored in a distributed ledger, as a data structure on a network of computers. It validates the transactions using a peer-to-peer network of computers and allows users to make and verify transactions immediately without a central authority. Blockchain is a transaction database which contains information about all the transactions ever executed and works on the Bitcoin protocol. In this analysis paper, we discuss what Blockchain is, a SWOT analysis of Blockchain, the types of Blockchain, and how Blockchain works, along with its advantages and disadvantages.
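
To make the hash-linked ledger idea concrete, here is a toy sketch (illustrative only, with made-up transactions) of why past blocks cannot be altered undetected: every block commits to the hash of its predecessor.

```python
import hashlib, json, time

def hash_block(block):
    """Deterministic SHA-256 hash over the block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(transactions, prev_hash):
    return {"timestamp": time.time(), "transactions": transactions,
            "prev_hash": prev_hash}

# A toy chain: altering any past transaction changes that block's hash
# and therefore invalidates every later block.
chain = [new_block(["genesis"], "0" * 64)]
chain.append(new_block(["alice->bob: 1 BTC"], hash_block(chain[-1])))
chain.append(new_block(["bob->carol: 0.5 BTC"], hash_block(chain[-1])))

def verify(chain):
    return all(chain[i]["prev_hash"] == hash_block(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))  # True until any block is tampered with
```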

264 citations


Journal ArticleDOI
TL;DR: A novel Chaotic Particle Swarm Optimization (CPSO) algorithm is proposed to optimize the control points of a Bézier curve, and it is shown that the proposed algorithm is capable of finding the optimal path.
Abstract: Path planning algorithms have been used in different applications with the aim of finding a suitable collision-free path which satisfies certain criteria, such as the shortest path length and smoothness; thus, defining a suitable curve to describe the path is essential. The main goal of these algorithms is to find the shortest and smoothest path between the starting and target points. This paper makes use of a Bézier curve-based model for path planning. The control points of the Bézier curve significantly influence the length and smoothness of the path. In this paper, a novel Chaotic Particle Swarm Optimization (CPSO) algorithm is proposed to optimize the control points of the Bézier curve, and the proposed algorithm comes in two variants: CPSO-I and CPSO-II. Using the chosen control points, the optimal smooth path that minimizes the total distance between the starting and ending points is selected. To evaluate the CPSO algorithm, the results of the CPSO-I and CPSO-II algorithms are compared with the standard PSO algorithm. The experimental results show that the proposed algorithm is capable of finding the optimal path. Moreover, the CPSO algorithm was tested against different numbers of control points and obstacles, and it achieved competitive results.
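
The quantity each CPSO particle evaluates can be sketched as follows: decode the particle's position into the free control points of a Bézier curve and score the resulting path length (a real planner would also penalize obstacle collisions). This is a minimal illustration with hypothetical start and goal coordinates; the chaotic-map particle updates themselves are omitted.

```python
import numpy as np
from math import comb

def bezier(ctrl, n=100):
    """Sample a Bézier curve defined by its control points (k+1 x 2)."""
    ctrl = np.asarray(ctrl, dtype=float)
    k = len(ctrl) - 1
    t = np.linspace(0.0, 1.0, n)
    # Bernstein basis: B_{i,k}(t) = C(k,i) * t^i * (1-t)^(k-i)
    basis = np.stack([comb(k, i) * t**i * (1 - t)**(k - i)
                      for i in range(k + 1)], axis=1)
    return basis @ ctrl  # n x 2 curve samples

def fitness(ctrl):
    """Particle fitness to minimize: total arc length of the candidate path.
    A full planner would add penalty terms for obstacle collisions."""
    pts = bezier(ctrl)
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
inner = np.random.rand(2, 2) * 10      # free control points the swarm tunes
print(fitness([start, *inner, goal]))
```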

174 citations


Journal ArticleDOI
TL;DR: The characteristics of the convolutional neural network are used to avoid the feature extraction process, reduce the number of parameters that need to be trained, and finally achieve the purpose of unsupervised learning.
Abstract: Hand gesture recognition feature extraction is complicated by, for example, variations in lighting and background. In this paper, a convolutional neural network is applied to the recognition of gestures; the characteristics of the convolutional neural network are used to avoid the feature extraction process, reduce the number of parameters that need to be trained, and finally achieve the purpose of unsupervised learning. The error back-propagation algorithm is incorporated into the convolutional neural network to modify the thresholds and weights of the network and reduce the error of the model. In the classifier, a support vector machine is added to optimize the classification function of the convolutional neural network and improve the validity and robustness of the whole model.
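
A minimal sketch of the described pipeline, under placeholder data and layer sizes: a small CNN learns gesture features via backpropagation, and an SVM then replaces the softmax head as the final classifier.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Placeholder gesture images and labels (real data would be camera frames).
X = np.random.rand(200, 32, 32, 1).astype("float32")
y = np.random.randint(0, 5, 200)

# Small CNN trained with backpropagation; the convolution layers learn
# features directly, so no hand-crafted feature extraction is needed.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu", name="features"),
    tf.keras.layers.Dense(5, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(X, y, epochs=3, verbose=0)

# Swap the softmax head for an SVM trained on the learned features.
feat = tf.keras.Model(cnn.input, cnn.get_layer("features").output)
svm = SVC(kernel="rbf").fit(feat.predict(X, verbose=0), y)
```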

161 citations


Journal ArticleDOI
TL;DR: Training SVMs on large-scale data demands large storage space and results in slow training; the online learning algorithm based on VSVM can solve this problem.
Abstract: In view of the long execution time and low execution efficiency of the Support Vector Machine on large-scale training samples, this paper proposes online incremental and decremental learning algorithms based on the variable support vector machine (VSVM). Building on a deep understanding of the operating mechanism and correlation algorithms of the VSVM, the classifier is updated whenever a new sample changes the training dataset. Firstly, the online incremental learning algorithm takes full advantage of pre-calculated information from earlier increments and does not require retraining on the newly augmented training dataset. Secondly, the incremental matrix-inverse calculation greatly reduces the running time of the algorithm, and experiments are presented to verify the validity of the online learning algorithm. Finally, nine groups of datasets from a standard library are selected for pattern classification experiments. The experimental results show that the proposed online learning algorithm maintains correct classification rates with effective training speed. The naive incremental process otherwise requires large-scale data storage space and results in slow training; the online learning algorithm based on VSVM solves this problem.
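
The paper's VSVM update relies on incremental matrix-inverse computations; as a generic stand-in, the sketch below shows the same online pattern with scikit-learn's SGDClassifier (hinge loss, i.e., a linear SVM objective): each arriving chunk updates the model without retraining on all accumulated data. The streamed data is synthetic.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def stream(n_chunks=9, chunk=100, d=10):
    """Yield chunks of synthetic labeled data, simulating arriving samples."""
    for _ in range(n_chunks):
        X = np.random.randn(chunk, d)
        y = (X[:, 0] + 0.1 * np.random.randn(chunk) > 0).astype(int)
        yield X, y

# Online training: each new chunk refines the classifier in place,
# so the full history never needs to be stored or retrained on.
clf = SGDClassifier(loss="hinge")  # hinge loss = linear SVM
for X, y in stream():
    clf.partial_fit(X, y, classes=[0, 1])
```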

140 citations


Journal ArticleDOI
TL;DR: This paper proposes an objective function on the basis of a support vector machine (SVM) that is used in a genetic algorithm (GA) for selecting the most significant features for heart disease diagnosis.
Abstract: Heart disease diagnosis is found to be a challenging issue; a computerized estimate of the level of heart disease can make supplementary action easier. Thus, heart disease diagnosis has received massive attention worldwide in the healthcare environment. Optimization algorithms have played a significant role in heart disease diagnosis with good efficiency. The objective of this paper is to propose an objective function on the basis of a support vector machine (SVM). This objective function is used in the genetic algorithm (GA) for selecting the most significant features for heart disease diagnosis. The experimental results of the GA–SVM are compared with various existing feature selection algorithms such as Relief, CFS, Filtered subset, Info gain, Consistency subset, Chi squared, One attribute based, Filtered attribute, Gain ratio, and GA. Receiver operating characteristic analysis is performed to evaluate the performance of the SVM classifier. The proposed framework is demonstrated in the MATLAB environment with a dataset collected from the Cleveland heart disease database.
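
The coupling between the GA and the SVM is simply a fitness function: a chromosome encodes a feature subset, and its fitness is the cross-validated SVM accuracy on those features. A minimal sketch with synthetic stand-in data (the actual study uses the Cleveland database):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))      # 13 Cleveland-style attributes (synthetic)
y = rng.integers(0, 2, size=300)    # placeholder diagnosis labels

def fitness(mask):
    """GA objective: cross-validated SVM accuracy on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

# Each chromosome is a boolean feature mask; a full GA would evolve this
# population with selection, crossover, and mutation toward higher fitness.
population = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
best = max(population, key=fitness)
print("selected features:", np.flatnonzero(best))
```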

136 citations


Journal ArticleDOI
TL;DR: A novel image recognition and navigation system which provides precise and quick messages in the form of audio to visually challenged people so that they can navigate easily is proposed.
Abstract: Most advancements are now carried out by interconnecting physical devices with computers; this is what is known as the Internet of Things (IoT). The major problems faced by blind people fall in the category of navigating through indoor and outdoor environments consisting of various obstacles and recognizing the person in front of them. Identification of objects or persons using only perceptive and audio information is difficult. An intelligent, portable, inexpensive, self-contained navigation and face recognition system is in high demand for blind people. Such a system helps blind people navigate with the help of a smartphone, a global positioning system (GPS), and a system equipped with ultrasonic sensors. Face recognition can be done using neural learning techniques with feature extraction and training modules. The images of friends and relatives are stored in the database of the user's smartphone. Whenever a person comes in front of the blind user, the application, with the help of a neural network, gives voice aid to the user. Thus this system can replace the regular, imprecise use of guide dogs as well as white sticks to help the navigation and face recognition process for people with impaired vision. In this paper, we propose a novel image recognition and navigation system which provides precise and quick messages in the form of audio to visually challenged people so that they can navigate easily. The performance of the proposed method is comparatively analyzed with the help of ROC analysis.

134 citations


Journal ArticleDOI
TL;DR: The deep belief network (DBN) method of deep learning is used for feature extraction and classification; this is an emerging research area because of the generation of large volumes of data.
Abstract: Content-based image retrieval (CBIR) uses image content features to search and retrieve digital images from a large database. A variety of visual feature extraction techniques have been employed to implement the search. Due to their computation time requirements, some good algorithms are not being used. The retrieval performance of a content-based image retrieval system crucially depends on the feature representation and similarity measurements. The ultimate aim of the proposed method is to provide an efficient algorithm to deal with the above-mentioned problem. Here the deep belief network (DBN) method of deep learning is used for feature extraction and classification; this is an emerging research area because of the generation of large volumes of data. The proposed method is tested through comparative simulation, and the results show a large positive deviation in its performance.
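
Whatever the feature extractor (a DBN here), the retrieval step reduces to ranking database feature vectors against the query's vector; a minimal cosine-similarity sketch with placeholder features:

```python
import numpy as np

# One deep-feature vector per database image (from the DBN in the paper;
# any feature extractor plugs into the same retrieval step).
features = np.random.rand(1000, 128)     # placeholder database features
query = np.random.rand(128)              # features of the query image

# Cosine similarity: rank every database image against the query.
norms = np.linalg.norm(features, axis=1) * np.linalg.norm(query)
scores = features @ query / norms
top10 = np.argsort(scores)[::-1][:10]    # indices of the best matches
```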

131 citations


Journal ArticleDOI
TL;DR: The proposed hybrid approach, combining a genetic algorithm (GA) for feature optimization with a deep neural network (DNN) for classification, performs better in defect prediction than other techniques.
Abstract: In the field of early prediction of software defects, various techniques have been developed, such as data mining techniques and machine learning techniques. Still, early prediction of defects is a challenging task which needs to be addressed and can be improved by achieving a higher classification rate of defect prediction. With the aim of addressing this issue, we introduce a hybrid approach combining a genetic algorithm (GA) for feature optimization with a deep neural network (DNN) for classification. An improved version of the GA is incorporated which includes a new technique for chromosome design and fitness function computation. The DNN technique is also improved using an adaptive auto-encoder which provides a better representation of the selected software features. The improved efficiency of the proposed hybrid approach due to the deployment of the optimization technique is demonstrated through case studies. An experimental study is carried out for software defect prediction on the PROMISE dataset using the MATLAB tool. In this study, we have used the proposed novel method for classification and defect prediction. A comparative study shows that the proposed approach to the prediction of software defects performs better than other techniques, with 97.82% accuracy obtained for the KC1 dataset, 97.59% for the CM1 dataset, 97.96% for the PC3 dataset, and 98.00% for the PC4 dataset.

124 citations


Journal ArticleDOI
TL;DR: According to the cloud image information, it can be judged that the binocular vision system can effectively segment the gesture from the complex background.
Abstract: A convenient and effective binocular vision system is set up. Gesture information can be accurately extracted from a complex environment with the system. The template calibration method is used to calibrate the binocular camera, and the parameters of the camera are accurately obtained. In the stereo matching phase, the BM algorithm is used to quickly and accurately match the images of the left and right cameras to get the parallax of the measured gesture. Combined with the triangulation principle, this results in a dense depth map. Finally, the depth information is remapped onto the original color image to realize three-dimensional reconstruction and three-dimensional cloud image generation. According to the cloud image information, it can be judged that the binocular vision system can effectively segment the gesture from the complex background.
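
The stereo-matching step maps directly onto OpenCV's block-matching (BM) implementation; a minimal sketch assuming already-calibrated, rectified frames and placeholder file names:

```python
import cv2

# Rectified left/right frames from the calibrated binocular rig.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching correlates small windows along epipolar lines to produce
# a disparity map; by triangulation, depth = focal_length * baseline / disparity.
bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = bm.compute(left, right).astype("float32") / 16.0  # fixed-point output

# With the Q matrix from cv2.stereoRectify, the disparity map becomes a
# 3-D point cloud: points3d = cv2.reprojectImageTo3D(disparity, Q)
```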

122 citations


Journal ArticleDOI
TL;DR: Results show the efficiency of the proposed Tabu PSO in enhancing the number of clusters formed and the percentage of nodes alive, and in reducing the average packet loss rate and average end-to-end delay.
Abstract: The advent of sensors that are light in weight, small-sized, low-power, and enabled by wireless networks has led to the growth of wireless sensor networks (WSNs) in multiple areas of application. The key problems faced in WSNs are decreased network lifetime and time delay in the transmission of data. In many critical applications, such as military operations, ecosystem monitoring, disaster management, etc., data routing is crucial. The multi-hop low-energy adaptive clustering hierarchy protocol has been proposed in the literature but has proved to be inefficient. Cluster head optimization is NP-hard. This paper deals with the selection of the optimal routing path, which improves the network lifespan as well as the network's energy efficiency. Various meta-heuristic techniques, particularly particle swarm optimization (PSO), have been used effectively but suffer from the local optima problem. The proposed method is based on PSO and Tabu search algorithms. Results show the efficiency of the proposed Tabu PSO in enhancing the number of clusters formed and the percentage of nodes alive, and in reducing the average packet loss rate and average end-to-end delay.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed fusion scheme for multi-modal medical images not only produces better results by successfully fusing the different images, but also ensures an improvement in various quantitative parameters as compared to other existing methods.
Abstract: This paper proposes a novel fusion scheme for multi-modal medical images that utilizes both multi-scale transformation and a deep convolutional neural network. Firstly, the source images are decomposed by Gauss-Laplace and Gaussian filters into several sub-images in the first layer of the network. Then, a HeK-based method is used to initialize the convolution kernels of the remaining layers, construct the basic unit, and train the basic unit with the back-propagation algorithm. Multiple basic units are trained and stacked, following the idea of stacked auto-encoders (SAE), to obtain a deep stacked neural network. The proposed network is adopted to decompose the input images into their high-frequency and low-frequency images; our fusion rule is applied to fuse the two high-frequency and two low-frequency images, which are fed back into the last layer of the network to get the final fused image. The performance of our proposed fusion method is evaluated by conducting several experiments on different medical image datasets. Experimental results demonstrate that our proposed method not only produces better results by successfully fusing the different images, but also ensures an improvement in various quantitative parameters as compared to other existing methods. In addition, the speed of our improved CNN method is much faster than that of comparison algorithms with good fusion quality.
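
The proposed network learns its decomposition, but the underlying multi-scale fusion rule can be illustrated with classical filters: split each source into low- and high-frequency bands, average the lows, keep the stronger detail coefficients, and recombine. A baseline sketch with placeholder file paths, not the deep method itself:

```python
import cv2
import numpy as np

# Two registered source images, e.g. CT and MRI slices of the same anatomy.
a = cv2.imread("ct.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
b = cv2.imread("mri.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

def split_bands(img):
    low = cv2.GaussianBlur(img, (5, 5), 0)   # Gaussian filter -> low frequency
    return low, img - low                    # residual -> high frequency

low_a, high_a = split_bands(a)
low_b, high_b = split_bands(b)

# Simple rule: average the low-frequency bands, keep the stronger detail
# coefficient per pixel in the high-frequency bands, then recombine.
fused_low = (low_a + low_b) / 2
fused_high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
fused = np.clip(fused_low + fused_high, 0, 255).astype(np.uint8)
```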

Journal ArticleDOI
TL;DR: The results show that machine learning algorithms can produce highly accurate diabetes-prediction healthcare systems, and a Hadoop-cluster-based distributed computing framework supports the efficient processing and storing of extremely large datasets in a cloud environment.
Abstract: Health care systems are designed to meet the needs of a globally increasing population. People around the globe are affected by different types of deadly diseases. Among the commonly existing diseases, diabetes is a major cause of blindness, kidney failure, heart attacks, etc. Health care monitoring systems for different diseases and symptoms are available all around the world. The rapid development in the fields of Information and Communication Technologies has brought remarkable improvements to health care systems. Various machine learning algorithms have been proposed which automate the working model of health care systems and enhance the accuracy of disease prediction. A Hadoop-cluster-based distributed computing framework supports the efficient processing and storing of extremely large datasets in a cloud environment. This work proposes a novel implementation of machine learning algorithms on Hadoop-based clusters for diabetes prediction. The results show that the machine learning algorithms can produce highly accurate diabetes-prediction healthcare systems. The Pima Indians Diabetes Database from the National Institute of Diabetes and Digestive and Kidney Diseases is used to evaluate the algorithms.
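
The paper distributes the workload over a Hadoop cluster; the predictive core on a single node looks like the following scikit-learn sketch (the CSV path and the choice of classifier are assumptions):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pima Indians Diabetes data: 8 clinical features and a binary outcome.
cols = ["pregnancies", "glucose", "bp", "skin", "insulin",
        "bmi", "pedigree", "age", "outcome"]
df = pd.read_csv("pima-indians-diabetes.csv", names=cols)  # placeholder path

X_train, X_test, y_train, y_test = train_test_split(
    df[cols[:-1]], df["outcome"], test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```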

Journal ArticleDOI
TL;DR: This paper focuses on solving the VM placement problem with respect to available bandwidth, formulated as a variable-sized bin packing problem; a new bandwidth allocation policy is developed and hybridized with an improved variant of the whale optimization algorithm (WOA) called the improved Lévy-based whale optimization algorithm.
Abstract: Virtual machine (VM) consolidation is the strategy of efficient and intelligent use of cloud datacenter resources. One of the important subproblems of VM consolidation is the VM placement problem. The main objective of the VM placement problem is to minimize the number of running physical machines, or hosts, in cloud datacenters. This paper focuses on solving the VM placement problem with respect to the available bandwidth, which is formulated as a variable-sized bin packing problem. Moreover, a new bandwidth allocation policy is developed and hybridized with an improved variant of the whale optimization algorithm (WOA) called the improved Lévy-based whale optimization algorithm. The CloudSim toolkit is used to test the validity of the proposed algorithm on 25 different randomly generated datasets, and it is compared with many optimization algorithms, including WOA, first fit, best fit, particle swarm optimization, genetic algorithm, and intelligent tuned harmony search. The obtained results are analyzed by the Friedman test, which indicates the superiority of the proposed algorithm in minimizing the number of running physical machines.
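
Seen as variable-sized bin packing, hosts are bins whose capacity is their available bandwidth and VMs are the items. Below is the first-fit baseline the paper compares against, with made-up bandwidth figures; the proposed Lévy-based WOA instead searches the placement space to minimize the same used-host objective.

```python
# Hosts are bins (capacity = available bandwidth); VMs are items.
hosts_bw = [1000, 1000, 500]        # available bandwidth per host (Mbps)
vm_bw = [300, 700, 200, 400, 150]   # bandwidth demand per VM (Mbps)

placement, free = {}, list(hosts_bw)
for vm, demand in enumerate(vm_bw):
    for host, avail in enumerate(free):
        if demand <= avail:          # first host with enough bandwidth wins
            placement[vm] = host
            free[host] -= demand
            break
    else:
        raise RuntimeError(f"no host can fit VM {vm}")

used_hosts = len(set(placement.values()))  # the objective to minimize
print(placement, used_hosts)
```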

Journal ArticleDOI
TL;DR: A fast recognition method for fire images is proposed by introducing color space information into the Scale Invariant Feature Transform (SIFT) algorithm; it outperforms Kim's method, Dimitropoulos's method, and Sumei's method in terms of recognition accuracy and running speed.
Abstract: Among existing problems in the fire detection field, traditional fire recognition methods, usually based on sensor signals, are easily affected by external environmental elements. Meanwhile, most current methods based on feature extraction from fire images are less discriminative across different scenes and fire types, and have lower recognition precision if the fire scene and type change. To overcome these drawbacks, a new fast recognition method for fire images is proposed by introducing color space information into the Scale Invariant Feature Transform (SIFT) algorithm. Firstly, the feature descriptors of fire are extracted by the SIFT algorithm from fire images obtained from internet databases. Secondly, the local noisy feature points are filtered using the feature information of the fire color space. Thirdly, the feature descriptors are transformed into feature vectors, and then an Incremental Vector Support Vector Machine classifier is used to establish the fast fire recognition model. The experiments are conducted on real-life fire images from the internet. The experimental results show that, for different fire scenes and types, the proposed algorithm outperforms Kim's method, Dimitropoulos's method, and Sumei's method in terms of recognition accuracy and running speed, and thus has better application prospects than these methods.
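
The first two steps, SIFT extraction followed by color-space filtering of noisy keypoints, can be sketched with OpenCV; the HSV bounds for fire-colored pixels below are illustrative assumptions, not the paper's thresholds.

```python
import cv2

img = cv2.imread("fire.jpg")                  # placeholder fire image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Standard SIFT keypoints and descriptors (scale- and rotation-invariant).
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)

# Color-space filtering in the spirit of the paper: keep only keypoints
# lying in fire-like regions (illustrative HSV bounds).
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 100, 150), (35, 255, 255))
fire_kps = [kp for kp in keypoints if mask[int(kp.pt[1]), int(kp.pt[0])] > 0]
```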

Journal ArticleDOI
TL;DR: A chatbot-based healthcare service with a knowledge base for cloud computing is proposed; the framework enables smooth human–robot interaction that supports the efficient implementation of the chatbot healthcare service.
Abstract: With the recent increase in the interest of individuals in health, lifecare, and disease, hospital medical services have been shifting from a treatment focus to prevention and health management. The medical industry is creating additional services for health- and life-promotion programs. This change represents a medical-service paradigm shift due to prolonged life expectancy, aging, lifestyle changes, and income increases, and consequently, the concept of the smart health service has emerged as a major issue. Due to smart health, the existing health-promotion medical services that have typically been operated by large hospitals have been developing into remote medical-treatment services where personal health records are used in small hospitals; moreover, a further expansion has been occurring in the direction of u-Healthcare, in which health conditions are continuously monitored in the everyday lives of the users. However, as the amount of data increases and medical-data complexity intensifies, the limitations of previous approaches are increasingly problematic; furthermore, since even the same disease can show different symptoms depending on personal health conditions, lifestyle, and genome information, universal healthcare is not effective for some patients and can even generate severe side effects. Thus, research on AI-based healthcare in the form of mining-based smart health, a convergence technology of the fourth industrial revolution (4IR), is actively being carried out. In particular, the introduction of various smart medical equipment combining healthcare big data and machine learning, and the expanding distribution of smartphone and wearable devices, have led to innovations such as personalized diagnostic and treatment services and chronic-disease management and prevention services. In addition, various already launched applications allow users to check their own health conditions and receive the corresponding feedback in real time. Based on these innovations, the preparation of a way to determine a user's current health conditions, and to respond properly through contextual feedback in the case of unsound health conditions, is underway. However, since previously made healthcare-related applications need to be linked to a wearable device and provide medical feedback to users based solely on specific biometric data, inaccurate information can be provided. In addition, the user interfaces of some healthcare applications are very complicated, making it inconvenient for users to obtain the desired information. Therefore, we propose a chatbot-based healthcare service with a knowledge base for cloud computing. The proposed method is a mobile health service in the form of a chatbot for the provision of fast treatment in response to accidents that may occur in everyday life, and also in response to changes in the conditions of patients with chronic diseases. A chatbot is an intelligent conversation platform that interacts with users via a chatting interface, and since its use can be facilitated by linkages with the major social network service messengers, general users can easily access and receive various health services. The proposed framework enables a smooth human–robot interaction that supports the efficient implementation of the chatbot healthcare service. The design of the framework comprises the following four levels: data level, information level, knowledge level, and service level.

Journal ArticleDOI
TL;DR: In this article, the authors analyze the characteristics of IoT security and its security problems, and discuss the system framework of IoT security and some key security technologies, including key management, authentication and access control, routing security, privacy protection, intrusion detection, fault tolerance, etc.
Abstract: The open deployment environment and limited resources of the Internet of Things (IoT) make it vulnerable to malicious attacks, while traditional intrusion detection systems have difficulty meeting the heterogeneous and distributed features of the IoT. The security and privacy protection of the IoT is directly related to its practical application. In this paper, we analyze the characteristics of IoT security and its security problems, and discuss the system framework of IoT security and some key security technologies, including key management, authentication and access control, routing security, privacy protection, intrusion detection, fault tolerance, etc. This paper introduces the current problems of the IoT in network security and points out the necessity of intrusion detection. Several kinds of intrusion detection technologies are discussed, and their application to the IoT architecture is analyzed. We compare the application of different intrusion detection technologies and offer a prospect for the next phase of research. Using data mining and machine learning methods to study network intrusion has become a hot issue. A single class of features or a single detection model can hardly improve the detection rate of network intrusion detection. The performance of the proposed model is validated on public databases.

Journal ArticleDOI
TL;DR: The experimental results show that the algorithm can effectively reduce computation time while maintaining the recognition rate, and its performance is slightly better than the KNN-SRC algorithm.
Abstract: The sparse representation classification method has received wide attention and study in pattern recognition because of its good recognition effect and classification performance. When the minimized l1 norm is used to solve for the sparse coefficients, all the training samples are selected as the redundant dictionary, which leads to high computational complexity. Aiming at the high computational complexity of l1-norm-based solving algorithms, an l2-norm local sparse representation classification algorithm is proposed. This algorithm uses the minimum l2-norm method to select a local dictionary. Then the minimum l1 norm is used over this dictionary to solve for the sparse coefficients used in classification, and the algorithm is verified on a constructed gesture database for gesture recognition. The experimental results show that the algorithm can effectively reduce computation time while maintaining the recognition rate, and its performance is slightly better than the KNN-SRC algorithm.
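
A sketch of the two-stage idea on synthetic data: first cheaply shortlist a local dictionary (here by correlation with the test sample, a simple stand-in for the paper's l2-based selection), then run the expensive l1 solve only over that shortlist and classify by per-class reconstruction residual.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 500))       # columns = training samples (dictionary)
labels = rng.integers(0, 10, 500)    # class of each dictionary atom
y = rng.normal(size=64)              # test sample to classify

# Stage 1: shortlist k atoms, avoiding the full l1 solve over all 500.
k = 50
local = np.argsort(-np.abs(D.T @ y))[:k]

# Stage 2: l1 sparse coding over the local dictionary only.
coef = Lasso(alpha=0.01, max_iter=5000).fit(D[:, local], y).coef_

# Stage 3: assign the class whose atoms best reconstruct the sample.
residuals = {c: np.linalg.norm(y - D[:, local[labels[local] == c]]
                               @ coef[labels[local] == c])
             for c in np.unique(labels[local])}
pred = min(residuals, key=residuals.get)
```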

Journal ArticleDOI
TL;DR: In the proposed work, a feature selection process for text categorization is implemented using ant colony optimization (ACO) and an artificial neural network (ANN), and its efficiency is demonstrated.
Abstract: Feature selection is the approach of choosing a subset of a given dataset based on feature relevance. It can be used to reduce the dimensionality of huge datasets, removing unnecessary data from the data source so that predictions in big data analytics are produced accurately. In the proposed work, a feature selection process for text categorization is implemented using the ant colony optimization (ACO) and artificial neural network (ANN) algorithms. This hybrid approach is simulated using the Reuters dataset, and its efficiency is demonstrated.

Journal ArticleDOI
TL;DR: The proposed research takes the metrology of monitoring the flow rate in an industrial piping system feeding a boiler as a case study; a real-time implementation is achieved with LabVIEW and the MyDAQ environment, and the results indicate fast computation time with low complexity overhead, achieved with the help of cloud distribution.
Abstract: Scientific metrology is an evergreen and omnipresent field that has continuously experienced a great deal of research in developing new measurement benchmarks to cope with real-time advancements in the current market. This has been greatly aided in recent times by the advent of intelligent computing networks and communication protocols, through which there has been a great deal of migration towards cloud-based services on a demand basis. The essential feature of the cloud is the provision of quality service to clients. The proposed research takes the metrology of monitoring the flow rate in an industrial piping system feeding a boiler as a case study, with a real-time implementation achieved with the help of LabVIEW and the MyDAQ environment. The data from this DAQ is interpreted into the cloud network with subset reduction and regrouping based on features using a fuzzy c-means clustering approach. The experimentation has been done for varying values of tuning constants in order to maintain constant flow and is compared with existing research contributions. The results clearly indicate fast computation time with low complexity overhead, which is achieved with the help of cloud distribution.
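
Fuzzy c-means, the clustering step used to regroup the DAQ samples, alternates between membership and center updates; a compact self-contained sketch with placeholder sensor readings:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100):
    """Plain fuzzy c-means: returns soft memberships u and centers v."""
    n = len(X)
    u = np.random.dirichlet(np.ones(c), size=n)     # initial memberships
    for _ in range(iters):
        um = u ** m
        v = (um.T @ X) / um.sum(axis=0)[:, None]    # fuzzily weighted centers
        d = np.linalg.norm(X[:, None, :] - v[None, :, :], axis=2) + 1e-10
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return u, v

# Placeholder samples, e.g. [flow rate, pressure] readings from the DAQ.
X = np.random.rand(200, 2)
u, centers = fuzzy_c_means(X)
```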

Journal ArticleDOI
TL;DR: A new strategy of online replica deduplication (SORD) reduces the impact on other nodes when deleting a redundant replica; it obtains superior performance in access latency, around 5–15% on average, and better load balance than other similar methods.
Abstract: In an online Cloud-P2P system, more replicas lead to lower access delay but more maintenance overhead, and vice versa. Traditional strategies of online replica deduplication usually use dynamic thresholds to delete redundant replicas. Since replica access volume varies over time, and every replica can bear a certain amount of requests, deleting a replica may impact other nodes, overloading them and deteriorating system performance. This impact is not given enough attention in traditional strategies. To deal with the problem, this paper proposes a new strategy of online replica deduplication (SORD), which reduces the impact on other nodes when deleting a redundant replica. To this end, SORD adopts a prediction-evaluation method for deleting redundant replicas. Before deleting a replica, it applies fuzzy clustering analysis to pick the optimal deletion candidate from the file's replica set. Based on the historical visit information of the optimal deletion candidate and the capacity of the nodes, SORD evaluates the impact on other nodes to decide whether the replica can be deleted. Extensive experiments demonstrate that SORD obtains superior performance in access latency, around 5–15% on average, and better load balance than other similar methods. Meanwhile, it can remove about 65% of redundant replicas.

Journal ArticleDOI
TL;DR: A survey of related work in the very specialized field of IS assurance for the IoT develops a taxonomy of typical attacks against IoT assets (with special attention to IoT device protection), and proposes applying the Security Intelligence approach.
Abstract: Keeping up with the burgeoning Internet of Things (IoT) requires staying up to date on the latest network attack trends in dynamic and complicated cyberspace, and taking them into account while developing holistic information security (IS) approaches for the IoT. Due to multiple vulnerabilities in the IoT foundations, many targeted attacks are continuing to evolve. This survey of related work in the very specialized field of IS assurance for the IoT develops a taxonomy of typical attacks against IoT assets (with special attention to IoT device protection). Based on this taxonomy, the key directions for countering these attacks are defined. In line with the modern demand for the IoT and big IS-related data processing, we propose applying the Security Intelligence approach. The results obtained, when compared with the related work and numerous analogues, are based on the following research methodology: view the IoT as a security object to be protected, leading to an understanding of its vulnerabilities and the possible attacks exploiting them, and from there to approaches for protecting the IoT. A few areas of future research, among which IoT operational resilience and usage of blockchain technology seem to us the most interesting, are indicated.

Journal ArticleDOI
TL;DR: In this paper, intrusion detection technology is applied to a blockchain information security model, and the results show that the proposed model has higher detection efficiency and fault tolerance.
Abstract: Blockchain is a decentralized core architecture which is widely used in emerging digital cryptocurrencies. It has attracted much attention and research with the gradual acceptance of Bitcoin. Blockchain technology has the characteristics of decentralization, block-structured data, tamper resistance, and trust, so it is sought after by enterprises, especially financial institutions. This paper expounds the core technical principles of blockchain technology and discusses its applications and the existing regulatory and security problems, so as to provide some help for related research on blockchain technology. Intrusion detection is an important way to protect the security of information systems and has become a focus of security research in recent years. This paper introduces the history and current situation of intrusion detection systems, expounds the classification of intrusion detection systems and the framework of general intrusion detection, and discusses various intrusion detection technologies in detail. Intrusion detection is a security technology that protects network resources from hacker attacks. An IDS is a useful supplement to the firewall, which can help the network system quickly detect attacks and improve the integrity of the information security infrastructure. In this paper, intrusion detection technology is applied to a blockchain information security model, and the results show that the proposed model has higher detection efficiency and fault tolerance.

Journal ArticleDOI
TL;DR: This paper proposes a novel cloud-assisted privacy-preserving profile-matching scheme under multiple keys based on a proxy re-encryption scheme with additive homomorphism that is secure under the honest-but-curious (HBC) model given two non-colluding cloud servers.
Abstract: Making new friends by measuring the proximity of people's profiles is a crucial service in mobile social networks. With the rapid development of cloud computing, outsourcing computing and storage to the cloud is now an effective way to relieve the heavy burden on users of managing and processing data. To prevent privacy leakage, data owners tend to encrypt their private data before outsourcing. However, current solutions either have heavy interactions or require users to encrypt private data with a single key. In this paper, we propose a novel cloud-assisted privacy-preserving profile-matching scheme under multiple keys based on a proxy re-encryption scheme with additive homomorphism. Our scheme is secure under the honest-but-curious (HBC) model given two non-colluding cloud servers.
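
The scheme itself builds on proxy re-encryption under multiple keys; the additive-homomorphism ingredient alone can be illustrated with the Paillier cryptosystem via the python-paillier (phe) package. This toy sketch with made-up attribute scores only shows that a server can aggregate encrypted values without decrypting them; it is not the paper's protocol.

```python
from phe import paillier  # python-paillier: additively homomorphic encryption

pub, priv = paillier.generate_paillier_keypair()

# Two users' profile attributes (made-up interest scores), encrypted
# before being outsourced to the cloud.
alice = [5, 3, 9]
bob = [4, 3, 8]

# The cloud adds ciphertexts without seeing any plaintext: here it
# accumulates the encrypted sum of per-attribute differences.
enc_diffs = [pub.encrypt(a) + pub.encrypt(-b) for a, b in zip(alice, bob)]
enc_total = sum(enc_diffs[1:], enc_diffs[0])

print(priv.decrypt(enc_total))  # only the key holder learns the result
```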

Journal ArticleDOI
TL;DR: This work puts forward an energy-efficient task scheduling algorithm (ETSA) to address the demerits associated with task consolidation and scheduling, and it provides a better trade-off between energy efficiency and makespan than the existing algorithms.
Abstract: The massive growth of cloud computing leads to huge amounts of energy consumption and the release of carbon footprints, as data centers house a large number of servers. Consequently, cloud service providers are looking for eco-friendly solutions to reduce energy consumption and carbon emissions. As a result, task scheduling has drawn attention, in which efficient resource utilization and minimum energy consumption are taken into great consideration. This is an exigent issue, especially for heterogeneous environments. In this work, we put forward an energy-efficient task scheduling algorithm (ETSA) to address the demerits associated with task consolidation and scheduling. The proposed algorithm ETSA takes into account the completion time and total utilization of a task on the resources, and follows a normalization procedure to make a scheduling decision. We evaluate the proposed algorithm ETSA for energy efficiency and makespan in a heterogeneous environment. The experimental results are compared with recent algorithms, namely random, round robin, dynamic cloud list scheduling, energy-aware task consolidation, energy-conscious task consolidation, and MaxUtil. The proposed algorithm ETSA provides a better trade-off between energy efficiency and makespan than the existing algorithms.
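
The normalization idea can be sketched as follows: put completion time and utilization on a common scale, then pick the resource with the best combined score. The numbers and the equal weighting are assumptions; ETSA defines its own normalization procedure.

```python
import numpy as np

# Completion time and total utilization of one task on each candidate
# resource (made-up values; ETSA derives these from task/VM parameters).
completion = np.array([12.0, 9.0, 15.0])    # seconds on resources r0..r2
utilization = np.array([0.60, 0.85, 0.40])  # utilization if assigned there

def minmax(x):
    """Scale a criterion to [0, 1] so different units become comparable."""
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

# Prefer low completion time and high utilization (energy efficiency).
score = minmax(completion) - minmax(utilization)
chosen = int(np.argmin(score))              # resource with the best trade-off
```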

Journal ArticleDOI
TL;DR: The applications are assessed online for vulnerabilities at regular intervals; if any changes are made in the code, a webhook triggers the vulnerability checking tool, based on a hashing algorithm, to check for vulnerabilities in the updated application.
Abstract: Cloud computing is a very rapidly growing technology with more facilities but also more issues in terms of vulnerabilities before and after deploying applications into the cloud. Vulnerabilities are assessed before the applications are deployed into the cloud. However, after deploying the applications, periodic checking of systems for vulnerabilities is not carried out. This paper assesses the applications online for vulnerabilities at regular intervals, and if any changes are made in the code, a webhook triggers the vulnerability checking tool, based on a hashing algorithm, to check for vulnerabilities in the updated application. The main aim of this system is to constantly scan the applications that are deployed in the cloud and check for vulnerabilities as part of the continuous integration and continuous deployment process. This process of checking for vulnerabilities after every update to the application should be included in the software development lifecycle.
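
The trigger logic amounts to hashing the deployed code and rescanning on change; a framework-agnostic sketch in which the scanner command, file pattern, and paths are placeholders:

```python
import hashlib
import pathlib
import subprocess

def tree_hash(root):
    """Hash all source files so any code change alters the digest."""
    h = hashlib.sha256()
    for p in sorted(pathlib.Path(root).rglob("*.py")):
        h.update(p.read_bytes())
    return h.hexdigest()

STATE = pathlib.Path(".last_scan_hash")

def on_push_webhook(repo_dir="app"):
    """Webhook handler body: rescan only when the application changed."""
    current = tree_hash(repo_dir)
    if not STATE.exists() or STATE.read_text() != current:
        # Placeholder scanner invocation; any vulnerability tool slots in here.
        subprocess.run(["echo", "running vulnerability scan..."], check=True)
        STATE.write_text(current)
```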

Journal ArticleDOI
TL;DR: The proposed task scheduling algorithm called W-Scheduler based on the multi-objective model and the whale optimization algorithm can optimally schedule the tasks to the virtual machines while maintaining the minimum makespan and cost.
Abstract: One of the important steps in cloud computing is task scheduling. The task scheduling process needs to schedule the tasks to the virtual machines while reducing the makespan and the cost. A number of scheduling algorithms have been proposed by various researchers for scheduling tasks in cloud computing environments. This paper proposes a task scheduling algorithm called W-Scheduler based on a multi-objective model and the whale optimization algorithm (WOA). Initially, the multi-objective model calculates the fitness value by calculating the cost function of the central processing unit (CPU) and the memory. The fitness value is calculated by adding the makespan and the budget cost function. The proposed task scheduling algorithm with the whale optimization algorithm can optimally schedule tasks to the virtual machines while maintaining minimum makespan and cost. Finally, we analyze the performance of the proposed W-Scheduler against existing methods, such as PBACO, SLPSO-SA, and SPSO-SA, on the evaluation metrics of makespan and cost. From the experimental results, we conclude that the proposed W-Scheduler can optimally schedule tasks to the virtual machines with a minimum makespan of 7 and a minimum average cost of 5.8.
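
The fitness each whale evaluates can be sketched as makespan plus budget cost for a candidate task-to-VM assignment; the runtimes and prices below are made up, and the WOA search loop itself is omitted.

```python
import numpy as np

exec_time = np.array([[4, 2, 6],     # exec_time[t][v]: runtime of task t
                      [3, 5, 1],     # on VM v (illustrative numbers)
                      [2, 2, 2]])
vm_cost = np.array([0.5, 1.0, 0.2])  # per-time-unit price of each VM

def fitness(assign):
    """Objective to minimize: makespan plus total budget cost, where
    assign[t] is the VM chosen for task t (one whale's position)."""
    loads = np.zeros(len(vm_cost))
    cost = 0.0
    for t, v in enumerate(assign):
        loads[v] += exec_time[t, v]
        cost += exec_time[t, v] * vm_cost[v]
    return loads.max() + cost

print(fitness([0, 2, 1]))
```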

Journal ArticleDOI
TL;DR: An expert recommendation algorithm based on the Pearson correlation coefficient and FP-growth can effectively recommend the expert group with the highest efficiency of collaborative review, solving the problem of accurately recommending an efficient expert combination for a drawing inspection system.
Abstract: In order to recommend an efficient drawing inspection expert combination, an expert combination is selected by an expert recommendation algorithm based on the Pearson correlation coefficient and FP-growth. By introducing the Pearson correlation coefficient and the FP-growth association rule algorithm, the expert recommendation algorithm can accurately select the participating experts from historical projects similar in scale to the project to be reviewed, and combine the experts to obtain the expert group with the highest fit, namely, the expert combination for the project to be reviewed. This expert recommendation algorithm based on the Pearson correlation coefficient and FP-growth can effectively recommend the expert group with the highest efficiency of collaborative review, which solves the problem of accurately recommending an efficient expert combination for a drawing inspection system.
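
The similarity step reduces to the Pearson correlation coefficient between the attribute vector of the project under review and those of historical projects (the attributes shown are hypothetical); FP-growth then mines which expert combinations co-occurred on the most similar projects.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two attribute vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / (np.linalg.norm(xm) * np.linalg.norm(ym))

# Hypothetical scale attributes: [budget, floor area, duration, drawing count].
new_project = [1.2, 3500, 18, 240]
history = {"P1": [1.0, 3300, 16, 230],
           "P2": [4.0, 9000, 30, 800]}
most_similar = max(history, key=lambda k: pearson(new_project, history[k]))
print(most_similar)  # the historical project whose expert lists feed FP-growth
```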

Journal ArticleDOI
TL;DR: A modified adaptive orthogonal matching pursuit algorithm estimates the initial value of sparsity with a matching test, decreasing the number of subsequent iterations and improving recognition accuracy and efficiency compared with other greedy algorithms.
Abstract: Aiming at the disadvantages of greedy algorithms in sparse solution, a modified adaptive orthogonal matching pursuit (MAOMP) algorithm is proposed in this paper. The MAOMP introduces a sparsity estimate and a variable step size. The algorithm estimates the initial value of sparsity with a matching test, which decreases the number of subsequent iterations. Finally, the step size is adjusted to select atoms and approximate the true sparsity at different stages. The simulation results show that the proposed algorithm improves recognition accuracy and efficiency compared with other greedy algorithms.
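
For reference, plain orthogonal matching pursuit, the greedy baseline that MAOMP modifies, fits in a short loop; MAOMP's additions (estimating initial sparsity via a matching test and adapting the step size) sit on top of it. Synthetic data below.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-6):
    """Plain OMP: greedily pick atoms, re-fit on the whole support."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares re-fit on the support (the 'orthogonal' step).
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

A = np.random.randn(50, 200)
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, sparsity=3)   # recovers the three nonzeros
```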

Journal ArticleDOI
TL;DR: This enhanced J48 algorithm helps in effectively detecting probable attacks which could jeopardise network confidentiality, and it showed better, more accurate, and more efficient performance when compared with the feature selection procedure.
Abstract: In this paper, we have developed an enhanced J48 algorithm, which uses the J48 algorithm to improve the detection accuracy and performance of a novel IDS technique. This enhanced J48 algorithm helps in effectively detecting probable attacks which could jeopardise network confidentiality. For this purpose, the researchers used many datasets and integrated different approaches, such as J48, Naive Bayes, Random Tree, and NB-Tree. The NSL-KDD intrusion dataset was used in all experiments. This dataset was divided into two datasets, training and testing, based on the data processing. Thereafter, a feature selection method based on the WEKA application was used to evaluate the efficacy of all the features. The results obtained suggest that this algorithm showed better, more accurate, and more efficient performance without the feature selection procedure than with it. An implementation of this algorithm achieved a detection accuracy of 99.88% with all features using the 10-fold cross-validation test, 90.01% accuracy on the supplied test set using the complete test dataset with all features, and 76.23% accuracy on the supplied test set using the test-21 dataset with all features.
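
J48 is Weka's C4.5 implementation; outside Weka, an entropy-criterion decision tree is the closest common analogue. A sketch of the base experiment on NSL-KDD, where the file name and column handling are assumptions:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# NSL-KDD records: 41 features plus a label column (placeholder CSV path).
df = pd.read_csv("nsl_kdd_train.csv")
X = pd.get_dummies(df.drop(columns=["label"]))  # one-hot symbolic features
y = (df["label"] != "normal").astype(int)       # attack vs. normal traffic

# An entropy-based decision tree standing in for Weka's J48 (C4.5).
tree = DecisionTreeClassifier(criterion="entropy")
scores = cross_val_score(tree, X, y, cv=10)     # 10-fold CV as in the paper
print("mean accuracy:", scores.mean())
```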