
Showing papers in "Journal of Ambient Intelligence and Humanized Computing in 2020"


Journal ArticleDOI
TL;DR: A Fermatean fuzzy TOPSIS method is established to solve multiple-criteria decision-making problems, and an illustrative example is worked out in detail to justify the elaborated method and to demonstrate its viability and usefulness.
Abstract: In this paper, we propose Fermatean fuzzy sets. We compare Fermatean fuzzy sets with Pythagorean fuzzy sets and intuitionistic fuzzy sets. We focus on the complement operator of Fermatean fuzzy sets. We work out the fundamental set of operations for Fermatean fuzzy sets. We define a score function and an accuracy function for ranking Fermatean fuzzy sets. In addition, we also study the Euclidean distance between two Fermatean fuzzy sets. Later, we establish a Fermatean fuzzy TOPSIS method to solve multiple-criteria decision-making problems. Finally, an illustrative example is worked out in detail to justify the elaborated method and to demonstrate its viability and usefulness.
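The basic Fermatean fuzzy machinery described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact formulation: the validity condition, score, and accuracy follow the usual cubic definitions, while the particular normalization of the Euclidean distance is an assumption.

```python
import math

def is_fermatean(a, b, tol=1e-9):
    """A Fermatean fuzzy value (a, b) must satisfy a^3 + b^3 <= 1."""
    return 0 <= a <= 1 and 0 <= b <= 1 and a**3 + b**3 <= 1 + tol

def score(a, b):
    """Score function for ranking: higher is better (range [-1, 1])."""
    return a**3 - b**3

def accuracy(a, b):
    """Accuracy function: closeness to a crisp value (range [0, 1])."""
    return a**3 + b**3

def euclidean_distance(f1, f2):
    """One common form of Euclidean distance between two Fermatean fuzzy
    values, computed on the cubed grades (the 1/2 factor is an assumption;
    the paper may normalize differently)."""
    (a1, b1), (a2, b2) = f1, f2
    return math.sqrt(0.5 * ((a1**3 - a2**3) ** 2 + (b1**3 - b2**3) ** 2))
```

Note that (0.9, 0.6) is a valid Fermatean fuzzy value even though it fails the Pythagorean constraint 0.9² + 0.6² ≤ 1, which is exactly the extra modelling room the abstract claims.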

346 citations


Journal ArticleDOI
TL;DR: The present study uses a principal component analysis based deep neural network model with the Grey Wolf Optimization (GWO) algorithm to classify the extracted features of a diabetic retinopathy dataset, and shows that the proposed model outperforms traditional machine learning algorithms.
Abstract: Diabetic retinopathy is a prominent cause of blindness among elderly people and has become a global medical problem over the last few decades. There are several scientific and medical approaches to screen and detect this disease, but most of the detection is done using retinal fundus imaging. The present study uses a principal component analysis (PCA) based deep neural network (DNN) model with the Grey Wolf Optimization (GWO) algorithm to classify the extracted features of a diabetic retinopathy dataset. The use of GWO enables the selection of optimal parameters for training the DNN model. The steps involved in this paper include standardization of the diabetic retinopathy dataset using a standard-scaler normalization method, followed by dimensionality reduction using PCA, then selection of optimal hyperparameters by GWO, and finally training of the dataset using a DNN model. The proposed model is evaluated based on the performance measures accuracy, recall, sensitivity, and specificity. The model is further compared with traditional machine learning algorithms: support vector machine (SVM), Naive Bayes classifier, decision tree, and XGBoost. The results show that the proposed model offers better performance than the aforementioned algorithms.
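The first step of the pipeline above, standard-scaler normalization, can be sketched in plain Python. This is a minimal stand-in for scikit-learn's StandardScaler (z-score per feature column); the PCA, GWO, and DNN stages are omitted.

```python
import math

def standard_scale(columns):
    """Z-score normalization applied per feature column: each column is
    shifted to mean 0 and rescaled to unit variance."""
    scaled = []
    for col in columns:
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)  # population variance
        std = math.sqrt(var) or 1.0  # guard against constant columns
        scaled.append([(x - mean) / std for x in col])
    return scaled
```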

151 citations


Journal ArticleDOI
TL;DR: The main contribution of the proposed method is to detect IoT botnet attacks launched from compromised IoT devices by exploiting the efficiency of a recent swarm intelligence algorithm, Grey Wolf Optimization (GWO), to optimize the hyperparameters of the OCSVM and, at the same time, to find the features that best describe the IoT botnet problem.
Abstract: Recently, the number of Internet of Things (IoT) botnet attacks has increased tremendously due to the expansion of online IoT devices which can be easily compromised. Botnets are a common threat that takes advantage of the lack of basic security tools in IoT devices and can perform a series of Distributed Denial of Service (DDoS) attacks. Developing new methods to detect compromised IoT devices is urgent in order to mitigate the negative consequences of these IoT botnets, since the existing IoT botnet detection methods still present some issues, such as relying on labelled data, not being validated with newer botnets, and using very complex machine learning algorithms. Anomaly detection methods are promising for detecting IoT botnet attacks since the amount of available normal data is very large. One of the powerful algorithms that can be used for anomaly detection is the One-Class Support Vector Machine (OCSVM). The efficiency of the OCSVM algorithm depends on several factors that greatly affect the classification results, such as the subset of features used for training the OCSVM model, the kernel type, and its hyperparameters. In this paper, a new unsupervised evolutionary IoT botnet detection method is proposed. The main contribution of the proposed method is to detect IoT botnet attacks launched from compromised IoT devices by exploiting the efficiency of a recent swarm intelligence algorithm, Grey Wolf Optimization (GWO), to optimize the hyperparameters of the OCSVM and, at the same time, to find the features that best describe the IoT botnet problem. To prove the efficiency of the proposed method, its performance is evaluated using typical anomaly detection evaluation measures over a new version of a real benchmark dataset. The experimental results show that the proposed method outperforms all other algorithms in terms of true positive rate, false positive rate, and G-mean for all IoT device types. It also achieves the lowest detection time while significantly reducing the number of selected features.
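The GWO search that drives this method (and the PCA-DNN paper above) can be sketched as a minimal minimizer. In the paper the objective would score an OCSVM configuration on validation data; here a toy quadratic function stands in for it, and the operator details follow the standard GWO update rather than any paper-specific variant.

```python
import random

def gwo(objective, dim, bounds, wolves=20, iters=100, seed=42):
    """Minimal Grey Wolf Optimizer (minimization): each wolf moves toward
    the three best solutions in the current pack (alpha, beta, delta)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        leaders = sorted(pack, key=objective)[:3]  # alpha, beta, delta
        a = 2 - 2 * t / iters  # exploration factor decays linearly to 0
        for i, wolf in enumerate(pack):
            new = []
            for d in range(dim):
                est = []
                for leader in leaders:
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - wolf[d])
                    est.append(leader[d] - A * D)
                new.append(min(hi, max(lo, sum(est) / 3)))  # average, clipped
            pack[i] = new
    return min(pack, key=objective)
```

Usage: `gwo(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))` drives the pack toward the origin of the sphere function.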

144 citations


Journal ArticleDOI
TL;DR: Results show that the proposed framework improves on previous methods by accounting for the reliability of information using Z-numbers, and is more flexible than previous work.
Abstract: Environmental assessment and decision making is complex and subject to uncertainty because multiple criteria with uncertain information are involved. Uncertainty is an unavoidable and inevitable element of any environmental evaluation process, yet the published literature rarely includes studies on uncertain data with variable fuzzy reliabilities. This research proposes an environmental evaluation framework based on Dempster–Shafer theory and Z-numbers, in which a new notion of the utility of a fuzzy number is proposed to generate the basic probability assignment of Z-numbers. The framework can effectively aggregate uncertain data with different fuzzy reliabilities to obtain a comprehensive evaluation measure. The proposed model has been applied to two case studies to illustrate the framework and show its effectiveness in environmental evaluations. Results show that the proposed framework improves on previous methods by accounting for the reliability of information using Z-numbers. The proposed method is also more flexible than previous work.
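At the core of any Dempster–Shafer aggregation like the one described here is Dempster's rule of combination. The sketch below shows the classic rule for two mass functions over frozensets; the paper's Z-number utility that generates the masses is not reproduced.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the same frame of discernment."""
    combined = {}
    conflict = 0.0
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass that would fall on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # renormalize by the non-conflicting mass
    return {A: w / (1.0 - conflict) for A, w in combined.items()}
```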

133 citations


Journal ArticleDOI
TL;DR: This paper defines new operational laws based on the Dombi t-norm and t-conorm and develops an algorithm that uses spherical fuzzy information in the decision matrix, showing that it is suitable and effective for the decision process of evaluating the best alternative.
Abstract: Spherical fuzzy sets (SFSs), recently proposed by Ashraf, are one of the most important concepts for describing fuzzy information in the decision-making process. In SFSs the sum of the squares of the membership grades lies in the closed unit interval, and hence they accommodate more uncertainty; thus, this set outperforms the existing structures of fuzzy sets. In real decision-making problems, decision-makers often need to express a neutral attitude towards the membership and non-membership degrees. To reach a fair decision during the process, in this paper, we define some new operational laws based on the Dombi t-norm and t-conorm. In the present study, we propose the Spherical fuzzy Dombi weighted averaging (SFDWA), Spherical fuzzy Dombi ordered weighted averaging (SFDOWA), Spherical fuzzy Dombi hybrid weighted averaging (SFDHWA), Spherical fuzzy Dombi weighted geometric (SFDWG), Spherical fuzzy Dombi ordered weighted geometric (SFDOWG), and Spherical fuzzy Dombi hybrid weighted geometric (SFDHWG) aggregation operators and discuss several of their properties. These operators greatly assist in finding a successful solution of decision problems. We then develop an algorithm that uses spherical fuzzy information in the decision matrix and apply it to a decision-making problem to illustrate its applicability and effectiveness. Through this algorithm, we show that our proposed approach is practical and provides decision-makers with more mathematical insight before they choose among their options. Besides this, a systematic comparative analysis with other existing methods is conducted to reveal the advantages of our method. Results indicate that the proposed method is suitable and effective for the decision process of evaluating the best alternative.
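The Dombi t-norm and t-conorm underlying the paper's operational laws are ordinary scalar functions, sketched below; the SFDWA-family operators built on top of them for full spherical fuzzy values are not reproduced. With λ = 1 the Dombi t-norm reduces to the Hamacher product xy/(x + y − xy).

```python
def dombi_tnorm(x, y, lam=1.0):
    """Dombi t-norm (a generalized 'and') for grades in (0, 1)."""
    if x == 0 or y == 0:
        return 0.0
    f = ((1 / x - 1) ** lam + (1 / y - 1) ** lam) ** (1 / lam)
    return 1 / (1 + f)

def dombi_tconorm(x, y, lam=1.0):
    """Dombi t-conorm (a generalized 'or') for grades in (0, 1)."""
    if x == 1 or y == 1:
        return 1.0
    f = ((x / (1 - x)) ** lam + (y / (1 - y)) ** lam) ** (-1 / lam)
    return 1 / (1 + f)
```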

118 citations


Journal ArticleDOI
TL;DR: Machine learning algorithms are applied to social media and financial news data to discover the impact of this data on stock market prediction accuracy for ten subsequent days; the Random Forest classifier is found to be the most consistent, and the highest accuracy is achieved by its ensemble.
Abstract: Accurate stock market prediction is of great interest to investors; however, stock markets are driven by volatile factors such as microblogs and news, which makes it hard to predict a stock market index based merely on historical data. The enormous stock market volatility emphasizes the need to effectively assess the role of external factors in stock prediction. Stock markets can be predicted using machine learning algorithms on information contained in social media and financial news, as this data can change investors' behavior. In this paper, we apply machine learning algorithms to social media and financial news data to discover the impact of this data on stock market prediction accuracy for ten subsequent days. To improve the performance and quality of predictions, feature selection and spam tweet reduction are performed on the data sets. Moreover, we perform experiments to find stock markets that are difficult to predict and those that are more influenced by social media and financial news. We compare the results of different algorithms to find a consistent classifier. Finally, to achieve maximum prediction accuracy, deep learning is used and some classifiers are ensembled. Our experimental results show that the highest prediction accuracies of 80.53% and 75.16% are achieved using social media and financial news, respectively. We also show that New York and Red Hat stock markets are hard to predict, New York and IBM stocks are more influenced by social media, while London and Microsoft stocks are more influenced by financial news. The Random Forest classifier is found to be the most consistent, and the highest accuracy of 83.22% is achieved by its ensemble.
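When the abstract says "some classifiers are ensembled", one simple realization is hard majority voting over per-classifier predictions, sketched below. The labels and the voting scheme are illustrative assumptions; the paper does not specify its fusion rule.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: each classifier casts one vote per sample
    and the most common label wins (ties go to the label seen first,
    per Counter.most_common insertion order)."""
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused
```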

104 citations


Journal ArticleDOI
TL;DR: AlphaLogger, an Android-based application that infers the alphabet keys being typed on a soft keyboard, is developed and evaluated; keystrokes can be inferred with an accuracy of 90.2% using the accelerometer, gyroscope, and magnetometer.
Abstract: Due to the advancement of technology and the heavy use of smartphones in various domains (e.g., mobile banking), smartphones have become more prone to malicious attacks. Typing on the soft keyboard of a smartphone produces distinct vibrations, which can be abused to recognize the keys being pressed, thereby facilitating side-channel attacks. In this work, we develop and evaluate AlphaLogger, an Android-based application that infers the alphabet keys being typed on a soft keyboard. AlphaLogger runs in the background and collects data at a frequency of 10 Hz from the smartphone hardware sensors (accelerometer, gyroscope, and magnetometer) to accurately infer the keystrokes being typed on the soft keyboard of any other application running in the foreground. We present a performance analysis of the different combinations of sensors. A thorough evaluation demonstrates that keystrokes can be inferred with an accuracy of 90.2% using the accelerometer, gyroscope, and magnetometer.

96 citations


Journal ArticleDOI
TL;DR: A new, secure, and efficient scheme based on blockchain technology and attribute-based encryption, entitled “MedSBA”, is provided to record and store medical data; the proposed scheme protects user privacy and allows fine-grained access control of patients' medical data in accordance with the General Data Protection Regulation (GDPR).
Abstract: The development of electronic information technology has made the electronic medical record a common approach to recording and categorizing patients' medical data in the databases of different hospitals and medical entities, so that controlling the shared data is not possible for patients at all. The importance of medical data as an asset of both individuals and the system leads to concerns about its security, privacy, and accessibility. How to store and control access to medical information is among the most important challenges in the electronic health area. The present paper provides a new, secure, and efficient scheme based on blockchain technology and attribute-based encryption, entitled “MedSBA”, to record and store medical data. The proposed scheme protects user privacy and allows fine-grained access control of patients' medical data in accordance with the General Data Protection Regulation (GDPR). Private blockchains are used in MedSBA to support instant access revocation, which is one of the open challenges of attribute-based encryption. The security and functionality of the proposed scheme are proved within a formal model and with BAN logic, respectively; simulating the MedSBA scheme in the OPNET software, as well as examining its computational complexity and storage requirements, indicates the efficiency of the scheme.

94 citations


Journal ArticleDOI
TL;DR: Two different ensemble deep transfer learning models are designed for COVID-19 diagnosis from chest X-rays; experimental results reveal that the proposed framework outperforms existing techniques in terms of sensitivity, specificity, and accuracy.
Abstract: The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused the novel coronavirus disease (COVID-19) outbreak in more than 200 countries around the world. Early diagnosis of infected patients is needed to contain the outbreak, and diagnosing coronavirus infection from radiography images is the fastest method. In this paper, two different ensemble deep transfer learning models are designed for COVID-19 diagnosis using chest X-rays. Both models utilize pre-trained models for better performance and are able to differentiate COVID-19, viral pneumonia, and bacterial pneumonia. Both models have been developed to improve the generalization capability of the classifier for binary and multi-class problems. The proposed models have been tested on two well-known datasets. Experimental results reveal that the proposed framework outperforms existing techniques in terms of sensitivity, specificity, and accuracy.

89 citations


Journal ArticleDOI
TL;DR: An energy-efficient routing protocol, low-energy adaptive clustering hierarchy (LEACH), combined with a genetic algorithm (GA) is presented, and a comparison between the proposed work and existing work is performed to determine its efficiency.
Abstract: A wireless sensor network (WSN) comprises a large number of sensing nodes used to collect data in different situations. WSNs find application mostly in gathering information from remote places, such as environment monitoring, the military, transportation security, and so on. The main problem in WSNs is the availability of limited energy resources. To enhance energy efficiency and the lifespan of sensor nodes, this research presents an energy-efficient routing protocol, low-energy adaptive clustering hierarchy (LEACH), combined with a genetic algorithm (GA). LEACH is a hierarchical protocol that elects sensor nodes as cluster heads (CHs); a CH gathers and compresses the data and sends it to the target node. The genetic algorithm helps to find the optimal route using its fitness function. After simulating the code in MATLAB, energy consumption is reduced by up to 17.39% when GA is used. Finally, a comparison between the proposed work and existing work is performed to determine the efficiency of the proposed work.
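LEACH's cluster-head election, which the abstract builds on, uses a rotating probability threshold: a node that has not recently served as CH becomes one when a uniform random draw falls below T(n) = p / (1 − p·(r mod 1/p)). A minimal sketch of that threshold (the GA fitness function of the paper is not specified in the abstract and is omitted):

```python
def leach_threshold(p, r, was_ch_recently):
    """LEACH cluster-head election threshold T(n) for round r, where p is
    the desired fraction of cluster heads. Nodes that served as CH within
    the last 1/p rounds are excluded (threshold 0)."""
    if was_ch_recently:
        return 0.0
    period = round(1 / p)  # length of one rotation epoch in rounds
    return p / (1 - p * (r % period))
```

The threshold grows toward 1.0 at the end of each 1/p-round epoch, guaranteeing every node eventually takes a turn as cluster head.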

87 citations


Journal ArticleDOI
TL;DR: The results show that the proposed solution improves quality-of-service in the cloud/fog computing environment in terms of allocation cost and reduces response time; LBOS is an efficient way to improve resource utilization and ensure continuous service.
Abstract: Fog computing (FC) can be considered a computing paradigm that executes Internet of Things (IoT) applications at the edge of the network. Recently, there has been great growth in data requests and in FC, which enhances data accessibility and adaptability. However, FC faces many challenges, such as load balancing (LB) and adaptation to failure. Many LB strategies have been proposed for cloud computing, but they are still not applied effectively in fog. LB is an important issue in achieving high resource utilization, avoiding bottlenecks, avoiding overload and under-load, and reducing response time. In this paper, an LB and optimization strategy (LBOS) using a dynamic resource allocation method based on reinforcement learning and a genetic algorithm is proposed. LBOS monitors the traffic in the network continuously, collects information about each server's load, handles incoming requests, and distributes them equally between the available servers using the dynamic resource allocation method. Hence, it enhances performance even at peak times. Accordingly, LBOS is simple and efficient for real-time systems in fog computing, such as healthcare systems. LBOS is concerned with designing an IoT-Fog based healthcare system; the proposed IoT-Fog system consists of three layers, namely: (1) the IoT layer, (2) the fog layer, and (3) the cloud layer. Finally, the experiments are carried out, and the results show that the proposed solution improves quality-of-service in the cloud/fog computing environment in terms of allocation cost and reduces response time. Compared with state-of-the-art algorithms, LBOS achieved the best load-balancing level (85.71%). Hence, LBOS is an efficient way to improve resource utilization and ensure continuous service.

Journal ArticleDOI
TL;DR: A fuzzy brain-storm optimization (FBSO) algorithm for medical image segmentation and classification is proposed, a combination of fuzzy and brain-storm optimization techniques; it seems promising and outperforms the other techniques with better results in this analysis.
Abstract: A brain tumor is the most severe nervous system disorder; it causes significant damage to health and can lead to death. Glioma is the primary intracranial tumor with the highest incidence and death rate. One of the most widely used medical imaging techniques for brain tumors is magnetic resonance imaging (MRI), which has become the principal diagnostic tool for the treatment and analysis of glioma. Brain tumor segmentation and classification is a complicated task, and several of its problems can be solved more effectively and efficiently with swarm intelligence techniques. In this paper, a fuzzy brain-storm optimization (FBSO) algorithm for medical image segmentation and classification is proposed, a combination of fuzzy and brain-storm optimization techniques. Brain-storm optimization concentrates on the cluster centers and gives them the highest priority, but it might fall into local optima like any other swarm algorithm. The fuzzy component performs several iterations to produce an optimal network structure, and the resulting method seems promising and outperforms the other techniques in this analysis. The BraTS 2018 dataset was used, and the proposed FBSO proved efficient and robust, mainly reducing the segmentation duration of the optimization algorithm, with an accuracy of 93.85%, precision of 94.77%, sensitivity of 95.77%, and F1 score of 95.42%.

Journal ArticleDOI
TL;DR: A non-dominated sorting genetic algorithm-III (NSGA-III) based 4-D chaotic map is designed, together with a novel master-slave model for image encryption that improves the computational speed of the proposed approach.
Abstract: Chaotic maps are extensively utilized in the field of image encryption to generate secret keys. However, these maps suffer from hyperparameter-tuning issues: the parameters are generally selected on a trial-and-error basis, and an inappropriate selection may reduce the performance of the chaotic map. Also, these hyperparameters are not sensitive to the input image. Therefore, to handle these issues, this paper designs a non-dominated sorting genetic algorithm-III (NSGA-III) based 4-D chaotic map. Additionally, to improve the computational speed of the proposed approach, we have designed a novel master-slave model for image encryption. Initially, computationally expensive operations such as the mutation and crossover of NSGA-III are identified. Thereafter, the NSGA-III work is split between two kinds of jobs, i.e., master and slave jobs. For communication between master and slave nodes, the Message Passing Interface is used. Extensive experimental results reveal that the proposed image encryption technique outperforms existing techniques in terms of various performance measures.
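To show how a chaotic trajectory is turned into key material, the sketch below uses the 1-D logistic map as a deliberately simplified stand-in; the paper's map is 4-D with NSGA-III-tuned parameters, which the abstract does not specify. The parameter r = 3.99 and the byte quantization are illustrative assumptions.

```python
def logistic_keystream(x0, r=3.99, n=16, burn_in=100):
    """Generate n key bytes from the logistic map x <- r*x*(1-x),
    discarding an initial transient so the output sits on the chaotic
    attractor. Tiny changes in x0 yield completely different streams."""
    x = x0
    for _ in range(burn_in):           # discard the transient
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)  # quantize state to a byte
    return stream
```

The key sensitivity the abstract relies on is visible directly: seeds differing in the sixth decimal place diverge into unrelated byte streams after the burn-in.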

Journal ArticleDOI
TL;DR: Experimental results show that the proposed fragile watermarking technique, capable of tamper detection and localization in medical/general images, has lower computational complexity than other state-of-the-art techniques.
Abstract: With the exponential rise of multimedia technology and networked infrastructure, electronic healthcare is growing rapidly. One of the most important challenges in an electronic healthcare setup is the authentication of medical images received by an expert at a far-off location from the sender. To address this critical authentication issue, this paper presents a fragile watermarking technique capable of tamper detection and localization in medical/general images. We divide the cover image into 4 × 4 non-overlapping pixel blocks, with each block further sub-divided into two 4 × 2 blocks, called the Upper Half Block (UHB) and the Lower Half Block (LHB). The information embedded in the LHB facilitates tamper detection, while that embedded in the UHB facilitates tamper localization. The experimental results show that, in addition to its tamper detection and localization capability, the proposed technique has lower computational complexity than other state-of-the-art techniques. Further, the proposed scheme yields an average PSNR of 51.26 dB for a payload of one bit per pixel (1 bpp), indicating that the watermarked images are of high visual quality.

Journal ArticleDOI
TL;DR: A novel rate-based congestion control mechanism built on cluster routing is introduced to reduce energy consumption throughout the network and reduce end-to-end delay, improving network lifetime over long simulation periods.
Abstract: A wireless sensor network is designed to facilitate various real-time applications and consists of a wide range of sensor nodes. To provide energy-efficient transmissions, a novel congestion control mechanism based on an optimized rate is proposed. Here, a rate-based congestion control algorithm built on cluster routing is introduced to reduce energy consumption throughout the network. The rate control process reduces end-to-end delay and improves network lifetime over long simulation periods. Initially, nodes are clustered by hybrid K-means and greedy best-first search algorithms. After that, rate control is performed using a firefly optimization strategy, which yields a high packet delivery ratio. Finally, packets are sent with maximum throughput using Ant Colony Optimization-based routing. The simulation is performed on the MATLAB platform, and performance is evaluated with respect to average end-to-end delay, packet delivery ratio, throughput, energy efficiency, energy consumption, and reliability.

Journal ArticleDOI
TL;DR: Semantic image segmentation based on a feature fusion model with layer-by-layer context features is proposed, achieving better mean Intersection over Union than state-of-the-art works.
Abstract: The context information of images is lost due to the low resolution of features produced by repeated combinations of max-pooling and down-sampling layers: when feature extraction is performed with a convolutional network, the resulting semantic segmentation loses sensitivity to object location. This paper proposes semantic image segmentation based on a feature fusion model with layer-by-layer context features. Firstly, the original images are pre-processed with a Gaussian kernel to generate a series of images at different resolutions, forming an image pyramid. Secondly, the image pyramid is input to a network structure in which multiple fully convolutional networks are combined in parallel to obtain a set of initial features of different granularities, with receptive fields expanded using atrous convolutions; the features of different granularities are then fused layer by layer in a top-down manner. Finally, the score map of the feature fusion model is computed and sent to a fully connected conditional random field, which models the class correlations between pixels of the original image; the spatial position and color vector information of the pixels are used jointly to optimize the result. Experiments on the PASCAL VOC 2012 and PASCAL Context datasets achieve better mean Intersection over Union than state-of-the-art works; the proposed method improves on conventional methods by about 6.3%.

Journal ArticleDOI
TL;DR: This research focuses on the development of a deep learning based computer-aided system to detect, classify, and segment the cancerous region in mammograms; a preprocessing mechanism is also proposed that removes noise, artifacts, and the muscle region, which can cause a high false positive rate.
Abstract: Image data plays a vital role in healthcare. Medical data records are increasing rapidly, which is beneficial and detrimental at the same time: large image datasets are difficult to handle, to extract information from, and to use for machine learning. The mammograms used in this research are low-dose X-ray images of the breast region that contain abnormalities. Breast cancer is the most frequently diagnosed cancer and ranks 9th worldwide in breast cancer-related deaths; in Pakistan, 1 in 9 women is expected to develop breast cancer at some stage of life. Screening mammography is the most effective means for its early detection, but its high rate of oversampling is responsible for billions in excess healthcare costs and unnecessary patient anxiety. This research mainly focuses on the development of a deep learning based computer-aided system to detect, classify, and segment the cancerous region in mammograms. Moreover, a preprocessing mechanism is proposed that removes noise, artifacts, and the muscle region, which can cause a high false positive rate. To increase the efficiency of the system and limit the large resource requirement, each pre-processed image is converted to 512 × 512 patches. Two publicly available breast cancer datasets are employed: the Mammographic Image Analysis Society (MIAS) digital mammogram dataset and the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM). Two state-of-the-art deep learning based instance segmentation frameworks are used, i.e. DeepLab and Mask RCNN. The preprocessing algorithm helps to increase the area under the receiver operating curve for each transfer learning method. After fine tuning for better performance, the area under the curve was 0.98 and 0.95 for Mask RCNN and DeepLab, respectively, on a test set of 150 cases, while the mean average precision for the segmentation task was 0.80 and 0.75. The radiologists' accuracy ranged from 0.80 to 0.88. The proposed research has the potential to help radiologists with breast mass classification as well as segmentation of the cancerous region.

Journal ArticleDOI
TL;DR: The proposed architecture consolidates an evolving network standard, software-defined networking, into the internet of vehicles, enabling it to handle highly dynamic networks in an abstract way by separating the data plane from the control plane.
Abstract: Proposing an optimal routing protocol for the internet of vehicles with reduced overhead has remained a challenge owing to the inability of current architectures to manage flexibility and scalability. The proposed architecture therefore consolidates an evolving network standard, software-defined networking, into the internet of vehicles, which enables it to handle highly dynamic networks in an abstract way by separating the data plane from the control plane. Firstly, a road-aware routing strategy is introduced: a performance-enhanced routing protocol designed specifically for infrastructure-assisted vehicular networks, in which roads are divided into road segments with road-side units for multi-hop communication. A unique property of the proposed protocol is that it exploits the cellular network to relay control messages to and from the controller with low latency. The concept of an edge controller is introduced as an operational backbone of the vehicle grid in the internet of vehicles, providing a real-time vehicle topology. Last but not least, a novel mathematical model is derived that assists the primary controller in finding not only a shortest but also a durable path. The results illustrate the significant performance of the proposed protocol in terms of availability with limited routing overhead. In addition, we found that the edge controller contributes mainly to minimizing path failures in the network.

Journal ArticleDOI
TL;DR: CatchPhish, a lightweight application that predicts URL legitimacy without visiting the website, uses the hostname, full URL, Term Frequency-Inverse Document Frequency (TF-IDF) features, and phish-hinted words from the suspicious URL for classification with a Random Forest classifier.
Abstract: There exist many anti-phishing techniques that use source code-based features and third-party services to detect phishing sites. These techniques have some limitations, one of which is that they fail to handle drive-by downloads. They also use third-party services for the detection of phishing URLs, which delays the classification process. Hence, in this paper, we propose a lightweight application, CatchPhish, which predicts URL legitimacy without visiting the website. The proposed technique uses the hostname, full URL, Term Frequency-Inverse Document Frequency (TF-IDF) features, and phish-hinted words from the suspicious URL for classification with a Random Forest classifier. The proposed model with only TF-IDF features achieved an accuracy of 93.25% on our dataset. Experiments with TF-IDF and hand-crafted features achieved a significant accuracy of 94.26% on our dataset and accuracies of 98.25% and 97.49% on benchmark datasets, which is much better than the existing baseline models.
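The TF-IDF weighting at the heart of CatchPhish's features can be sketched in a few lines. This is a minimal textbook variant (tf = count/length, idf = log(N/df)); the tokens below are hypothetical URL fragments, and the paper's exact tokenization and weighting may differ.

```python
import math

def tfidf(docs):
    """Minimal TF-IDF over tokenized documents: returns one
    term-to-weight dict per document. Terms occurring in every
    document get weight 0 (idf = log(N/N) = 0)."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            vec[term] = tf * math.log(n / df[term])
        vectors.append(vec)
    return vectors
```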

Journal ArticleDOI
TL;DR: Analysis shows that the proposed wavelet-based secret image sharing scheme with encrypted shadow images, using an optimal Homomorphic Encryption (HE) technique, provides greater security than other existing schemes.
Abstract: A Secret Image Sharing (SIS) scheme encrypts a secret image into ‘n’ innocuous shadows; no data about the secret image can be revealed if even one of the required shadows is missing. In this paper, a wavelet-based secret image sharing scheme with encrypted shadow images using an optimal Homomorphic Encryption (HE) technique is proposed. Initially, the Discrete Wavelet Transform (DWT) is applied to the secret image to produce sub-bands. From this process, multiple shadows are created, and each shadow is encrypted and decrypted. The secret can be recovered simply by choosing a subset of these ‘n’ shadows, making them transparent, and stacking them over each other. To improve shadow security, each shadow is encrypted and decrypted using the HE technique. To preserve image quality, a new Oppositional-based Harmony Search (OHS) algorithm is utilized to generate the optimal key. The analysis shows that the proposed scheme provides greater security compared with other existing schemes.
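The DWT step that opens the scheme can be illustrated with one level of the Haar wavelet, the simplest member of the family. This 1-D, unnormalized averaging/differencing variant is an assumption for brevity; on an image the transform is applied along rows and then columns to produce the sub-bands the abstract mentions.

```python
def haar_dwt_1d(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation band) and pairwise half-differences (detail band).
    Assumes an even-length signal."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Inverse of haar_dwt_1d: perfect reconstruction of the signal."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```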

Journal ArticleDOI
TL;DR: To address the need for accurate and rapid sentiment analysis of comment texts in the network big-data environment, a text sentiment analysis method combining a Continuous Bag of Words (CBOW) language model and deep learning is proposed.
Abstract: To address the need for accurate and rapid sentiment analysis of comment texts in the network big-data environment, a text sentiment analysis method combining a Continuous Bag of Words (CBOW) language model and deep learning is proposed. First, a vector representation of the text is constructed by a CBOW language model based on feedforward neural networks. Then, a Convolutional Neural Network (CNN) is trained on the labeled training set to capture the semantic features of the text. Finally, a Dropout strategy is introduced in the Softmax classifier of the traditional CNN, which effectively prevents the model from over-fitting and yields better classification ability. Experimental results on the COAE2014 and IMDB datasets show that this method accurately determines the emotional category of a text and is robust; the accuracy on the two datasets reached 90.5% and 87.2%, respectively.
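The CBOW part of the pipeline, averaging context-word vectors and scoring the centre word with a softmax, can be sketched as follows; the vocabulary, dimensions and random weights are toy assumptions, not the paper's trained model.

```python
# Minimal CBOW forward pass: context word vectors are averaged and
# projected onto the vocabulary to score the centre word.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "boring": 4}
V, D = len(vocab), 8             # vocabulary size, embedding dimension
W_in = rng.normal(size=(V, D))   # input (context) embeddings
W_out = rng.normal(size=(D, V))  # output projection

def cbow_scores(context_words):
    idx = [vocab[w] for w in context_words]
    h = W_in[idx].mean(axis=0)        # average the context vectors
    logits = h @ W_out                # score every vocabulary word
    e = np.exp(logits - logits.max()) # numerically stable softmax
    return e / e.sum()

p = cbow_scores(["the", "movie", "great"])  # context around centre word "was"
print(p.shape, p.sum())
```

In the paper these learned word vectors then feed the CNN, whose Dropout-regularised Softmax layer does the sentiment classification.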

Journal ArticleDOI
TL;DR: Improved ACO algorithm based on PSO algorithm can overcome disadvantages of the traditional ACO algorithms, such as falling into local extremum, poor quality, and low accuracy.
Abstract: The motion control of autonomous underwater vehicles (AUVs) has received more and more attention because AUVs have been used in many applications in recent years. In order to find the optimal path for an AUV to reach a specified destination in a complex undersea environment, an improved ant colony optimization (ACO) algorithm based on particle swarm optimization (PSO) is proposed. Due to various constraints, such as limited energy and limited visual distance, the improved ACO algorithm uses an improved pheromone update rule and a heuristic function based on the PSO algorithm, enabling the AUV to find the optimal path by connecting chosen nodes of the undersea environment while avoiding collisions with the complex undersea terrain (static obstacles). The improved ACO algorithm based on PSO can overcome disadvantages of the traditional ACO algorithm, such as falling into local extrema, poor solution quality, and low accuracy. Experimental results demonstrate that the improved ACO algorithm is more effective and feasible in path planning for autonomous underwater vehicles than the traditional ant colony algorithm.
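The classical ACO ingredients the paper builds on, the pheromone/heuristic node-choice rule and the evaporation-plus-deposit update, can be sketched on a toy graph; alpha, beta, rho, Q and the distance matrix are illustrative, and the PSO-based modifications are omitted.

```python
# Classical ACO building blocks: probabilistic node choice weighted by
# pheromone (tau) and heuristic desirability (eta), then evaporation
# followed by a deposit proportional to path quality.
import numpy as np

rng = np.random.default_rng(1)
n = 5                                   # nodes in a toy undersea graph
dist = rng.uniform(1.0, 10.0, size=(n, n))
np.fill_diagonal(dist, np.inf)          # no self-loops
tau = np.ones((n, n))                   # pheromone matrix
eta = 1.0 / dist                        # heuristic: inverse distance
alpha, beta, rho, Q = 1.0, 2.0, 0.1, 1.0

def choose_next(current, visited):
    weights = (tau[current] ** alpha) * (eta[current] ** beta)
    weights[list(visited)] = 0.0        # never revisit a node
    return rng.choice(n, p=weights / weights.sum())

def deposit(path, length):
    global tau
    tau *= (1.0 - rho)                  # evaporation
    for a, b in zip(path, path[1:]):
        tau[a, b] += Q / length         # shorter paths deposit more

path = [0]
while len(path) < n:
    path.append(choose_next(path[-1], set(path)))
length = sum(dist[a, b] for a, b in zip(path, path[1:]))
deposit(path, length)
print(path, round(length, 2))
```

The paper's contribution is precisely in replacing the plain update and heuristic above with PSO-guided versions to escape local extrema.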

Journal ArticleDOI
TL;DR: A service management platform for the Polish Billiards and Snooker Association (PBSA), based on a real-time system located in the cloud and built on the Salesforce platform, is used to explore the meaning and role of cloud computing in the efficient functioning of real-time service systems.
Abstract: Recently, we have witnessed unprecedented use of cloud computing and its services. It is influencing the way software is built, as well as the way company resources such as servers, workstations and hardware in general are used. This paper aims to examine the benefits of using the cloud to support real-time service systems, using the Salesforce platform. First, we explore the meaning and role of cloud computing in the efficient functioning of real-time service systems. Then, we build a service management platform for the Polish Billiards and Snooker Association (PBSA), based on a real-time system located in the cloud. This way, PBSA managers are able to complete their tasks in the system on demand. Moreover, it is set up as a private cloud to grant access only to the snooker organization's employees.

Journal ArticleDOI
TL;DR: Three novel steps enrich the performance of the conventional FCM algorithm on the CPU, and a single instruction multiple data (SIMD) model with a hybrid CPU–GPU implementation accelerates the medical image segmentation on the GPU.
Abstract: Fuzzy C-Means (FCM) plays a major role in brain tissue segmentation. The proposed method, known as FCM-GENIUS, implements rapid brain tissue segmentation from MRI human head scans using FCM on both CPU and GPU. This paper presents three novel steps that enrich the performance of the conventional FCM algorithm on the CPU: region of interest (ROI) selection, knowledge-based initialization, and knowledge-based optimization. ROI selection is a preprocessing step comprising brain extraction and bounding-box processes. The knowledge-based initialization supplies the FCM algorithm with centroids selected by histogram smoothing from the middle slice of the given MRI brain volume. The optimization step improves the computational speed of the FCM algorithm by exploiting the MRI slice-adjacency property. The materials used for the proposed work were gathered from the Internet Brain Segmentation Repository (IBSR). The segmentation accuracy was also compared with traditional and existing methods. The proposed method yields segmentation accuracy equal to that of existing methods but reduces the segmentation time by up to a factor of seven and the average number of iterations by up to a factor of three. In addition, a parallel FCM was implemented on a GPU machine and its performance compared with the conventional FCM on the CPU. The single instruction multiple data (SIMD) model was used with a hybrid CPU–GPU implementation to accelerate the medical image segmentation.
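The knowledge-based initialization step, smoothing the middle slice's intensity histogram and taking its peaks as initial centroids, might look like the following NumPy sketch; the synthetic three-cluster "slice" stands in for real MRI data.

```python
# Histogram-smoothing centroid initialisation: local maxima of the
# smoothed intensity histogram become the initial FCM cluster centres.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic middle slice: three tissue-like intensity clusters.
slice_ = np.concatenate([
    rng.normal(40, 5, 2000),    # CSF-like intensities (toy values)
    rng.normal(110, 8, 2000),   # grey-matter-like
    rng.normal(200, 6, 2000),   # white-matter-like
]).clip(0, 255)

hist, _ = np.histogram(slice_, bins=256, range=(0, 256))
smooth = np.convolve(hist, np.ones(9) / 9.0, mode="same")  # moving average

# Local maxima above a height threshold become the initial centroids.
peaks = [i for i in range(1, 255)
         if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]
         and smooth[i] > 0.2 * smooth.max()]
print(peaks)
```

Starting FCM from such peaks rather than random centroids is what lets the paper cut the average iteration count.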

Journal ArticleDOI
TL;DR: A chaos-based cryptographic algorithm using the Walsh–Hadamard transform and chaotic maps for encrypting images shows that the random chaotic ranges and complex behaviours of chaotic maps improve both the keyspace and the security of the image encryption–decryption system.
Abstract: The third-party misuse and manipulation of digital images is a threat to the security and privacy of human subjects. Image encryption becomes more important in the Internet of Things era with edge computing and the growth of intelligent consumer electronic devices. In this paper, we report a chaos-based cryptographic algorithm using the Walsh–Hadamard transform and chaotic maps for encrypting images. The images are processed channel-wise, and two different chaotic maps, the Arnold and Tent maps, are used for enciphering. The experimental results show that the random chaotic ranges and complex behaviours of the chaotic maps improve both the keyspace and the security of the image encryption–decryption system.
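The Arnold map used for pixel scrambling sends each coordinate (x, y) of an N x N image to (x + y, x + 2y) mod N; since the underlying matrix has determinant 1, the map is a bijection and hence invertible. A minimal NumPy sketch (the 4x4 image is a stand-in):

```python
# Arnold cat map pixel scrambling: iterating the map permutes the
# pixels of a square image; the map is periodic, so the original
# image can always be recovered.
import numpy as np

def arnold_map(img, iterations=1):
    n = img.shape[0]                    # assumes a square image
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]   # move each pixel to its new place
        out = scrambled
    return out

img = np.arange(16).reshape(4, 4)
scrambled = arnold_map(img, iterations=1)
print(scrambled)
```

In the paper this scrambling is one stage of a larger pipeline combining the Walsh–Hadamard transform and the Tent map.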

Journal ArticleDOI
TL;DR: This model determines the optimum location-allocation and inventory management decisions and aims to minimize the total cost of the supply chain, which includes fixed costs, operating costs, inventory holding costs, wastage costs, and transportation costs, along with minimizing the substitution levels to provide safer blood transfusion services.
Abstract: Motivated by uncertain conditions such as uncertainty in blood demand and facility disruptions, and by the uncertain nature of blood products, including perishable lifetime, distinct blood groups, and ABO-Rh(D) compatibility and priority rules among these groups, this paper addresses blood supply chains under uncertainty. In this respect, the paper develops a bi-objective two-stage stochastic programming model for managing a red blood cells supply chain that observes the above-mentioned issues. The model determines the optimum location-allocation and inventory management decisions and aims to minimize the total cost of the supply chain, which includes fixed costs, operating costs, inventory holding costs, wastage costs, and transportation costs, along with minimizing the substitution levels to provide safer blood transfusion services. To handle the uncertainty of the blood supply chain environment, a robust optimization approach is devised to tackle the uncertainty of the parameters, and the TH method is utilized to make the bi-objective model solvable. Then, a real case study of Mashhad city, in Iran, is implemented to demonstrate the practicality of the model as well as its solution approaches, and finally, the computational results are presented and discussed. Further, the impacts of the different parameters on the results are analyzed, which helps decision makers select the values of the parameters more accurately.
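The flavour of the second-stage trade-off (wastage cost for leftover perishable units versus shortage cost for unmet demand) can be shown with a toy scenario-based inventory calculation; all costs, demands and probabilities below are invented for illustration, and this single-product example is far simpler than the paper's bi-objective formulation.

```python
# Toy two-stage stochastic view of an inventory decision: the
# first-stage variable is how many units to stock; the second stage,
# per demand scenario, incurs wastage cost for leftover units and
# shortage cost for unmet demand.
import numpy as np

holding, wastage, shortage = 2.0, 5.0, 20.0   # cost per unit (illustrative)
scenarios = np.array([80, 100, 120, 150])     # demand scenarios (units)
probs = np.array([0.2, 0.4, 0.3, 0.1])        # scenario probabilities

def expected_cost(stock):
    leftover = np.maximum(stock - scenarios, 0)
    unmet = np.maximum(scenarios - stock, 0)
    second_stage = wastage * leftover + shortage * unmet
    return holding * stock + probs @ second_stage

stocks = np.arange(60, 181)
best = stocks[np.argmin([expected_cost(s) for s in stocks])]
print(best, round(expected_cost(best), 2))
```

The high shortage-to-wastage cost ratio pushes the optimal stock above mean demand, mirroring why transfusion services over-stock relative to expected need.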

Journal ArticleDOI
TL;DR: The proposed heterogeneous cluster-based secure routing scheme provides a trust-based secure network for detecting attacks such as wormhole and black hole caused by the presence of malicious nodes in wireless ad hoc networks.
Abstract: In a wireless ad hoc network, every device can move anywhere without any infrastructure, while routing information is maintained continuously to direct traffic. Among the open issues of wireless ad hoc networks are selective forwarding attacks, in which packets are dropped by malicious nodes to degrade network performance and compromise information integrity. The problem is that existing methods for malicious node detection in ad hoc networks cannot ensure the traceability of nodes or the fairness of node detection. In this paper, the proposed heterogeneous cluster-based secure routing scheme provides a trust-based secure network for the detection of attacks such as wormhole and black hole caused by the presence of malicious nodes in the wireless ad hoc network. The simulation results show that the proposed model detects malicious nodes effectively in wireless ad hoc networks: malicious node detection efficiency reaches 96%, and energy consumption is 10% better than the existing method.
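One common form of trust-based detection, not necessarily the paper's exact rule, scores each node by its forwarding behaviour and flags nodes whose trust falls below a threshold; the packet counts and threshold below are illustrative.

```python
# Trust-based black-hole detection sketch: a node's trust rises with
# successfully forwarded packets and falls with drops; low-trust
# nodes are flagged as malicious.
forwarded = {"A": 48, "B": 50, "C": 2}   # packets each node forwarded
received  = {"A": 50, "B": 52, "C": 50}  # packets each node received
TRUST_THRESHOLD = 0.5                    # illustrative cut-off

def trust(node):
    # Beta-reputation style estimate: (forwarded + 1) / (received + 2).
    return (forwarded[node] + 1) / (received[node] + 2)

malicious = [n for n in forwarded if trust(n) < TRUST_THRESHOLD]
print({n: round(trust(n), 2) for n in forwarded}, malicious)
```

A black-hole node such as "C" advertises routes but drops nearly everything it receives, so its forwarding ratio, and hence its trust, collapses.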

Journal ArticleDOI
TL;DR: In this article, a machine learning and context-aware intrusion detection system was built to detect anomalies in the manufacturing process of a smart factory; it improved the detection rate of anomaly signs and the likelihood of process completion compared to the previous system.
Abstract: Digital transformation is increasingly gaining broad attention around the world, and studies on artificial intelligence, big data, cloud, and mobile technologies are currently being conducted, along with research based on ambient intelligence (AmI). In an AmI environment, everything, including the condition information of all objects, is shared in real time, and all locations and objects are equipped with sensors, enabling intelligent behaviour such as decision-making. As sensors embedded in locations and objects are connected through high-performance computer networks, users can receive information at any time and anywhere. In particular, the adoption of smart factories, which turn all phases of manufacturing into automation and intellectualization based on cyber-physical system technology, is proliferating. However, unexpected problems are likely to occur due to the high complexity and uncertainty of smart factories: manufacturing processes may halt, malfunctions may be triggered, and important information may be leaked. Although the necessity of analyzing threats to smart factories and managing them systematically is emphasized, research remains insufficient. In this paper, a machine learning and context-aware intrusion detection system was built. The established system improved the detection rate of anomaly signs and the likelihood of process completion compared to the previous system.
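An anomaly-sign detector of this kind can be sketched with scikit-learn's IsolationForest on synthetic process sensor readings; the paper's context-aware features are omitted here and the sensor values are invented.

```python
# Anomaly-sign detection on process sensor readings: an isolation
# forest fitted on normal operation flags readings that deviate
# strongly from the learned distribution.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# Normal operation: temperature ~ 70, vibration ~ 0.2 (toy units).
normal = np.column_stack([rng.normal(70, 2, 500),
                          rng.normal(0.2, 0.05, 500)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

readings = np.array([[70.5, 0.21],    # typical reading
                     [95.0, 0.90]])   # abnormal spike
print(model.predict(readings))        # 1 = normal, -1 = anomaly
```

In a context-aware system, the feature vector would additionally encode operating context (shift, recipe, machine state) so that readings normal in one context can still be flagged in another.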

Journal ArticleDOI
TL;DR: An efficient automatic fall detection system which is also fitted for the detection of different activities of daily living (ADL), and the resulting system incorporating CS capabilities is shown to achieve up to 99.8% of accuracy.
Abstract: Falls among elderly patients remain a critical medical issue, since a fall can cause irreversible bone injuries due to the weakness of elderly bones. To mitigate the likelihood of a fall, continuously tracking patients with balance and health issues has been envisaged, although this is impractical. To address this problem, we propose an efficient automatic fall detection system which is also suited to the detection of different activities of daily living (ADL). The system relies on a wearable Shimmer device that transmits inertial signals via a wireless connection to a computer. Aiming at reducing the size of the transmitted data and minimizing energy consumption, a compressive sensing (CS) method is applied. In this perspective, we started by creating our dataset from 17 subjects performing a set of movements; then three distinct systems were investigated: one which detects the presence or absence of a fall, a second which distinguishes static and dynamic movements including the fall, and a third which recognizes the fall and six other ADL activities. In the acquisition and classification steps, first only the data collected by the accelerometer are exploited, then a mixture of accelerometer and gyroscope measurements is taken into consideration. The two configurations are compared, and the resulting system incorporating CS capabilities is shown to achieve up to 99.8% accuracy.
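The compressive sensing step, compressing a sparse signal to far fewer random measurements and recovering it at the receiver, can be sketched with a Gaussian sensing matrix and orthogonal matching pursuit; the dimensions, sparsity and use of OMP are illustrative assumptions, not the paper's exact pipeline.

```python
# Compressive sensing sketch: a k-sparse signal x of length n is
# compressed to m << n measurements y = Phi @ x, reducing the data
# the wearable must transmit; the receiver recovers x with OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
n, m, k = 128, 64, 5                  # signal length, measurements, sparsity
x = np.zeros(n)                       # toy sparse "inertial" signal
support = rng.choice(n, size=k, replace=False)
x[support] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)

Phi = rng.normal(0, 1.0 / np.sqrt(m), size=(m, n))  # random sensing matrix
y = Phi @ x                           # the m values actually transmitted

# Receiver side: recover the sparse signal from the measurements.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
x_hat = omp.fit(Phi, y).coef_
print(np.count_nonzero(np.round(x_hat, 6)))
```

Real accelerometer windows are sparse in a transform domain (e.g. wavelets) rather than directly, so in practice Phi acts on the transform coefficients.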

Journal ArticleDOI
TL;DR: This paper proposes a novel image-based descriptor called stacked dense flow difference image (SDFDI) capable of capturing the spatio-temporal information present in a video sequence and proposes a bidirectional gated recurrent unit (BiGRU) based recurrent neural network (RNN) to model skeleton data.
Abstract: Fusion of multiple modalities from different sensors is an important area of research for multimodal human action recognition. In this paper, we conduct an in-depth study of the effect of different parameters, such as input preprocessing, data augmentation, network architecture and model fusion, so as to come up with a practical guideline for multimodal action recognition using the deep learning paradigm. First, for RGB videos, we propose a novel image-based descriptor called the stacked dense flow difference image (SDFDI), capable of capturing the spatio-temporal information present in a video sequence. A variety of deep 2D convolutional neural networks (CNN) are then trained to compare our SDFDI against state-of-the-art image-based representations. Second, for the skeleton stream, we propose a data augmentation technique based on 3D transformations so as to facilitate training a deep neural network on small datasets. We also propose a bidirectional gated recurrent unit (BiGRU) based recurrent neural network (RNN) to model skeleton data. Third, for inertial sensor data, we propose data augmentation based on jittering with white Gaussian noise, along with a deep 1D-CNN network for action classification. The outputs of these three heterogeneous networks (1D-CNN, 2D-CNN and BiGRU) are combined by a variety of model fusion approaches based on score and feature fusion. Finally, in order to illustrate the efficacy of the proposed framework, we test our model on the publicly available UTD-MHAD dataset and achieve an overall accuracy of 97.91%, about 4% higher than using each modality individually. We hope that the discussions and conclusions from this work will provide deeper insight to researchers in the related fields, and provide avenues for further studies of different multi-sensor-based fusion architectures.
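A simplified stand-in for the SDFDI idea, differencing successive fields and collapsing the weighted differences into one image-like descriptor, can be sketched in NumPy; the real SDFDI is built on dense optical flow rather than raw frame differences, and the sizes below are toy values.

```python
# Simplified flow-difference descriptor: consecutive frames are
# differenced and the weighted differences are collapsed into a
# single 2-D array summarising motion over the clip.
import numpy as np

rng = np.random.default_rng(5)
T, H, W = 8, 32, 32                       # frames, height, width (toy sizes)
video = rng.random((T, H, W))             # stand-in grey-scale frame stack

diffs = np.abs(np.diff(video, axis=0))    # T-1 frame-to-frame differences
# Weight later differences more, then collapse into one 2-D descriptor
# that a 2D-CNN can consume like an ordinary image.
weights = np.arange(1, T)[:, None, None]
sdfdi_like = (weights * diffs).sum(axis=0)
print(sdfdi_like.shape)
```

Turning a variable-length clip into a fixed-size image this way is what lets the RGB stream reuse standard 2D-CNN architectures alongside the BiGRU and 1D-CNN branches.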