
Showing papers in "Journal of Ambient Intelligence and Humanized Computing in 2021"


Journal ArticleDOI
TL;DR: The experimental results reveal that the proposed RSO algorithm is highly effective in solving real world optimization problems as compared to other well-known optimization algorithms.
Abstract: This paper presents a novel bio-inspired optimization algorithm called the Rat Swarm Optimizer (RSO) for solving challenging optimization problems. The main inspiration of this optimizer is the chasing and attacking behavior of rats in nature. This paper mathematically models these behaviors and benchmarks the algorithm on a set of 38 test problems to ensure its applicability to different regions of the search space. The RSO algorithm is compared with eight well-known optimization algorithms to validate its performance. It is then employed on six real-life constrained engineering design problems. Convergence and computational analyses are also carried out to test the exploration, exploitation, and local optima avoidance of the proposed algorithm. The experimental results reveal that the proposed RSO algorithm is highly effective in solving real-world optimization problems compared to other well-known optimization algorithms. Note that the source code of the proposed technique is available at: http://www.dhimangaurav.com .
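
The abstract does not reproduce the RSO update equations; purely as a rough illustration of the kind of swarm-style position update such optimizers use, a minimal Python sketch is given below. The objective function, bounds, and decaying "chasing" step are placeholder assumptions, not the authors' published model (their source code is at the URL above):

    import numpy as np

    def sphere(x):
        """Placeholder benchmark objective (minimise sum of squares)."""
        return float(np.sum(x ** 2))

    def swarm_optimize(obj, dim=10, agents=30, iters=200, lb=-10.0, ub=10.0, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(lb, ub, (agents, dim))        # random initial swarm
        fit = np.apply_along_axis(obj, 1, pos)
        best = pos[fit.argmin()].copy()                 # best agent so far (the "leader")
        for t in range(iters):
            A = 2.0 * (1 - t / iters)                   # exploration weight decays over time
            for i in range(agents):
                r = rng.random(dim)
                # agents chase the current leader with a decaying random step
                pos[i] = np.clip(best - A * r * np.abs(best - pos[i]), lb, ub)
            fit = np.apply_along_axis(obj, 1, pos)
            if fit.min() < obj(best):
                best = pos[fit.argmin()].copy()
        return best, obj(best)

    best_x, best_f = swarm_optimize(sphere)
    print("best objective:", best_f)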

207 citations


Journal ArticleDOI
TL;DR: A tracking algorithm based on residual-network (Resnet) features and cascaded correlation filters is proposed to improve precision and accuracy, and it performs favorably against other state-of-the-art trackers.
Abstract: Significant progress has been made in the field of object tracking recently. In particular, trackers based on deep learning and on correlation filters have both achieved excellent performance. However, object tracking still faces challenging problems such as deformation and illumination change, under which the accuracy and precision of tracking algorithms drop sharply; a solution to this problem is urgently needed. In this paper, we propose a tracking algorithm based on features extracted by a residual network (Resnet features) and cascaded correlation filters to improve precision and accuracy. First, features extracted by a deep residual network trained on other image-processing datasets are robust and retain relatively high resolution, so we exploit a Resnet-101 pretrained offline and use features from its middle and high layers to represent the target appearance model. Resnet-101 is deeper than many other deep neural networks, which means it carries more semantic information. Second, we propose a superior way of combining correlation filters: cascaded correlation filters generated from handcrafted, middle-level, and high-level residual-network features. Handcrafted features localize the target precisely because they contain more spatial detail, while Resnet features are robust to target appearance changes because they retain more semantic information. Finally, we conduct extensive experiments on the OTB2013 and OTB2015 benchmarks. The experimental results show that our tracker achieves high performance under all kinds of challenges and performs favorably against other state-of-the-art trackers.
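
The feature-extraction step can be reproduced with a pretrained ResNet-101 from torchvision; the sketch below registers forward hooks on two layers. The choice of layer3 and layer4 as the "middle" and "high" layers, and the input file name, are illustrative assumptions, not the authors' exact configuration:

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained ResNet-101, used only as a fixed feature extractor.
    resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT).eval()

    features = {}
    def hook(name):
        def _hook(module, inp, out):
            features[name] = out.detach()
        return _hook

    # Assumed choice: layer3 = "middle-level", layer4 = "high-level" features.
    resnet.layer3.register_forward_hook(hook("mid"))
    resnet.layer4.register_forward_hook(hook("high"))

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("search_region.jpg").convert("RGB")).unsqueeze(0)  # assumed input
    with torch.no_grad():
        resnet(img)

    print(features["mid"].shape, features["high"].shape)  # e.g. (1, 1024, 14, 14) and (1, 2048, 7, 7)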

142 citations


Journal ArticleDOI
TL;DR: The results showed that the proposed GWOSVM-IDS with seven wolves outperforms the other proposed and comparative algorithms.
Abstract: Intrusion in wireless sensor networks (WSNs) aims to degrade or even eliminate the capability of these networks to provide their functions. In this paper, an enhanced intrusion detection system (IDS) is proposed using a modified binary grey wolf optimizer with support vector machine (GWOSVM-IDS). The GWOSVM-IDS was run with 3, 5, and 7 wolves to find the best number of wolves. The proposed method aims to increase intrusion detection accuracy and detection rate and to reduce processing time in the WSN environment by decreasing false alarm rates and the number of features produced by IDSs in the WSN environment. The NSL-KDD dataset is used to demonstrate the performance of the proposed method and to compare it with other existing methods. The proposed methods are evaluated in terms of accuracy, number of features, execution time, false alarm rate, and detection rate. The results showed that the proposed GWOSVM-IDS with seven wolves outperforms the other proposed and comparative algorithms.
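
A common way to realise such a wrapper is to score each binary wolf position (a feature mask) by the accuracy of an SVM trained on the selected features. The scikit-learn sketch below illustrates only that fitness evaluation on synthetic stand-in data; it is not the authors' GWOSVM-IDS implementation, and the penalty weight is an assumption:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Placeholder data standing in for preprocessed NSL-KDD records.
    X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

    def fitness(mask, alpha=0.98):
        """Score a binary feature mask: favour accuracy, lightly penalise feature count."""
        if mask.sum() == 0:
            return 0.0
        acc = cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()
        return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

    rng = np.random.default_rng(0)
    wolves = rng.integers(0, 2, size=(7, X.shape[1]))   # 7 wolves, as in the best-performing setting
    scores = np.array([fitness(w) for w in wolves])
    print("best wolf score:", scores.max(), "| features used:", wolves[scores.argmax()].sum())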

118 citations


Journal ArticleDOI
TL;DR: The methods in this paper effectively improve the model's robustness and grasp success rate by adding small-scale anchor values for detecting the positions of small-area grasp targets.
Abstract: To ensure stable gripping performance of a manipulator in a dynamic environment, a target-object grasp-setting model based on a candidate-region proposal network is established, with a multi-target object and anchor-frame generation strategy that overcomes external environmental interference factors such as mutual interference between objects and changes in illumination. The success rate of model detection is improved by adding small-scale anchor values for detecting the positions of small-area grasp targets. Further, a 94.3% grasp-detection success rate is achieved on multi-target detection datasets using the information fusion of color images and depth images. The methods in this paper effectively improve the model's robustness and grasp success rate.

106 citations


Journal ArticleDOI
TL;DR: An invariant-feature-based approach is presented that performs low-rate attack detection and improves on the performance of existing methods for detecting low-rate attacks under invariant network conditions.
Abstract: The problem of low-rate attack detection has been well studied in different situations; however, existing methods struggle to achieve high performance. Multimedia transmission focuses on transmitting video and audio, which demands high bandwidth. No existing algorithm detects low-rate attacks under invariant network conditions. To address this issue, an invariant-feature-based approach is presented in this paper. The method maintains network features such as routes, bandwidth conditions, and traffic. Based on these features, a set of routes is identified for each data transmission. Low-rate attack detection is performed at the reception of each packet, and data transmission is performed using cooperative routing. From the packet features and the route being followed, the method identifies the class of route and the traffic and bandwidth conditions of the route. Using these features, the method computes a Network Transmission Support (NTS) measure. Based on the NTS value, the method performs low-rate attack detection and improves performance.

100 citations


Journal ArticleDOI
TL;DR: A Semantic Substance Extraction model using OpenCV is proposed for organizing video resources and results are 78% faster than content extraction using existing fuzzy and neural methods.
Abstract: With the rise of crime all over the world, video surveillance is gaining more significance day by day. Presently, monitoring videos is done manually. If a crime occurs in a city, finding the relevant sequence or event requires playing the entire video, after which searching and processing must be done manually. Due to the lack of human resources, it is necessary to develop a new video analytics framework to perform higher-level tasks in semantic content extraction. Manual processing of video is wasteful, subjective, and expensive, thereby limiting search capability, so it is necessary to model a framework for extracting objects from video data. A Semantic Substance Extraction model using OpenCV is proposed for organizing video resources. Video analytics for semantic substance extraction is an effort to use real-time, publicly available data to improve the prediction of moving objects from video streams. Background separation and Haar cascade algorithms are used in this model to perform video analytics. This method achieved a detection precision of 84.11% and a recall of 50.27%, and content extraction is 78% faster than with existing fuzzy and neural methods.
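
Both building blocks named in the abstract are available directly in OpenCV; a minimal sketch of combining them on a video stream is shown below. The input file name, cascade choice (a frontal-face cascade), and overlap threshold are illustrative assumptions, not the paper's configuration:

    import cv2

    # Background separation plus a pretrained Haar cascade (frontal face used here for illustration).
    bg_sub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture("surveillance.mp4")   # assumed input video
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = bg_sub.apply(frame)                        # moving-object mask
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in objects:
            # keep detections that overlap foreground motion
            if cv2.countNonZero(fg_mask[y:y + h, x:x + w]) > 0.2 * w * h:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("semantic substance", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()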

99 citations


Journal ArticleDOI
TL;DR: Compared to state-of-the-art IoT-based farming methods, CL-IoT reduces energy consumption, communication overhead, and end-to-end delay to a notable extent and maximizes network throughput.
Abstract: The Internet of Things (IoT) for intelligent manufacturing in Smart Farming (SF) has gained significant attention from researchers seeking to automate various farming applications. Sensors and actuators are deployed across the farm, through which farmers receive periodic farm information related to temperature, soil moisture, light intensity, water use, and so on. Clustering-based methods are proven energy-efficient solutions for Wireless Sensor Networks (WSNs). However, given the long-distance communications and scalable networks of IoT-enabled SF, present clustering solutions are not feasible and suffer from high delay and latency for various SF applications. To address the requirements of SF applications, an efficient and scalable protocol for remote monitoring and decision making on farms in rural regions, called the CL-IoT protocol, is proposed. Cross-layer-based clustering and routing algorithms are designed to reduce network communication delay, latency, and energy consumption. A cross-layer-based optimal Cluster Head (CH) selection solution is proposed to overcome the energy-asymmetry problem in WSNs. Parameters from different layers of each sensor, such as the physical, medium access control (MAC), and network layers, are used to evaluate and select the optimal CH and to achieve efficient data transmission. A nature-inspired algorithm with a novel probabilistic decision rule serving as the fitness function is proposed to discover the optimal route for data transmission. The performance of the CL-IoT protocol is analyzed using NS2, considering energy-efficiency, computational-efficiency, and QoS-efficiency factors. Compared to state-of-the-art IoT-based farming methods, CL-IoT reduces energy consumption, communication overhead, and end-to-end delay to a notable extent and maximizes network throughput.

97 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used an integrated algorithm to determine the neural network input coefficients and compared the proposed algorithm with other algorithms such as ant colony and invasive weed optimization for performance evaluation.
Abstract: Artificial intelligence techniques are used extensively in computing for training, forecasting, and evaluation purposes. Among these techniques, the artificial neural network (ANN) is widely used for developing prediction models. ANNs can be trained with various meta-heuristic algorithms, including approximation methods, and such algorithms can be helpful in determining the neural network input coefficients. The main goal of the presented study is to train the neural network using meta-heuristic approaches and to enhance the precision of the perceptron neural network. In this article, we used an integrated algorithm to determine the neural network input coefficients. The proposed algorithm was then compared with other algorithms, such as ant colony optimization and invasive weed optimization, for performance evaluation. The results reveal that the proposed algorithm converges better to the neural network coefficients than the existing algorithms. Moreover, the proposed method reduced the prediction error of the neural network.
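
As a rough illustration of training a small perceptron with a population-based search instead of backpropagation, the sketch below optimises the flattened weight vector of a one-hidden-layer network with a simple evolutionary loop. The network size, data, and update rule are assumptions for illustration, not the article's integrated algorithm:

    import numpy as np
    from sklearn.datasets import make_regression

    X, y = make_regression(n_samples=200, n_features=4, noise=5.0, random_state=0)
    n_hidden = 6
    n_weights = X.shape[1] * n_hidden + n_hidden           # W1 and w2 (biases omitted for brevity)

    def predict(w, X):
        W1 = w[: X.shape[1] * n_hidden].reshape(X.shape[1], n_hidden)
        w2 = w[X.shape[1] * n_hidden:]
        return np.tanh(X @ W1) @ w2

    def mse(w):
        return float(np.mean((predict(w, X) - y) ** 2))

    rng = np.random.default_rng(0)
    pop = rng.normal(0, 1, (40, n_weights))                 # candidate weight vectors
    for _ in range(300):
        errs = np.array([mse(w) for w in pop])
        elite = pop[np.argsort(errs)[:10]]                  # keep the 10 best networks
        # refill the population with mutated copies of the elite
        pop = np.vstack([elite, elite[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, n_weights))])

    best = pop[np.argmin([mse(w) for w in pop])]
    print("final training MSE:", mse(best))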

93 citations


Journal ArticleDOI
TL;DR: In this paper, a hybrid methodology based on CRITIC and EDAS methods with Fermatean fuzzy sets (FFSs) is introduced to solve the S3PRLP selection problem in which the attributes and decision makers' weights are completely unknown.
Abstract: In today's world, the demand for sustainable third-party reverse logistics providers (S3PRLPs) has become an increasingly important issue for industries seeking improved customer service, cost reduction, and sustainability. However, the assessment and selection of the right S3PRLP is a complex, uncertain decision-making problem due to the involvement of numerous conflicting attributes, the imprecision of human judgment, and a lack of information. Recently, the Fermatean fuzzy set (FFS) has been recognized as one of the suitable tools to tackle uncertain and inaccurate information. In this paper, we introduce a hybrid methodology based on the CRITIC and EDAS methods with Fermatean fuzzy sets (FFSs) to solve the S3PRLP selection problem in which the attribute and decision-maker weights are completely unknown. In this framework, the CRITIC approach is applied to calculate the attribute weights and the EDAS method is used to evaluate the priority order of the S3PRLP options. To do this, a new improved generalized score function (IGSF) is developed, together with its elegant properties. A formula is also given to calculate the decision makers' weights based on the developed IGSF. Next, the developed framework is applied to a case study of the S3PRLP selection problem with Fermatean fuzzy information, which elucidates the usefulness and practicality of the proposed method. Finally, a comparative study is conducted to show the strength of the introduced framework relative to extant approaches. The outcomes of the work confirm that the introduced approach is more feasible and is well-consistent with the other extant approaches.
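
The CRITIC part of the framework can be illustrated with the standard crisp formulation, where each attribute weight combines its contrast intensity (standard deviation) with its conflict with the other attributes (one minus the pairwise correlations). The small decision matrix below is made up for illustration, and the paper's Fermatean fuzzy score-function step is omitted:

    import numpy as np

    # Hypothetical normalised decision matrix: rows = alternatives, columns = attributes.
    D = np.array([
        [0.7, 0.4, 0.9],
        [0.5, 0.8, 0.6],
        [0.9, 0.3, 0.4],
        [0.6, 0.7, 0.8],
    ])

    sigma = D.std(axis=0, ddof=1)            # contrast intensity of each attribute
    R = np.corrcoef(D, rowvar=False)         # pairwise correlations between attributes
    C = sigma * (1.0 - R).sum(axis=0)        # information content C_j = sigma_j * sum_k (1 - r_jk)
    w = C / C.sum()                          # CRITIC attribute weights
    print("CRITIC weights:", np.round(w, 3))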

93 citations


Journal ArticleDOI
TL;DR: Though the current intervention packages offer satisfactory results, a better treatment modality is proposed in this report and a study with 100 KOA affected subjects showed a significant improvement in their condition.
Abstract: Knee osteoarthritis (KOA) is one of the major degenerative and inflammatory diseases that affect a large number of people worldwide, especially the female population. As no proper remedy or treatment is available to date, people prefer non-invasive treatments to overcome this malady. Physiotherapy approaches have been recognized as beneficial to those affected. Though current intervention packages offer satisfactory results, a better treatment modality is proposed in this report. A study with 100 KOA-affected subjects who received knee isometrics and ankle rotator strengthening exercises for 4 weeks showed a significant improvement in their condition. The relief status was assessed by measuring cartilage volume with an MRI scan.

92 citations


Journal ArticleDOI
TL;DR: This work presents and assesses the power of various volumetric, sentiment, and social network approaches to predict crucial decisions from online social media platforms, and it suggests some future directions for election prediction using social media content.
Abstract: This work presents and assesses the power of various volumetric, sentiment, and social network approaches to predict crucial decisions from online social media platforms. The views of individuals play a vital role in the discovery of some critical decisions. Social media has been a well-known platform for voicing the feelings of the general population around the globe for over a decade. Sentiment analysis, or opinion mining, is a method used to mine the general population's views or feelings. In this respect, the forecasting of election results is an application of sentiment analysis aimed at predicting the outcome of an ongoing election by gauging the mood of the public through social media. This survey paper outlines the evaluation of sentiment analysis techniques and highlights the contributions of researchers to predicting election results through social media content. This paper also reviews studies that tried to infer the political stance of online users using social media platforms such as Facebook and Twitter. In addition, this paper highlights the research challenges associated with predicting election results and open issues related to sentiment analysis, and it suggests some future directions for election prediction using social media content.

Journal ArticleDOI
TL;DR: A general architecture is proposed for predicting disease in the healthcare industry using an improved SVM-radial basis kernel method, and the system is compared with other machine learning techniques in terms of accuracy, misclassification rate, precision, sensitivity, and specificity.
Abstract: In this digital world, data is an asset, and enormous amounts of data are generated in all fields. Data in the healthcare industry consists of patient information and disease-related information. This medical data, together with machine learning techniques, helps us analyse large amounts of data to find hidden patterns in diseases, to provide personalised treatment for patients, and to predict disease. In this work, a general architecture is proposed for predicting disease in the healthcare industry. The system was evaluated on reduced feature sets of the Chronic Kidney Disease, Diabetes, and Heart Disease datasets using an improved SVM-radial basis kernel method, and it was compared with other machine learning techniques, namely SVM-Linear, SVM-Polynomial, Random Forest, and Decision Tree, in R Studio. The performance of all these machine learning algorithms was evaluated with accuracy, misclassification rate, precision, sensitivity, and specificity. From the experimental results, the improved SVM-radial basis kernel technique produces accuracies of 98.3%, 98.7%, and 89.9% on the Chronic Kidney Disease, Diabetes, and Heart Disease datasets, respectively.
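
The paper's experiments were run in R Studio; purely as a language-neutral illustration of the same kind of pipeline (an RBF-kernel SVM on a reduced feature set, scored on accuracy, sensitivity, and specificity), a scikit-learn sketch on synthetic stand-in data is given below. It is not the authors' code, and the data are not the clinical datasets used in the paper:

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.metrics import accuracy_score, confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for a clinical dataset (e.g. chronic kidney disease records).
    X, y = make_classification(n_samples=600, n_features=24, n_informative=10, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

    # Reduced feature set + RBF ("radial basis") kernel SVM.
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=12),
                          SVC(kernel="rbf", C=10, gamma="scale"))
    model.fit(X_tr, y_tr)
    y_pred = model.predict(X_te)

    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    print("accuracy   :", round(accuracy_score(y_te, y_pred), 3))
    print("sensitivity:", round(tp / (tp + fn), 3))
    print("specificity:", round(tn / (tn + fp), 3))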

Journal ArticleDOI
TL;DR: Simulation results demonstrate the effectiveness and advantages of the Balanced CA-SVM model used in machine learning and further promises the scope for improvement as more and more relevant attributes can be used in predicting the dependent variables.
Abstract: Rainfall prediction is important for the meteorological department, as it is closely associated with our environment and human life. The accuracy of rainfall prediction is of great importance for countries like India, whose economy depends on agriculture. Because of the dynamic nature of the atmosphere, statistical techniques often fail to predict rainfall accurately. A support vector machine (SVM) finds an optimal boundary, also known as a hyperplane, which separates the samples (examples in a dataset) of different classes by a maximum margin. The proposed model uses a dynamic integrated model for exploring and learning from large datasets. A balanced communication-avoiding support vector machine (CA-SVM) prediction model is proposed to achieve better performance and accuracy with a limited number of iterations and without error. A rainfall dataset is used for performance evaluation. The proposed model moves from independent samples to integrated samples without any collision in prediction. The proposed algorithm achieves 89% accuracy, higher than the existing algorithms. The simulations show that, other things being equal, the proposed Balanced CA-SVM has much better accuracy than the local learning model on the experimental data. The simulation results also demonstrate the effectiveness and advantages of the Balanced CA-SVM model for machine learning and promise further improvement as more relevant attributes are used to predict the dependent variables.

Journal ArticleDOI
TL;DR: A novel distributed ensemble-design-based IDS using fog computing is proposed, which combines k-nearest neighbors, XGBoost, and Gaussian naive Bayes as first-level individual learners; the predictions from the first level are then used by a Random Forest for final classification.
Abstract: With the development of the internet of things (IoT), the capabilities of computing, networking infrastructure, data storage, and management have come very close to the edge of networks. This has accelerated the necessity of the fog computing paradigm. Due to the availability of the Internet, most of our business operations are integrated with the IoT platform. Fog computing has enhanced the strategy of collecting and processing huge amounts of data. On the other hand, attacks and malicious activities have adverse consequences for the development of IoT, fog, and cloud computing, which has led to the development of many security models using fog computing to protect the IoT network. Therefore, for a dynamic and highly scalable IoT environment, a distributed-architecture-based intrusion detection system (IDS) is required that can distribute the existing centralized computing to local fog nodes and can efficiently detect modern IoT attacks. This paper proposes a novel distributed ensemble-design-based IDS using fog computing, which combines k-nearest neighbors, XGBoost, and Gaussian naive Bayes as first-level individual learners. At the second level, the prediction results obtained from the first level are used by a Random Forest for final classification. Most of the existing proposals are tested using the KDD99 or NSL-KDD datasets; however, these datasets are obsolete and lack modern IoT-based attacks. In this paper, UNSW-NB15 and an actual IoT-based dataset, namely DS2OS, are used for verifying the effectiveness of the proposed system. The experimental results revealed that the proposed distributed IDS with UNSW-NB15 can achieve detection rates of up to 71.18% for Backdoor, 68.98% for Analysis, 92.25% for Reconnaissance, and 85.42% for DoS attacks. Similarly, with the DS2OS dataset, the detection rate is up to 99.99% for most of the attack vectors.
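
The two-level ensemble described (k-NN, XGBoost, and Gaussian naive Bayes feeding a Random Forest meta-learner) maps naturally onto a stacking classifier; the sketch below shows that structure on placeholder data, not the authors' distributed fog deployment, and assumes the xgboost package is installed:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from xgboost import XGBClassifier

    # Placeholder flow records standing in for UNSW-NB15 / DS2OS features.
    X, y = make_classification(n_samples=2000, n_features=30, n_informative=12, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    level1 = [
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("xgb", XGBClassifier(n_estimators=100, eval_metric="logloss")),
        ("gnb", GaussianNB()),
    ]
    # Level-2 meta-learner: Random Forest on the level-1 predictions.
    ids = StackingClassifier(estimators=level1,
                             final_estimator=RandomForestClassifier(n_estimators=100),
                             passthrough=False, cv=3)
    ids.fit(X_tr, y_tr)
    print("hold-out accuracy:", round(ids.score(X_te, y_te), 3))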

Journal ArticleDOI
TL;DR: This paper proposes a secure wireless mechanism using blockchain technology that stores extracted records of each activity in a series of blocks, and simulation results of the proposed blockchain mechanism are evaluated against various security transmission processes to appraise the authenticity of IoT devices.
Abstract: Industrial IoT takes the advancement of organizations to the next level, enabling them to trace and manage every single activity of their entities. However, the interdependence, implementation, and communication among such wireless devices, also known as IoT devices, lead to various secrecy and privacy concerns. The use of smart sensors in industries assists and reduces human effort and increases quality, though at an enhanced production cost. Several attacks may be mounted by attackers hacking the activities of sensors, objects, or devices. In this paper, in order to preserve transparency and secure every activity of smart sensors, we propose a secure wireless mechanism using blockchain technology that stores extracted records of each activity in a series of blocks. The simulation results of the proposed blockchain mechanism are evaluated against various security transmission processes. In addition, the simulated results are scrutinized against the traditional mechanism and verified over metrics such as probability of attack success, ease of attack detection by the system, falsification attack, authentication delay, and probabilistic scenarios to appraise the authenticity of IoT devices.
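
As a toy illustration of the record-keeping idea (each sensor event stored in a block whose hash covers the previous block, so tampering is detectable), a minimal hash-chained ledger is sketched below; it is not the paper's protocol and omits consensus, networking, and signatures:

    import hashlib
    import json
    import time

    def block_hash(block):
        """Hash the block contents (including the previous hash) deterministically."""
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain, record):
        prev = chain[-1]["hash"] if chain else "0" * 64
        block = {"index": len(chain), "time": time.time(), "record": record, "prev": prev}
        block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
        chain.append(block)

    def verify(chain):
        for i, b in enumerate(chain):
            body = {k: v for k, v in b.items() if k != "hash"}
            if b["hash"] != block_hash(body) or (i > 0 and b["prev"] != chain[i - 1]["hash"]):
                return False
        return True

    ledger = []
    add_block(ledger, {"sensor": "temp-01", "value": 72.4})
    add_block(ledger, {"sensor": "valve-07", "state": "open"})
    print("ledger valid:", verify(ledger))
    ledger[0]["record"]["value"] = 99.9          # tamper with a stored sensor reading
    print("ledger valid after tampering:", verify(ledger))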

Journal ArticleDOI
TL;DR: A combination of convolutional neural network features with a support vector machine (SVM) is used for classification of the medical images and achieves an overall classification accuracy better than the state-of-the-art method.
Abstract: Automated tumor characterization has a prominent role in the computer-aided diagnosis (CAD) system for the human brain. Despite being a well-studied topic, CAD of brain tumors poses severe challenges in some specific aspects. One such challenging problem is the category-based classification of brain tumors among glioma, meningioma, and pituitary tumors using magnetic resonance imaging (MRI) images. The emergence of deep learning and machine learning algorithms has addressed image classification tasks with promising results. But an associated limitation of medical image classification is the small size of medical image databases, which in turn limits the availability of medical images for training deep neural networks. To mitigate this challenge, we adopt a combination of convolutional neural network (CNN) features with a support vector machine (SVM) for classification of the medical images. The fully automated system is evaluated using the Figshare open dataset containing MRI images for the three types of brain tumors. A CNN is designed to extract features from brain MRI images, and for enhanced performance, a multiclass SVM is used with the CNN features. Testing and evaluation of the integrated system followed a fivefold cross-validation procedure. The proposed model attained an overall classification accuracy of 95.82%, better than the state-of-the-art method. Extensive experiments are performed on other brain MRI datasets to ascertain the improved performance of the proposed system. When the amount of available training data is small, the SVM classifier is observed to perform better than the softmax classifier on the CNN features. Compared to transfer learning-based classification, the adopted CNN-SVM strategy has lower computation and memory requirements.

Journal ArticleDOI
TL;DR: This article gives a thorough overview of recent literature on the use of neural networks in intrusion detection systems, including surveys and new method proposals.
Abstract: In recent years, advancements in the field of artificial intelligence (AI) have gained huge momentum due to the worldwide application of this technology by industry. One of the crucial areas of AI is neural networks (NNs), which enable commercial use of functionality previously not accessible with computers. The intrusion detection system (IDS) is one of the domains in which neural networks are widely tested for improving overall computer network security and data privacy. This article gives a thorough overview of recent literature on the use of neural networks in intrusion detection systems, including surveys and new method proposals. Short tutorial descriptions of neural network architectures, intrusion detection system types, and training datasets are also provided.

Journal ArticleDOI
TL;DR: In this paper, blockchain smart contracts are designed to provide a regulated solution to the needs of patients, physicians, and health service providers and to build a smart e-health system.
Abstract: Technology has led to widespread health records that are increasingly digitized into electronic health records (EHRs). In the present scenario, patients often have health records across multiple hospitals in a geographical area. In such cases, the accessibility of health data from one provider to another at the right time remains a major challenge, especially when patients in a critical condition need fragmented health records from multiple sources brought together into a single chain. In this paper, blockchain smart contracts are designed to provide a regulated solution to the needs of patients, physicians, and health service providers. The proposed system aims to exchange health information on a blockchain platform to build a smart e-health system. It introduces health models, namely immutable patient log creation with a Modified Merkle Tree data structure for secure storage and rapid access to health records, updating of medical records, health information exchange between different providers, and viewership contracts on the peer-to-peer blockchain network. In this system, the blockchain is a clinical data repository that provides patients a complete, distributed ledger containing records of all events and seamless access to their electronic health records through healthcare providers. As an important feature, this system provides high security and integrity through cryptographic hash functions. A number of trials were carried out to check the effectiveness of the proposed system, and its qualitative and quantitative metrics were measured to evaluate resource usage, transactions per second, and transaction latency.
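
The paper does not spell out its Modified Merkle Tree construction, but the standard Merkle-root computation it builds on can be sketched in a few lines; the record contents and hashing scheme below are illustrative only:

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        """Compute a standard Merkle root over a list of record byte-strings."""
        level = [h(leaf) for leaf in leaves]
        if not level:
            return b""
        while len(level) > 1:
            if len(level) % 2 == 1:
                level.append(level[-1])                 # duplicate last node on odd levels
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    records = [b"visit:2021-03-01:bp=120/80", b"lab:2021-03-02:hba1c=6.1", b"rx:2021-03-02:metformin"]
    print("Merkle root:", merkle_root(records).hex())
    # Any change to a single record changes the root, which is what makes the log tamper-evident.
    print("changed root:", merkle_root(records[:-1] + [b"rx:2021-03-02:insulin"]).hex())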

Journal ArticleDOI
TL;DR: In this article, the authors present a literature review and explore how 5G can enable or streamline intelligent automation in different industries and highlight its role in shaping the age of unlimited connectivity, intelligent automation, and industry digitization.
Abstract: The mobile industry is developing and preparing to deploy fifth-generation (5G) networks. The evolving 5G networks are becoming more readily available and are a significant driver of the growth of IoT and other intelligent automation applications. 5G's lightning-fast connections and low latency are needed for advances in intelligent automation: the Internet of Things (IoT), Artificial Intelligence (AI), driverless cars, digital reality, blockchain, and future breakthroughs we haven't even thought of yet. The advent of 5G is more than just a generational step; it opens a new world of possibilities for every tech industry. The purpose of this paper is to conduct a literature review and explore how 5G can enable or streamline intelligent automation in different industries. This paper reviews the evolution and development of various generations of mobile wireless technology, underscores the importance of revolutionary 5G networks, reviews their key enabling technologies, examines their trends and challenges, explores their applications in different manufacturing industries, and highlights their role in shaping the age of unlimited connectivity, intelligent automation, and industry digitization.

Journal ArticleDOI
TL;DR: A multi-objective path planning algorithm is proposed that optimizes a path through a hybridization of the Grey Wolf Optimizer and particle swarm optimization, minimizing path distance and smoothing the path, and it is shown to overcome the shortcomings of other conventional techniques.
Abstract: As path planning is an NP-hard problem, it can be solved by multi-objective algorithms. In this article, we propose a multi-objective path planning algorithm that consists of three steps. (1) The first step optimizes a path through a hybridization of the Grey Wolf Optimizer and particle swarm optimization, minimizing path distance and smoothing the path. (2) In the second step, all optimal and feasible points generated by the PSO-GWO algorithm are passed to a local search technique that converts any infeasible point into a feasible solution. (3) The last step relies on a collision detection and avoidance algorithm, in which the mobile robot detects the presence of obstacles within its sensing circle and then avoids them. The proposed method is further improved by adding evolutionary mutation operators, which further improve path safety, length, and smoothness for the mobile robot. Simulations performed in numerous environments to test the feasibility of the proposed algorithm show that it produces a more feasible path with a shorter distance, thereby overcoming the shortcomings of other conventional techniques.
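
The two objectives named (path length and smoothness) are typically folded into a single fitness value that the hybrid optimizer minimises; the sketch below shows one such weighted cost for a waypoint path, where the weights, obstacle layout, and penalty form are illustrative assumptions rather than the paper's exact formulation:

    import numpy as np

    obstacles = np.array([[4.0, 4.0], [7.0, 2.0]])   # assumed circular obstacles (centres)
    r_safe = 1.0                                      # assumed safety radius

    def path_cost(waypoints, w_len=1.0, w_smooth=0.5, w_obs=50.0):
        """Weighted cost: total length + turning-angle penalty + obstacle-proximity penalty."""
        p = np.asarray(waypoints, dtype=float)
        seg = np.diff(p, axis=0)
        length = np.linalg.norm(seg, axis=1).sum()
        # smoothness term: sum of angles between consecutive segments
        cosang = np.einsum("ij,ij->i", seg[:-1], seg[1:]) / (
            np.linalg.norm(seg[:-1], axis=1) * np.linalg.norm(seg[1:], axis=1) + 1e-9)
        smooth = np.arccos(np.clip(cosang, -1, 1)).sum()
        # penalty for waypoints inside the safety radius of any obstacle
        d = np.linalg.norm(p[:, None, :] - obstacles[None, :, :], axis=2)
        collisions = np.maximum(0.0, r_safe - d).sum()
        return w_len * length + w_smooth * smooth + w_obs * collisions

    path = [(0, 0), (2, 1), (4, 2.5), (6, 4), (9, 5)]
    print("path cost:", round(path_cost(path), 3))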

Journal ArticleDOI
TL;DR: A comparative study of interpolation methods and mean variation is presented that can be used for further study of spatial climatic conditions and as a decision support system for society.
Abstract: Among ecological changes on the planet, precipitation is the most erratic parameter, and it causes both minor and major climatic changes. The data are analyzed in time and space, and the spatial and temporal patterns are presented in the required manner. Unpredictable data can be predicted using spatial interpolation techniques. The major spatial interpolation methods are categorized by simple and complex mathematical modeling; such studies involve interpolation techniques such as ordinary kriging, inverse distance weighting (IDW), and splines, and most of them concern hydrological modeling and deliver hydrological mapping (Younghun Jung and Venkatesh Merwade 2015). Various techniques are used for hydrological mapping. This study compares statistical interpolation of precipitation and temperature. Cumulative values in the dataset from 2001 to 2013 are used, and monthly and yearly mean square values are calculated using the spatial interpolation techniques (Dhamodarn and Shruthi 2016). The main objective of the study is to identify topographical features and to provide high-resolution climate maps using mathematical modeling. Comparing the above interpolation methods produces high-resolution climate maps and geographical patterns, and the study relates climatic change to environmental events such as disasters, floods, and landslides. The comparative study of interpolation methods and mean variation can be used for further study of spatial climatic conditions and as a decision support system for society. The study area is the Adyar river, situated at latitude 13.0012° N, longitude 80.2565° E. The inverse distance weighted interpolation method is analyzed using the geostatistical tool in ArcGIS.
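
The inverse distance weighted interpolation used in the study has a simple closed form: the estimate at an unsampled point is a distance-weighted average of the station values. The sketch below uses made-up rainfall stations, and the power parameter 2 is the usual default rather than necessarily the one used in the ArcGIS analysis:

    import numpy as np

    # Hypothetical rain-gauge stations: (longitude, latitude, cumulative rainfall in mm).
    stations = np.array([
        [80.20, 13.00, 820.0],
        [80.26, 13.01, 905.0],
        [80.30, 12.97, 760.0],
        [80.23, 12.95, 840.0],
    ])

    def idw(xy, pts, power=2.0):
        """Inverse distance weighted estimate at location xy from sampled points pts."""
        d = np.linalg.norm(pts[:, :2] - np.asarray(xy), axis=1)
        if np.any(d < 1e-12):                      # exactly on a station: return its value
            return pts[d.argmin(), 2]
        w = 1.0 / d ** power
        return float(np.sum(w * pts[:, 2]) / np.sum(w))

    print("estimated rainfall near the Adyar river:", round(idw((80.2565, 13.0012), stations), 1), "mm")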

Journal ArticleDOI
TL;DR: This work investigates the rumor detection problem by exploring different Deep Learning models with emphasis on considering the contextual information in both directions: forward and backward, in a given text, effectively classifying the tweet into rumors and non-rumors.
Abstract: The widespread propagation of rumors and fake news has seriously threatened the credibility of microblogs. Previous works often focused on maintaining the preceding state without considering subsequent context information, and most early works used classical feature representation schemes followed by a classifier. We investigate the rumor detection problem by exploring different deep learning models, with emphasis on considering the contextual information in both directions, forward and backward, in a given text. The proposed system is based on a Bidirectional Long Short-Term Memory network with a Convolutional Neural Network, effectively classifying tweets into rumors and non-rumors. Experimental results show that the proposed method outperformed the baseline methods with 86.12% accuracy. Furthermore, statistical analysis also shows the effectiveness of the proposed model over the comparison methods.
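
A minimal Keras definition of the described architecture (convolution over token embeddings followed by a bidirectional LSTM and a binary rumor/non-rumor output) is sketched below; the vocabulary size, sequence length, and layer widths are placeholder assumptions, not the paper's hyperparameters:

    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB, MAXLEN = 20000, 60          # assumed vocabulary size and tweet length (tokens)

    model = keras.Sequential([
        keras.Input(shape=(MAXLEN,)),
        layers.Embedding(VOCAB, 128),                             # token embeddings
        layers.Conv1D(64, 5, activation="relu", padding="same"),  # local n-gram features
        layers.MaxPooling1D(2),
        layers.Bidirectional(layers.LSTM(64)),                    # forward + backward context
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),                    # rumor vs non-rumor
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()
    # Training would then be e.g.: model.fit(X_train, y_train, validation_split=0.1, epochs=5)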

Journal ArticleDOI
TL;DR: The proposed CNN model can predict COVID-19 patients with high accuracy and can help automate screening of patients for COVID-19 with minimal contact, especially in areas where the influx of patients cannot be handled by the available medical staff.
Abstract: The COVID-19 pandemic is spreading widely over the entire world and has established significant community spread. Fostering a prediction system can help officials respond properly and quickly. Medical imaging such as X-ray and computed tomography (CT) can play an important role in the early prediction of COVID-19 patients and thus in their timely treatment. The X-ray images from COVID-19 patients reveal pneumonia infections that can be used to identify COVID-19 patients. This study presents the use of a Convolutional Neural Network (CNN) that extracts features from chest X-ray images for prediction. Three filters are applied to obtain the edges from the images, which help isolate the desired segmented target containing the infected area of the X-ray. To cope with the small size of the training dataset, Keras' ImageDataGenerator class is used to generate ten thousand augmented images. Classification is performed with two, three, and four classes, where the four-class problem has X-ray images from COVID-19 patients, normal people, viral pneumonia, and bacterial pneumonia. Results demonstrate that the proposed CNN model can predict COVID-19 patients with high accuracy. It can help automate screening of patients for COVID-19 with minimal contact, especially in areas where the influx of patients cannot be handled by the available medical staff. The performance comparison of the proposed approach with VGG16 and AlexNet shows that the classification results for two and four classes are competitive and identical for the three-class classification.
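
The augmentation step mentioned in the abstract relies on Keras' ImageDataGenerator; a short sketch of how such augmented chest X-ray batches are typically produced is given below, where the directory layout and the specific augmentation parameters are assumptions for illustration:

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Augmentation settings are illustrative; the paper only states that augmented images were generated.
    datagen = ImageDataGenerator(
        rescale=1.0 / 255,
        rotation_range=10,
        width_shift_range=0.1,
        height_shift_range=0.1,
        zoom_range=0.1,
        horizontal_flip=False,       # flipping chest X-rays left/right is usually avoided
        validation_split=0.2,
    )

    # Assumed directory layout: xray_data/<class_name>/*.png with one folder per class.
    train_gen = datagen.flow_from_directory(
        "xray_data", target_size=(224, 224), color_mode="grayscale",
        class_mode="categorical", batch_size=32, subset="training")
    val_gen = datagen.flow_from_directory(
        "xray_data", target_size=(224, 224), color_mode="grayscale",
        class_mode="categorical", batch_size=32, subset="validation")

    print(train_gen.class_indices)   # e.g. {'bacterial': 0, 'covid': 1, 'normal': 2, 'viral': 3}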

Journal ArticleDOI
TL;DR: In this article, the authors used a deep convolutional neural network, VGG19, together with various handcrafted feature extraction methods, i.e., SIFT, SURF, ORB, and the Shi-Tomasi corner detector algorithm.
Abstract: Image classification is receiving increasing attention in the area of computer vision. During the past few years, a lot of research has been done on image classification using classical machine learning and deep learning techniques, and deep learning-based techniques have given outstanding results. The performance of a classification system depends on the quality of the features extracted from an image: the better the quality of the extracted features, the higher the accuracy. Although numerous deep learning-based methods have shown strong performance in image classification, various challenges still prevent deep learning methods from extracting all the important information from an image, which reduces overall classification accuracy. The goal of the present research is to improve image classification performance by combining deep features extracted using a popular deep convolutional neural network, VGG19, with various handcrafted feature extraction methods, i.e., SIFT, SURF, ORB, and the Shi-Tomasi corner detector algorithm. The extracted features are then classified using various machine learning classification methods, i.e., Gaussian Naive Bayes, Decision Tree, Random Forest, and eXtreme Gradient Boosting (XGBClassifier). The experiment is carried out on the benchmark dataset Caltech-101. The experimental results indicate that Random Forest using the combined features gives 93.73% accuracy and outperforms the other classifiers and methods proposed by other authors. The paper concludes that a single feature extractor, whether shallow or deep, is not enough to achieve satisfactory results, so a combined approach using deep learning features and traditional handcrafted features is better for image classification.
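
One way to realise the "deep plus handcrafted" combination is to concatenate a pooled VGG19 feature vector with an aggregated local-descriptor vector (ORB is used here because it ships freely with OpenCV) and feed the result to a Random Forest. This sketch illustrates the idea on a single image path, not the authors' exact pipeline; the aggregation by mean descriptor is an assumption:

    import cv2
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input

    # VGG19 as a fixed deep feature extractor (global average pooled -> 512-d vector).
    vgg = VGG19(weights="imagenet", include_top=False, pooling="avg")
    orb = cv2.ORB_create(nfeatures=200)

    def combined_features(path):
        img = cv2.imread(path)                                   # BGR image from disk
        rgb = cv2.cvtColor(cv2.resize(img, (224, 224)), cv2.COLOR_BGR2RGB)
        deep = vgg.predict(preprocess_input(rgb[None].astype("float32")), verbose=0)[0]
        _, desc = orb.detectAndCompute(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
        hand = desc.mean(axis=0) if desc is not None else np.zeros(32)   # aggregated 32-d ORB descriptor
        return np.concatenate([deep, hand])                      # 512 + 32 = 544-d feature vector

    # Hypothetical training over labelled image paths (e.g. Caltech-101 files):
    # X = np.stack([combined_features(p) for p in image_paths]); y = labels
    # clf = RandomForestClassifier(n_estimators=300).fit(X, y)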

Journal ArticleDOI
TL;DR: A composite deep neural network architecture with gated-attention mechanism for automated diagnosis of diabetic retinopathy using feature descriptors obtained from multiple pre-trained deep Convolutional Neural Networks (CNNs) to represent color fundus retinal images.
Abstract: Diabetic Retinopathy (DR) is a microvascular complication caused by long-term diabetes mellitus. Unidentified diabetic retinopathy leads to permanent blindness, and early identification of this disease requires a frequent, complex diagnostic procedure that is expensive and time consuming. In this article, we propose a composite deep neural network architecture with a gated-attention mechanism for automated diagnosis of diabetic retinopathy. Feature descriptors obtained from multiple pre-trained deep Convolutional Neural Networks (CNNs) are used to represent color fundus retinal images. Spatial pooling methods are introduced to obtain reduced versions of these representations without losing much information. The proposed composite DNN learns independently from each of these reduced representations through different channels, which contributes to improving model generalization. In addition, the model includes gated-attention blocks that allow it to emphasize lesion regions of the retinal images while giving reduced attention to non-lesion regions. Our experiments on the APTOS-2019 Kaggle blindness detection challenge reveal that the proposed approach improves performance over the existing best models. Our empirical studies also reveal that the proposed approach yields more generalised predictions with multi-modal representations than with uni-modal representations. The proposed composite deep neural network model recorded an accuracy of 82.54% (an improvement of 2%) and a Kappa score of 79 (an improvement of 9 points) for diabetic retinopathy severity level prediction.

Journal ArticleDOI
TL;DR: To solve the problems of mismatching and structure disconnection in exemplar-based image inpainting, an image completion algorithm based on an improved total variation minimization method, referred to as ETVM, is proposed.
Abstract: To solve the problems of mismatching and structure disconnection in exemplar-based image inpainting, an image completion algorithm based on an improved total variation minimization method, referred to as ETVM, is proposed in this paper. The structure of the image is extracted using the improved total variation minimization method, and the known image information is used more fully than in existing methods. A robust filling mechanism is achieved by following the direction of the image structure, and it introduces less noise than the original image. The priority term is redefined to eliminate the product effect and to ensure that the data term always remains effective. The priority of the patch to repair and the best matching patch are determined by the similarity of the known information and the consistency of the unknown information in the repairing patch. Comparisons with cognitive computing image algorithms show that the proposed method selects better candidate pixels to fill with and achieves better global coherence of image completion than the others. The inpainting results on noisy images show that the proposed method is robust and also produces good results for noisy images.

Journal ArticleDOI
TL;DR: The proposed algorithm improves classification efficiency and reduces error rates, and it calculates the impact of each object from the rules based on how the fuzzy rules are generated.
Abstract: Association rule based classification is important in disease prediction owing to its high predictability. To deal with sensitive data, we propose an algorithm using a fuzzy inference set. Association rule mining is improved further by generating associative rules for each item of the data set. The ranking of an item in the data set is based on an estimated information mass value. The mass value represents the depth of the item in the data set and its class. Certain item sets are selected based on the mass values of the different associated items, and association rule mining is performed on the selected associative items. For each association rule generated, the method calculates the impact of each object from the rules based on how the fuzzy rules are generated. The fuzzy impact rules relate symptoms to diagnostic labels, and each class of disease possesses a disease influence measure that predicts how that class of disease has changed. The proposed algorithm improves classification efficiency and reduces error rates.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a Cloud-based Object Tracking and Behavior Identification System (COTBIS) that incorporates an edge computing capability framework at the gateway level; in smart healthcare applications, remote monitoring of patients and elderly people requires robust responses and alarm alerts from surveillance systems within the available bandwidth.
Abstract: Managing a distributed smart surveillance system is a major challenge due to the comprehensive aggregation and analysis of video information in the cloud. In smart healthcare applications, remote monitoring of patients and elderly people requires robust responses and alarm alerts from surveillance systems within the available bandwidth. To build a robust video surveillance system, fast responses and fast data analytics are needed among the connected devices deployed in a real-time cloud environment. Therefore, the proposed research work introduces the Cloud-based Object Tracking and Behavior Identification System (COTBIS), which incorporates an edge computing capability framework at the gateway level. It is an emerging research area of the Internet of Things (IoT) that can bring robustness and intelligence to distributed video surveillance systems by minimizing network bandwidth and the response time between wireless cameras and cloud servers. Further improvements are made by applying background subtraction and deep convolutional neural network algorithms to moving objects to detect and classify abnormal falling activity using rank polling. The proposed IoT-based smart healthcare video surveillance system using edge computing thus reduces network bandwidth and response time and significantly increases fall-behavior prediction accuracy compared to existing cloud-based video surveillance systems.

Journal ArticleDOI
TL;DR: This paper implements Chi-Square, Information Gain (IG), and Recursive Feature Elimination (RFE) feature selection techniques with ML classifiers namely Support Vector Machine, Naïve Bayes, Decision Tree Classifier, Random Forest Classifiers, k-nearest neighbours, Logistic Regression, and Artificial Neural Networks.
Abstract: The goal of securing a network is to protect the information flowing through it and to ensure the security of intellectual as well as sensitive data for the underlying application. To accomplish this goal, a security mechanism such as an Intrusion Detection System (IDS) is used, which analyzes the network traffic and extracts useful information for inspection. It identifies various patterns and signatures in the data and uses them as features for attack detection and classification. Various Machine Learning (ML) techniques are used to design IDSs for attack detection and classification. Not all features captured from network packets contribute to detecting or classifying an attack. Therefore, the objective of our research work is to study the effect of various feature selection techniques on the performance of an IDS. Feature selection techniques select relevant features and group them into subsets. This paper implements the Chi-Square, Information Gain (IG), and Recursive Feature Elimination (RFE) feature selection techniques with ML classifiers, namely Support Vector Machine, Naive Bayes, Decision Tree Classifier, Random Forest Classifier, k-nearest neighbours, Logistic Regression, and Artificial Neural Networks. The methods are evaluated on the NSL-KDD dataset, and a comparative analysis of the results is presented.
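
The three feature-selection techniques compared are all available in scikit-learn; the sketch below shows them side by side on placeholder data (the real experiments used NSL-KDD, which is not bundled with scikit-learn, and information gain is approximated here by mutual information):

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE, SelectKBest, chi2, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import MinMaxScaler

    # Placeholder traffic features standing in for NSL-KDD (chi2 needs non-negative inputs).
    X, y = make_classification(n_samples=1000, n_features=40, n_informative=15, random_state=0)
    X_pos = MinMaxScaler().fit_transform(X)

    chi_sel = SelectKBest(chi2, k=20).fit(X_pos, y)                       # Chi-Square
    ig_sel = SelectKBest(mutual_info_classif, k=20).fit(X, y)             # Information Gain (mutual information)
    rfe_sel = RFE(LogisticRegression(max_iter=1000), n_features_to_select=20).fit(X, y)  # RFE

    for name, sel in [("chi2", chi_sel), ("info gain", ig_sel), ("RFE", rfe_sel)]:
        print(name, "->", sel.get_support().nonzero()[0][:10], "...")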

Journal ArticleDOI
TL;DR: This work presents an IoT based home monitoring system, which deploys two ESP32 cameras for video sensing, and ensures secure data storage, such that the data leakage can be prevented and the consistency is maintained.
Abstract: The Internet of Things (IoT) has become an integral part of today's technological revolution, enhancing people's quality of life. The IoT paradigm makes the world smarter and is employed in numerous real-time applications ranging from healthcare to vehicular networks. Surveillance systems are yet another important application of IoT, and this work presents an IoT-based home monitoring system that deploys two ESP32 cameras for video sensing. It is not advisable to store the sensed data as such, since intruders may then infer the usual events or frequent visitors to the home. In order to ensure privacy, the cloud data storage must be secured, and the purpose of this article is to ensure secure data storage such that data leakage is prevented and consistency is maintained. The data to be stored are secured by enforcing a strict encryption scheme with Keccak and a chaotic sequence applied to the frames. The encrypted data are then stored on the cloud server. The performance of the proposed work is evaluated with respect to several performance measures, and the work is shown to be efficient.
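
A minimal illustration of the kind of scheme described, a Keccak-family hash (SHA-3) keying a logistic-map chaotic sequence that is XORed with the frame bytes, is sketched below. It is a toy construction for intuition only, not the authors' cipher and not a vetted cryptographic design:

    import hashlib
    import numpy as np

    def chaotic_keystream(key: bytes, n: int) -> np.ndarray:
        """Derive a logistic-map keystream of n bytes, seeded from a SHA-3 (Keccak) digest."""
        digest = hashlib.sha3_256(key).digest()
        x = (int.from_bytes(digest[:8], "big") % (10**8)) / 10**8 * 0.98 + 0.01   # seed in (0, 1)
        r = 3.99                                                                  # chaotic regime
        out = np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    def encrypt_frame(frame_bytes: bytes, key: bytes) -> bytes:
        ks = chaotic_keystream(key, len(frame_bytes))
        return bytes(np.frombuffer(frame_bytes, dtype=np.uint8) ^ ks)

    frame = b"\x10\x20\x30" * 100                      # stand-in for an ESP32 camera frame
    ct = encrypt_frame(frame, b"shared-secret")
    pt = encrypt_frame(ct, b"shared-secret")           # XOR stream cipher: the same call decrypts
    print("round-trip OK:", pt == frame)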