
Showing papers in "Scalable Computing: Practice and Experience in 2020"


Journal ArticleDOI
TL;DR: From the analysis of previous work on IoV networks, the utilization of artificial intelligence and machine learning concepts is a beneficial step toward the future of the IoV model.
Abstract: The new age of the Internet of Things (IoT) is motivating the advancement of traditional Vehicular Ad-Hoc Networks (VANETs) into the Internet of Vehicles (IoV). This paper is an overview of smart and secure communications to reduce traffic congestion using IoT-based VANETs, known as IoV networks. Studies and observations made in this paper suggest that combining IoT and VANET into a secure whole has rarely been practiced. IoV uses real-time data communication between vehicles and everything (V2X) through wireless communication devices based on fog/edge computing; therefore, it is considered an application of cyber-physical systems (CPS). Various modes of V2X communication with their connecting technologies are also discussed. This paper delivers a detailed introduction to the Internet of Vehicles (IoV) with current applications, discusses the architecture of IoV based on currently existing communication technologies and routing protocols, presents different issues in detail, provides several open research challenges, and reviews the trade-off between security and privacy in the area of IoV. From the analysis of previous work on IoV networks, we conclude that the utilization of artificial intelligence and machine learning concepts is a beneficial step toward the future of the IoV model.

18 citations


Journal ArticleDOI
TL;DR: This review article provides a detailed review of 52 research papers presenting the suggested routing protocols based on content-based, clustering-based, fuzzy-based, Routing Protocol for Low power and Lossy Networks (RPL), tree-based, and other approaches.
Abstract: The Internet of Things (IoT) comes with a perception of 'anything', 'anywhere' and provides interconnection among devices at a remarkable scale and speed. The prevalent intention of IoT is data transmission through the internet without the mediation of humans. An efficient routing protocol must be included in the IoT network for the accomplishment of its objectives and for securing data transmission. Accordingly, this survey presents various routing protocols for secure data communication in IoT to provide a clear vision, as the major issue in IoT networks is energy consumption. Therefore, there is a need for devising an effective routing scheme that provides superior performance over the other existing schemes in terms of energy consumption. Thus, this review article provides a detailed review of 52 research papers presenting the suggested routing protocols based on content-based, clustering-based, fuzzy-based, Routing Protocol for Low power and Lossy Networks (RPL), tree-based, and other approaches. Also, a detailed analysis and discussion are made concerning the parameters, simulation tool, year of publication, network size, evaluation metrics, and utilized protocols. Finally, the research gaps and issues of various conventional routing protocols are presented for guiding researchers toward a better contribution of routing protocols for secure IoT routing.

14 citations


Journal ArticleDOI
TL;DR: The task model presented in this paper solves the scheduling problem at the server level rather than the device level, and it performs better than common algorithms such as active monitoring, weighted round robin, and the throttled load balancer.
Abstract: Cloud computing provides applications with a pool of resources onto which tasks can be offloaded. But certain applications, like coordinated lane-change assistance for internet-connected cars, have strict time constraints, and it may not be possible to get the job done just by offloading tasks to the cloud. Fog computing helps in reducing latency: the computation is done in local fog servers instead of remote datacentres, and these fog servers are located close to the clients. To achieve better timing performance in fog computing, load balancing among these fog servers has to be performed in an efficient manner. The challenges in the proposed application include the high number of tasks, client mobility, and the heterogeneous nature of fog servers. We use mobility patterns of connected cars, and load balancing is done periodically among fog servers. The task model presented in this paper solves the scheduling problem at the server level rather than the device level. Finally, we present an optimization problem formulation for balancing the load and reducing deadline misses; the time required for running tasks in these cars is also minimized with the help of fog computing. The approach performs better than common algorithms such as active monitoring, weighted round robin, and the throttled load balancer.
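The optimization formulation itself is not given in the abstract; as a rough illustration of the server-level scheduling idea, a greedy baseline that assigns each task to the fog server able to finish it earliest and counts deadline misses might look like this (server speeds, task sizes, and deadlines are made-up values):

```python
def assign_tasks(tasks, servers):
    """tasks: list of (size, deadline); servers: list of speeds.
    Returns (assignment list, number of deadline misses)."""
    finish = [0.0] * len(servers)   # current finish time of each server's queue
    assignment, misses = [], 0
    for size, deadline in tasks:
        # completion time if the task were appended to each server's queue
        completion = [finish[i] + size / s for i, s in enumerate(servers)]
        best = min(range(len(servers)), key=lambda i: completion[i])
        finish[best] = completion[best]
        assignment.append(best)
        if finish[best] > deadline:
            misses += 1             # the task cannot meet its deadline
    return assignment, misses

# made-up workload: (work units, deadline) pairs and two server speeds
tasks = [(4, 3.0), (2, 2.0), (6, 5.0), (1, 1.5)]
servers = [2.0, 1.0]
assignment, misses = assign_tasks(tasks, servers)
```

A real fog scheduler would also fold in client mobility and server heterogeneity, which this sketch ignores.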

12 citations


Journal ArticleDOI
TL;DR: The proposed approach uses nearly 100 percent of users’ participation in the form of activities during navigation of the web site, and its performance is better than all other baseline systems in all aspects.
Abstract: The Internet is changing the method of selling and purchasing items; nowadays online trading replaces offline trading. The items offered by an online system can influence the buying behaviour of customers. The recommendation system is one of the basic tools for providing such an environment. Several techniques are used to design and implement recommendation systems. Every recommendation system passes through two phases: similarity computation among the users or items, and correlation between the target user and items. Collaborative filtering is a common technique used for designing such systems. The proposed system uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and the relationships among them; a knowledge graph is a labelled multidimensional directed graph that represents the relationships among the users and the items. Almost every existing recommendation system is based on one of feature, review, rating, or popularity of the items, in which users’ involvement is very little or none. The proposed approach uses nearly 100 percent of users’ participation in the form of activities during navigation of the web site. Thus, the system captures the users’ interest, which is beneficial for both seller and buyer. The proposed system relates categories of items, not just specific items, that may interest the users. We evaluate the effectiveness of this approach in comparison with baseline methods in the area of recommendation systems using three parameters, precision, recall, and NDCG, through online and offline evaluation studies with user data; its performance is better than all other baseline systems in all aspects.
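Of the three evaluation metrics mentioned, NDCG is the least self-explanatory; a minimal sketch of how it is commonly computed (linear-gain DCG normalized by the DCG of the ideal ranking; not code from the paper) is:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_rels):
    """DCG of the system's ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg(sorted(ranked_rels, reverse=True))
    return dcg(ranked_rels) / ideal_dcg if ideal_dcg > 0 else 0.0

perfect = ndcg([3, 2, 1])          # items already in ideal order -> 1.0
reversed_order = ndcg([1, 2, 3])   # worst ordering of the same items -> < 1.0
```

The logarithmic discount is what makes NDCG reward placing relevant items near the top, unlike plain precision and recall.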

12 citations


Journal ArticleDOI
TL;DR: This paper proposes to use generative adversarial networks for image steganography, including discriminative models to identify steganographic images during the training stage, which helps to reduce the error rate later during steganalysis.
Abstract: Image steganography aims at hiding information in a cover medium in an imperceptible way. While traditional steganography methods used invisible inks and microdots, the digital world started using image and video files for hiding secret content. Steganalysis is a closely related field for detecting hidden information in these multimedia files. Many steganography algorithms have been implemented and tested, but most of them fail during steganalysis. To overcome this issue, in this paper we propose to use generative adversarial networks for image steganography, including discriminative models to identify steganographic images during the training stage, which helps us reduce the error rate later during steganalysis. The proposed modified cycle Generative Adversarial Network (Mod Cycle GAN) algorithm is tested using the USC-SIPI database, and the experimental results were better when compared with the algorithms in the literature. Because the discriminator block evaluates image authenticity, we could modify the embedding algorithm until the discriminator could not identify the change made, thereby increasing robustness.
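The Mod Cycle GAN itself is beyond an abstract-sized sketch, but the classic least-significant-bit (LSB) embedding that steganalysis commonly targets, and that a discriminator learns to detect, can be illustrated in a few lines (toy pixel values, not the paper's method):

```python
def embed_lsb(pixels, bits):
    """Write each message bit into the least significant bit of a pixel."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the hidden bits back out."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]   # toy 8-pixel "image"
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, secret)
```

Each pixel changes by at most one intensity level, which is invisible to the eye yet statistically detectable; that statistical footprint is exactly what a trained discriminator exploits, and what adversarial training tries to erase.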

8 citations


Journal ArticleDOI
TL;DR: This paper looks into the advantages that deep learning approaches can bring by developing a framework to enhance prediction of heart-related diseases using ECG.
Abstract: Cardiovascular diseases can be controlled through earlier detection as well as risk evaluation and prediction. In this paper the application of deep learning methods for CVD diagnosis using ECG is addressed, and deep learning with Python is also discussed. A detailed analysis of related articles has been conducted. The results indicate that convolutional neural networks are the most widely used deep learning technique in CVD diagnosis. This paper looks into the advantages that deep learning approaches can bring by developing a framework to enhance prediction of heart-related diseases using ECG.

7 citations


Journal ArticleDOI
TL;DR: A deterministic novel energy-efficient fuzzy-logic-based clustering protocol (NEEF) which considers primary and secondary factors in the fuzzy logic system while selecting cluster heads; simulation results unveil better performance.
Abstract: The uttermost requirement of a wireless sensor network is a prolonged lifetime. Unequal energy degeneration in clustered sensor nodes leads to the premature death of sensor nodes, resulting in a lessened lifetime. Most of the proposed protocols primarily choose the cluster head on the basis of a random number, which is somewhat discriminating, as some nodes which are eligible candidates for the cluster head role may be skipped because of this randomness. To rule out this issue, we propose a deterministic novel energy-efficient fuzzy-logic-based clustering protocol (NEEF) which considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results unveil better performance by balancing the load and improvement in terms of stability period, packets forwarded to the base station, improved average energy, and extended lifetime.
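The abstract does not spell out NEEF's membership functions, so only a hedged sketch is possible: a defuzzified cluster-head "chance" score over one primary factor (residual energy) and two secondary factors, approximated here by a weighted sum (weights and node values are invented):

```python
def chance(residual_energy, centrality, node_degree,
           w_primary=0.6, w_secondary=(0.25, 0.15)):
    """Toy cluster-head 'chance' score; all inputs normalized to [0, 1].
    A real fuzzy system would apply membership functions and a rule base;
    this weighted sum merely stands in for the defuzzified output."""
    return (w_primary * residual_energy
            + w_secondary[0] * centrality
            + w_secondary[1] * node_degree)

nodes = {                        # invented (energy, centrality, degree) triples
    "n1": (0.9, 0.8, 0.5),
    "n2": (0.4, 0.9, 0.9),
    "n3": (0.8, 0.5, 0.7),
}
head = max(nodes, key=lambda n: chance(*nodes[n]))
```

The deterministic point of the paper survives even in this toy: the node with the best factor combination always wins, with no random-number lottery involved.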

7 citations


Journal ArticleDOI
TL;DR: This paper describes and implements, using LSTM, mechanisms for monitoring based on reinforcement learning and for prediction of cloud resources, which form critical parts of cloud expertise in support of controlling and evolution of IT resources.
Abstract: Internet of Things (IoT) and cloud computing are captivating technologies, and the most astonishing thing is their interdependence. IoT deals with the production of an additional amount of information that requires data transmission, storage, and huge infrastructural processing power, posing a serious problem. This is where cloud computing fits into the scenario. Cloud computing can be treated as a utility nowadays and used in a pay-as-you-go manner. A cloud is a multi-tenant approach, and its resources are used by multiple users. The cloud resources are required to be monitored, maintained, configured, and set up as per the need of the end-users. This paper describes the mechanisms for monitoring using the concept of reinforcement learning and for prediction of cloud resources, which form the critical parts of cloud expertise in support of controlling and evolution of the IT resources, implemented using LSTM. The resource management system coordinates the IT resources between the cloud provider and the end users; accordingly, multiple instances can be created and managed as per the demand and availability of resources. The proper utilization of the resources generates revenue for the provider and also increases the trust factor in the provider of cloud services. For experimental analysis, four parameters have been used, i.e., CPU utilization, disk read throughput, disk write throughput, and memory utilization. The scope of this research paper is to manage the cloud computing resources during peak time and proactively avoid the conditions of over- and under-provisioning.

7 citations


Journal ArticleDOI
TL;DR: An overview of the security landscape of Fog computing, its challenges, and existing solutions is presented, and Blockchain, a decentralized distributed technology, is presented as one of the solutions for authentication issues in IoT.
Abstract: As the IoT is moving out of its early stages, it is emerging as an area of future internet. The evolving communication paradigm among cloud servers, Fog nodes, and IoT devices is establishing a multilevel communication infrastructure. Fog provides a platform for IoT along with other services like networking, storage, and computing. With the tremendous expansion of IoT, security threats also arise. These security hazards cannot be addressed by mere dependence on the cloud model. In this paper we present an overview of the security landscape of Fog computing, its challenges, and existing solutions. We outline major authentication issues in IoT, map their existing solutions, and further tabulate Fog and IoT security loopholes. Furthermore, this paper presents Blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. We discuss the strength of Blockchain technology, work done in this field, and its adoption in the COVID-19 fight, and tabulate various challenges in Blockchain technology. At last, we present the Cell Tree architecture as another solution to address some of the security issues in IoT, outline its advantages over Blockchain technology, and tabulate some future courses to stir attempts in this area.
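The authentication role of Blockchain rests on hash-chaining: altering any recorded event invalidates every later link. A minimal sketch of that mechanism alone (no consensus, signatures, or real device identities, all of which are assumed away here):

```python
import hashlib, json

def block_hash(block):
    """Hash of a block's contents, excluding its own stored hash."""
    body = {k: block[k] for k in ("index", "payload", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(index, payload, prev_hash):
    block = {"index": index, "payload": payload, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to predecessor broken
    return True

# hypothetical device-authentication events
genesis = make_block(0, {"device": "sensor-01", "event": "registered"}, "0" * 64)
b1 = make_block(1, {"device": "sensor-01", "event": "authenticated"}, genesis["hash"])
```

Tampering with the genesis payload after the fact makes `chain_valid` fail, which is the tamper-evidence property the survey leans on for IoT authentication.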

6 citations


Journal ArticleDOI
E. Murali, K. Meena
TL;DR: A computerized framework is depicted that can distinguish brain tumors and investigate the diverse features of the tumor, aided by an image-processing-based technique that gives an improved precision rate for brain tumor localization along with computation of tumor size.
Abstract: This paper depicts a computerized framework that can distinguish brain tumors and investigate the diverse features of the tumor. Brain tumor segmentation aims to isolate the distinct tumor tissues, for example active cells, edema, and the necrotic center, from the normal brain tissues of WM, GM, and CSF. However, manual segmentation in magnetic resonance data is a time-consuming task. We present a method of automatic tumor segmentation in magnetic resonance images which consists of several steps. The recommended framework is aided by an image-processing-based technique that gives an improved precision rate for brain tumor localization along with computation of tumor size. In this paper, the location of the brain tumor in MRI is recognized utilizing adaptive thresholding with a level set and a morphological procedure with a histogram. The automatic brain tumor staging is performed using ensemble classification; this phase classifies brain images into tumor and non-tumor using a feed-forward artificial neural network based classifier. For the test investigation, real-time MRI images gathered from 200 people are utilized. The rate of successful detection through the proposed procedure is 97.32 percent.

6 citations


Journal ArticleDOI
TL;DR: A methodology is proposed to optimize supplier logistics using clustering algorithms that can help buyers select cost-effective suppliers for their business requirements.
Abstract: In today’s business environment, survival and making a profit in the market are the prime requirements for any enterprise due to the competitive environment. Innovation and staying updated are two commonly identified key parameters for achieving success and profit in business. Supply chain management is also considerably accountable for profit. As a measure to maximize profit, the supply chain process is to be streamlined and optimized. Appropriate grouping of various suppliers for the benefit of shipment cost reduction is proposed. Data relating to appropriate attributes of supplier logistics are collected, and a methodology is proposed to optimize the supplier logistics using clustering algorithms. In the proposed methodology, data preprocessing, clustering, and validation have been carried out. Z-score normalization is used to normalize the data, which converts the data to uniform scales for improving clustering performance. By employing hierarchical and k-means clustering algorithms, the supplier logistics are grouped, and the performance of each method is evaluated and presented. Supplier logistics data from different countries are experimented with. The outcome of this work can help buyers select cost-effective suppliers for their business requirements.
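The preprocessing and clustering steps described above can be sketched as follows, with made-up per-supplier shipment costs and a tiny two-cluster 1-D k-means standing in for the full hierarchical/k-means comparison:

```python
import statistics

def zscore(column):
    """Normalize one attribute column to zero mean and unit variance."""
    mu, sigma = statistics.mean(column), statistics.pstdev(column)
    return [(x - mu) / sigma for x in column]

def kmeans_1d(values, iters=20):
    """Tiny two-cluster 1-D k-means, centers seeded at min and max."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            nearer = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            clusters[nearer].append(v)
        centers = [statistics.mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# made-up per-supplier shipment costs: three cheap, three expensive suppliers
costs = [120, 130, 125, 480, 510, 495]
norm = zscore(costs)
centers, clusters = kmeans_1d(norm)
```

Normalizing first matters because attributes on different scales (cost, distance, lead time) would otherwise dominate the distance computation unevenly.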


Journal ArticleDOI
TL;DR: The development of an algorithm focusing on adaptive feedback cancellation that improves the listening effort of the user; its performance can be compared with other comprehensive analysis methods to evaluate its standards.
Abstract: Many people are kept from a normal life style because of the hearing loss they have. Most of them do not use hearing aids due to various discomforts in wearing them. The foremost problem is that the device introduces unpleasant whistling sounds, caused by the changing environmental noise the user faces daily. This paper describes the development of an algorithm focusing on adaptive feedback cancellation that improves the listening effort of the user. The genetic algorithm is one of the computational techniques used in enhancing the above features. The performance can also be compared with other comprehensive analysis methods to evaluate its standards.

Journal ArticleDOI
TL;DR: The results show that the proposed consolidation method performs better than existing algorithms in terms of energy efficiency, energy consumption, SLA violation rate, and number of VM migrations.
Abstract: The unbalanced usage of resources in cloud data centers causes an enormous amount of power consumption. Virtual Machine (VM) consolidation shuts down the underutilized hosts and turns the overloaded hosts into normally loaded hosts by selecting appropriate VMs and migrating them to other hosts, in such a way as to reduce energy consumption and improve physical resource utilization. An efficient method is needed for VM selection and destination host selection (VM placement). In this paper, a CPU-Memory aware VM placement algorithm is proposed for selecting suitable destination hosts for migration. The VMs are selected using a Fuzzy Soft Set (FSS) based VM selection algorithm. The proposed placement algorithm considers CPU, memory, and combined CPU-Memory utilization of VMs on the source host. The proposed method is experimentally compared with several existing selection and placement algorithms, and the results show that the proposed consolidation method performs better than existing algorithms in terms of energy efficiency, energy consumption, SLA violation rate, and number of VM migrations.
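The placement step might be sketched as a best-fit search over combined CPU and memory headroom; this is only one illustrative reading of "CPU-Memory aware" placement, the FSS selection step is not modeled, and the host loads and VM demand below are invented fractions of capacity:

```python
def place_vm(vm, hosts):
    """Pick the host whose combined CPU+memory headroom fits the VM most
    tightly (best fit), skipping hosts the VM would overload.
    vm = (cpu, mem) demand; hosts maps name -> (cpu, mem) current load."""
    best, best_slack = None, None
    for name, (cpu, mem) in hosts.items():
        new_cpu, new_mem = cpu + vm[0], mem + vm[1]
        if new_cpu > 1.0 or new_mem > 1.0:
            continue                          # would overload this host
        slack = (1.0 - new_cpu) + (1.0 - new_mem)
        if best_slack is None or slack < best_slack:
            best, best_slack = name, slack
    return best

hosts = {"h1": (0.70, 0.60), "h2": (0.30, 0.20), "h3": (0.50, 0.55)}
target = place_vm((0.25, 0.30), hosts)        # (CPU, memory) demand
```

Best fit packs VMs onto already-busy hosts, leaving lightly loaded hosts empty so they can be switched off, which is the consolidation goal stated above.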

Journal ArticleDOI
TL;DR: A hybrid method is proposed, which combines a new warmup scheme with the linear-scaling stochastic gradient descent (SGD) algorithm to effectively improve the training accuracy and convergence rate, and which outperforms those existing methods on distributed training efficiency.
Abstract: The application of deep learning in industry often needs to train large-scale neural networks and use large-scale data sets. However, larger networks and larger data sets lead to longer training time, which hinders the research of algorithms and the progress of actual engineering development. Data-parallel distributed training is a commonly used solution, but it is still in the stage of technical exploration. In this paper, we study how to improve the training accuracy and speed of distributed training, and propose a distributed training strategy based on hybrid gradient computing. Specifically, in the gradient descent stage, we propose a hybrid method, which combines a new warmup scheme with the linear-scaling stochastic gradient descent (SGD) algorithm to effectively improve the training accuracy and convergence rate. At the same time, we adopt mixed precision gradient computing. In the single-GPU gradient computing and inter-GPU gradient synchronization, we use the mixed numerical precision of single precision (FP32) and half precision (FP16), which not only improves the training speed of a single GPU, but also improves the speed of inter-GPU communication. Through the integration of various training strategies and system engineering implementation, we finished ResNet-50 training in 20 minutes on a cluster of 24 V100 GPUs, with 75.6% Top-1 accuracy and 97.5% GPU scaling efficiency. In addition, this paper proposes a new criterion for the evaluation of distributed training efficiency, namely the actual average single-GPU training time, which can evaluate the improvement of training methods in a more reasonable manner than just the improved performance due to an increased number of GPUs. In terms of this criterion, our method outperforms existing methods.
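The warmup plus linear-scaling rule can be made concrete with a small schedule function; the base rate, scale factor, and warmup length below are illustrative, not the paper's settings:

```python
def learning_rate(epoch, base_lr=0.1, batch_scale=8, warmup_epochs=5):
    """Linear-scaling rule: when the global batch grows by `batch_scale`
    (e.g. 8 data-parallel workers), the target LR is base_lr * batch_scale.
    During warmup the LR ramps linearly from base_lr up to the target,
    which avoids divergence in the first epochs at the large rate."""
    target = base_lr * batch_scale
    if epoch < warmup_epochs:
        return base_lr + (target - base_lr) * epoch / warmup_epochs
    return target
```

For example, `learning_rate(0)` starts at the single-worker rate 0.1 and `learning_rate(5)` reaches the scaled rate 0.8, after which the schedule stays flat (a real recipe would add decay steps on top).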

Journal ArticleDOI
TL;DR: Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies and cyber-physical systems (CPS) are described as the engineered systems which are built upon the tight integration of the cyber entities and the physical things.
Abstract: Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as the engineered systems which are built upon the tight integration of the cyber entities (e.g., computation, communication, and control) and the physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather, it can be said that IoT is the base or enabling technology for CPS, and CPS is considered the grown-up development of IoT, completing the IoT notion and vision. Both are merged into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS, which includes traditional embedded and control systems, is supposed to be transformed by the evolving and innovative methodologies and engineering of IoT. Several application areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects.
Engineering IoT systems revolves around the uniquely identifiable and internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborate together to create unified systems with global behaviour. These systems need to be ensured in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from the physical things makes Big Data management and analytics difficult, including data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, the future IoT and CPS need standardized abstraction and architecture that will allow modular designing and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems. Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions have been proposed, there are still huge possibilities for innovative propositions to make the IoT and CPS vision a reality.


Journal ArticleDOI
TL;DR: The proposed system can be used to understand the whole translation process and can be employed as a tool for learning as well as teaching and embedded in local communication based assisting Internet of Things (IoT) devices like Alexa or Google Assistant.
Abstract: Machine Translation is an area of Natural Language Processing which can replace the laborious task of manual translation. Sanskrit language is among the ancient Indo-Aryan languages. There are numerous works of art and literature in Sanskrit. It has also been a medium for creating treatises of philosophical work as well as works on logic, astronomy and mathematics. On the other hand, Hindi is the most prominent language of India. Moreover, it is among the most widely spoken languages across the world. This paper is an effort to bridge the language barrier between Hindi and Sanskrit such that any text in Hindi can be translated to Sanskrit. The technique used for achieving the aforesaid objective is rule-based machine translation. The salient linguistic features of the two languages are used to perform the translation. The results are produced in the form of two confusion matrices wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation. The semantic evaluation of 100 tokens produces an accuracy of 94% while the pragmatic analysis of 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local communication based assisting Internet of Things (IoT) devices like Alexa or Google Assistant.

Journal ArticleDOI
TL;DR: The proposed BSO-Stacked Autoencoder method achieves the maximal accuracy of 96.562%, the maximal sensitivity of 91.884%, and the maximal specificity of 99%, which indicates its superiority.
Abstract: Cricket is one of the most watched and played sports, especially in South Asian countries. In cricket, the umpire has the power to make significant decisions about events on the field. With the growing use of technology in sports, this paper presents umpire detection and classification by proposing an optimization algorithm. The overall procedure of the proposed approach involves three steps: segmentation, feature extraction, and classification. At first, the video frames are extracted from the input cricket video, and segmentation is performed based on the Viola-Jones algorithm. Once the segmentation is done, feature extraction is carried out using Histogram of Oriented Gradients (HOG) and Fuzzy Local Gradient Patterns (Fuzzy LGP). Finally, the extracted features are given to the classification step. Here, the classification is done using the proposed Bird Swarm Optimization-based stacked autoencoder deep learning classifier (BSO-Stacked Autoencoders), which categorizes images as umpire or others. The performance of umpire detection and classification based on BSO-Stacked Autoencoders is evaluated in terms of sensitivity, specificity, and accuracy. The proposed BSO-Stacked Autoencoder method achieves a maximal accuracy of 96.562%, a maximal sensitivity of 91.884%, and a maximal specificity of 99%, which indicates its superiority.

Journal ArticleDOI
TL;DR: This work develops a classification model of skin tumours in images using Deep Learning with a Convolutional Neural Network based on TensorFlow and Keras, and the results show the accuracy achieved by the model.
Abstract: Skin cancer is a dangerous disease causing a high proportion of deaths around the world. Any diagnosis of cancer begins with a careful clinical examination, followed by a blood test and medical imaging examinations. Medical imaging is today one of the main tools for diagnosing cancers. It allows us to obtain precise images of internal organs and thus to visualize possible tumours. These images provide information on the location, size, and evolutionary stage of tumour lesions. Automatic classification of skin tumours using images is an important task that can help doctors, laboratory technologists, and researchers make the best decisions. This work develops a classification model of skin tumours in images using Deep Learning with a Convolutional Neural Network based on TensorFlow and the Keras model. The architecture is tested on the HAM10000 dataset, which consists of 10,015 dermatoscopic images. The results of the experiment show the accuracy achieved by our model: 94.06% on the validation set and 93.93% on the test set.

Journal ArticleDOI
TL;DR: This research identified the dependencies of service latency and service time of incoming network packets on load, as well as equations for finding the volume of a switch’s buffer memory with an acceptable probability for message loss.
Abstract: Implementing the almost limitless possibilities of a software-defined network requires additional study of its infrastructure level and assessment of the telecommunications aspect. The aim of this study is to develop an analytical model for analyzing the main quality indicators of modern network switches. Based on the general theory of queueing systems and networks, generating functions, and Laplace-Stieltjes transforms, a three-phase model of a network switch was developed. Given that the relationship between processing steps is not significant in this case, quality indicators were obtained by taking into account the parameters of single-phase networks. This research identified the dependencies of service latency and service time of incoming network packets on load, as well as equations for finding the volume of a switch’s buffer memory with an acceptable probability of message loss.
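The abstract's buffer-sizing equations are not reproduced here; under a simpler M/M/1/K assumption (an assumption of this sketch, not the paper's three-phase model), the loss probability and the smallest buffer meeting a loss target can be computed as:

```python
def mm1k_loss(rho, k):
    """Packet-loss (blocking) probability of an M/M/1/K queue with
    K buffer positions and offered load rho = lambda/mu (rho != 1)."""
    return (1 - rho) * rho**k / (1 - rho**(k + 1))

def buffer_for_loss(rho, target):
    """Smallest K whose loss probability stays below `target`."""
    k = 1
    while mm1k_loss(rho, k) > target:
        k += 1
    return k

needed = buffer_for_loss(0.8, 1e-3)   # packets of buffering at 80% load
```

The same shape of dependency the paper reports appears even in this toy model: loss falls geometrically with buffer size, but the required buffer grows quickly as the load approaches 1.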

Journal ArticleDOI
TL;DR: This paper proposes a TDMA based algorithm named DYSS that meets both timeliness and energy efficiency in handling collisions, and finds an effective way of preparing the initial schedule by using the average two-hop neighbors count.
Abstract: In the current scenario, the growth of IoT based solutions gives rise to the rapid utilisation of WSN. With energy-constrained sensor nodes in WSN, the design of an energy efficient MAC protocol, along with a timeliness requirement to handle collisions, is of paramount importance. Most MAC protocols designed for a sensor network follow either a contention-based or a schedule-based approach. A contention-based approach adapts well to topology changes, whereas it is more costly in handling collisions compared to a schedule-based approach. Hence, to reduce collisions along with timeliness, an effective TDMA based slot scheduling algorithm needs to be designed. In this paper, we propose a TDMA based algorithm named DYSS that meets both the timeliness and energy efficiency requirements in handling collisions. This algorithm finds an effective way of preparing the initial schedule by using the average two-hop neighbors count. Finally, the remaining un-allotted nodes are dynamically assigned to slots using a novel approach. The efficiency of the algorithm is evaluated in terms of the number of slots allotted and the time elapsed to construct the schedule using the Castalia simulator.
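DYSS's exact slot-assignment rules are not in the abstract; a generic greedy TDMA schedule that keeps any two nodes within two hops of each other in different slots (the usual condition for avoiding collisions at a shared receiver) can be sketched as:

```python
def tdma_schedule(adj):
    """Greedy TDMA slot assignment: a node may not share a slot with any
    one- or two-hop neighbor, so a common neighbor never hears two
    simultaneous transmissions. adj maps node -> set of neighbors."""
    def two_hop(n):
        hop1 = adj[n]
        hop2 = set().union(*(adj[m] for m in hop1)) - {n}
        return hop1 | hop2

    slots = {}
    # schedule the most-constrained nodes first (largest two-hop neighborhood)
    for node in sorted(adj, key=lambda n: len(two_hop(n)), reverse=True):
        taken = {slots[m] for m in two_hop(node) if m in slots}
        slot = 0
        while slot in taken:   # lowest slot not used within two hops
            slot += 1
        slots[node] = slot
    return slots

# toy 4-node chain a - b - c - d
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
schedule = tdma_schedule(adj)
```

On the chain, the end nodes `a` and `d` are more than two hops apart, so a correct schedule may reuse one slot for them, needing three slots rather than four.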

Journal ArticleDOI
TL;DR: The results show an improvement in energy, processing, and storage requirements for the processing of data on the IoT device in the proposed framework as compared to the sequential approach.
Abstract: Pervasive Internet of Things (IoT) is a research paradigm that has attracted considerable attention nowadays. Its main aim is that, in the future, everyday objects (devices) will be accessible, sensed, and interconnected inside the global structure of the Internet. But in most pervasive IoT applications, the resources of an IoT device, such as storage, processing, and energy, are limited; as such, there is a need for resource management in these applications. Multiple aspects of the data, such as its type, size, and structure, and the number of data packets transmitted and received, are taken into consideration while managing the resources of pervasive IoT applications. Data management is therefore essential to the management of limited resources in such applications. This paper presents the recent studies and related information on data management for pervasive IoT applications with limited resources. It also proposes a parallelization based data management framework for resource-constrained pervasive IoT applications. The proposed framework is compared with the sequential approach through simulations and empirical data analysis. The results show an improvement in the energy, processing, and storage requirements for processing data on the IoT device in the proposed framework as compared to the sequential approach.
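A minimal sketch of the parallelization idea, processing independent data records concurrently instead of one by one, might look as follows; the `process_record` work function and the use of a thread pool are illustrative assumptions, not the paper's framework:

```python
from concurrent.futures import ThreadPoolExecutor

def process_record(record):
    """Stand-in for per-record work (filtering, aggregation, compression)."""
    return record * record

def sequential(records):
    """Baseline: handle records one after another."""
    return [process_record(r) for r in records]

def parallelized(records, workers=4):
    """Hand records to a pool of workers; `map` preserves input order,
    so the output matches the sequential baseline."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_record, records))

data = list(range(10))
assert sequential(data) == parallelized(data)
```

On a real device the gain would come from overlapping I/O-bound steps (e.g. packet transmission) with local processing, which is where the energy and time savings the abstract reports would originate.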

Journal ArticleDOI
TL;DR: The evaluation output demonstrates that the proposed method is more effective than past schemes, since it is pairing-free and fulfills the security and privacy prerequisites.
Abstract: Vehicular Ad-hoc NETworks (VANETs) have been rising on the basis of state-of-the-art advances in wireless and network communication. Message authentication among vehicles and infrastructure is fundamental to VANET security. The true identity of a vehicle should not be revealed, yet it should remain traceable by authorized nodes. Existing solutions either rely heavily on tamper-proof hardware or cannot fulfill the security requirements. A Secured Identity Based Cryptosystem Approach (SIDBC) for an intelligent routing protocol is proposed for better results, implementing a secured network for traffic forecasting and efficient routing in a dynamically changing environment. Polynomial key generation is utilized to generate identity-based pseudonym keys for every node that joins the system. This keying process prevents malicious nodes from passing false information. The evaluation output demonstrates that the proposed method is more effective than past schemes, since it is pairing-free and fulfills the security and privacy prerequisites.
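The abstract does not spell out its polynomial key generation, but the classic symmetric bivariate polynomial scheme of Blundo et al. is a plausible sketch of how identity-based pairwise keys can be derived without any pairing operation; the field modulus, degree, and node identities below are illustrative:

```python
import random

P = 2_147_483_647  # a Mersenne prime as the field modulus (illustrative)

def symmetric_poly(t, seed=0):
    """Random symmetric bivariate polynomial f(x, y) = sum a[i][j] x^i y^j
    with a[i][j] == a[j][i], degree t in each variable."""
    rng = random.Random(seed)
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = rng.randrange(P)
    return a

def share(a, node_id):
    """Key share g(y) = f(node_id, y) handed to a node, as coefficients of y."""
    t = len(a) - 1
    return [sum(a[i][j] * pow(node_id, i, P) for i in range(t + 1)) % P
            for j in range(t + 1)]

def pairwise_key(my_share, peer_id):
    """Evaluate the share at the peer's identity; the symmetry of f makes
    both ends derive the same key without any pairing operation."""
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(my_share)) % P

a = symmetric_poly(t=3, seed=42)
k_ab = pairwise_key(share(a, 17), 23)   # vehicle 17 keying with vehicle 23
k_ba = pairwise_key(share(a, 23), 17)   # vehicle 23 keying with vehicle 17
assert k_ab == k_ba
```

A node holding only its own share cannot impersonate others, and the scheme tolerates up to t colluding nodes, which matches the pairing-free flavor the abstract claims.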

Journal ArticleDOI
TL;DR: This paper presents a study of congestion in a 2D mesh network and puts forward a novel metric that takes into account the overall operating state of a router in the design of an adaptive XY routing algorithm, aiming to improve routing decisions and network performance.
Abstract: The Network-on-Chip (NoC) is an alternative pattern that is considered an emerging technology for distributed embedded systems. The traditional use of multi-cores in computing increases calculation performance but affects network communication, causing congestion on nodes, which in turn decreases the global performance of the NoC. To alleviate this problematic phenomenon, several strategies have been implemented to reduce or prevent the occurrence of congestion, such as network status metrics, new routing algorithms, packet injection control, and switching strategies. In this paper, we carried out a study on congestion in a 2D mesh network through various detailed simulations. Our focus was on the congestion metrics most used in NoCs. According to our experiments and simulations under different traffic scenarios, we found that these metrics are less representative and less significant, and that they do not give a true picture of the state of NoC nodes at a given cycle. Our study shows that using complementary information about the state of nodes and network traffic flow in the design of a novel metric can really improve the results. In this paper, we put forward a novel metric that takes into account the overall operating state of a router in the design of an adaptive XY routing algorithm, aiming to improve routing decisions and network performance. We compare the throughput, latency, resource utilization, and congestion occurrence of the proposed metric to three published metrics on two specific traffic patterns over a range of packet injection rates. Our results indicate that our novel metric-based adaptive XY routing overcomes congestion and significantly improves resource utilization through load balancing, achieving an average improvement rate of up to 40% compared to adaptive XY routing based on the previous congestion metrics.
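A composite router-state metric plugged into adaptive XY routing can be sketched as follows; the chosen state components and weights are assumptions for illustration, not the paper's calibrated metric:

```python
def congestion_metric(router):
    """Composite metric reflecting a router's overall operating state:
    input-buffer occupancy, output-link utilization, and crossbar demand
    (the weights are illustrative, not the paper's values)."""
    return (0.5 * router["buffer_occupancy"]
            + 0.3 * router["link_utilization"]
            + 0.2 * router["crossbar_demand"])

def adaptive_xy_next_hop(cur, dst, state):
    """Adaptive XY: when both the X and Y directions make progress toward
    the destination, prefer the neighbor with the lower composite metric;
    otherwise the route is forced and there is nothing to adapt."""
    cx, cy = cur
    dx, dy = dst
    x_hop = (cx + (1 if dx > cx else -1), cy) if dx != cx else None
    y_hop = (cx, cy + (1 if dy > cy else -1)) if dy != cy else None
    if x_hop and y_hop:
        return min((x_hop, y_hop), key=lambda n: congestion_metric(state[n]))
    return x_hop or y_hop

state = {
    (1, 0): {"buffer_occupancy": 0.9, "link_utilization": 0.8, "crossbar_demand": 0.7},
    (0, 1): {"buffer_occupancy": 0.2, "link_utilization": 0.3, "crossbar_demand": 0.1},
}
print(adaptive_xy_next_hop((0, 0), (2, 2), state))  # -> (0, 1), the less loaded hop
```

The key point, matching the abstract, is that the decision uses several facets of a router's state rather than a single counter such as buffer occupancy.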

Journal ArticleDOI
TL;DR: A hybrid trust model is presented that separates malicious and trusted nodes to secure vehicle interactions in IoV, and shows that malicious nodes can be clearly identified and discarded on the basis of PDR values.
Abstract: Trust plays an essential role in securing communications between vehicles in IoV. This motivated us to design a trust model for IoV communication. In this paper, we first review the literature on IoV and trust, and then present a hybrid trust model that separates malicious and trusted nodes to secure vehicle interactions in IoV. Node segregation is done using the value of a statistic (St). If the St of a node lies within the mean (m) plus/minus 2 standard deviations (SD) of the PDR, the node's behaviour is considered normal; otherwise it is flagged as malicious. The simulation is conducted for different threshold values. Results show that the PDR of a trusted node is 0.63, which is much higher than the PDR of a malicious node, 0.15. Similarly, the average number of hops and the trust dynamics of trusted nodes are higher than those of malicious nodes. So, on the basis of the PDR values, the number of available hops, and the trust dynamics, malicious nodes can be clearly identified and discarded.
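The mean ± 2 SD segregation rule is straightforward to express in code. The sketch below labels a node malicious when its PDR falls outside m ± 2·SD of the observed PDR values; the sample values are illustrative (chosen so the misbehaving node's PDR, 0.15, sits near the paper's reported figure):

```python
from statistics import mean, stdev

def segregate_nodes(pdr):
    """Flag a node as trusted when its statistic lies within
    mean +/- 2*SD of the packet delivery ratios, malicious otherwise."""
    values = list(pdr.values())
    m, sd = mean(values), stdev(values)
    lo, hi = m - 2 * sd, m + 2 * sd
    return {node: ("trusted" if lo <= v <= hi else "malicious")
            for node, v in pdr.items()}

pdr = {"n1": 0.60, "n2": 0.61, "n3": 0.62, "n4": 0.62, "n5": 0.63,
       "n6": 0.63, "n7": 0.64, "n8": 0.65, "n9": 0.66, "n10": 0.15}
print(segregate_nodes(pdr))
```

With enough well-behaved nodes the outlier falls outside the 2·SD band and is discarded, while the cluster of normal PDRs stays inside it.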

Journal ArticleDOI
TL;DR: A novel Fuzzy Based Intrusion Detection (FBID) protocol for mobile ad hoc networks identifies, analyzes, and detects malicious nodes in different circumstances, improving the efficiency of the system without degrading real-time performance.
Abstract: Security in a mobile ad hoc network is vulnerable and susceptible to its environment, because such a network has no centralized infrastructure for monitoring the activity of individual nodes during communication. Intruders can attack the network both locally and globally. Nowadays the mobile ad hoc network is an emerging area of research due to its unique characteristics, yet malicious activities are harder to detect and the network is error-prone in nature due to its dynamic topology configuration. To address these difficulties of intrusion detection, this paper proposes a novel approach for mobile ad hoc networks, the Fuzzy Based Intrusion Detection (FBID) protocol, to identify, analyze, and detect malicious nodes in different circumstances. The protocol improves the efficiency of the system without degrading system performance in real time. The FBID system is efficient, and its performance is compared with AODV and Fuzzy Cognitive Mapping using the following performance metrics: throughput, packet delivery ratio, packets dropped, routing overhead, propagation delay, and the shortest path for delivering packets from one node to another. The system is robust, produces crisp output for the benefit of end users, and provides an integrated solution capable of detecting the majority of security attacks occurring in MANETs.
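The general shape of fuzzy inference for intrusion detection (fuzzify node observations, fire a few rules, defuzzify to a crisp score) can be sketched as below; the membership functions, rules, and thresholds are illustrative assumptions, not FBID's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suspicion(drop_rate, delay_ms):
    """Tiny Mamdani-style inference: two inputs, three rules, and a
    weighted-average defuzzification producing a crisp suspicion score."""
    drop_high = tri(drop_rate, 0.3, 0.7, 1.01)
    drop_low = tri(drop_rate, -0.01, 0.0, 0.4)
    delay_high = tri(delay_ms, 50, 150, 10_000)

    # Rule strengths (AND = min, OR = max) -> crisp output level per rule
    rules = [
        (min(drop_high, delay_high), 0.9),   # both bad -> very suspicious
        (max(drop_high, delay_high), 0.5),   # either bad -> suspicious
        (drop_low, 0.1),                     # clean traffic -> benign
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(suspicion(0.8, 200), suspicion(0.05, 20))
```

The crisp score is what makes the output directly usable by end users, as the abstract emphasizes: a node can be flagged once its score crosses a chosen threshold.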

Journal ArticleDOI
TL;DR: This survey makes a critical analysis of diverse techniques regarding various image inpainting schemes and identifies various research issues and gaps that might be useful for researchers to promote improved future work on image inpainting schemes.
Abstract: Image inpainting is the process of restoring missing pixels in digital images in a plausible way. The study of image inpainting techniques has acquired significant consideration in various areas, e.g. restoring damaged and old documents, elimination of unwanted objects, cinematography, and retouch applications. Even so, limitations exist in the recovery process due to the establishment of certain artifacts in the restored image areas. To rectify these issues, more and more techniques have been established by different authors. This survey makes a critical analysis of diverse techniques regarding various image inpainting schemes. The paper covers: (i) an analysis of the various image inpainting techniques contributed in different papers; (ii) a comprehensive study of the performance measures and the corresponding maximum achievements of each contribution; (iii) an analytical review of the chronological progress and the various tools exploited in each of the reviewed works. Finally, the survey concludes by identifying various research issues and gaps that might be useful for researchers to promote improved future work on image inpainting schemes.

Journal ArticleDOI
TL;DR: An effective pixel-prediction-based image steganography scheme is developed, which employs an error-dependent Deep Convolutional Neural Network (DCNN) classifier for pixel identification, and the inverse tetrolet transform is used for extracting the secret message from the embedded image.
Abstract: Image steganography is considered one of the promising and popular techniques used to maintain the confidentiality of a secret message embedded in an image. Even though various techniques are available in previous works, an approach providing better results is still a challenge. Therefore, an effective pixel-prediction-based image steganography scheme is developed, which employs an error-dependent Deep Convolutional Neural Network (DCNN) classifier for pixel identification. Here, the best pixels are identified from the medical image by the DCNN classifier using pixel features, such as texture, wavelet energy, Gabor, and scattering features. The DCNN is optimally trained using Chicken-Moth Search Optimization (CMSO). CMSO is designed by integrating the Chicken Swarm Optimization (CSO) and Moth Search Optimization (MSO) algorithms based on limited error. Subsequently, the tetrolet transform is applied to the predicted pixels for the embedding process. Finally, the inverse tetrolet transform is used for extracting the secret message from the embedded image. The experimentation is carried out on the BRATS dataset, and the performance of image steganography based on CMSO-DCNN+tetrolet is evaluated in terms of correlation coefficient, Structural Similarity Index, and Peak Signal to Noise Ratio, attaining 0.85, 0.6388, and 46.981 dB respectively for the image with noise.
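The embed/extract round trip at the heart of any steganographic scheme can be illustrated with plain LSB substitution; this is a deliberately simple stand-in for the paper's tetrolet-domain embedding, not the proposed method:

```python
def embed(pixels, message_bits):
    """Write one message bit into the least significant bit of each
    selected pixel (a stand-in for the tetrolet-domain embedding)."""
    assert len(message_bits) <= len(pixels)
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it
    return stego

def extract(stego, n_bits):
    """Recover the message by reading back the least significant bits."""
    return [p & 1 for p in stego[:n_bits]]

cover = [120, 33, 250, 7, 91, 64]          # grayscale pixel values
bits = [1, 0, 1, 1]                        # secret message
stego = embed(cover, bits)
assert extract(stego, len(bits)) == bits
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # change <= 1 level
```

Because each pixel changes by at most one intensity level, PSNR stays high; the paper's pixel-prediction step serves the same goal by embedding only where distortion is least noticeable.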

Journal ArticleDOI
TL;DR: This research paper proposes a novel approach that uses Boolean rules to identify related and non-related comments, where related reviews are those that reflect the behavior of a customer toward a particular product.
Abstract: Opinion mining is the technique of analyzing the sentiment, behavior, feelings, emotions, and attitudes of customers about a product, a topic, comments on social media, etc. Online shopping has revolutionized the way customers shop. A customer likes to visit the online store to find products of interest, but it is becoming more difficult to make purchasing decisions solely on the basis of photos and product descriptions. Customer reviews provide a rich source of information to compare products and make purchasing decisions based on the experience of other customers. Clients provide comments in the language of their choice; for example, people in Pakistan commonly use Roman-script Urdu. Normally such comments are free from scripting rules. Hundreds of comments may be given on a single product, and they may contain noisy comments. Identifying noisy comments and finding the polarity of the remaining comments is an active area of research, and limited research has been carried out on Roman Urdu sentiment analysis. In this research paper, we propose a novel approach that uses Boolean rules for the identification of related and non-related comments, where related reviews are those that reflect the behavior of a customer toward a particular product. Lexicons are built for the identification of noise and of positive and negative reviews. The precision of the evaluation results is 68%, recall is also 68%, and the F-measure is 68%. The accuracy of the whole evaluation is 60%.
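A minimal lexicon-plus-Boolean-rule classifier in the spirit of this approach might look as follows; the Roman Urdu word lists are tiny illustrative samples, not the paper's lexicons:

```python
POSITIVE = {"acha", "zabardast", "behtreen", "pasand"}   # "good", "great", ...
NEGATIVE = {"bura", "kharab", "bakwas", "ghatiya"}       # "bad", "broken", ...
NOISE = {"lol", "hahaha", "xd"}                          # non-review chatter

def classify(comment):
    """Boolean-rule classifier: comments with no lexicon hit are either
    noise (non-related) or neutral; otherwise polarity is scored by
    counting positive vs negative lexicon hits."""
    tokens = comment.lower().split()
    if not any(t in POSITIVE | NEGATIVE for t in tokens):
        return "non-related" if any(t in NOISE for t in tokens) else "neutral"
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("ye mobile bohat acha hai"))      # -> positive
print(classify("battery bilkul kharab hai"))     # -> negative
print(classify("hahaha lol"))                    # -> non-related
```

Because Roman Urdu has no standardized spelling, a practical system would also need to fold spelling variants (e.g. "acha" / "achha") into the lexicons, which is one of the challenges the abstract alludes to.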