
Showing papers in "International Journal of Information Technology in 2021"


Journal ArticleDOI
TL;DR: The different computational models of SVM and the key processes of SVM system development are reviewed, and a survey of their applications to image classification is provided.
Abstract: Life for any living being would be impossible without the ability to differentiate between various things: objects, smells, tastes, colors, and so on. Humans can easily classify objects such as different faces and images; in the age of machines, we want machines to do this work as well, which is the domain of machine learning. This paper discusses some important techniques for image classification: the techniques through which a machine can learn the image classification task and perform it efficiently. The best-known such technique is the Support Vector Machine (SVM), which has evolved into an efficient paradigm for classification. SVM rests on one of the strongest mathematical models for classification and regression, and this powerful foundation has opened new directions for further research in both fields. Over the past few decades, various improvements to SVM have appeared, such as twin SVM, Lagrangian SVM, quantum SVM, and least-squares SVM, which are discussed further in the paper and which have led to new approaches with better classification accuracy. To improve the accuracy and performance of SVM, one must be aware of how a kernel function should be selected and of the different approaches to parameter selection. This paper reviews the different computational models of SVM and the key processes of SVM system development, and furthermore provides a survey of their applications to image classification.

139 citations
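The abstract stresses that SVM accuracy hinges on kernel and parameter selection. As an illustration (not the paper's own code), here is a minimal numpy sketch of the three kernels most often compared, with illustrative `degree`, `c`, and `gamma` values:

```python
import numpy as np

def linear_kernel(x, y):
    # k(x, y) = <x, y>
    return np.dot(x, y)

def poly_kernel(x, y, degree=3, c=1.0):
    # k(x, y) = (<x, y> + c)^degree
    return (np.dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    return np.exp(-gamma * np.sum((x - y) ** 2))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(linear_kernel(x, x))  # 1.0
print(rbf_kernel(x, x))     # 1.0 (identical points)
```

The "parameter selection" the abstract mentions is the choice of `degree`, `c`, and `gamma` here; the kernel decides the feature space, the parameters its shape.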


Journal ArticleDOI
TL;DR: In this paper, the authors used three machine learning algorithms, namely logistic regression, Naive Bayes, and k-nearest neighbour, to predict fraudulent transactions in credit card data; the performance of these algorithms is measured by accuracy, sensitivity, specificity, precision, F-measure, and area under the curve.
Abstract: Financial fraud is a threat growing at a great pace, with a severe impact on the economy, collaborating institutions, and administration. Credit card transactions are increasing rapidly because of advances in internet technology, which lead to high dependence on the internet. With the upgrading of technology and the increase in credit card usage, fraud rates have become a challenge for the economy. As new security features are added to credit card transactions, fraudsters keep developing new patterns and loopholes to exploit them; as a result, the behavior of both fraudulent and normal transactions changes constantly. A further problem with credit card data is that it is highly skewed, which leads to inefficient prediction of fraudulent transactions. To achieve better results, the imbalanced (skewed) data is pre-processed with re-sampling (over-sampling or under-sampling) techniques. Three different dataset proportions were used in this study, and the random under-sampling technique was applied to the skewed dataset. This work uses three machine learning algorithms: logistic regression, Naive Bayes, and k-nearest neighbour. The performance of these algorithms is recorded and compared. The work is implemented in Python, and the performance of the algorithms is measured by accuracy, sensitivity, specificity, precision, F-measure, and area under the curve. On the basis of these measurements, the logistic-regression-based model was found to predict fraudulent transactions better than the models developed from Naive Bayes and k-nearest neighbour. Applying under-sampling to the data before developing the prediction model also improved the results.

85 citations
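Random under-sampling, the re-sampling step this study applies to the skewed credit card data, can be sketched in plain numpy (a generic illustration, not the paper's implementation):

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Balance a dataset by sampling every class down to the minority size."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = []
    for c in classes:
        idx = np.flatnonzero(y == c)
        keep.extend(rng.choice(idx, size=n_min, replace=False))
    keep = np.sort(np.array(keep))
    return X[keep], y[keep]

# skewed toy data: 95 legitimate vs 5 fraudulent transactions
X = np.arange(100).reshape(-1, 1).astype(float)
y = np.array([0] * 95 + [1] * 5)
Xb, yb = random_undersample(X, y)
print(np.bincount(yb))  # [5 5]
```

After this step a classifier such as logistic regression no longer achieves high accuracy by simply predicting "legitimate" for everything.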


Journal ArticleDOI
TL;DR: An efficient soybean disease identification method based on a transfer learning approach using the pretrained AlexNet and GoogleNet convolutional neural networks (CNNs), which achieved higher accuracy than conventional techniques and the highest efficiency.
Abstract: Plant pathologists desire an accurate and reliable soybean plant disease diagnosis system. In this study, we propose an efficient soybean disease identification method based on a transfer learning approach using the pretrained AlexNet and GoogleNet convolutional neural networks (CNNs). The proposed AlexNet and GoogleNet CNNs were trained using 649 and 550 image samples of diseased and healthy soybean leaves, respectively, to identify three soybean diseases, using a five-fold cross-validation strategy. The proposed AlexNet and GoogleNet CNN-based models achieved accuracies of 98.75% and 96.25%, respectively, considerably higher than those of conventional pattern recognition techniques. The experimental results for the identification of soybean diseases indicate that the proposed model achieves the highest efficiency.

77 citations
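The five-fold cross-validation strategy mentioned above can be sketched in plain numpy; the sample count 649 matches the AlexNet training set, everything else is generic:

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Yield (train_idx, val_idx) pairs for 5-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, val

splits = list(five_fold_indices(649))
print([len(v) for _, v in splits])  # [130, 130, 130, 130, 129]
```

Each sample appears in exactly one validation fold, so the reported accuracy averages five disjoint hold-out evaluations.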


Journal ArticleDOI
TL;DR: This paper provides a comprehensive comparison between Fog and cloud computing, and proposes some novel applications of Fog for protecting privacy in IoT-based applications.
Abstract: Edge computing (EC) has emerged as an attractive and interesting research topic. It is an extension of cloud computing used for non-centralized computing, with various new features and solutions. In particular, edge computing deals with Internet of Things (IoT) based applications and services. Fog, which is sometimes also referred to as edge, is basically a type of EC model, and its features can support a wide range of IoT applications and services. In this article, we explore the importance of Fog in IoT applications and provide a comprehensive comparison between Fog and cloud. In the process, we also provide practical examples to explain the importance of exploiting each of the properties or attributes of Fog that play a critical role in facilitating new applications. In addition, we propose some novel applications of Fog for protecting privacy in IoT-based applications.

63 citations


Journal ArticleDOI
TL;DR: Binary classification of tweets is performed with machine learning algorithms; the decision tree gives the best results among them, and deep learning may further improve the classification.
Abstract: COVID-19 affected the entire world, in part because no vaccine was initially available. Due to social distancing, online social networks were used massively in pandemic times, and information was shared enormously without knowledge of the authenticity of its source. Propaganda is a type of information shared deliberately to gain political or religious influence: the systematic and deliberate shaping of opinion and influencing of a person's thoughts to achieve the propagandist's desired intention. Various propagandistic messages about the deadly virus were shared during COVID-19. We extracted data from Twitter using its application program interface (API), and annotation was performed manually. Hybrid feature engineering was performed to choose the most relevant features. The binary classification of tweets was performed with machine learning algorithms, and the decision tree gave the best results among all the algorithms. For better results, the feature engineering may be improved, and deep learning can be used for the classification task.

40 citations
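As a toy illustration of tree-based tweet classification, here is a one-level decision stump on bag-of-words features — far simpler than the paper's decision tree, and the documents, labels, and vocabulary are invented:

```python
def best_stump(docs, labels, vocab):
    """Pick the single word whose presence best separates the two classes."""
    best = (None, 0.0)
    for w in vocab:
        preds = [1 if w in d.split() else 0 for d in docs]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        acc = max(acc, 1 - acc)   # the stump may also predict the inverse
        if acc > best[1]:
            best = (w, acc)
    return best

docs = ["share this truth now", "the virus is a hoax share",
        "stay home wash hands", "vaccine trial results published"]
labels = [1, 1, 0, 0]   # 1 = propaganda (toy annotation)
vocab = set(" ".join(docs).split())
word, acc = best_stump(docs, labels, vocab)
print(word, acc)   # share 1.0
```

A full decision tree repeats this greedy split recursively on each resulting subset; hybrid feature engineering would replace raw word presence with richer features.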


Journal ArticleDOI
TL;DR: In this article, an ensemble-based Max Voting Classifier is proposed on top of the three best-performing machine learning classifiers; the proposed model produces enhanced performance with an accuracy score of 99.41%.
Abstract: Healthcare systems around the world face huge challenges in responding to the rise of chronic diseases. The objective of our research is the adaptation of data science and its approaches to the prediction of various diseases in their early stages. In this study we review the latest proposed approaches, a few of their limitations, and possible solutions for future work. The study also shows the importance of finding significant features that improve the results of existing methodologies. This work builds classification models such as Naive Bayes, logistic regression, k-nearest neighbour, support vector machine, decision tree, random forest, artificial neural network, AdaBoost, XGBoost, and gradient boosting. The experimental study chooses groups of features by means of three feature selection approaches: correlation-based selection, information-gain-based selection, and sequential feature selection. Various machine learning classifiers are applied to these feature subsets, and the best feature subset is selected based on their performance. Finally, an ensemble-based Max Voting Classifier is proposed on top of the three best-performing models. The proposed model produces enhanced performance with an accuracy score of 99.41%.

37 citations
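The max-voting step itself is simple to sketch; the three label lists below stand in for the paper's three best-performing classifiers:

```python
from collections import Counter

def max_vote(predictions):
    """Majority vote across classifiers; predictions is a list of
    per-classifier label lists, one label per sample."""
    n_samples = len(predictions[0])
    voted = []
    for i in range(n_samples):
        votes = [p[i] for p in predictions]
        voted.append(Counter(votes).most_common(1)[0][0])
    return voted

# three best-performing classifiers, four samples (toy labels)
clf_a = [1, 0, 1, 1]
clf_b = [1, 0, 0, 1]
clf_c = [0, 0, 1, 1]
print(max_vote([clf_a, clf_b, clf_c]))  # [1, 0, 1, 1]
```

With an odd number of base models on a binary task there are no ties, which is one reason three classifiers is a common choice.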


Journal ArticleDOI
TL;DR: In this article, a solution to safety, quality, and traceability problems in food products is presented, providing healthy electronic food networks based on blockchain technology and the internet of things (IoT).
Abstract: This paper describes a solution to safety, quality, and traceability problems in food products by providing healthy electronic food networks based on blockchain technology and the internet of things (IoT). Deliveries and uses of fake food number in the thousands every year, and the fake-foodstuff system and the stakeholders in the food supply chain (FSC) are not subject to appropriate countermeasures. The current status of food items is recorded at any time and anywhere, with the goal of ensuring the validity of information sources by means of IoT devices. In the framework, data exchange and storage at any stage of the supply chain are enabled by blockchain ledger technology to ensure that data are available, traceable, and unimpaired. Unsafe food is identified directly at any point in the network and its further movement blocked. The FSC is replicated on a Hyperledger Fabric platform and its performance is compared with other methods; it effectively improves data transparency, enhances food safety, and reduces manual operation.

27 citations
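The tamper-evidence the abstract attributes to the blockchain ledger comes from hash-linking records; a minimal stdlib sketch (not Hyperledger Fabric, and with invented sensor records):

```python
import hashlib
import json

def add_block(chain, record):
    """Append a tamper-evident block linking to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"record": record, "prev": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

chain = []
add_block(chain, {"item": "rice batch 17", "temp_c": 4.1, "stage": "cold storage"})
add_block(chain, {"item": "rice batch 17", "temp_c": 4.3, "stage": "transport"})

# tampering with the first record breaks the link to the second block
chain[0]["record"]["temp_c"] = 9.9
recomputed = hashlib.sha256(json.dumps(
    {"record": chain[0]["record"], "prev": chain[0]["prev"]},
    sort_keys=True).encode()).hexdigest()
print(recomputed == chain[1]["prev"])  # False
```

A real permissioned ledger adds consensus and access control on top, but the traceability guarantee rests on exactly this chaining of hashes.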


Journal ArticleDOI
TL;DR: From the current analysis of COVID-19 data, it is observed that the per-day number of infections first follows a linear trend and then increases exponentially; piecewise linear regression is the best-suited model to capture this property.
Abstract: The outbreak of COVID-19 created a disastrous situation in more than 200 countries around the world, so predicting the future trend of the disease in different countries can be useful for managing the outbreak. Several data-driven works have been done for the prediction of COVID-19 cases, using features of past data to predict the future. In this study, a machine learning (ML)-guided linear regression model is used to address different COVID-19 related questions. The linear regression model is fitted to the dataset to deal with the total number of positive cases and the number of recoveries for different states in India, such as Maharashtra, West Bengal, Kerala, Delhi, and Assam. From the current analysis of COVID-19 data, it is observed that the per-day number of infections first follows a linear trend and then increases exponentially. This property has been incorporated into our prediction, and piecewise linear regression is the best-suited model to capture it. The experimental results show the superiority of the proposed scheme, and to the best of our knowledge this is a new approach to the prediction of COVID-19.

26 citations
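Piecewise linear regression with an unknown breakpoint can be fit by scanning candidate breakpoints, as in this sketch on synthetic case counts (not the paper's Indian state data):

```python
import numpy as np

def fit_piecewise(t, y):
    """Fit two linear segments, scanning every interior breakpoint and
    keeping the one with the smallest total squared error."""
    best = (None, np.inf)
    for b in range(2, len(t) - 2):
        p1 = np.polyfit(t[:b], y[:b], 1)
        p2 = np.polyfit(t[b:], y[b:], 1)
        sse = (np.sum((np.polyval(p1, t[:b]) - y[:b]) ** 2) +
               np.sum((np.polyval(p2, t[b:]) - y[b:]) ** 2))
        if sse < best[1]:
            best = (b, sse)
    return best

# synthetic case counts: slow linear growth, then a steeper phase at day 10
t = np.arange(20.0)
y = np.where(t < 10, 2 * t, 25 + 15 * (t - 10))
b, sse = fit_piecewise(t, y)
print(b)  # 10
```

The same scan extends to more segments by recursing on each side of the chosen breakpoint.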


Journal ArticleDOI
TL;DR: A hyperparameter selection strategy for the ARIMA model using a fusion of the differential evolution (DE) and artificial bee colony (ABC) algorithms achieves improved forecasting accuracy along with preserving data trends in multi-step time series forecasting.
Abstract: Time series forecasting is a widely applied approach for sequential data series, including the stock market. Time series forecasting can be examined through single-step-ahead as well as multi-step-ahead forecasting, despite its proven complexity of analysis and limitations in preserving trends. The auto-regressive integrated moving average (ARIMA) is a widely accepted model for time series prediction. In this paper, we propose a hyperparameter selection strategy for the ARIMA model using a fusion of the differential evolution (DE) and artificial bee colony (ABC) algorithms. The modified algorithm retains the exploration and exploitation strategies of these evolutionary algorithms when applied to stock market time series data. The modified ABC with DE optimization induces better generalization and more efficient performance than existing ARIMA models. Experiments are performed over 10 years of data from the Oil Drilling and Exploration and Refineries sectors of the National Stock Exchange and Bombay Stock Exchange, from September 1, 2010 to August 31, 2020. The obtained results demonstrate that the proposed strategy using the modified ABC-ARIMA hybrid model has superior performance to its counterparts, achieving improved forecasting accuracy along with preserving data trends in multi-step time series forecasting.

25 citations
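The evolutionary order search can be caricatured as a greedy mutate-and-keep loop over integer (p, d, q) candidates — a simplified stand-in for the paper's DE + ABC fusion. The fitness function below is also a stand-in for the AIC of a fitted ARIMA model, with an invented optimum:

```python
import numpy as np

def evolve_order(fitness, bounds=(5, 2, 5), pop=12, gens=60, seed=1):
    """Greedy evolutionary search over integer ARIMA orders (p, d, q):
    mutate one component at a time, keep the change only if fitness
    improves (the greedy selection shared by DE and ABC)."""
    rng = np.random.default_rng(seed)
    cand = [tuple(int(rng.integers(0, b + 1)) for b in bounds)
            for _ in range(pop)]
    for _ in range(gens):
        for i, c in enumerate(cand):
            j = int(rng.integers(3))           # coordinate to mutate
            step = int(rng.choice([-1, 1]))
            trial = list(c)
            trial[j] = min(bounds[j], max(0, trial[j] + step))
            trial = tuple(trial)
            if fitness(trial) < fitness(c):    # greedy selection
                cand[i] = trial
    return min(cand, key=fitness)

# stand-in fitness: pretend the AIC is minimised at order (2, 1, 2)
aic = lambda o: (o[0] - 2) ** 2 + (o[1] - 1) ** 2 + (o[2] - 2) ** 2
print(evolve_order(aic))
```

In the real strategy `aic` would be replaced by fitting an ARIMA(p, d, q) model to the stock series and returning its information criterion.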


Journal ArticleDOI
TL;DR: From the experimental results, it is concluded that the accuracy of a CNN model can be improved by altering the filter size, which helps the CNN learn optimal values for variable-sized parameters and supports the tuning of different hyperparameters.
Abstract: A sign language recognition system is an attempt to help the speech- and hearing-impaired community. The biggest challenge is to recognize a sign accurately, which can be achieved by training computers to identify the signs. The accuracy depends on the methods used for classification and prediction, which is achieved through machine learning. This research proposes the recognition of American Sign Language using a Support Vector Machine (SVM) and a Convolutional Neural Network (CNN). In this work we have also calculated the optimal filter size for single- and double-layer Convolutional Neural Networks. In the first phase, features are extracted from the dataset. After applying various preprocessing techniques, a Support Vector Machine with four different kernels, i.e., 'poly', 'linear', 'rbf', and 'sigmoid', and Convolutional Neural Networks with single and double layers are applied to the training dataset to train the model. Finally, accuracy is calculated and compared for both techniques. In the CNN, filters of different sizes have been used and the optimal filter size has been found. The experimental results establish that the double-layer Convolutional Neural Network achieves an accuracy of 98.58%. The optimal filter size is found to be 8 × 8 for both the single- and double-layer Convolutional Neural Networks. From the experimental results we conclude that the accuracy of a CNN model can be improved by altering the filter size; this also helps the CNN learn optimal values for variable-sized parameters and supports the tuning of different hyperparameters.

25 citations
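The effect of filter size on a convolutional layer follows from standard shape and parameter arithmetic; a quick sketch (the 64 × 64 input and 32 filters are assumptions, not the paper's configuration):

```python
def conv2d_shape(h, w, k, stride=1, padding=0):
    """Output height/width of a conv layer: (H - K + 2P) // S + 1."""
    return ((h - k + 2 * padding) // stride + 1,
            (w - k + 2 * padding) // stride + 1)

def conv2d_params(k, c_in, c_out):
    """Learnable parameters: one K x K kernel per (in, out) channel
    pair, plus one bias per output channel."""
    return k * k * c_in * c_out + c_out

# a 64 x 64 grayscale sign image through one layer with 32 filters
for k in (3, 5, 8):
    print(k, conv2d_shape(64, 64, k), conv2d_params(k, 1, 32))
```

Larger filters like the 8 × 8 the paper settles on see more of the hand per activation but cost quadratically more parameters, which is why filter size is worth searching over.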


Journal ArticleDOI
TL;DR: In this article, the authors propose a smart air quality monitoring system that helps in preventing and controlling airborne allergies and in reducing the burden of disease and the cost of treatment, using meta-heuristic and machine learning algorithms.
Abstract: An air quality monitoring system systematically monitors the level of pollutants in the air by measuring the concentration of each pollutant in the surrounding and outside environment. The strategy for developing such a monitoring system should ensure acceptable data quality, the storage and recording of the data in a database, the analysis of the data, and the presentation of the results. In this regard, we propose a smart air quality monitoring system that helps in preventing and controlling airborne allergies and in reducing the burden of disease and the cost of treatment. Healthcare data layers and healthcare APIs for standardizing smart health predictive analytics were derived using meta-heuristic and machine learning algorithms. This work mainly focuses on improving existing expert systems for air quality monitoring: a detailed study of the various terms related to air quality monitoring has been carried out, and a new approach that gives better outcomes is proposed. In the proposed work, meta-heuristic firefly optimization is applied to optimize the selected features during the feature selection process, and the result is then classified using a support vector machine, which predicts the index level and gives a better precision of 95.7%, recall of 93.1%, and accuracy of 94.4% compared with existing approaches.
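A binary firefly search over feature subsets can be caricatured as bit masks copying bits from brighter (fitter) masks; this is a simplification of the paper's firefly optimization, and the fitness below is a stand-in for the SVM cross-validation score it would actually use:

```python
import numpy as np

def firefly_feature_select(fitness, n_features, n_fireflies=8, gens=40, seed=2):
    """Binary firefly sketch: each firefly is a feature mask; dimmer
    fireflies copy bits from the brightest one and mutate a little."""
    rng = np.random.default_rng(seed)
    flies = rng.integers(0, 2, size=(n_fireflies, n_features))
    for _ in range(gens):
        light = np.array([fitness(f) for f in flies])
        brightest = flies[light.argmax()].copy()
        for i in range(n_fireflies):
            if light[i] < light.max():
                copy = rng.random(n_features) < 0.5
                flies[i, copy] = brightest[copy]      # attraction move
                flip = rng.random(n_features) < 0.1   # random exploration
                flies[i, flip] ^= 1
    light = np.array([fitness(f) for f in flies])
    return flies[light.argmax()]

# stand-in fitness: features 0, 2, 5 are "informative", extras penalised
informative = {0, 2, 5}
def fitness(mask):
    chosen = set(np.flatnonzero(mask))
    return len(chosen & informative) - 0.5 * len(chosen - informative)

best = firefly_feature_select(fitness, n_features=8)
print(sorted(np.flatnonzero(best)))
```

Swapping the stand-in fitness for an SVM validation score yields the wrapper-style selection the abstract describes.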

Journal ArticleDOI
TL;DR: In this paper, the nutrient deficiency of a paddy crop is considered, and a fair prediction accuracy of 76–77% was observed with two-tiered machine learning models.
Abstract: Nutrient deficiency analysis is essential to ensure a good yield. The crop yield depends on the nutrient contents, which drastically affect the health of the crop. In this paper, the nutrient deficiency of a paddy crop is considered. TensorFlow (Google's machine learning library) is used to build a neural network that classifies leaves independently into nitrogen, potassium, or phosphorus deficiencies, or healthy. It is necessary to have an optimal balance between nitrogen, potassium, and phosphorus content. The TensorFlow model identifies the deficiency using a set of images. The result is fed to a "machine learning driven layer" to estimate the level of deficiency on a quantitative basis, specifically using the k-means clustering algorithm. It is then evaluated through a rule matrix to estimate the cropland's yield. A fair prediction accuracy of 76–77% was observed with the two-tiered machine learning models.
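The second tier's k-means step, grouping leaves into quantitative deficiency levels, can be sketched in one dimension; the "greenness scores" are invented, and quantile initialisation keeps the toy deterministic:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain k-means on scalar scores (e.g. mean leaf greenness)."""
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # assign each value to its nearest center, then recenter
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return np.sort(centers)

# toy greenness scores clustering into low / medium / high deficiency
scores = [0.1, 0.12, 0.15, 0.5, 0.52, 0.9, 0.88, 0.91]
print(kmeans_1d(scores))
```

The three resulting centers act as the quantitative "levels" that the rule matrix then maps to a yield estimate.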

Journal ArticleDOI
TL;DR: An algorithm using random key generation, the assignment of special codes to the most frequently occurring colors, and the encoding of ranges of repeated colors is incorporated to decrease the text file size and add a layer of security; the DNA components A, T, G, and C decrease the chance of the hidden data being recognized.
Abstract: Data are vulnerable and need the utmost security while being transferred. In image-encryption methodologies, the pixels of an original image are either manipulated or information is hidden inside the image, using the image as a cover to protect the data from undesired receivers. In this paper, text files are transferred as images over an unsecured network. We use an algorithm in which random key generation, the assignment of special codes to the most frequently occurring colors, and the encoding of ranges of repeated colors are incorporated to decrease the text file size and add a layer of security; in addition, the DNA components A, T, G, and C decrease the chance of the hidden data being recognized.
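Mapping data onto the DNA alphabet A, T, G, C is commonly done two bits per base; the abstract does not specify its mapping, so the one below is an assumed illustration:

```python
BASES = "ATGC"   # assumed 2-bit mapping: A=00, T=01, G=10, C=11

def to_dna(data: bytes) -> str:
    """Encode each byte as four DNA bases (2 bits per base)."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(b >> shift) & 0b11])
    return "".join(out)

def from_dna(seq: str) -> bytes:
    """Invert the encoding: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

msg = b"Hi"
print(to_dna(msg))            # TAGATGGT
print(from_dna(to_dna(msg)))  # b'Hi'
```

The DNA layer adds obscurity rather than cryptographic strength; the random key and color-code substitution described above carry the actual security.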

Journal ArticleDOI
TL;DR: The proposed framework is evaluated with a case study on data harvesting in rural areas; its efficacy will significantly drive the integration of MANETs and WSNs in the future internet using smart objects.
Abstract: The Internet of Things (IoT) has become an emerging platform for connecting real-world objects for desired service provisioning via the Internet. Real-world devices or objects in the IoT enable a smart environment with the help of Mobile Ad hoc Networks (MANETs) and Wireless Sensor Networks (WSNs). However, the integration of MANETs and WSNs in the IoT environment requires a flexible and unified framework for efficient and secure communication between intelligent objects or devices. State-of-the-art works in this direction lack a standard communication framework for the effective implementation of MANETs with WSNs. In this paper, an efficient and secure communication framework for interconnecting MANET and WSN devices is presented. The proposed communication framework comprises different phases: collecting data using sensors, sending the data to data processing centers via mobile access points, and finally forwarding the data to data centers for analysis, decision making, and reporting. The proposed integration solution is discussed in detail and evaluated with a case study on data harvesting in rural areas. The reported efficacy of the proposed framework will significantly drive the integration of MANETs and WSNs in the future internet using smart objects.

Journal ArticleDOI
TL;DR: In this article, a nested ensemble model using deep learning methods based on long short-term memory (LSTM) is proposed and evaluated on intensive-care Covid-19 confirmed and death cases of India with different classification metrics.
Abstract: The pandemic of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is spreading all over the world. Medical health care systems urgently need to diagnose this pandemic with the support of new emerging technologies like artificial intelligence (AI), the internet of things (IoT), and big data systems. This study divides our research in two ways. First, a review of the literature is carried out on the Elsevier, Google Scholar, Scopus, PubMed, and Wiley Online databases using the keywords Coronavirus, Covid-19, artificial intelligence on Covid-19, and Coronavirus 2019 to collect the latest information about Covid-19. Possible applications are identified from this review to enhance future research. We found various databases, websites, and dashboards working on real-time extraction of Covid-19 data, which will help future research easily locate the available information. Second, we designed a nested ensemble model using deep learning methods based on long short-term memory (LSTM). The proposed Deep-LSTM ensemble model is evaluated on intensive-care Covid-19 confirmed and death cases of India with different classification metrics, such as accuracy, precision, recall, F-measure, and mean absolute percentage error. Medical healthcare facilities are boosted by the intervention of AI, as it can mimic human intelligence: contactless treatment is possible only with the help of AI-assisted automated health care systems, and remote self-treatment is one of the key benefits provided by AI-based systems.
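The mean absolute percentage error used to evaluate the ensemble, and the usual benefit of averaging sub-model forecasts, can be sketched with invented numbers (these are not the paper's Indian case data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# hypothetical daily confirmed-case forecasts from three sub-models
actual = np.array([100.0, 120.0, 150.0, 190.0])
preds = np.array([[ 90.0, 125.0, 140.0, 200.0],
                  [110.0, 115.0, 160.0, 180.0],
                  [100.0, 130.0, 145.0, 195.0]])
ensemble = preds.mean(axis=0)   # simple averaging ensemble
print(mape(actual, ensemble))
```

In this toy case the averaged forecast has a lower MAPE than any single sub-model, which is the motivation for nesting an ensemble on top of the LSTM learners.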

Journal ArticleDOI
TL;DR: The proposed sentiment analysis of Marathi e-news will help online readers choose positive news, avoiding the depression that may be caused by reading negative news.
Abstract: Sentiment analysis of online content related to e-news, products, services, etc., has become very important in this digital era in order to improve the quality of the services provided. The proposed sentiment analysis of Marathi e-news will help online readers choose positive news, avoiding the depression that may be caused by reading negative news. The system can also be used to filter news before it is uploaded online. Machine-learning-based, knowledge-based, and hybrid approaches are the three approaches used to perform sentiment analysis of text, audio, and emotions. The proposed system performs polarity-based sentiment analysis of e-news in Marathi. Marathi ranks third among the most spoken languages in India, yet computationally it is a low-resource language. To compute the polarity of the Marathi e-news text, LSTM, a deep learning algorithm, is used. The model identifies the polarity with an accuracy of 72%.

Journal ArticleDOI
TL;DR: ACs generated using Particle Swarm Optimization and Simplified Swarm Optimization are used in the encryption and decryption processes of RSA and ECC on two Android and Windows emulators, and the processing time, power consumption, and security of these algorithms are analysed.
Abstract: Security is a major concern in mobile and portable devices, because the internet community can work at any time and in any place. Today, various cryptographic algorithms like RSA and Elliptic Curve Cryptography (ECC) can be used to protect the information on mobile devices, but such devices have limitations in energy, battery power, processing speed, operating systems, screen size, resolution, memory size, etc. Providing security for limited-power mobile devices is a challenging task. RSA and ECC are commonly used on mobile devices. In RSA, both encryption and decryption are of the form x^e mod n, and in ECC the scalar point k[P], where k is a scalar and P is a point on the elliptic curve, plays a vital role in performing encryption and decryption. The point arithmetic involved in ECC is a power-hungry process. To speed up the operations in both cryptographic algorithms, addition chains (ACs) are normally used: if the encryption and decryption times are reduced, power consumption is ultimately reduced as well. Several AC algorithms exist in the literature; in this paper, ACs generated using Particle Swarm Optimization and Simplified Swarm Optimization are proposed and used in the above processes of RSA and ECC on two Android and Windows emulators. The processing time and power consumption of the encryption and decryption processes, and the security of these algorithms, are also analysed.
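An addition chain turns exponentiation into a precomputed multiplication schedule; a sketch using the classic chain for the exponent 15, which needs five modular multiplications where binary square-and-multiply needs six (the swarm-optimization step in the paper is what finds short chains like this):

```python
def chain_pow(x, chain, n):
    """Compute x**chain[-1] mod n using an addition chain.
    chain starts at 1 and every element is a sum of two earlier
    elements, so each step costs one modular multiplication."""
    powers = {1: x % n}
    for e in chain[1:]:
        i, j = next((i, e - i) for i in chain
                    if i < e and i in powers and (e - i) in powers)
        powers[e] = (powers[i] * powers[j]) % n
    return powers[chain[-1]]

# 1 -> 2 -> 3 -> 6 -> 12 -> 15: five multiplications for x^15
chain = [1, 2, 3, 6, 12, 15]
print(chain_pow(7, chain, 1000))  # equals pow(7, 15, 1000)
```

For the huge exponents of RSA the chain is much longer, and every multiplication shaved off the chain directly saves device power, which is the paper's motivation.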

Journal ArticleDOI
TL;DR: This paper proposes a model for machine translation of Sanskrit sentences into English using recurrent neural networks, and uses a Support Vector Machine classifier to find the English word for a Sanskrit word in sentences with more than 5 words.
Abstract: Sanskrit, one of the oldest human languages, has an algorithmic, calculus-like grammar that resembles a computer program. In this paper, we propose a model for machine translation of Sanskrit sentences into English using recurrent neural networks. We have trained our recurrent neural network on sequence-to-sequence examples, accounting for infrequent cases like extra-long sentences and unusual words; special weight factors are used during training to account for unusual words. We have employed parallel analysis of all words in the source sentence to speed up the translation. The novelty in this work is the use of a two-pronged approach to find the most suitable target-language word for a word in the source language. For simple words, consisting of a single word or a conjunction of at most two words, we use a partial bilingual dictionary, and for words formed by a conjunction of three or more words we use the output of a machine learning classifier module as the target word. For simple sentences of up to 5 words, the combination of a partial dictionary and a classifier improves the translation speed by 30% compared with using a full dictionary only, and accuracy by 10% compared with using only the classifier output for translation. We used a Support Vector Machine classifier to find the English word for a Sanskrit word in sentences with more than 5 words, and it produced 10% more accurate translations than a Naive Bayes classifier.
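The two-pronged word lookup can be sketched as dictionary-first with a classifier fallback; the dictionary entries and the classifier stub below are invented placeholders for the paper's partial bilingual dictionary and SVM module:

```python
def translate_word(word, dictionary, classifier):
    """Two-pronged lookup: partial bilingual dictionary first,
    classifier fallback for compound words it does not cover."""
    if word in dictionary:
        return dictionary[word], "dictionary"
    return classifier(word), "classifier"

# toy partial dictionary and a stub standing in for the SVM classifier
dictionary = {"agni": "fire", "jala": "water"}
classifier = lambda w: "<svm-prediction:%s>" % w

print(translate_word("agni", dictionary, classifier))
print(translate_word("dharmakshetra", dictionary, classifier))
```

The speed gain reported above comes from the first branch: the small dictionary resolves common words in constant time, so the expensive classifier runs only on the long compounds.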

Journal ArticleDOI
TL;DR: In this article, the authors propose an effective approach for the recognition and identification of rice plant diseases based on the size, shape, and color of lesions in the leaf image, which achieves an accuracy of 99.7% on the dataset.
Abstract: Agriculture is one of the major revenue-producing fields and a source of livelihood in India. Across the largest regions of India, rice is cultivated as an essential food, and rice crops are strongly affected by diseases that cause major losses in the agriculture sector. Plant pathologists are searching for an accurate and reliable diagnosis method for rice plant diseases. Machine learning has been used effectively in various areas of crop remote sensing, particularly in the classification of crop diseases, and deep learning is currently a hot research topic for crop disease identification. In this research, we developed an efficient rice plant disease detection method based on a convolutional neural network approach. This paper focuses mainly on three well-known rice diseases: leaf smut and brown spot, caused by fungi, and bacterial leaf blight, caused by bacteria. The article proposes an effective approach for the recognition and identification of rice plant diseases based on the size, shape, and color of lesions in the leaf image. The proposed model applies Otsu's global thresholding technique to binarize the image and remove background noise. The proposed method, based on a fully connected CNN, was trained using 4000 image samples of each diseased leaf type and 4000 image samples of healthy rice leaves, to detect the three rice diseases. The analysis of the results shows that the proposed fully connected CNN is a fast and effective approach, providing an accuracy of 99.7% on the dataset, significantly higher than that of existing plant disease recognition and classification approaches.
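Otsu's global thresholding, the binarization step above, maximises the between-class variance over all gray-level cuts; a compact numpy sketch on an invented bimodal "leaf" patch:

```python
import numpy as np

def otsu_threshold(image, levels=256):
    """Otsu's global threshold: pick the gray-level cut that maximises
    the between-class variance w0 * w1 * (mu0 - mu1)^2."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, levels):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, levels) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# bimodal toy patch: dark lesion pixels (~40) on bright tissue (~200)
img = np.array([[40, 42, 41, 198], [39, 200, 202, 199], [41, 40, 201, 200]])
t = otsu_threshold(img)
print(t)  # 43, a cut just above the dark mode
```

Pixels below the threshold form the lesion mask whose size, shape, and color then feed the CNN classifier.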

Journal ArticleDOI
TL;DR: An Arabic Sign Language (ArSL) recognition system using a Leap Motion Controller and Latte Panda is introduced; the experimental results show that DTW achieved accuracies of 88% for single-hand gestures and 86% for double-hand gestures, and the proposed model's recognition rate reached 92.3% and 93%, respectively, after applying Ada-Boosting.
Abstract: According to the World Health Organization (WHO), 466 million people suffer from hearing loss, i.e., 5% of the world population, of whom 432 million (93%) are adults and 34 million (7%) are children. The main problems are how the deaf and hearing-impaired communicate with other people and each other, and how they get an education or carry out their daily activities. Sign language is their main communication method. Building an automatic hand gesture recognition system entails many challenges, especially for Arabic; solving the recognition problem and practically developing a real-time recognition system is a further challenge. Several studies have been conducted on sign language recognition systems, but those for Arabic Sign Language are very limited. In this paper, an Arabic Sign Language (ArSL) recognition system that uses a Leap Motion Controller and a Latte Panda is introduced. The recognition phase depends on two machine learning algorithms: (a) KNN (k-Nearest Neighbors) and (b) SVM (Support Vector Machine). An Ada-Boosting technique is then applied to enhance the accuracy of both algorithms, and a direct matching technique, DTW (Dynamic Time Warping), is applied and compared with AdaBoost. The proposed system is applied to 30 hand gestures: 20 single-hand gestures and 10 double-hand gestures. The experimental results show that DTW achieved an accuracy of 88% for single-hand gestures and 86% for double-hand gestures. Overall, the proposed model's recognition rate reached 92.3% for single-hand gestures and 93% for double-hand gestures after applying Ada-Boosting. Finally, a prototype of our model was implemented on a single board (Latte Panda) to increase the system's reliability and mobility.
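Dynamic Time Warping, the direct matching technique compared with AdaBoost above, can be implemented in a few lines; the gesture traces below are invented one-dimensional stand-ins for Leap Motion features:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# the same gesture trace performed at two speeds still matches exactly
template = [0, 1, 2, 3, 2, 1, 0]
slow     = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
other    = [3, 3, 0, 0, 3, 3, 0]
print(dtw_distance(template, slow))   # 0.0
print(dtw_distance(template, other))  # much larger
```

This invariance to performance speed is why DTW is a natural baseline for gesture matching before the learned KNN/SVM models.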

Journal ArticleDOI
TL;DR: The proposed optimal cluster head selection method chooses as CH the node with minimal end-to-end delay and congestion index; it outperforms the existing protocol in terms of minimized energy consumption, delay, and packet drop ratio, and high throughput.
Abstract: Achieving the best quality as per user requirements is one of the important challenges in a Wireless Sensor Network (WSN) containing numerous sensor nodes with limited battery power. Time-critical applications in WSNs demand energy-efficient and reliable transmission of data with limited resource availability. To resolve these issues, our proposed optimal cluster head selection method chooses as cluster head (CH) the node with minimal end-to-end delay and congestion index: the nodes with the maximum remaining energy that are not subject to delay and heavy traffic are selected as CH for the next round. The inter-cluster routing path is constructed based on a cost function of energy, delay, and congestion index, to achieve aggregated bandwidth through the establishment of multiple paths. MQoScMR supports adaptive CH and path selection, tuning QoS parameters according to application requirements. The proposed technique outperforms the existing protocol in terms of minimized energy consumption, delay, and packet drop ratio, and high throughput.
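Cost-based cluster head selection of the kind described can be sketched as a weighted score over residual energy, delay, and congestion index; the weights and node values below are invented, not the paper's cost function:

```python
def select_cluster_head(nodes, w_energy=0.5, w_delay=0.3, w_congestion=0.2):
    """Pick the node with maximal residual energy and minimal delay
    and congestion index (higher score = better cluster head)."""
    def score(n):
        return (w_energy * n["energy"]
                - w_delay * n["delay"]
                - w_congestion * n["congestion"])
    return max(nodes, key=score)["id"]

nodes = [
    {"id": "n1", "energy": 0.9, "delay": 0.4, "congestion": 0.7},
    {"id": "n2", "energy": 0.8, "delay": 0.1, "congestion": 0.1},
    {"id": "n3", "energy": 0.5, "delay": 0.2, "congestion": 0.2},
]
print(select_cluster_head(nodes))  # n2
```

Note that n1 has the most energy but loses to n2 once delay and congestion are weighed in, which is exactly the trade-off the abstract argues for.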

Journal ArticleDOI
TL;DR: This paper builds upon the concepts of blockchain, smart cities, and the InterPlanetary File System, and further explores possibilities of realizing blockchain-enabled smart cities on the InterPlanetary File System architecture, concluding with the challenges ahead.
Abstract: Blockchain has emerged as one of the finest and most promising technologies of the last decade. Bitcoin started the journey in 2008, and blockchain technology soon paved the way well beyond cryptocurrencies in the form of smart contract deployment, permissioned blockchains, Hyperledger, Ethereum, and an endless list of evolving variants. This expansion of implementations has reached domains hitherto unthought-of. One such domain with a definite future connection is the smart cities evolving across countries. The smart city concept aims to step up functional efficiency, share information with users, and improve citizen welfare, enabled by information and communication technologies (ICT). The rapid growth of smart cities has thus thrown multiple challenges at the widely used traditional ways of ensuring seamless, secure, and robust exchange of information between devices and entities in the smart city ecosystem. This paper builds upon the concepts of blockchain, smart cities, and the InterPlanetary File System, and further explores possibilities of realizing blockchain-enabled smart cities on the InterPlanetary File System architecture, concluding with the challenges ahead. The proposed architecture is simulated, with results, in a limited environment.

Journal ArticleDOI
TL;DR: In this article, a novel audio encryption method is proposed to provide information security, in which the outputs of a chaotic Henon map and a tent map are XORed to create the pseudo-random number sequence used as the secret key.
Abstract: Encryption algorithms based on chaos theory are frequently used due to their sensitivity to initial conditions and control parameters, and their pseudo-randomness. They are very useful for encrypting images, audio, or video. In this paper, a novel audio encryption method is proposed to provide information security. Firstly, a one-dimensional uncompressed 16-bit audio file of even length is taken. Then the outputs of a chaotic Henon map and a tent map are XORed to create the pseudo-random number sequence used as the secret key. Next, the secret key is XORed with the original audio file to produce the cipher audio file. The encrypted audio file can be used in the e-learning process to create an audio password serving as login credentials instead of an invitation link. Security analysis shows that our encryption method is better than other existing methods in many aspects, with less time.
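The keystream construction described here can be sketched directly. The map parameters and seeds below are standard textbook values (Henon a=1.4, b=0.3; tent mu=1.9), not necessarily the authors' choices, and the quantization to 16 bits is an illustrative assumption:

```python
# Two chaotic streams are XORed into a keystream, which is then XORed
# with the 16-bit audio samples. XOR is an involution, so the same
# function both encrypts and decrypts.
def henon_stream(n, x=0.1, y=0.3, a=1.4, b=0.3):
    out = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x   # Henon map update
        out.append(int(abs(x) * 65535) & 0xFFFF)  # quantize to 16 bits
    return out

def tent_stream(n, x=0.4, mu=1.9):
    out = []
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1 - x)   # tent map update
        out.append(int(x * 65535) & 0xFFFF)
    return out

def encrypt(samples):
    """XOR 16-bit audio samples with the combined chaotic keystream."""
    ks = [h ^ t for h, t in zip(henon_stream(len(samples)),
                                tent_stream(len(samples)))]
    return [s ^ k for s, k in zip(samples, ks)]
```

Because encryption is a keystream XOR, applying `encrypt` twice with the same seeds recovers the original samples.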

Journal ArticleDOI
TL;DR: The relevance of Real-Time Monitoring (RTM) as a supplementary security component of vigilantism in modern network environments is examined, especially for proper planning, preparedness, and mitigation in case of a cybersecurity incident.
Abstract: The phenomenon of network vigilantism refers to how anomalies and obscure adversary activities can be tracked autonomously in real time. Needless to say, in today's dynamic, virtualized, and complex network environments, it has become undeniably necessary for network administrators, analysts, and engineers to practice network vigilantism on traffic and other network events in real time. The reason is to understand the exact security posture of an organization's network environment at any given time. This is driven by the fact that modern network environments not only present new opportunities to organizations but also a different set of new and complex cybersecurity challenges that need to be resolved daily. The growing size, scope, complexity, and volume of networked devices in modern network environments also makes it hard even for the most experienced network administrators to independently provide the breadth and depth of knowledge needed to oversee or diagnose complex network problems. Besides, with the growing number of Cyber Security Threats (CSTs) in the world today, many organizations have been forced to change the way they plan, develop, and implement cybersecurity strategies to reinforce their ability to respond to cybersecurity incidents. This paper therefore examines the relevance of Real-Time Monitoring (RTM) as a supplementary security component of vigilantism in modern network environments, especially for proper planning, preparedness, and mitigation in case of a cybersecurity incident. Additionally, it investigates some of the key issues and challenges surrounding the implementation of RTM for security vigilantism in modern network environments.

Journal ArticleDOI
TL;DR: An alternative approach for the streamlined physical design of quantum-dot cellular automata (QCA) full-adder circuits in which the placement of input cells and wire crossing congestion are substantially reduced.
Abstract: Nowadays, arithmetic computing is an important subject in computer architectures, in which the one-bit full-adder gate plays a significant role. Thus, an efficient design of such a full-adder component can benefit the overall efficiency of the entire system. In this paper, a novel method for the design and simulation of a combined majority gate toward realization of the one-bit full-adder gate is proposed. We inspect an alternative approach for the streamlined physical design of quantum-dot cellular automata (QCA) full-adder circuits in which the placement of input cells and wire-crossing congestion are substantially reduced. The proposed method has outstanding characteristics such as low complexity, reduced area consumption, simplified physical design, and ultra-high speed. Based on simulation results, the proposed design provides a 33.33% reduction in area and a 20.00% improvement in complexity, as well as a 10.49% reduction in power consumption.
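The logic that QCA full adders typically realize is the classic three-majority-gate, two-inverter construction. The behavioural sketch below models that logic only, not the paper's cell-level layout, which is the actual contribution:

```python
# Majority-gate full adder: Cout = Maj(A, B, Cin) and
# Sum = Maj(not Cout, Cin, Maj(A, B, not Cin)).
def maj(a, b, c):
    """Three-input majority gate, the logic primitive of QCA circuits."""
    return (a & b) | (a & c) | (b & c)

def full_adder(a, b, cin):
    cout = maj(a, b, cin)
    s = maj(1 - cout, cin, maj(a, b, 1 - cin))
    return s, cout
```

Checking all eight input combinations confirms that `s + 2*cout` always equals `a + b + cin`.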

Journal ArticleDOI
TL;DR: In this paper, the authors propose a trust matrix measure that combines user similarity with weighted trust propagation; it can be used for movie recommendation and achieved a high accuracy of 83% with a 0.74 MSE value.
Abstract: Recommender systems (RSs) are one answer to the information overload problem suffered by users of websites that allow the rating of particular items. Movie RSs are among the most efficient, useful, and widespread applications, helping individuals choose a movie with minimal decision time. Researchers have made many attempts to support such tasks, like watching a movie or purchasing a book, through RSs, but most studies fail to address the cold-start problem, data sparsity, and malicious attacks. To address these problems, we propose in this paper a trust matrix measure that combines user similarity with weighted trust propagation. Non-cold users are passed through different models with a trust filter, while a cold user's optimal score is generated from their stated preferences for recommendation. Four different recommendation models, namely a Backpropagation (BPNN) model, an SVD (Singular Value Decomposition) model, a DNN (Deep Neural Network) model, and a DNN with trust, were compared to recommend suitable movies to the user. Results imply that the DNN with trust proved to be the best model, with a high accuracy of 83% and a 0.74 MSE value, and can be used for movie recommendation.

Journal ArticleDOI
TL;DR: The experimental results demonstrate that both variants of EES-WCA are useful and classify seven different kinds of patterns, and the performance in terms of network quality of service, such as packet delivery rate, throughput, end-to-end delay, and energy consumption, confirms the superiority of the EES-WCA algorithm.
Abstract: The Wireless Sensor Network (WSN) is an application-centric network, where data is collected using sensor nodes and communicated to the server or base station to process the raw data and obtain decisions. For this, it is essential to maintain efficiency and security to serve critical applications. To meet this requirement, most existing techniques modify routing to secure the network against one or two attacks, but there are significantly fewer solutions that can face multiple kinds of attacks. Therefore, this paper proposes a data-driven and machine learning-based Energy Efficient and Secure Weighted Clustering Algorithm (EES-WCA). EES-WCA is a combination of EE-WCA and a machine learning-based centralized Intrusion Detection System (IDS). The technique first creates network clusters and then, without disturbing the WSN's routine activity, collects traffic samples at the base station. The base station hosts two machine learning models, a Support Vector Machine (SVM) and a Multi-Layer Perceptron (MLP), to classify the traffic data and identify malicious nodes in the network. The technique is validated on traffic generated with the NS2.35 simulator and is also examined in real-time scenarios. The experimental results demonstrate that both variants of EES-WCA are useful and classify seven different kinds of patterns. According to the simulation results on validation test data, we found up to 90% detection accuracy. Additionally, in real-time scenarios, it achieves approximately 75%. The performance of EES-WCA in terms of network quality of service, such as packet delivery rate, throughput, end-to-end delay, and energy consumption, confirms the superiority of the EES-WCA algorithm.
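The centralized classification step at the base station can be illustrated with a self-contained stand-in. The paper uses SVM and MLP models; the minimal perceptron below, with invented two-feature traffic vectors (e.g. normalized packet rate and drop ratio), only shows the shape of the idea of separating normal from malicious node behaviour:

```python
# Train a single perceptron on labelled traffic feature vectors collected
# at the base station, then flag new nodes as normal (0) or malicious (1).
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: 0 (normal) / 1 (malicious)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```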

Journal ArticleDOI
TL;DR: This paper presents a hybrid multi-criteria decision-making (H-MCDM) algorithm to find a solution by considering different conflicting QoS criteria and finds the CSP that holds the maximum and minimum values of these criteria, respectively.
Abstract: In recent years, cloud computing has become an attractive research topic for its emerging issues and challenges. Not only in research: enterprises are also rapidly adopting cloud computing because of its numerous profitable services. Cloud computing provides a variety of quality of service (QoS) guarantees and allows its users to access these services in the form of infrastructure, platform, and software on a subscription basis. Due to its flexible nature and huge benefits, the demand for cloud computing is rising day by day. As a consequence, many cloud service providers (CSPs) now offer services in the cloud market. Therefore, it becomes significantly cumbersome for cloud users to select an appropriate CSP, especially when considering various QoS criteria. This paper presents a hybrid multi-criteria decision-making (H-MCDM) algorithm to find a solution by considering different conflicting QoS criteria. The proposed algorithm takes advantage of two well-known MCDM algorithms, namely analytic network process (ANP) and VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), to select the best CSP or alternative. Here, ANP is used to categorize the criteria into subnets and find the local rank of the CSPs in each subnet, followed by VIKOR, to find the global rank of the CSPs. H-MCDM considers both beneficial and non-beneficial criteria and finds the CSP that holds the maximum and minimum values of these criteria, respectively. We demonstrate the performance of H-MCDM using a real-life test case (case study) and compare the results to show the efficacy. Finally, we perform a sensitivity analysis to show the robustness and stability of our algorithm.
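The VIKOR ranking step can be sketched on its own. The decision matrix, weights, and criteria in the usage below are invented, and the ANP subnet aggregation that precedes VIKOR in H-MCDM is omitted for brevity:

```python
# Standard VIKOR: compute per-criterion best/worst values, the utility S
# and regret R of each alternative, then the compromise index Q
# (lower Q = better rank). Assumes no criterion column is constant.
def vikor(matrix, weights, beneficial, v=0.5):
    """matrix rows are alternatives; beneficial[j] marks max-is-better criteria."""
    ncrit = len(weights)
    best, worst = [], []
    for j in range(ncrit):
        col = [row[j] for row in matrix]
        best.append(max(col) if beneficial[j] else min(col))
        worst.append(min(col) if beneficial[j] else max(col))
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 for j in range(ncrit)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    return [v * (S[i] - s_best) / (s_worst - s_best)
            + (1 - v) * (R[i] - r_best) / (r_worst - r_best)
            for i in range(len(matrix))]
```

For example, with three hypothetical CSPs scored on availability (beneficial) and cost (non-beneficial), the CSP with the lowest Q is the compromise choice.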

Journal ArticleDOI
TL;DR: This paper analyses each of the above methods, both in their default form and with feature engineering techniques such as transformation onto principal component axes, considering different train-test folds to find the best-performing model, which was found to be KNN in terms of all metrics and Logistic Regression in terms of accuracy.
Abstract: With the frequent decline in people's health due to hectic lifestyles, increased workloads, and the intake of fast food, there has been an unfortunate growth in the number of patients suffering from cardiovascular diseases each year. Around the world, millions of people die each year due to cardiovascular diseases. While the statistics are eye-opening, with the vast amount of data about heart patients in our hands, we can save millions by detecting the risk at an early stage. With the recent advances in soft computing and fuzzy logic, various algorithmic approaches are employed to tackle the issue of cardiovascular risk assessment through machine learning. Using machine learning algorithms such as Logistic Regression (LR), Naive Bayes (NB), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbours (KNN) classifiers, a model can be built to predict the risk accurately. In this paper, we have analysed each of the above methods, both in their default form and with feature engineering techniques such as transformation onto principal component axes, considering different train-test folds to find the best-performing model, which was found to be KNN in terms of all metrics and Logistic Regression in terms of accuracy.
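The KNN classifier that performed best in this study reduces to a few lines of distance-and-vote logic. The sketch below is self-contained and illustrative; the feature vectors (e.g. age and cholesterol scaled to [0, 1]) and labels in the usage are invented, not records from the study's dataset:

```python
# Majority vote among the k nearest training records (Euclidean distance).
from collections import Counter
from math import dist  # Python 3.8+

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; returns majority label."""
    neighbours = sorted(train, key=lambda rec: dist(rec[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

In practice the features would first be standardized (or projected onto principal component axes, as the paper does), since KNN is sensitive to feature scale.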

Journal ArticleDOI
TL;DR: This paper provides a fuzzy-based approach to path planning, in addition to steering a robot out of a local-minimum configuration, with a comparative analysis of some non-fuzzy approaches.
Abstract: Autonomous mobile robot path planning is considered a complex task in unknown, highly uncertain, and dynamic environments. The presence of diverse and uncertain objects in the environment makes autonomous navigation of a mobile robot a difficult and expensive task. The artificial potential field (APF) provides a good and easy solution to the path planning problem, resulting in collision-free navigation of a mobile robot from source to destination. One of the inherent problems in APF-based path planning is the local minima problem. This paper provides a fuzzy-based approach to path planning, in addition to steering a robot out of a local-minimum configuration, with a comparative analysis of some non-fuzzy approaches.
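The APF force computation underlying this problem can be sketched in 2-D. The gains `k_att`, `k_rep` and influence radius `rho0` are hypothetical tuning values, and the fuzzy escape controller that is the paper's contribution is not modelled here:

```python
# Net APF force: attractive pull toward the goal plus the classic
# repulsive gradient from each obstacle within the influence radius rho0.
from math import hypot

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        rho = hypot(dx, dy)
        if 0 < rho < rho0:
            # repulsive magnitude: k_rep * (1/rho - 1/rho0) / rho^2
            mag = k_rep * (1.0 / rho - 1.0 / rho0) / rho ** 2
            fx += mag * dx / rho
            fy += mag * dy / rho
    return fx, fy
```

A local minimum is exactly the configuration where this net force vanishes away from the goal (e.g. an obstacle directly between robot and goal); that is the situation the paper's fuzzy controller is designed to escape.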