
Showing papers in "CMC-Computers, Materials & Continua" in 2022


Journal ArticleDOI
TL;DR: In this paper, a two-stage reversible robust audio watermarking algorithm is proposed to protect medical audio data, which decomposes the medical audio into two independent embedding domains, embeds the robust watermark and the reversible watermark into the two domains respectively, and uses the Hurst exponent to find a suitable position for watermark embedding.
Abstract: The leakage of medical audio data in telemedicine seriously violates the privacy of patients. To avoid the leakage of patient information in telemedicine, a two-stage reversible robust audio watermarking algorithm is proposed to protect medical audio data. The scheme decomposes the medical audio into two independent embedding domains and embeds the robust watermark and the reversible watermark into the two domains, respectively. To ensure audio quality, the Hurst exponent is used to find a suitable position for watermark embedding. Because the two embedding domains are independent, embedding the second-stage reversible watermark does not affect the first-stage watermark, so the robustness of the first-stage watermark is well maintained. In the second stage, the correlation between sampling points in the medical audio is used to modify the hidden bits of the histogram, reducing the modification of the medical audio and the distortion caused by reversible embedding. Simulation experiments show that this scheme is strongly robust against signal processing operations such as MP3 compression at 48 dB, additive white Gaussian noise (AWGN) at 20 dB, low-pass filtering, resampling, and re-quantization, and that it has good imperceptibility.
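As a hedged illustration of the embedding-position step described above, the sketch below estimates per-frame Hurst exponents with rescaled-range (R/S) analysis and ranks frames for embedding. The frame length, chunk sizes, and ranking rule are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: rank audio frames by Hurst exponent to pick embedding positions.
import numpy as np

def hurst_rs(x: np.ndarray) -> float:
    """Estimate the Hurst exponent with rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rs_vals, ns = [], []
    for size in (n // 2, n // 4, n // 8):
        if size < 8:
            continue
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()          # range of cumulative deviation
            s = chunk.std()                     # chunk standard deviation
            if s > 0:
                rs_per_chunk.append(r / s)
        if rs_per_chunk:
            rs_vals.append(np.mean(rs_per_chunk))
            ns.append(size)
    # Slope of log(R/S) versus log(n) approximates the Hurst exponent.
    slope, _ = np.polyfit(np.log(ns), np.log(rs_vals), 1)
    return slope

def rank_frames_for_embedding(audio: np.ndarray, frame_len: int = 1024):
    """Return frame indices sorted by descending Hurst exponent."""
    n_frames = len(audio) // frame_len
    scores = [hurst_rs(audio[i * frame_len:(i + 1) * frame_len])
              for i in range(n_frames)]
    return np.argsort(scores)[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = rng.standard_normal(16000)  # stand-in for a medical audio clip
    print(rank_frames_for_embedding(audio)[:5])
```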

138 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used a standard fake hotel review dataset for experimenting and data preprocessing methods, and a term frequency-inverse document frequency (TF-IDF) approach for extracting features and their representation.
Abstract: Fake reviews, also known as deceptive opinions, are used to mislead people and have gained more importance recently. This is due to the rapid increase in online marketing transactions, such as selling and purchasing. E-commerce provides a facility for customers to post reviews and comments about a product or service after purchase. New customers usually go through the posted reviews or comments on the website before making a purchase decision. However, the current challenge is how new individuals can distinguish truthful reviews from fake ones, which later deceive customers, inflict losses, and tarnish the reputation of companies. The present paper attempts to develop an intelligent system that can detect fake reviews on e-commerce platforms using n-grams of the review text and sentiment scores given by the reviewer. The methodology adopted in this study used a standard fake hotel review dataset for experimentation, data preprocessing methods, and a term frequency-inverse document frequency (TF-IDF) approach for extracting features and their representation. For detection and classification, n-grams of review texts were input into the constructed models to be classified as fake or truthful. The experiments were carried out using four different supervised machine-learning techniques, trained and tested on a dataset collected from the Trip Advisor website. The classification results showed that naïve Bayes (NB), support vector machine (SVM), adaptive boosting (AB), and random forest (RF) achieved 88%, 93%, 94%, and 95%, respectively, in terms of testing accuracy and F1-score. The obtained results were compared with existing works that used the same dataset, and the proposed methods outperformed the comparable methods in terms of accuracy.
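A minimal scikit-learn sketch of this kind of pipeline, with toy reviews standing in for the hotel-review corpus; the n-gram range and default classifier settings are illustrative assumptions, not the authors' tuned configuration.

```python
# Sketch: TF-IDF over word n-grams fed to the four compared classifiers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

# Toy reviews standing in for the labeled hotel-review dataset.
reviews = [
    "great location friendly staff would stay again",
    "the room was clean and the breakfast was excellent",
    "absolutely perfect best hotel ever amazing unbelievable",
    "this hotel is the most wonderful place on earth period",
    "decent stay but the elevator was slow",
    "stunning flawless magical experience everyone must go now",
]
labels = ["truthful", "truthful", "fake", "fake", "truthful", "fake"]

# Word unigrams + bigrams weighted by TF-IDF.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews)

models = {
    "NB": MultinomialNB(),
    "SVM": LinearSVC(),
    "AB": AdaBoostClassifier(),
    "RF": RandomForestClassifier(),
}
for name, clf in models.items():
    clf.fit(X, labels)
    test = vectorizer.transform(["amazing perfect best stay ever"])
    print(name, clf.predict(test))
```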

104 citations



Journal ArticleDOI
TL;DR: A lightweight CNN classification model based on transfer learning is proposed; it achieves higher classification accuracy and reliability while having a lightweight architecture and few parameters, so it can be easily applied to computers without GPU acceleration.
Abstract: The key to preventing COVID-19 is to diagnose patients quickly and accurately. Studies have shown that using Convolutional Neural Networks (CNN) to analyze chest Computed Tomography (CT) images is helpful for timely COVID-19 diagnosis. However, owing to personal privacy issues, public chest CT data sets are relatively few, which has limited CNN's application to COVID-19 diagnosis. Also, many CNNs have complex structures and massive parameters; even when equipped with a dedicated Graphics Processing Unit (GPU) for acceleration, diagnosis still takes a long time, which is not conducive to widespread application. To solve the above problems, this paper proposes a lightweight CNN classification model based on transfer learning. The lightweight CNN MobileNetV2 is used as the backbone of the model to cope with the shortage of hardware resources and computing power. To alleviate the overfitting caused by the insufficient data set, transfer learning is used to train the model. The study first exploits the weight parameters trained on the ImageNet database to initialize the MobileNetV2 network, and then retrains the model on the CT image data set provided by Kaggle. Experimental results on a computer equipped only with a Central Processing Unit (CPU) show that it takes only 1.06 s on average to diagnose a chest CT image. Compared to other lightweight models, the proposed model has higher classification accuracy and reliability while having a lightweight architecture and few parameters, and can be easily applied to computers without GPU acceleration. Code: github.com/ZhouJie-520/paper-codes.
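A minimal Keras sketch of this transfer-learning setup, assuming a 224x224 input, a two-class softmax head, and standard hyperparameters; the authors' exact configuration is in their repository, and the dataset pipeline is omitted here.

```python
# Sketch: MobileNetV2 initialized with ImageNet weights, then retrained.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = True  # the paper retrains on the Kaggle CT images

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(2, activation="softmax"),  # COVID vs. non-COVID
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # CT image dataset
```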

72 citations


Journal ArticleDOI
TL;DR: This experimental analysis is proposed to distinguish Turkish speakers by extracting the MFCCs from the speech recordings, converting the obtained log filterbanks into decibel-based spectrograms without applying the Discrete Cosine Transform (DCT).
Abstract: Automatic speaker recognition (ASR) systems belong to the field of human-machine interaction, and scientists have been using feature extraction and feature matching methods to analyze and synthesize speech signals. One of the most commonly used methods for feature extraction is Mel Frequency Cepstral Coefficients (MFCCs). Recent research shows that MFCCs are successful in processing voice signals with high accuracy. MFCCs represent a sequence of voice signal-specific features. This experimental analysis is proposed to distinguish Turkish speakers by extracting the MFCCs from the speech recordings. Since human perception of sound is not linear, after the filterbank step of the MFCC method, we converted the obtained log filterbanks into decibel (dB) spectrograms without applying the Discrete Cosine Transform (DCT). A new dataset was created by converting each spectrogram into a 2-D array. Several learning algorithms were implemented with 10-fold cross-validation to detect the speaker. The highest accuracy of 90.2% was achieved using a Multi-layer Perceptron (MLP) with the tanh activation function. The most important output of this study is the inclusion of the human voice as a new feature set.
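A short sketch of the described feature step using librosa, assuming a 16 kHz sample rate and 40 mel bands (both illustrative): mel filterbank energies are converted to decibels, stopping before the DCT that would yield full MFCCs.

```python
# Sketch: dB-scaled mel filterbank spectrogram (MFCC pipeline minus the DCT).
import numpy as np
import librosa

def db_filterbank_spectrogram(path: str, n_mels: int = 40) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=512, hop_length=256, n_mels=n_mels)
    # power_to_db converts filterbank energies to decibels; no DCT follows,
    # so the result is a 2-D (n_mels x frames) spectrogram, not MFCCs.
    return librosa.power_to_db(mel, ref=np.max)

# features = db_filterbank_spectrogram("speaker01.wav")  # hypothetical file
```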

71 citations


Journal ArticleDOI
TL;DR: In this paper, two different neural network-based learning algorithms are employed to learn the impact of emotions on signal enhancement or depression; it was observed that the performance of the proposed multisensory integration system increases when emotion features are present during the enhancement or depression of multisensory signals.
Abstract: Progress in understanding multisensory integration in humans suggests that integration may result in the enhancement or depression of incoming signals. Psychological and behavioral experiments show that when stimuli arrive from different perceptual modalities at the same time or from the same place, the signal with greater strength under the influence of emotions shapes the response accordingly. Current research in multisensory integration has not studied the effect of emotions, despite their significance and natural influence in multisensory enhancement or depression. Therefore, there is a need to integrate the emotional state of the agent with incoming stimuli for signal enhancement or depression. In this study, two different neural network-based learning algorithms have been employed to learn the impact of emotions on signal enhancement or depression. It was observed that the performance of the proposed multisensory integration system increases when emotion features are present during the enhancement or depression of multisensory signals.

66 citations


Journal ArticleDOI
TL;DR: An intelligent satin bowerbird optimizer based compression technique (ISBO-CT) for remote sensing images is presented in this paper.
Abstract: Due to the latest advancements in the field of remote sensing, it has become easier to acquire high-quality images using various satellites along with their sensing components. But the massive quantity of data poses a challenge to storing and effectively transmitting remote sensing images. Therefore, image compression techniques can be utilized to process remote sensing images. In this respect, vector quantization (VQ) can be employed for image compression, and the widely applied VQ approach is Linde–Buzo–Gray (LBG), which creates a locally optimal codebook for image construction. The process of constructing the codebook can be treated as an optimization problem, and metaheuristic algorithms can be utilized to resolve it. With this motivation, this article presents an intelligent satin bowerbird optimizer based compression technique (ISBO-CT) for remote sensing images. The goal of the ISBO-CT technique is to proficiently compress remote sensing images through effective codebook design. To this end, the ISBO-CT technique employs the satin bowerbird optimizer (SBO) together with the LBG approach. The design of the SBO algorithm for remote sensing image compression depicts the novelty of the work. To showcase the enhanced efficiency of the ISBO-CT approach, an extensive range of simulations was applied, and the outcomes reported the optimal performance of the ISBO-CT technique relative to recent state-of-the-art image compression approaches.
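For context, a compact numpy sketch of the baseline LBG codebook construction that ISBO-CT builds on; the block size, codebook size, and splitting parameter are illustrative, and in the paper the SBO metaheuristic searches for the codebook rather than the plain Lloyd refinement shown here.

```python
# Sketch: LBG codebook training for vector-quantization image compression.
import numpy as np

def lbg_codebook(vectors: np.ndarray, size: int, eps: float = 0.01,
                 iters: int = 20) -> np.ndarray:
    codebook = vectors.mean(axis=0, keepdims=True)  # start from the centroid
    while len(codebook) < size:
        # Split each codeword into two perturbed copies.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):  # Lloyd refinement of the split codebook
            d = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# 4x4 image blocks flattened to 16-D training vectors (toy data).
blocks = np.random.default_rng(1).random((500, 16))
cb = lbg_codebook(blocks, size=16)
print(cb.shape)  # (16, 16): a 16-entry codebook of 16-D codewords
```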

60 citations




Journal ArticleDOI
TL;DR: In this paper, the authors presented an Intelligent Industrial Fault Diagnosis using Sailfish Optimized Inception with Residual Network (IIFD-SOIR) model, which uses a Continuous Wavelet Transform (CWT) to produce a preprocessed representation of the original vibration signal.
Abstract: In the present industrial revolution era, industrial mechanical systems have become increasingly intelligent and composite. So, it is necessary to develop data-driven monitoring approaches for achieving quick, trustworthy, and high-quality analysis in an automated way. Fault diagnosis is an essential process for verifying the safe and reliable operation of rotating machinery. Deep learning (DL) methods have been employed to diagnose faults in rotating machinery by extracting a set of feature vectors from the vibration signals. This paper presents an Intelligent Industrial Fault Diagnosis using Sailfish Optimized Inception with Residual Network (IIFD-SOIR) model. The proposed model operates in three major stages, namely signal representation, feature extraction, and classification. The model uses a Continuous Wavelet Transform (CWT) to produce a preprocessed representation of the original vibration signal. In addition, an Inception with ResNet v2 based feature extraction model is applied to generate high-level features, and the parameter tuning of the Inception with ResNet v2 model is carried out using a sailfish optimizer. Finally, a multilayer perceptron (MLP) is applied as the classification technique to diagnose the faults proficiently. Extensive experimentation was conducted on a gearbox dataset and a motor bearing dataset. The experimental outcome indicated that the IIFD-SOIR model reached higher average accuracies of 99.6% and 99.64% on the gearbox dataset and the bearing dataset, respectively. The simulation outcome confirmed that the proposed model attained maximum performance over the compared methods.
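A hedged sketch of the signal-representation stage, assuming a Morlet wavelet and 64 scales (both illustrative choices): a vibration signal is turned into a normalized CWT scalogram that a CNN such as Inception-ResNet v2 could consume.

```python
# Sketch: CWT scalogram of a vibration signal as a time-frequency image.
import numpy as np
import pywt

def cwt_scalogram(signal: np.ndarray, n_scales: int = 64) -> np.ndarray:
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(signal, scales, "morl")  # Morlet wavelet
    scalogram = np.abs(coeffs)
    # Normalize to [0, 1] so it can be saved/stacked as an image channel.
    return (scalogram - scalogram.min()) / (scalogram.max() - scalogram.min() + 1e-12)

t = np.linspace(0, 1, 2048)
vibration = (np.sin(2 * np.pi * 50 * t)
             + 0.3 * np.random.default_rng(0).standard_normal(t.size))
img = cwt_scalogram(vibration)
print(img.shape)  # (64, 2048) time-frequency image
```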

49 citations


Journal ArticleDOI
TL;DR: The semantic context model focuses on enabling the use of an adaptive environment and can be mapped to individual user interface (UI) displays through smart calculations for versatile UIs.
Abstract: Currently, many mobile devices provide various interaction styles and modes, which creates complexity in the usage of interfaces. Context offers the information base for the development of Adaptive user interface (AUI) frameworks to overcome this heterogeneity. For this purpose, ontological modeling has been carried out for a specific context and environment. This type of ontology describes the relationships among elements (e.g., classes, relations, or capacities) in an understandable, consistent representation. The context mechanisms can be examined and understood by any machine or computational framework through formal definitions expressed in the Web Ontology Language (OWL) and the Resource Description Framework (RDF). Protégé is used to create the taxonomy, in which the system is framed around four contexts: user, device, task, and environment. Competency questions and use-cases are utilized for knowledge acquisition, while the information is refined through instances of the relevant parts of the context tree. The consistency of the model has been verified through reasoning software, while SPARQL querying ensured data availability in the models for the defined use-cases. The semantic context model is focused on enabling the use of an adaptive environment. This exploration concludes with a versatile, scalable, and semantically verified context learning system. The model can be mapped to individual User interface (UI) displays through smart calculations for versatile UIs.
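An illustrative rdflib sketch of the querying step described above; the namespace, class, and property names are hypothetical stand-ins, not taken from the paper's actual context model, and two toy triples replace the Protégé-authored ontology file.

```python
# Sketch: SPARQL query over a toy context ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

CTX = Namespace("http://example.org/context#")  # hypothetical namespace

g = Graph()
# In the paper the ontology is authored in Protege and loaded from a file,
# e.g. g.parse("context_model.owl"); two toy triples stand in for it here.
g.add((CTX.phone1, RDF.type, CTX.Device))
g.add((CTX.phone1, CTX.hasScreenSize, Literal("5.5in")))

query = """
PREFIX ctx: <http://example.org/context#>
SELECT ?device ?size WHERE {
    ?device a ctx:Device ;
            ctx:hasScreenSize ?size .
}
"""
for device, size in g.query(query):
    print(device, size)
```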

Journal ArticleDOI
TL;DR: In this paper, the authors present a novel framework which integrates numerous analytical approaches, including statistical analysis, sentiment analysis, and text mining, to accomplish a competitive analysis of universities' social media sites.
Abstract: The education sector has witnessed several changes in the recent past. These changes have forced private universities into fierce competition with each other to get more students enrolled. This competition has led private universities to adopt marketing practices similar to those of commercial brands. To gain a competitive advantage, universities must observe and examine students' feedback on their own social media sites along with the social media sites of their competitors. This study presents a novel framework which integrates numerous analytical approaches, including statistical analysis, sentiment analysis, and text mining, to accomplish a competitive analysis of universities' social media sites. These techniques enable local universities to use social media to identify the topics most discussed by students and, based on the most unfavorable comments received, the major areas for improvement. A comprehensive case study was conducted utilizing the proposed framework for a competitive analysis of a few top-ranked international universities as well as local private universities in Lahore, Pakistan. Experimental results show that diversity of shared content, frequency of posts, and schedule of updates are the key areas for improvement for the local universities. Based on the competitive intelligence gained, several recommendations are included in this paper that would enable local universities in general, and Riphah international university (RIU) Lahore in particular, to promote their brand and increase their attractiveness to potential students using social media, and to launch successful marketing campaigns targeting a large audience at significantly reduced cost, resulting in an increased number of enrolments.

Journal ArticleDOI
TL;DR: In this paper, a deep long short-term memory (DLSTM) model was developed and optimized to forecast all Himalayan states' temperature and rainfall values over the periods 1796–2013 (temperature) and 1901–2015 (rainfall).
Abstract: Water received as rainfall is a crucial natural resource for agriculture, the hydrological cycle, and municipal purposes. The changing rainfall pattern is an essential aspect of assessing the impact of climate change on water resources planning and management. Climate change has affected the entire world, and specifically India's fragile Himalayan mountain region, which has high significance as a climatic indicator. The water coming from Himalayan rivers is essential for 1.4 billion people living downstream. Earlier studies modeled either temperature or rainfall for the Himalayan area; however, a long-term analysis of the combined influence of both had not been performed utilizing Deep Learning (DL). The present investigation analyzed the time series and correlation of temperature (1796–2013) and rainfall changes (1901–2015) over the Himalayan states in India. The Climate Deep Long Short-Term Memory (CDLSTM) model was developed and optimized to forecast all Himalayan states' temperature and rainfall values. Facebook's Prophet (FB-Prophet) model was implemented to forecast and assess the performance of the developed CDLSTM model. The performance of both models was assessed using various performance metrics, including MAPE and NSE, and showed significantly higher accuracies and low error rates. All twelve Himalayan states showed increasing trends, and the present investigation found a strong correlation (0.98).
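A minimal sketch, under assumed hyperparameters, of the kind of LSTM forecaster the CDLSTM model builds on: a sliding window of past monthly values predicts the next value. The window length, layer sizes, and toy series are illustrative, not the paper's tuned settings or data.

```python
# Sketch: sliding-window LSTM forecasting of a univariate climate series.
import numpy as np
import tensorflow as tf

def make_windows(series: np.ndarray, window: int = 12):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y

series = np.sin(np.linspace(0, 20, 600)).astype("float32")  # toy rainfall proxy
X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),  # next-month prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```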

Journal ArticleDOI
TL;DR: In this article, the authors presented a new skin lesion diagnosis model, i.e., Deep Learning with Evolutionary Algorithm based Image Segmentation (DL-EAIS), for IoT and cloud-based smart healthcare environments.
Abstract: Nowadays, quality improvement and increased accessibility to patient data, at a reasonable cost, are highly challenging tasks in the healthcare sector. Internet of Things (IoT) and Cloud Computing (CC) architectures are utilized in the development of smart healthcare systems. These entities can support real-time applications by exploiting the massive volumes of data produced by wearable sensor devices. The advent of evolutionary computation algorithms and Deep Learning (DL) models has gained significant attention in healthcare diagnosis, especially in decision-making processes. Skin cancer is among the deadliest diseases and affects people across the globe. Automatic skin lesion classification is a highly important application owing to the fine-grained variability in the appearance of skin lesions. The current research article presents a new skin lesion diagnosis model, i.e., Deep Learning with Evolutionary Algorithm based Image Segmentation (DL-EAIS), for IoT and cloud-based smart healthcare environments. Primarily, the dermoscopic images are captured using IoT devices and transmitted to cloud servers for further diagnosis. Besides, the Backtracking Search optimization Algorithm (BSA) with Entropy-Based Thresholding (EBT), i.e., the BSA-EBT technique, is applied for image segmentation. Then, a Shallow Convolutional Neural Network (SCNN) model is utilized as the feature extractor. In addition, a Deep-Kernel Extreme Learning Machine (D-KELM) model is employed as the classifier to determine the class labels of dermoscopic images. An extensive set of simulations was conducted to validate the performance of the presented method on a benchmark dataset. The experimental outcome infers that the proposed model demonstrated optimal performance over the compared techniques under diverse measures.
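A simplified sketch of entropy-based thresholding using Kapur's criterion, one common EBT formulation (an assumption here; the paper may use a different entropy measure, and it searches the threshold with the BSA metaheuristic rather than the exhaustive scan below).

```python
# Sketch: Kapur entropy thresholding for a candidate lesion mask.
import numpy as np

def kapur_threshold(gray: np.ndarray) -> int:
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        # Sum of the entropies of the background and foreground classes.
        h = (-(q0[q0 > 0] * np.log(q0[q0 > 0])).sum()
             - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum())
        if h > best_h:
            best_t, best_h = t, h
    return best_t

img = np.random.default_rng(2).integers(0, 256, (128, 128))  # toy image
t = kapur_threshold(img)
mask = img >= t  # candidate lesion mask
print(t, mask.mean())
```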

Journal ArticleDOI
TL;DR: In this article, a new deep learning based technique is proposed for citrus disease classification, which uses two different pre-trained deep learning models to detect and classify six different diseases of citrus plants.
Abstract: In recent years, the field of deep learning has played an important role in the automatic detection and classification of diseases in vegetables and fruits. This in turn has helped improve the quality and production of vegetables and fruits. Citrus fruits are well known for their taste and nutritional value. They are one of the natural and well-known sources of vitamin C and are planted worldwide. There are several diseases which severely affect the quality and yield of citrus fruits. In this paper, a new deep learning based technique is proposed for citrus disease classification. Two different pre-trained deep learning models have been used in this work. To increase the size of the citrus dataset used in this paper, image augmentation techniques are used. Moreover, to improve the visual quality of the images, hybrid contrast stretching has been adopted. In addition, transfer learning is used to retrain the pre-trained models, and the feature set is enriched by using feature fusion. The fused feature set is optimized using a meta-heuristic algorithm, the Whale Optimization Algorithm (WOA). The selected features are used for the classification of six different diseases of citrus plants. The proposed technique attains a classification accuracy of 95.7%, with superior results when compared with recent techniques.

Journal ArticleDOI
TL;DR: The ring NoC design concept and its simulation in Xilinx ISE 14.7, as well as the communication of functional nodes, are discussed, including the performance of hardware and timing parameters.
Abstract: The network-on-chip (NoC) technology is frequently referred to as a front-end solution to a back-end problem. The physical substructure that transfers data on the chip and ensures quality of service begins to collapse as semiconductor transistor dimensions shrink and growing numbers of intellectual property (IP) blocks working together are integrated into a chip. Today's system on chip (SoC) architecture is so complex that it can no longer rely on the crossbar and traditional hierarchical bus architecture. NoC connectivity reduces the amount of hardware required for routing and functions, allowing SoCs with NoC interconnect fabrics to operate at higher frequencies. The ring (octagon) topology is a direct NoC that is specifically used to solve the scalability problem by expanding each node in the shape of an octagon. This paper discusses the ring NoC design concept and its simulation in Xilinx ISE 14.7, as well as the communication of functional nodes. For field-programmable gate array (FPGA) synthesis, the performance of the NoC is evaluated in terms of hardware and timing parameters. The design allows 64 to 256 node communication in a single chip with 'N'-bit data transfer in the ring NoC. The performance of the NoC is evaluated with variable nodes from 2 to 256 on Digilent-manufactured Virtex-5 FPGA hardware.

Journal ArticleDOI
TL;DR: In this article, a dipper throated optimization (DTO) algorithm is proposed to solve the feature selection problem, and the strategy of using DTO for feature selection is evaluated on commonly used data sets from the UCI repository.
Abstract: The dipper throated optimization (DTO) algorithm is a novel and very efficient metaheuristic inspired by the dipper throated bird, which has a unique hunting technique of performing rapid bowing movements. To show the efficiency of the proposed algorithm, DTO is tested and compared against Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), and the Genetic Algorithm (GA) on seven unimodal benchmark functions. ANOVA and Wilcoxon rank-sum tests are then performed to confirm the effectiveness of DTO compared to the other optimization techniques. Additionally, to demonstrate the proposed algorithm's suitability for solving complex real-world issues, DTO is used to solve the feature selection problem. The strategy of using DTO for feature selection is evaluated on commonly used data sets from the University of California at Irvine (UCI) repository. The findings indicate that DTO outperforms all other algorithms in addressing feature selection issues, demonstrating the proposed algorithm's capability to solve complex real-world situations.
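For a concrete picture of the benchmarking setup, here is a generic population-based search evaluated on the sphere function, the classic unimodal F1 benchmark. The best-follower update rule below is a stand-in for illustration, not DTO's actual bowing-and-swimming update equations.

```python
# Sketch: evaluating a population-based optimizer on a unimodal benchmark.
import numpy as np

def sphere(x):
    """F1 benchmark: unimodal, global minimum 0 at the origin."""
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
dim, pop_size, iters = 10, 30, 200
pop = rng.uniform(-100, 100, (pop_size, dim))

for t in range(iters):
    best = pop[sphere(pop).argmin()]
    # Move each agent toward the current best with a shrinking random step.
    step = 0.9 * (1 - t / iters)
    pop = pop + step * rng.random((pop_size, 1)) * (best - pop)

print(f"best fitness after {iters} iters: {sphere(pop).min():.3e}")
```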


Journal ArticleDOI
TL;DR: Wang et al. proposed a novel deep rank-based average pooling network (DRAPNet) for COVID-19 recognition, which achieved a micro-averaged F1 score of 95.49% over 10 runs on the test set.
Abstract: (Aim) To build a more accurate and precise COVID-19 diagnosis system, this study proposed a novel deep rank-based average pooling network (DRAPNet) for COVID-19 recognition. (Methods) 521 subjects yielded 1164 slice images via the slice-level selection method. The 1164 slice images comprise four categories: COVID-19 positive; community-acquired pneumonia; secondary pulmonary tuberculosis; and healthy control. Our method first introduced an improved multiple-way data augmentation. Second, an n-conv rank-based average pooling module (NRAPM) was proposed, in which rank-based pooling, particularly rank-based average pooling (RAP), was employed to avoid overfitting. Third, the novel DRAPNet was built on the NRAPM, inspired by the VGG network. Grad-CAM was used to generate heatmaps and give our AI model an explainable analysis. (Results) Our DRAPNet achieved a micro-averaged F1 score of 95.49% over 10 runs on the test set. The sensitivities of the four classes were 95.44%, 96.07%, 94.41%, and 96.07%, respectively. The precisions of the four classes were 96.45%, 95.22%, 95.05%, and 95.28%, respectively. The F1 scores of the four classes were 95.94%, 95.64%, 94.73%, and 95.67%, respectively. The confusion matrix was also given. (Conclusions) DRAPNet is effective in diagnosing COVID-19 and other chest infectious diseases. RAP gives better results than four other methods: strided convolution, l2-norm pooling, average pooling, and max pooling.
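A small numpy sketch of the rank-based average pooling idea, assuming a 2x2 window and t=2 retained activations (both illustrative): within each window only the t highest-ranked activations are averaged, interpolating between max pooling (t=1) and plain average pooling (t equal to the window size).

```python
# Sketch: rank-based average pooling (RAP) over a 2-D feature map.
import numpy as np

def rank_based_average_pool(fmap: np.ndarray, k: int = 2, t: int = 2) -> np.ndarray:
    h, w = fmap.shape
    out = np.empty((h // k, w // k))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = fmap[i * k:(i + 1) * k, j * k:(j + 1) * k].ravel()
            top = np.sort(window)[::-1][:t]  # the t highest-ranked activations
            out[i, j] = top.mean()
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(rank_based_average_pool(fmap))  # 2x2 pooled map
```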



Journal ArticleDOI
TL;DR: In this paper , an enhanced brain storm optimization-based algorithm for training neural networks was proposed, which achieved better results than other state-of-the-art approaches on the majority of datasets in terms of classification accuracy and convergence speed.
Abstract: In the domain of artificial neural networks, the learning process represents one of the most challenging tasks. Since classification accuracy highly depends on the weights and biases, it is crucial to find optimal or suboptimal values for the problem at hand. However, due to the very large search space, it is difficult to find proper values for the connection weights and biases. Employing traditional optimization algorithms for this issue leads to slow convergence, and such methods are prone to getting stuck in local optima. Most commonly, back-propagation is used for multi-layer perceptron training, and it can suffer from the vanishing gradient issue. As an alternative, stochastic optimization algorithms such as nature-inspired metaheuristics are more reliable for complex optimization tasks such as finding proper values of weights and biases for neural network training. In this work, we propose an enhanced brain storm optimization-based algorithm for training neural networks. In the simulations, ten binary classification benchmark datasets with different difficulty levels are used to evaluate the efficiency of the proposed enhanced brain storm optimization algorithm. The results show that the proposed approach is very promising in this domain and achieved better results than other state-of-the-art approaches on the majority of datasets in terms of classification accuracy and convergence speed, owing to its capability of balancing intensification and diversification and avoiding local minima. The proposed approach obtained the best accuracy on eight out of the ten observed datasets, outperforming all other algorithms by 1–2% on average. When mean accuracy is observed, the proposed algorithm dominated on nine out of ten datasets.
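A toy sketch of metaheuristic neural-network training in the brain-storm spirit: a population of weight vectors for a tiny MLP is improved by sampling new candidates around the best "idea" (cluster center). The network size, the XOR task, and the single-center update rule are illustrative simplifications, not the authors' enhanced BSO algorithm.

```python
# Sketch: population-based training of a 2-4-1 MLP without back-propagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)  # XOR targets

def forward(w, X):
    """Decode a flat 17-parameter vector into a 2-4-1 network and run it."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

pop = rng.standard_normal((30, 17))  # population of candidate weight vectors
for step in range(300):
    fitness = np.array([loss(w) for w in pop])
    center = pop[fitness.argmin()]  # best "idea" acts as the cluster center
    for i in range(len(pop)):
        cand = center + 0.3 * rng.standard_normal(17)  # sample a new idea
        if loss(cand) < fitness[i]:  # greedy replacement keeps improvements
            pop[i] = cand

best = pop[np.argmin([loss(w) for w in pop])]
print(forward(best, X).round(2))  # should approach [0, 1, 1, 0]
```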


Journal ArticleDOI
TL;DR: The empirical study shows the supremacy of the proposed approach over existing Support Vector Machine, Feed-forward Neural Network, and Adaptive Neuro-Fuzzy Inference System techniques for fruit image classification, and it is concluded that the proposed technique outperforms existing schemes.
Abstract: Fruit classification is one of the rising fields in computer and machine vision. Many deep learning-based procedures worked out so far to classify images have ill-posed issues. The performance of a classification scheme depends on the range of captured images, the volume of features, the types of characteristics, the choice of features from the extracted features, and the type of classifier used. This paper proposes a novel deep learning approach consisting of Convolution Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) applications to classify fruit images. Classification accuracy depends on the extracted and selected optimal features. The deep learning applications CNN, RNN, and LSTM are involved collectively to classify the fruits: CNN is used to extract the image features, RNN is used to select the extracted optimal features, and LSTM is used to classify the fruits based on the image features extracted and selected by the CNN and RNN. An empirical study shows the supremacy of the proposed approach over the existing Support Vector Machine (SVM), Feed-forward Neural Network (FFNN), and Adaptive Neuro-Fuzzy Inference System (ANFIS) competitive techniques for fruit image classification. The accuracy rate of the proposed approach is considerably better than that of the SVM, FFNN, and ANFIS schemes. It is concluded that the proposed technique outperforms existing schemes.



Journal ArticleDOI
TL;DR: In this paper, the authors investigated the methodology of the Real Time Sequential Deep Extreme Learning Machine (RTS-DELM) applied to wireless Internet of Things (IoT) enabled sensor networks for the detection of any intrusion activity.
Abstract: In recent years, the infrastructure of Wireless Internet of Sensor Networks (WIoSNs) has become more complicated owing to developments in the internet and device connectivity. To effectively prepare, control, maintain, and optimize wireless sensor networks, a better assessment needs to be conducted. The field of artificial intelligence has made a great deal of progress with deep learning systems, and these techniques have been used for data analysis. This study investigates the methodology of the Real Time Sequential Deep Extreme Learning Machine (RTS-DELM) applied to wireless Internet of Things (IoT) enabled sensor networks for the detection of any intrusion activity. Data fusion is a well-known methodology that can be beneficial for improving data accuracy as well as for maximizing the lifespan of wireless sensor networks. We also suggest an approach that not only constructs a parallel data fusion network but also renders its computations more effective. Using the RTS-DELM methodology, a high degree of reliability, with a minimal error rate, is accomplished in detecting any intrusion activity in wireless sensor networks. Simulation results show that wireless sensor networks are optimized effectively to monitor and detect any malicious or intrusion activity through this proposed approach. Finally, threats and a more general outlook are explored.
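A minimal sketch of the extreme-learning-machine idea underlying RTS-DELM: hidden weights are random and fixed, and only the output weights are solved in closed form, which is what makes sequential, near-real-time updates cheap. This is a single-hidden-layer ELM on toy data, not the authors' deep sequential variant, and the feature and label construction below are illustrative.

```python
# Sketch: a basic extreme learning machine (ELM) for binary detection.
import numpy as np

class ELM:
    def __init__(self, n_in: int, n_hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # random, fixed weights
        self.b = rng.standard_normal(n_hidden)
        self.beta = None  # output weights, solved in closed form

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Least-squares solve for the output weights; no iterative training.
        self.beta, *_ = np.linalg.lstsq(self._hidden(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

rng = np.random.default_rng(1)
X = rng.random((200, 10))               # toy sensor-traffic features
y = (X.sum(axis=1) > 5).astype(float)   # toy intrusion label
model = ELM(10, 64).fit(X, y)
acc = ((model.predict(X) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```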


Journal ArticleDOI
TL;DR: In this paper, the authors used transfer learning for multi-class classification of brain magnetic resonance imaging (MRI), classifying the images into four stages: mild demented (MD), moderate demented (MOD), non-demented (ND), and very mild demented (VMD).
Abstract: Alzheimer's disease is a severe neurological disease that damages brain cells and leads to permanent memory loss, also called dementia. Many people die from this disease every year because it is not curable, but early detection can help restrain its progression. Alzheimer's is most common in elderly people in the age bracket of 65 and above. An automated system is required for early detection that can detect and classify the disease into multiple Alzheimer classes. Deep learning and machine learning techniques are used to solve many medical problems like this. The proposed Alzheimer disease detection system utilizes transfer learning for multi-class classification of brain magnetic resonance imaging (MRI), classifying the images into four stages: mild demented (MD), moderate demented (MOD), non-demented (ND), and very mild demented (VMD). Simulation results have shown that the proposed system model gives 91.70% accuracy. It was also observed that the proposed system gives more accurate results than previous approaches.