
Showing papers by "Najla Al-Nabhan published in 2021"


Journal ArticleDOI
TL;DR: This study finds that the PAuth scheme proposed by Chen et al. still suffers from some security defects, and an enhanced scheme based on PAuth is proposed, and ProVerif, an automatic cryptographic protocol verifier, is used to analyze the enhanced scheme.
Abstract: To ensure that messages can be securely transmitted between different entities in a smart grid, many researchers have focused on authentication and key exchange schemes. Chen et al. recently proposed an authenticated key exchange scheme called PAuth to overcome the defects of the schemes proposed by Mahmood et al. and Abbasinezhad-Mood et al. However, we found that the PAuth scheme proposed by Chen et al. still suffers from some security defects. In this study, we show the detailed attack steps. Then, an enhanced scheme based on PAuth is proposed. Formal and informal security analyses are provided to demonstrate the improved security of the proposed scheme. ProVerif, an automatic cryptographic protocol verifier, is also used to analyze the enhanced scheme. We present the detailed implementation code used in ProVerif. A comparison of the computational and communication costs with those of some previous schemes is provided in the last section of the paper.

35 citations
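As illustrative background for the key-confirmation step such authenticated key exchange schemes rely on, the sketch below derives a shared session key and exchanges HMAC confirmation tags. It is a generic toy, not the PAuth scheme or its enhancement; the secret, nonces, and transcript layout are invented for illustration.

```python
import hashlib
import hmac
import os

def derive_session_key(shared_secret: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Derive a session key from a shared secret and both parties' nonces."""
    return hashlib.sha256(shared_secret + nonce_a + nonce_b).digest()

def confirmation_tag(session_key: bytes, transcript: bytes) -> bytes:
    """Key-confirmation MAC over the protocol transcript."""
    return hmac.new(session_key, transcript, hashlib.sha256).digest()

# Both sides derive the same key from the same inputs...
secret = b"toy-shared-secret"
na, nb = os.urandom(16), os.urandom(16)
k_smart_meter = derive_session_key(secret, na, nb)
k_service_provider = derive_session_key(secret, na, nb)

# ...and verify each other's confirmation tag in constant time.
transcript = b"IDs||" + na + nb
tag = confirmation_tag(k_smart_meter, transcript)
assert hmac.compare_digest(tag, confirmation_tag(k_service_provider, transcript))
```

In a real scheme the shared secret would come from a Diffie-Hellman-style exchange and the transcript would bind all protocol messages; verifying that such bindings hold under attack is exactly what tools like ProVerif are used for.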


Journal ArticleDOI
TL;DR: A topic-aware extractive and abstractive summarization model named T-BERTSum, based on Bidirectional Encoder Representations from Transformers (BERTs), which achieves new state-of-the-art results while generating consistent topics compared with the most advanced method.
Abstract: In the era of social networks, the rapid growth of data mining in information retrieval and natural language processing makes automatic text summarization necessary. Currently, pretrained word embedding and sequence-to-sequence models can be effectively adapted in social network summarization to extract significant information with strong encoding capability. However, how to tackle long text dependence and utilize the latent topic mapping has become an increasingly crucial challenge for these models. In this article, we propose a topic-aware extractive and abstractive summarization model named T-BERTSum, based on Bidirectional Encoder Representations from Transformers (BERTs). Unlike previous models, the proposed approach can simultaneously infer topics and generate summarization from social texts. First, the encoded latent topic representation, through the neural topic model (NTM), is matched with the embedded representation of BERT, to guide the generation with the topic. Second, the long-term dependencies are learned through the transformer network to jointly explore topic inference and text summarization in an end-to-end manner. Third, the long short-term memory (LSTM) network layers are stacked on the extractive model to capture sequence timing information, and the effective information is further filtered on the abstractive model through a gated network. In addition, a two-stage extractive-abstractive model is constructed to share the information. Compared with previous work, the proposed model T-BERTSum focuses on pretrained external knowledge and topic mining to capture more accurate contextual representations. Experimental results on the CNN/Daily Mail and XSum datasets demonstrate that our proposed model achieves new state-of-the-art results while generating more consistent topics than the most advanced method.

31 citations
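The topic-guided generation described above rests on matching an NTM topic vector with BERT token embeddings. A minimal numpy sketch of one plausible fusion step, a per-dimension sigmoid gate, is shown below; the shapes, gate form, and names are illustrative assumptions, not T-BERTSum's actual architecture.

```python
import numpy as np

def topic_gated_fusion(token_emb: np.ndarray, topic_vec: np.ndarray,
                       w_gate: np.ndarray, b_gate: np.ndarray) -> np.ndarray:
    """Fuse a document-level topic vector into per-token embeddings with a
    sigmoid gate, so topic-relevant dimensions are emphasized per token."""
    seq_len = token_emb.shape[0]
    topic = np.tile(topic_vec, (seq_len, 1))          # broadcast topic to each token
    pre = np.concatenate([token_emb, topic], axis=1) @ w_gate + b_gate
    gate = 1.0 / (1.0 + np.exp(-pre))                 # sigmoid gate in [0, 1]
    return gate * token_emb + (1.0 - gate) * topic    # convex mix per dimension

rng = np.random.default_rng(0)
d, seq_len = 8, 5
tokens = rng.standard_normal((seq_len, d))
topic = rng.standard_normal(d)
w = rng.standard_normal((2 * d, d)) * 0.1
b = np.zeros(d)
fused = topic_gated_fusion(tokens, topic, w, b)
print(fused.shape)  # (5, 8)
```

In the actual model the gate would be learned jointly with the summarizer; here it only illustrates how a topic vector can modulate token features.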


Journal ArticleDOI
TL;DR: Two types of deep neural networks, a convolutional neural network (CNN) and an auto-encoder, are implemented, and by using a new combination of CNN layers one can obtain improved results in classifying Farsi digits.
Abstract: Handwriting recognition remains a challenge in the machine vision field, especially in optical character recognition (OCR). OCR has various applications, such as the detection of handwritten Farsi digits and diagnostics in biomedical science. To expand and improve work on this subject, this research focuses on the recognition of handwritten Farsi digits and illustrates applications in biomedical science. The detection of handwritten Farsi digits is widely used in most contexts involving the collection of generic digital numerical information, such as reading checks or postcode digits. Selecting an appropriate classifier has become a highlighted issue in the recognition of handwritten digits. The paper aims at identifying handwritten Farsi digits written in different handwriting styles. Digits are classified using several traditional methods, including K-nearest neighbor, artificial neural network (ANN), and support vector machine (SVM) classifiers. New features of digits, namely geometric and correlation-based features, have been demonstrated to achieve better recognition performance. A novel class of methods, known as deep neural networks (DNNs), is also used to identify handwritten digits through machine vision. Here, two types of DNNs, a convolutional neural network (CNN) and an auto-encoder, are implemented. Moreover, by using a new combination of CNN layers, one can obtain improved results in classifying Farsi digits. The performances of the DNN-based and traditional classifiers are compared to investigate the improvements in accuracy and calculation time. The SVM shows the best results among the traditional classifiers, whereas the CNN achieves the best results among the investigated techniques. The ANN offers better execution time than the SVM, but its accuracy is lower. The best accuracy among the traditional classifiers based on all investigated features is 99.3%, obtained by the SVM, and the CNN achieves the best overall accuracy of 99.45%.

30 citations
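Of the traditional classifiers compared above, K-nearest neighbor is the simplest to sketch. The toy below uses made-up 2-D features; real Farsi-digit features would be the geometric and correlation-based vectors the paper describes.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(train_x - query, axis=1)   # Euclidean distance to each sample
    nearest = np.argsort(dists)[:k]                   # indices of the k closest samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # most frequent label wins

# Two well-separated clusters standing in for two digit classes.
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
train_y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(train_x, train_y, np.array([0.05, 0.1])))  # 0
print(knn_predict(train_x, train_y, np.array([5.05, 5.0])))  # 1
```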


Journal ArticleDOI
TL;DR: Experimental results on two kinds of real-world datasets, bioinformatics and social network datasets, indicate that the proposed spatial convolutional neural network architecture for graph classification is superior to some classic kernels and similar deep learning-based algorithms on 6 out of 8 benchmark data sets.

29 citations


Journal ArticleDOI
TL;DR: The proposed novel IoMT platform enables remote health monitoring and decision-making about emotion, thereby greatly contributing to convenient and continuous emotion-aware healthcare services during the COVID-19 pandemic.
Abstract: The Internet of Medical Things (IoMT) is a new technology that combines medical devices and other wireless devices to access healthcare management systems. This article explores the possibilities of aiding responses to the current Corona Virus Disease 2019 (COVID-19) pandemic by implementing machine learning algorithms while offering emotional treatment suggestions to doctors and patients. A cognitive model built on the IoMT is well suited to this pandemic, as every person can be connected and monitored through a cognitive network. However, the COVID-19 pandemic still presents challenges regarding emotional care for infants and young children, the elderly, and mentally ill persons. Confronting these challenges, this article proposes an emotion-aware and intelligent IoMT system, which encompasses information sharing, information supervision, patient tracking, data gathering and analysis, healthcare, etc. Intelligent IoMT devices are connected to collect multimodal data of patients in a surveillance environment. The latest data and inputs from official websites and reports are tested for further investigation and emotion analysis. The proposed novel IoMT platform enables remote health monitoring and decision-making about emotion, thereby greatly contributing to convenient and continuous emotion-aware healthcare services during the COVID-19 pandemic. Experimental results on several emotion datasets indicate that the proposed framework achieves a significant advantage compared with some mainstream models. The proposed cognition-based dynamic technology is an effective solution for accommodating a large number of devices in this COVID-19 pandemic application. Open controversies and future development trends are also discussed.

24 citations


Journal ArticleDOI
TL;DR: A novel method for rumor detection on social media is proposed, which integrates entity recognition, sentence reconfiguration and ordinary differential equation network under a unified framework called ESODE.

19 citations


Journal ArticleDOI
TL;DR: This work develops a target detection algorithm based on deep learning technologies, particularly convolutional neural networks and neural network modeling, that achieves a 99.82% recognition rate with efficient runtime and demonstrates the capability for real-time, accurate target detection.
Abstract: Object detection is an essential technology in the computer vision domain and plays a vital role in intelligent transportation. Intelligent vehicles utilize object detection on images for environment perception. This work develops a target detection algorithm based on deep learning technologies, particularly convolutional neural networks and neural network modeling. Building on an analysis of the traditional Haar-like vehicle recognition algorithm, a vehicle recognition algorithm based on a convolutional neural network with fused edge features (FE-CNN) is proposed. The experimental results demonstrate that FE-CNN improves the recognition precision and the model's convergence speed through a simple and effective edge feature fusion method. In an experiment conducted on real traffic scenes for vehicle recognition, the developed algorithm achieves a 99.82% recognition rate with efficient runtime, demonstrating the capability for real-time performance and accurate target detection.

17 citations
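One simple form of the edge-feature fusion named above can be approximated in a few lines: compute a Sobel edge map and stack it with the raw image as an extra input channel. This is only a sketch of the general technique; FE-CNN's actual fusion method and the layer at which it fuses may differ.

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map via 3x3 Sobel filters (zero-padded border)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T                                  # vertical-gradient Sobel kernel
    padded = np.pad(img, 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def fuse_edge_channel(img: np.ndarray) -> np.ndarray:
    """Stack the raw image and its edge map as a 2-channel CNN input."""
    return np.stack([img, sobel_edges(img)], axis=0)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                               # vertical step edge
fused = fuse_edge_channel(img)
print(fused.shape)                             # (2, 8, 8)
```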


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an intelligent IoT approach for crowd management with congestion avoidance in the Mina area, which is located in the holy city of Mecca, and implemented a learning mechanism that classifies pilgrims based on the collected data and exploits the advantages of both IoT and cloud infrastructures to monitor crowds within a congested area, identify evacuation paths for pilgrims and guide the pilgrims to avoid congestion in real time.
Abstract: Crowd management is a considerable challenge in many countries, including Saudi Arabia, where millions of pilgrims from all over the world visit Mecca to perform the sacred act of Hajj. This holy ritual requires large crowds to perform the same activities during specific times, which makes crowd management both critical and difficult. Without proper crowd management and control, the occurrence of disasters such as stampedes, suffocation, and congestion becomes highly probable. At present, the internet of things (IoT) and its enabling technologies represent efficient solutions for managing and controlling crowds, minimizing casualties, and integrating different intelligent technologies. Moreover, IoT allows intensive interaction and heterogeneous communication among different devices over the internet, thereby generating big data. This paper proposes an intelligent IoT approach for crowd management with congestion avoidance in the Mina area, which is located in the holy city of Mecca. The approach implements a learning mechanism that classifies pilgrims based on the collected data and exploits the advantages of both IoT and cloud infrastructures to monitor crowds within a congested area, identify evacuation paths for pilgrims, and guide them to avoid congestion in real time. Moreover, the approach attempts to maximize crowd safety based on realistic scenarios by controlling and adapting pilgrim movements according to the characteristics of the possible hazards, pilgrim behavior, and environmental conditions. We evaluated our proposed approach by performing simulations based on real data sets and scenarios.

10 citations
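Congestion-aware path guidance of the kind described above can be sketched as a shortest-path search in which edge cost grows with crowd density. The graph, zone names, and cost formula below are invented for illustration, not the paper's actual model.

```python
import heapq

def least_congested_path(graph, densities, src, dst):
    """Dijkstra over a walkway graph where an edge's cost is its length
    scaled by the crowd density of its destination zone."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):       # stale heap entry, skip
            continue
        for v, length in graph.get(u, []):
            cost = d + length * (1.0 + densities.get(v, 0.0))
            if cost < dist.get(v, float("inf")):
                dist[v] = cost
                prev[v] = u
                heapq.heappush(heap, (cost, v))
    path, node = [dst], dst                     # walk predecessors back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Two routes from camp A to zone J: the shorter route through B is congested.
graph = {"A": [("B", 1.0), ("C", 2.0)], "B": [("J", 1.0)], "C": [("J", 2.0)]}
densities = {"B": 5.0}                          # zone B is crowded
print(least_congested_path(graph, densities, "A", "J"))  # ['A', 'C', 'J']
```

The density term makes the nominally longer route cheaper, which is the core of congestion avoidance; a deployed system would refresh densities from IoT sensors in real time.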


Journal ArticleDOI
TL;DR: The Rerank of Retrieval-based and Transformer-based Conversation model (RRT) is proposed, a novel conversation model that combines the retrieval model with the generation model for the purpose of obtaining context–appropriate response.

6 citations


Journal ArticleDOI
TL;DR: Dual-path CNN with Max Gated block (DCMG) as discussed by the authors is proposed to extract discriminative word embedding and make visual-textual association concern more on remarkable features of both modalities.

6 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an assessment of the added value (AV) in seasonal mean temperature regional simulations by the Rossby Centre Regional Climate Atmospheric Model version 4 (RCA4) over West Africa, with emphasis on the contributions of the driving Global Climate Models (GCMs) and the downscaling experiment.

Journal ArticleDOI
TL;DR: In this article, an adaptive compensation control structure synthesizing neural network learning and a novel smooth backlash inverse model is proposed to guarantee that all signals of the closed-loop system are bounded and that the tracking error converges asymptotically to a residual set around zero.
Abstract: In this work, we solve the adaptive actuator backlash compensation control problem for uncertain nonlinear systems. A new generalized backlash model is first proposed, which takes both actuator perturbation and unidentifiable coupling into account and hence captures practical backlash behavior more accurately. Nevertheless, such a model makes the adaptive control design difficult; the greatest challenge is that the unidentifiable coupling renders the traditional compensation structure no longer feasible. To address this issue, we propose an adaptive compensation control structure synthesizing neural network learning and a novel smooth backlash inverse model. With the established compensator and the iterative control design of the compensator input, an adaptive neural controller is subsequently proposed to guarantee that all signals of the closed-loop system are bounded and that the tracking error converges asymptotically to a residual set around zero. Simulation results are given to verify the effectiveness of the proposed control scheme.
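The idea of a smooth backlash inverse can be illustrated with a scalar sketch: blend the two exact inverse branches of the classic backlash model with a tanh of the signal rate instead of a hard switch. The parameters and blending form below are illustrative only, not the paper's generalized model.

```python
import math

def smooth_backlash_inverse(u_desired, u_rate, m=1.0, b_r=0.5, b_l=0.5, k=50.0):
    """Smooth inverse of the classic backlash model
    w = m*(v - b_r) (moving right) / w = m*(v + b_l) (moving left):
    a tanh of the desired-signal rate blends the two inverse branches
    instead of switching discontinuously."""
    s = math.tanh(k * u_rate)        # ~ +1 moving right, ~ -1 moving left
    right = u_desired / m + b_r      # exact inverse of the right-motion branch
    left = u_desired / m - b_l       # exact inverse of the left-motion branch
    return 0.5 * (1 + s) * right + 0.5 * (1 - s) * left

# Far from the switching point the smooth inverse matches the exact branches.
print(round(smooth_backlash_inverse(1.0, u_rate=1.0), 3))   # 1.5
print(round(smooth_backlash_inverse(1.0, u_rate=-1.0), 3))  # 0.5
```

Smoothness at the direction change is what makes the inverse usable inside an adaptive controller, since a discontinuous inverse would inject chattering into the control signal.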

Journal ArticleDOI
TL;DR: This paper uses constraint information in the consensus function and proposes a Semi-supervised Selective Clustering Ensemble based on Chameleon (SSCEC) and a Semi-supervised Selective Clustering Ensemble based on Ncut (SSCEN) to solve the above problem.

Journal ArticleDOI
TL;DR: With the fast development of various computing paradigms, the amount of data is rapidly increasing, which brings huge storage overhead; however, the existing data deduplication techniques do not ...
Abstract: With the fast development of various computing paradigms, the amount of data is rapidly increasing, which brings huge storage overhead. However, the existing data deduplication techniques do not ...

Journal ArticleDOI
TL;DR: A novel DE algorithm based on the local fitness landscape, called LFLDE, is proposed, in which local fitness landscape information is investigated to guide the selection of the mutation strategy for each given problem at each generation.
Abstract: The performance of the differential evolution (DE) algorithm highly depends on the selection of the mutation strategy. However, there are six commonly used mutation strategies in DE, so it is a challenging task to choose an appropriate mutation strategy for a specific optimization problem. To better tackle this problem, in this paper, a novel DE algorithm based on the local fitness landscape, called LFLDE, is proposed, in which local fitness landscape information is investigated to guide the selection of the mutation strategy for each given problem at each generation. In addition, a novel control parameter adaptive mechanism is used to improve the proposed algorithm. In the experiments, a total of 29 test functions from the CEC2017 single-objective test function suite are utilized to evaluate the performance of the proposed algorithm. The Wilcoxon rank-sum test and Friedman rank test results reveal that the performance of the proposed algorithm is better than that of five other representative DE algorithms.
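Two of the six standard DE mutation strategies are sketched below, plus a toy switch standing in for LFLDE's landscape-guided selection (the paper's rule uses local fitness landscape features, not the single "ruggedness" number assumed here).

```python
import random

def de_rand_1(pop, i, f=0.5):
    """DE/rand/1 mutation: mutant = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
    distinct random indices different from i (explorative strategy)."""
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [a + f * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

def de_best_1(pop, fitness, i, f=0.5):
    """DE/best/1 mutation: mutant = x_best + F * (x_r1 - x_r2) (exploitative)."""
    best = min(range(len(pop)), key=lambda j: fitness[j])
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [a + f * (b - c) for a, b, c in zip(pop[best], pop[r1], pop[r2])]

def choose_strategy(ruggedness):
    """Toy stand-in for LFLDE's landscape-guided choice: explore when the
    local landscape looks rugged, exploit when it looks smooth."""
    return "DE/rand/1" if ruggedness > 0.5 else "DE/best/1"

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(6)]
fitness = [x * x + y * y for x, y in pop]     # sphere function as the toy objective
print(len(de_rand_1(pop, 0)), choose_strategy(0.9))  # 2 DE/rand/1
```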

Journal ArticleDOI
TL;DR: In this paper, the authors explored two computing techniques to unveil the integrity of the data collectors from two different perspectives, namely conflict analysis and learning-based analysis, to identify up to 74% and 99% of the unreliable data collectors respectively.
Abstract: Mass gatherings (such as Hajj/Umrah), owing to their immensity, often present a variety of difficulties to the attendees. To gain a comprehensive understanding of these difficulties as well as their potential remedies, surveying a good number of attendees is unavoidable, and this can be facilitated by engaging data collectors. A crucial part here is identifying the integrity of the data collectors, which, to the best of our knowledge, is yet to be explored in the literature. To address this gap, in this study, we first perform a mass-scale data collection over Hajj/Umrah pilgrims through online (n = 236) and in-person (n = 752) surveys, where we cover a substantial part (n = 712) through paid data collectors (n = 53). We critically investigate the data collection activities of the data collectors through focused group discussions involving expert reviewers. We explore two computing techniques to unveil the integrity of the data collectors from two different perspectives. Our study identifies influential (religious and socio-geographical) aspects that impact the data collection process. Besides, we find that the collaborative participation of expert reviewers is obligatory to scrutinize the data collectors' integrity. Additionally, our explored computing techniques, namely conflict analysis and learning-based analysis, can identify up to 74% and 99% of the unreliable data collectors, respectively. We observe that, although such computing-based filtering can indicate integrity up to a certain level, a human-in-the-loop remains unavoidable for concluding on integrity. To the best of our knowledge, this is the first study of its kind (i.e., integrity analysis of data collectors) in the literature.
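The abstract does not specify how its conflict analysis works; one plausible reading, sketched with invented survey fields and a hypothetical consistency rule, flags collectors whose submissions violate cross-question checks.

```python
def conflict_rate(responses, rules):
    """Fraction of a collector's submitted surveys that violate any
    cross-question consistency rule."""
    bad = sum(1 for r in responses if any(rule(r) for rule in rules))
    return bad / len(responses)

def flag_unreliable(collectors, rules, threshold=0.3):
    """Flag collectors whose conflict rate exceeds the threshold."""
    return [c for c, resp in collectors.items() if conflict_rate(resp, rules) > threshold]

# Hypothetical rule: a pilgrim reporting zero Hajj visits cannot also
# report having performed the Jamarat ritual.
rules = [lambda r: r["hajj_visits"] == 0 and r["performed_jamarat"]]
collectors = {
    "c1": [{"hajj_visits": 2, "performed_jamarat": True},
           {"hajj_visits": 0, "performed_jamarat": False}],
    "c2": [{"hajj_visits": 0, "performed_jamarat": True},
           {"hajj_visits": 0, "performed_jamarat": True}],
}
print(flag_unreliable(collectors, rules))  # ['c2']
```

As the paper itself observes, such automatic filtering only narrows the candidate set; expert reviewers still make the final integrity call.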

Journal ArticleDOI
TL;DR: In this article, an Improved Deep Recursive Residual Network (IDRRN) super-resolution model is proposed to decrease the difficulty of network training. The deep recursive structure is configured to control the number of model parameters while increasing the network depth. At the same time, short-path recursive connections are used to alleviate gradient disappearance and enhance feature propagation.
Abstract: Single-frame image super-resolution (SISR) technology in remote sensing is improving fast from a performance point of view. Deep learning methods have been widely used in SISR to improve the details of rebuilt images and speed up network training. However, these supervised techniques usually tend to overfit quickly due to the models' complexity and the lack of training data. In this paper, an Improved Deep Recursive Residual Network (IDRRN) super-resolution model is proposed to decrease the difficulty of network training. The deep recursive structure is configured to control the number of model parameters while increasing the network depth. At the same time, short-path recursive connections are used to alleviate gradient disappearance and enhance feature propagation. Comprehensive experiments show that IDRRN achieves better improvements in both quantitative metrics and visual perception.
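The two ingredients named above, weight-shared recursion and short-path residual connections, can be sketched with a dense transform standing in for the convolution (IDRRN's real blocks are convolutional; all shapes and names here are illustrative).

```python
import numpy as np

def recursive_residual(x, w, depth=3):
    """Apply one shared transform `w` recursively `depth` times, adding the
    block input back at every recursion (short-path connection), so depth
    grows while the parameter count stays fixed and features/gradients
    keep a direct route through the block."""
    identity = x
    out = x
    for _ in range(depth):
        out = np.maximum(0.0, out @ w)   # shared weights: reused at every recursion
        out = out + identity             # short-path residual connection
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # 4 feature vectors standing in for pixels
w = rng.standard_normal((8, 8)) * 0.1
print(recursive_residual(x, w).shape)    # (4, 8)
```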

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a friend closeness based user matching algorithm (FCUM), which is a semi-supervised and end-to-end cross-network matching algorithm.
Abstract: The typical aim of user matching is to detect the same individuals across different social networks. Existing efforts in this field usually focus on users' attributes and network embedding, but these methods often ignore the closeness between users and their friends. To this end, we present a friend-closeness-based user matching algorithm (FCUM), a semi-supervised, end-to-end cross-network user matching algorithm. An attention mechanism is used to quantify the closeness between users and their friends. We consider both individual similarity and close-friend similarity by jointly optimizing them in a single objective function. Quantifying friend closeness improves the generalization ability of FCUM. Due to the expensive cost of labeling newly matched users for training FCUM, we also design a bi-directional matching strategy. Experiments on real datasets illustrate that FCUM outperforms other state-of-the-art methods that consider only individual similarity.
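A rough numpy sketch of attention-weighted friend closeness combined with individual similarity is shown below; the embeddings, attention form, and mixing weight are invented for illustration, whereas FCUM's actual objective is trained end-to-end.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def match_score(u, v, u_friends, v_friend_summary, alpha=0.5):
    """Combine individual similarity with an attention-weighted friend
    similarity: friends closer to u (larger dot product) get more weight."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    attn = softmax(np.array([f @ u for f in u_friends]))   # closeness attention
    friend_emb = sum(a * f for a, f in zip(attn, u_friends))
    return alpha * cos(u, v) + (1 - alpha) * cos(friend_emb, v_friend_summary)

rng = np.random.default_rng(0)
u = rng.standard_normal(8)
v = u + 0.1 * rng.standard_normal(8)                 # same person, other network
u_friends = [u + rng.standard_normal(8) for _ in range(3)]
v_friend_summary = sum(u_friends) / 3 + 0.1 * rng.standard_normal(8)
print(round(match_score(u, v, u_friends, v_friend_summary), 3))
```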

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a new similarity calculation method combining weights and random walks, which uses weights and similarities to update labels in the process of label propagation, improving the accuracy and stability of community detection.
Abstract: Community detection is a complex and meaningful process that plays an important role in studying the characteristics of complex networks. In recent years, the discovery and analysis of community structures in complex networks have attracted the attention of many scholars, and many community discovery algorithms have been proposed. Many existing algorithms are only suitable for small-scale data, so it is necessary to establish a stable and efficient label propagation algorithm to deal with massive data and complex social networks. In this paper, we propose a novel label propagation algorithm called WRWPLPA (Parallel Label Propagation Algorithm based on Weight and Random Walk). WRWPLPA introduces a new similarity calculation method combining weights and random walks, and uses weights and similarities to update labels in the process of label propagation, improving the accuracy and stability of community detection. First, weights are calculated by combining the neighborhood index and the position index, and are used to distinguish the importance of the nodes in the network. Then, a random walk strategy is used to describe the similarity between nodes, and node labels are updated by combining the weights and similarities. Finally, parallel propagation is proposed to utilize label probabilities efficiently. Experimental results on artificial and real network datasets show that our algorithm has improved accuracy and stability compared with other label propagation algorithms.
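The weighted label-update step at the core of such algorithms can be sketched in a few lines: each node repeatedly adopts the label with the largest total incident edge weight among its neighbors. This is plain weighted label propagation, without WRWPLPA's random-walk similarity or parallelization.

```python
from collections import defaultdict

def weighted_label_propagation(edges, max_iter=20):
    """Community detection by label propagation: each node repeatedly adopts
    the label carrying the largest total edge weight among its neighbors."""
    nbrs = defaultdict(list)
    for u, v, w in edges:
        nbrs[u].append((v, w))
        nbrs[v].append((u, w))
    labels = {n: n for n in nbrs}             # each node starts as its own community
    for _ in range(max_iter):
        changed = False
        for n in sorted(nbrs):                # fixed order keeps the sketch deterministic
            votes = defaultdict(float)
            for m, w in nbrs[n]:
                votes[labels[m]] += w         # weighted vote from each neighbor
            best = max(sorted(votes), key=lambda lab: votes[lab])
            if best != labels[n]:
                labels[n], changed = best, True
        if not changed:                       # converged: no label moved this sweep
            break
    return labels

# Two triangles joined by a weak bridge should split into two communities.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 1), (3, 4, 1), (4, 5, 1), (3, 5, 1), (2, 3, 0.1)]
labels = weighted_label_propagation(edges)
print(labels[0] == labels[1] == labels[2], labels[3] == labels[4] == labels[5])  # True True
```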

Journal ArticleDOI
TL;DR: This paper analyses the impact of variables on the converter performance in terms of conversion efficiency, the bit error rate (BER) performance and conversion-associated power penalty in the hybrid gigabit passive optical network and the fifth-generation (5G) system.
Abstract: A digital signal processing (DSP)-enabled dual-parallel Mach–Zehnder modulator (DP-MZM)-based spectral converter can overcome the insufficiency of traditional converters in realizing highly transparent, dynamically reconfigurable, and low-cost spectral conversion. However, in practice, the converter's driving RF signals exhibit variations caused by physical device deviations. To explore the converter's robustness, this paper, through numerous numerical simulations in intensity-modulation and direct-detection (IMDD)-based network nodes, analyses the impact of these variations on the converter performance in terms of conversion efficiency, bit error rate (BER) performance, and conversion-associated power penalty in a hybrid gigabit passive optical network (GPON) and fifth-generation (5G) system. Simulation results demonstrate the converter's robustness to variations in the driving radio frequency (RF) signals' amplitudes, phases, and frequencies within acceptable ranges; meanwhile, frequency variations restrict the user signal bandwidth and can limit the converter's practical application. In addition, based on the variation analysis, this paper suggests a driving approach to eliminate the frequency variations for practical application and demonstrates the simultaneous impact of amplitude and phase variations on the converter performance.

Journal ArticleDOI
TL;DR: The general architecture of event-state-combination diagnosability means that not only can each combined fault be detected, but the system can also determine whether it will work permanently in the failure states after the combined fault occurs.
Abstract: Diagnosability is an important characteristic indicator for determining whether a system is stable and reliable. In this paper, the general architecture of event-state-combination diagnosability is investigated. The contributions are threefold. First, the notion of event-state-combination diagnosability is formalized. Roughly speaking, an event-state-combination diagnosable system means that not only can each combined fault be detected, but the system can also determine whether it will work permanently in the failure states after the combined fault occurs. Then, an automaton with a new information structure, called the event-state-combination verifier, is constructed, which can be used for the verification of event-state-combination diagnosability. Finally, the necessary and sufficient condition for verifying whether the system is event-state-combination diagnosable is presented, namely, that the event-state-combination verifier does not contain any failure-confused cycle.
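The verification condition above reduces to a graph search: diagnosability fails exactly when the verifier contains a cycle lying inside its confused states. The sketch below shows that search on a hypothetical verifier; the transition structure and the notion of "confused" are simplified stand-ins for the paper's formal construction.

```python
def has_confused_cycle(transitions, confused):
    """Return True iff the verifier graph contains a cycle lying entirely
    inside the set of confused states (the diagnosability counterexample)."""
    # Keep only confused states and edges between them.
    graph = {s: [t for t in nxt if t in confused]
             for s, nxt in transitions.items() if s in confused}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in graph}

    def dfs(s):
        color[s] = GRAY
        for t in graph.get(s, []):
            if color.get(t) == GRAY:
                return True                  # back edge: cycle of confused states
            if color.get(t) == WHITE and dfs(t):
                return True
        color[s] = BLACK
        return False

    return any(color[s] == WHITE and dfs(s) for s in graph)

# Hypothetical verifier: states 2 and 3 are confused and form a cycle,
# so the system would not be event-state-combination diagnosable.
transitions = {0: [1, 2], 1: [0], 2: [3], 3: [2]}
print(has_confused_cycle(transitions, confused={2, 3}))  # True
print(has_confused_cycle(transitions, confused={2}))     # False
```

Note that the 0-1 cycle is ignored: only cycles made entirely of confused states witness a diagnosability violation.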