Author

Najla Al-Nabhan

Bio: Najla Al-Nabhan is an academic researcher from King Saud University. The author has contributed to research in the topics of Wireless sensor network and Computer science, has an h-index of 8, and has co-authored 51 publications receiving 182 citations. Previous affiliations of Najla Al-Nabhan include the University of Tabuk and the Nanjing Institute of Technology.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: The approach of combined segmentation and classification is effective for plant disease identification, and the empirical research validates the advantages of the proposed method.
Abstract: Agriculture is one of the most important sources of income for people in many countries. However, plant disease issues affect many farmers, as diseases in plants often occur naturally. If proper care is not taken, diseases can have hazardous effects on plants and reduce product quality, quantity or productivity. Therefore, the detection and prevention of plant diseases are serious concerns and should be addressed to increase productivity. An effective identification technology can be beneficial for monitoring plant diseases. Generally, the leaves of plants show the first signs of plant disease, and most diseases can be detected from the symptoms that appear on the leaves. Therefore, this paper introduces a novel method for the detection of plant leaf diseases. The method is divided into two parts: image segmentation and image classification. First, a hybrid segmentation algorithm based on the hue, saturation and intensity (HSI) and LAB colour spaces is proposed and used to segment disease symptoms in plant disease images. Then, the segmented images are input into a convolutional neural network for image classification. The validation accuracy obtained using this approach was approximately 15.51% higher than that of the conventional method. Additionally, the detection results showed that the average detection rate was 75.59% under complex background conditions, and most of the diseases were effectively detected. Thus, the approach of combined segmentation and classification is effective for plant disease identification, and our empirical research validates the advantages of the proposed method.
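The paper's exact thresholds and pipeline are not reproduced here, but the general idea of combining two colour-space cues before classification can be sketched as follows. This is a minimal illustration assuming OpenCV, with HSV standing in for HSI and with illustrative, untuned threshold values:

```python
# Minimal sketch (not the authors' exact algorithm) of hybrid colour-space
# thresholding to isolate diseased leaf regions before CNN classification.
import cv2
import numpy as np

def segment_lesions(bgr_image: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)

    # Assumed cue 1: in HSV, lesions often fall in a brown/yellow hue band.
    hsv_mask = cv2.inRange(hsv, (10, 40, 40), (35, 255, 255))

    # Assumed cue 2: in LAB, a higher 'a' channel (green-red axis) marks non-green tissue.
    a_channel = lab[:, :, 1]
    lab_mask = cv2.inRange(a_channel, 135, 255)

    # Combine both cues and clean up the mask with morphology.
    mask = cv2.bitwise_and(hsv_mask, lab_mask)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # The masked image would then be resized and fed to the CNN classifier.
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
```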

50 citations

Journal ArticleDOI
TL;DR: This study finds that the PAuth scheme proposed by Chen et al. still suffers from some security defects; an enhanced scheme based on PAuth is therefore proposed, and ProVerif, an automatic cryptographic protocol verifier, is used to analyze the enhanced scheme.
Abstract: To ensure that messages can be securely transmitted between different entities in a smart grid, many researchers have focused on authentication and key exchange schemes. Chen et al. recently proposed an authenticated key exchange scheme called PAuth to overcome the defects of the schemes proposed by Mahmood et al. and Abbasinezhad-Mood et al. However, we found that the PAuth scheme proposed by Chen et al. still suffers from some security defects. In this study, we show the detailed attack steps. Then, an enhanced scheme based on PAuth is proposed. Formal and informal security analyses are provided to demonstrate the improved security of the proposed scheme. ProVerif, an automatic cryptographic protocol verifier, is also used to analyze the enhanced scheme. We present the detailed implementation code used in ProVerif. A comparison of the computational and communication costs with those of some previous schemes is provided in the last section of the paper.
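The PAuth message flow and primitives are not given in this abstract, so the sketch below is only a generic illustration of the kind of authenticated key exchange with key confirmation that such schemes build on (ECDH, HKDF key derivation, and an HMAC confirmation tag), not the enhanced scheme itself:

```python
# Generic ECDH key exchange with key confirmation; NOT the PAuth protocol.
# Assumes the 'cryptography' package is installed.
import hashlib
import hmac
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def make_keypair():
    priv = ec.generate_private_key(ec.SECP256R1())
    return priv, priv.public_key()

def derive_session_key(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session-key").derive(shared)

# Each party generates an ephemeral key pair and exchanges public keys.
a_priv, a_pub = make_keypair()
b_priv, b_pub = make_keypair()

# Both sides derive the same session key from the ECDH shared secret.
k_a = derive_session_key(a_priv.exchange(ec.ECDH(), b_pub))
k_b = derive_session_key(b_priv.exchange(ec.ECDH(), a_pub))

# Key confirmation: each side proves knowledge of the session key via an HMAC tag.
tag_a = hmac.new(k_a, b"confirm-A", hashlib.sha256).digest()
assert hmac.compare_digest(tag_a, hmac.new(k_b, b"confirm-A", hashlib.sha256).digest())
print("session keys match:", k_a == k_b)
```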

35 citations

Journal ArticleDOI
TL;DR: A topic-aware extractive and abstractive summarization model named T-BERTSum, based on Bidirectional Encoder Representations from Transformers (BERT), which achieves new state-of-the-art results while generating consistent topics compared with the most advanced methods.
Abstract: In the era of social networks, the rapid growth of data mining in information retrieval and natural language processing makes automatic text summarization necessary. Currently, pretrained word embeddings and sequence-to-sequence models can be effectively adapted to social network summarization to extract significant information with strong encoding capability. However, how to tackle long-text dependence and exploit the latent topic mapping has become an increasingly crucial challenge for these models. In this article, we propose a topic-aware extractive and abstractive summarization model named T-BERTSum, based on Bidirectional Encoder Representations from Transformers (BERT). Unlike previous models, the proposed approach can simultaneously infer topics and generate summaries from social texts. First, the latent topic representation encoded by the neural topic model (NTM) is matched with the embedded representation of BERT to guide generation with the topic. Second, long-term dependencies are learned through the transformer network to jointly explore topic inference and text summarization in an end-to-end manner. Third, long short-term memory (LSTM) layers are stacked on the extractive model to capture sequence timing information, and the effective information is further filtered in the abstractive model through a gated network. In addition, a two-stage extractive-abstractive model is constructed to share information. Compared with previous work, the proposed T-BERTSum model focuses on pretrained external knowledge and topic mining to capture more accurate contextual representations. Experimental results on the CNN/Daily Mail and XSum datasets demonstrate that our proposed model achieves new state-of-the-art results while generating consistent topics compared with the most advanced methods.
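As a rough sketch of the fusion step described above, the document-topic distribution from the neural topic model can be projected into the BERT embedding space and added to the sentence representations before stacked LSTM layers score sentences for extraction. The dimensions, module names and simple additive fusion below are assumptions for illustration, not the authors' exact T-BERTSum architecture:

```python
# Illustrative topic-guided extractive scorer (assumed design, not T-BERTSum itself).
import torch
import torch.nn as nn

class TopicGuidedExtractor(nn.Module):
    def __init__(self, hidden: int = 768, n_topics: int = 50):
        super().__init__()
        self.topic_proj = nn.Linear(n_topics, hidden)   # map topic distribution into BERT space
        self.lstm = nn.LSTM(hidden, hidden // 2, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.score = nn.Linear(hidden, 1)               # per-sentence extraction score

    def forward(self, sent_embs: torch.Tensor, topic_dist: torch.Tensor) -> torch.Tensor:
        # sent_embs: (batch, n_sents, hidden) sentence vectors from BERT
        # topic_dist: (batch, n_topics) document-topic distribution from the NTM
        topic_vec = self.topic_proj(topic_dist).unsqueeze(1)   # (batch, 1, hidden)
        fused = sent_embs + topic_vec                          # broadcast topic guidance
        seq, _ = self.lstm(fused)                              # model sentence order
        return torch.sigmoid(self.score(seq)).squeeze(-1)      # extraction probabilities

# Example: 2 documents, 8 sentences each, 50 topics.
probs = TopicGuidedExtractor()(torch.randn(2, 8, 768),
                               torch.softmax(torch.randn(2, 50), dim=-1))
print(probs.shape)   # torch.Size([2, 8])
```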

31 citations

Journal ArticleDOI
TL;DR: Two types of deep neural networks, a convolutional neural network (CNN) and an auto-encoder, are implemented, and a new combination of CNN layers yields improved results in classifying Farsi digits.
Abstract: Handwriting recognition remains a challenge in the machine vision field, especially in optical character recognition (OCR). OCR has various applications, such as the detection of handwritten Farsi digits and diagnosis in biomedical science. This research focuses on the recognition of handwritten Farsi digits and illustrates applications in biomedical science. The detection of handwritten Farsi digits is widely used in contexts involving the collection of generic numerical information, such as reading checks or postcode digits. Selecting an appropriate classifier has become a key issue in the recognition of handwritten digits. The paper aims to identify handwritten Farsi digits written in different handwriting styles. Digits are classified using several traditional methods, including K-nearest neighbor, artificial neural network (ANN), and support vector machine (SVM) classifiers. New digit features, namely geometric and correlation-based features, are shown to achieve better recognition performance. A novel class of methods, known as deep neural networks (DNNs), is also used to identify handwritten digits through machine vision. Here, two types of DNNs, a convolutional neural network (CNN) and an auto-encoder, are implemented. Moreover, a new combination of CNN layers yields improved results in classifying Farsi digits. The performances of the DNN-based and traditional classifiers are compared to investigate the improvements in accuracy and computation time. The SVM shows the best results among the traditional classifiers, whereas the CNN achieves the best results among all investigated techniques. The ANN offers better execution time than the SVM, but its accuracy is lower. The best accuracy among the traditional classifiers based on all investigated features is 99.3%, obtained by the SVM, and the CNN achieves the best overall accuracy of 99.45%.
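A small convolutional network of the kind used for such digit classification tasks can be sketched as follows; the layer sizes, the 32x32 grayscale input and the PyTorch implementation are assumptions for illustration, not the paper's exact CNN:

```python
# Illustrative small CNN for 10-class digit images (assumed architecture).
import torch
import torch.nn as nn

class DigitCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32 x 16 x 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 64 x 8 x 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = DigitCNN()(torch.randn(4, 1, 32, 32))   # batch of 4 grayscale digit images
print(logits.shape)                              # torch.Size([4, 10])
```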

30 citations

Journal ArticleDOI
TL;DR: Experimental results on two kinds of real-world datasets, bioinformatics and social network datasets, indicate that the proposed spatial convolutional neural network architecture for graph classification is superior to some classic kernels and similar deep learning-based algorithms on 6 out of 8 benchmark datasets.
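The cited architecture is not detailed in this summary, but a spatial graph convolution in its simplest form aggregates each node's features over its neighbourhood and then pools node embeddings into a graph-level representation for classification. The following is a generic message-passing sketch, not the paper's model:

```python
# Generic spatial graph convolution plus mean readout for graph classification.
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n_nodes, in_dim) node features; adj: (n_nodes, n_nodes) adjacency with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ x) / deg                 # mean over each node's spatial neighbourhood
        return torch.relu(self.lin(agg))

# Toy example: one random graph with 6 nodes and 8-dimensional features.
n, feats, classes = 6, 8, 2
x = torch.randn(n, feats)
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()   # symmetric, with self-loops
h = SpatialGraphConv(feats, 16)(x, adj)
logits = nn.Linear(16, classes)(h.mean(dim=0))       # graph-level readout, then classify
print(logits.shape)                                  # torch.Size([2])
```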

29 citations

Cited by
Journal ArticleDOI
TL;DR: In this study, the EfficientNet deep learning architecture was proposed for the plant leaf disease classification task, and the performance of this model was compared with that of other state-of-the-art deep learning models.

290 citations

Journal ArticleDOI
TL;DR: Experimental results showed that the proposed deep learning-based system can efficiently classify various types of plant leaves with good accuracy and demonstrated its real-time performance.

74 citations

Journal ArticleDOI
01 Jun 2021-Sensors
TL;DR: In this article, the authors discuss the potential impact of the pandemic on the adoption of the Internet of Things (IoT) in various broad sectors, namely healthcare, smart homes, smart buildings, smart cities, transportation and industrial IoT.
Abstract: COVID-19 has disrupted normal life and has enforced a substantial change in the policies, priorities and activities of individuals, organisations and governments. These changes are proving to be a catalyst for technology and innovation. In this paper, we discuss the pandemic’s potential impact on the adoption of the Internet of Things (IoT) in various broad sectors, namely healthcare, smart homes, smart buildings, smart cities, transportation and industrial IoT. Our perspective and forecast of this impact on IoT adoption is based on a thorough research literature review, a careful examination of reports from leading consulting firms and interactions with several industry experts. For each of these sectors, we also provide the details of notable IoT initiatives taken in the wake of COVID-19. We also highlight the challenges that need to be addressed and important research directions that will facilitate accelerated IoT adoption.

69 citations

Journal ArticleDOI
TL;DR: In this paper, the authors highlight the evolution of vehicle networks from vehicular ad-hoc networks (VANETs) to the internet of vehicles (IoV), listing their benefits and limitations.
Abstract: Vehicular network environments can be structured in various ways. Here, we highlight the evolution of vehicle networks from vehicular ad-hoc networks (VANETs) to the internet of vehicles (IoV), listing their benefits and limitations. We also highlight the reasons for adopting wireless technologies, in particular IEEE 802.11p and 5G vehicle-to-everything, as well as the use of paradigms able to store and analyze vast amounts of data to produce intelligence, and their applications in vehicular environments. We also relate the use of each of these paradigms to the desire to meet the requirements of existing intelligent transportation systems. Each paradigm is presented from a historical and logical standpoint. In particular, vehicular fog computing improves on the deficiencies of vehicular cloud computing, so the two are not mutually exclusive from the application point of view. We also emphasize some security issues linked to the characteristics of these paradigms and vehicular networks, showing that they complement each other and share problems and limitations. As these networks still have many opportunities to grow in both concept and application, we finally discuss concepts and technologies that we believe are beneficial. Throughout this work, we emphasize the crucial role of these concepts for the well-being of humanity.

64 citations

Journal ArticleDOI
TL;DR: This paper proposes to properly tune the IEEE 802.15.4 MAC parameters (macMinBE and macMaxCSMABackoffs) and the sampling frequency of deployed sensor nodes, and shows that the proposed scheme provides an efficient increase of the sampling frequency of sensor nodes while satisfying application requirements.
Abstract: The monitoring and control of crops in precision agriculture sometimes requires a high collection frequency of information (e.g., temperature, humidity, and salinity) due to the variability in crops. Data acquisition and transmission are generally achieved using wireless sensor networks. However, sensor nodes have limited resources. Thus, it is necessary to adapt the increase in sampling frequency to different crops under application constraints (reliability, packet delay, and lifetime duration). In this paper, we propose to properly tune the IEEE 802.15.4 MAC parameters (macMinBE and macMaxCSMABackoffs) and the sampling frequency of deployed sensor nodes. An analytical model of network performance is derived and used to tune these tradeoff parameters. Simulation analysis shows that our scheme provides an efficient increase of the sampling frequency of sensor nodes while satisfying application requirements.
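To see why these two MAC parameters govern the achievable sampling frequency, the channel-access delay can be bounded from the CSMA/CA backoff rules. The sketch below is a rough back-of-the-envelope calculation, not the paper's analytical model; it assumes the standard 2.4 GHz unit backoff period of 320 microseconds, the default macMaxBE of 5, and the pessimistic case in which every clear-channel assessment finds the channel busy:

```python
# Rough illustration (not the paper's model) of how macMinBE and
# macMaxCSMABackoffs shape the IEEE 802.15.4 CSMA/CA backoff delay.
UNIT_BACKOFF_US = 320   # aUnitBackoffPeriod: 20 symbols * 16 us/symbol at 2.4 GHz
MAC_MAX_BE = 5          # assumed default upper bound on the backoff exponent

def expected_total_backoff_us(mac_min_be: int, mac_max_csma_backoffs: int) -> float:
    """Expected cumulative backoff (us) if every clear-channel assessment fails.

    The backoff at each attempt is uniform over {0, ..., 2**BE - 1} unit
    periods, so its mean is (2**BE - 1) / 2; BE grows by one per failed
    attempt up to MAC_MAX_BE.
    """
    total, be = 0.0, mac_min_be
    for _ in range(mac_max_csma_backoffs + 1):   # initial attempt plus retries
        total += (2 ** be - 1) / 2 * UNIT_BACKOFF_US
        be = min(be + 1, MAC_MAX_BE)
    return total

# Smaller macMinBE and fewer backoffs shorten the channel-access delay, which is
# what leaves room to raise the sensor sampling frequency under delay constraints.
for min_be in (2, 3):
    for max_backoffs in (2, 4):
        print(min_be, max_backoffs, expected_total_backoff_us(min_be, max_backoffs))
```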

47 citations