Proceedings ArticleDOI

Detection of Corn Gray Leaf Spot Severity Levels using Deep Learning Approach

TL;DR: In this paper, a simple Convolutional Neural Network (CNN) based deep learning (DL) model has been proposed for multi-classification of corn gray leaf spot (CGLS) disease based on five different severity levels of CGLS disease on the corn plant.
Abstract: A simple Convolutional Neural Network (CNN) based deep learning (DL) model has been proposed for multi-classification of corn gray leaf spot (CGLS) disease based on five different severity levels of CGLS disease on the corn plant. Certain corn leaf diseases like CGLS, common rust, and leaf blight are quite common and dangerous to the corn harvest. Hence, the current work presents a solution for CGLS disease detection on corn plants using a multi-classification DL model, which achieves its best detection accuracy of 95.33% on high-risk severity level images. In addition, the five severity levels are compared based on the resulting performance measures (PM).
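A five-way severity classifier of this kind ultimately maps a softmax output vector to one of five severity labels. The sketch below illustrates only that final step; the label names are assumptions for demonstration, since the abstract explicitly identifies only the "high-risk" level.

```python
# Minimal sketch: mapping a CNN's five-way softmax output to a CGLS
# severity label. Label names are illustrative assumptions; the paper
# only names the "high-risk" level explicitly.

SEVERITY_LABELS = ["healthy", "low", "medium", "high-risk", "severe"]

def predict_severity(class_probabilities):
    """Return the severity label with the highest predicted probability."""
    if len(class_probabilities) != len(SEVERITY_LABELS):
        raise ValueError("expected one probability per severity level")
    best = max(range(len(class_probabilities)),
               key=lambda i: class_probabilities[i])
    return SEVERITY_LABELS[best]

# Example: a model output strongly favouring the fourth class
print(predict_severity([0.01, 0.02, 0.05, 0.90, 0.02]))  # high-risk
```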
Citations
Proceedings ArticleDOI
28 Apr 2022
TL;DR: This work cogitates three paddy leaf diseases for the creation of an AI-based robust detection and classification model using a novel approach to the convolutional neural network with the combination of augmentation and a CNN model tuner.
Abstract: A variety of fungal and bacterial leaf ailments wreak havoc on the paddy plant in the agricultural field. Early diagnosis of leaf infection can improve the yield of the crop. The modeling of an automatic disease classifier aids farmers in handling the spread of leaf disease in the agricultural field. This work considers three paddy leaf diseases (bacterial blight, leaf smut, and leaf blast) for the creation of an AI-based robust detection and classification model. The dataset is collected from a variety of standard online repositories. A GAN-based augmentation technique was used to increase the size of the dataset. A novel approach to the convolutional neural network is proposed, combining augmentation with a CNN model tuner. The CNN achieves a classification accuracy of 98.23%.

24 citations

Journal ArticleDOI
TL;DR: In this article, a hybrid prediction model was developed to predict various levels of severity of blast disease from diseased plant images, achieving 97% accuracy with the help of CNN and SVM.
Abstract: Hypothesis: Due to the increase in losses in paddy yield as a result of various paddy diseases, researchers are working tirelessly on a technological solution to assist farmers in making decisions about disease severity and potential danger to the crop. Early prediction of infection severity would help allocate resources for treating the infection and prevent contamination of the whole field. Methodology: In this study, a hybrid prediction model was developed to predict various levels of severity of blast disease based on diseased plant images. The proposed model is a four-fold severity prediction model. The level of severity is defined based on the percentage of leaf area affected by the disease. The image dataset is derived from both primary and secondary resources. Tools: The features are first extracted with the help of the Convolutional Neural Network (CNN) approach. Then the identification and classification of the severity level of blast disease are conducted using a Support Vector Machine (SVM). Conclusion: Mendeley, Kaggle, GitHub, and UCI are the secondary resources used for dataset generation. The dataset contains 1908 images. The proposed hybrid model achieves 97% accuracy.
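The four-fold severity definition above — binning by the percentage of leaf area affected — can be sketched in a few lines. The threshold values below are illustrative assumptions for demonstration only; the abstract does not state the exact cut-offs used.

```python
# Illustrative four-fold severity binning by percentage of leaf area
# affected by blast disease. Thresholds are assumed for demonstration.

def severity_level(affected_area_pct):
    """Map the percentage of diseased leaf area to a severity level 1-4."""
    if not 0 <= affected_area_pct <= 100:
        raise ValueError("percentage must be within [0, 100]")
    if affected_area_pct < 10:
        return 1  # mild
    if affected_area_pct < 25:
        return 2  # moderate
    if affected_area_pct < 50:
        return 3  # severe
    return 4      # very severe

print(severity_level(5), severity_level(30), severity_level(75))
```

In the hybrid model described above, a function like this would supply the ground-truth class labels that the CNN-extracted features and SVM classifier are trained against.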

18 citations

Proceedings ArticleDOI
28 Apr 2022
TL;DR: This paper will first explain Python as a language, then introduce Data Science, Machine Learning, and IoT, describing prominent packages in the Data Science and Machine Learning community, such as NumPy, SciPy, TensorFlow, Keras, and Matplotlib.
Abstract: Python is an object-oriented, scripting, and interpreted programming language that may be used for mentoring and real-world applications. This paper focuses primarily on Python software packages used in data science, pattern recognition, and IoT. The paper will first explain Python as a language, then introduce Data Science, Machine Learning, and IoT, describing prominent packages in the Data Science and Machine Learning community, such as NumPy, SciPy, TensorFlow, Keras, and Matplotlib. The paper will also demonstrate the significance of Python in the development of the industry. Throughout, we utilize many code samples. We review a number of research papers to analyze the usage of Python in different fields and how packages are easily imported into programs.
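As a small taste of the package usage the paper surveys, the snippet below uses NumPy (one of the packages named above) to vectorize a simple computation. It is a generic illustration, not code from the paper.

```python
# Generic NumPy illustration: vectorized elementwise arithmetic
# replaces an explicit Python loop over the array.
import numpy as np

temperatures_c = np.array([18.5, 21.0, 25.3, 30.1])
temperatures_f = temperatures_c * 9 / 5 + 32  # applied to every element

print(temperatures_f)
```

This vectorized style is what makes NumPy the foundation of most of the other packages listed (SciPy, TensorFlow, and Keras all operate on array-shaped data).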

2 citations

Proceedings ArticleDOI
28 Apr 2022
TL;DR: This paper uses intelligent agent-based machine learning techniques to select the best suppliers and seeds them into a case base for further processing, forming the basis of an intelligent system that would operate in a complex environment.
Abstract: Agent technologies are essential to the learning and case-based reasoning components used in critical elements of a cognitive knowledge base. Several technologies can be integrated into a basic structure that forms the kernel of an intelligent strategy. An alternative design is proposed in this paper to form the basis of an intelligent system that would operate in a complex environment. This structure is very flexible because it allows adaptation via learning and alteration of the user knowledge. The paper uses intelligent agent-based machine learning techniques to select the best suppliers and seeds them into the case base for further processing. We also discuss a framework in which machine learning techniques and CBR (Case-Based Reasoning) integrate with intelligent agents, improving the resulting quality when solving highly complex problems.

2 citations

Proceedings ArticleDOI
28 Apr 2022
TL;DR: Empirical findings on using soft computing technology applications in supply chain management to improve supply chain efficiency are compiled.
Abstract: Participating members in a manufacturing supply chain usually use an information system like enterprise resource planning (ERP) for planning and scheduling activities independently. Recent research indicates that there is a need to handle such distributed activities in an integrated manner, especially under uncertain and fast-changing environments. A multi-agent system, a branch of distributed artificial intelligence, is a contemporary modeling technique for a distributed system in the manufacturing domain. This distributed modeling technique is suitable for integrating supply chain networks, which have distributed entities within the system. Each entity makes decisions locally, i.e., with a local information system. By adopting a multi-agent modeling technique, supply chain networks can be built efficiently on top of an information system. This paper compiles empirical findings on using soft computing technology applications in supply chain management to improve supply chain efficiency.
References
Journal ArticleDOI
TL;DR: Two improved models based on deep learning that are used to train and test nine kinds of maize leaf images are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers.
Abstract: In the field of agricultural information, the automatic identification and diagnosis of maize leaf diseases is highly desired. To improve the identification accuracy of maize leaf diseases and reduce the number of network parameters, improved GoogLeNet and Cifar10 models based on deep learning are proposed for leaf disease recognition in this paper. Two improved models, used to train and test nine kinds of maize leaf images, are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers. In addition, the number of parameters of the improved models is significantly smaller than that of the VGG and AlexNet structures. In recognizing eight kinds of maize leaf diseases, the GoogLeNet model achieves a top-1 average identification accuracy of 98.9%, and the Cifar10 model achieves an average accuracy of 98.8%. The improved methods increase the accuracy of maize leaf disease identification and reduce the number of convergence iterations, which effectively improves model training and recognition efficiency.

361 citations

Journal ArticleDOI
TL;DR: This study indicates that the performance of the optimized DenseNet model is close to that of the established CNN architectures with far fewer parameters and computation time.

156 citations

Proceedings ArticleDOI
03 Jul 2017
TL;DR: The architecture of IoT, protocols used in IoT, its security issues and smart city based IoT applications are described.
Abstract: Evolving methodologies are seamlessly connecting the real world and the virtual world using some physical objects and intelligent sensors. Internet of Things (IoT) is one such methodology. Things are becoming smarter than before. IoT empowers users to communicate and control physical devices to salvage vital information. Large amounts of data will be generated and exchanged which in turn will help in decision making. This survey paper describes the architecture of IoT, protocols used in IoT, its security issues and smart city based IoT applications.

98 citations

Journal ArticleDOI
TL;DR: The presented corn plant disease recognition model is capable of running on standalone smart devices like the Raspberry Pi, smartphones, and drones, and achieves an accuracy of 88.46%, demonstrating the feasibility of the method.

83 citations

Proceedings ArticleDOI
01 Jan 2017
TL;DR: In this article, the authors proposed a novel algorithm based on deep learning neural networks using appropriate activation function and regularization layer, which shows significantly improved accuracy compared to the existing Arabic numeral recognition methods.
Abstract: Handwritten character recognition is an active area of research with applications in numerous fields. Past and recent works in this field have concentrated on various languages. Arabic is one language where the scope of research is still widespread, as it is one of the most popular languages in the world and is syntactically different from other major languages. Das et al. [1] pioneered the research on handwritten digit recognition in Arabic. In this paper, we propose a novel algorithm based on deep learning neural networks using an appropriate activation function and regularization layer, which shows significantly improved accuracy compared to the existing Arabic numeral recognition methods. The proposed model gives 97.4 percent accuracy, which is the highest recorded accuracy on the dataset used in the experiment. We also propose a modification of the method described in [1], where our method achieves accuracy identical to that of [1], at 93.8 percent.

75 citations