scispace - formally typeset

Book ChapterDOI

A Review on Agricultural Advancement Based on Computer Vision and Machine Learning

01 Jan 2020-pp 567-581

TL;DR: This review paper gives an overview of the machine learning and computer vision techniques inherently associated with this domain and offers an analysis that can help researchers identify relevant problems in the context of India.

Abstract: The importance of agriculture in modern society cannot be overstated. To meet the huge demand for food and to mitigate the problems of conventional cropping, smart and sustainable agriculture has emerged alongside conventional agriculture. From a computational perspective, computer vision and machine learning techniques have been applied in many aspects of human and social life, and agriculture is no exception. This review paper gives an overview of the machine learning and computer vision techniques inherently associated with this domain. A summary of works covering different seeds, crops, and fruits, together with the countries studied, is also included. The paper also offers an analysis that can help researchers identify relevant problems in the context of India.



Citations
Journal ArticleDOI
TL;DR: The purpose of this review is to summarize the progress made on automatic traps with a particular focus on camera-equipped traps to support the use of software and image recognition algorithms to identify and/or count insect species from pictures.
Abstract: Integrated pest management relies on insect pest monitoring to support the decision to counteract a given level of infestation and to select an adequate control method. The classic approach to monitoring insect pests is to place a series of traps in infested areas and have human operators check them periodically. This strategy incurs high labor costs and provides only the poor spatial and temporal resolution achievable by individual operators. The adoption of image sensors to monitor insect pests can bring several practical advantages. The purpose of this review is to summarize the progress made on automatic traps, with a particular focus on camera-equipped traps. Software and image recognition algorithms can support automatic trap usage by identifying and/or counting insect species from pictures. Considering the high image resolution achievable and the opportunity to transfer data through wireless technology, it is possible to monitor insect captures remotely, limiting field visits. The availability of real-time, on-line pest monitoring from a distant location opens the opportunity to measure insect population dynamics constantly and simultaneously in a large number of traps with limited human labor. The current limitations are the high cost, low power autonomy, and low picture quality of some prototypes, together with the need for further improvements in fully automated pest detection. Limits and benefits resulting from several case studies are examined with a perspective on the future development of technology-driven insect pest monitoring and management.

11 citations

Journal ArticleDOI
TL;DR: A new method to rapidly assess the severity of FHB and evaluate the efficacy of fungicide application programs and the results show that the segmentation algorithm could segment wheat ears from a complex field background and the counting algorithm could effectively solve the problem of wheat ear adhesion and occlusion.
Abstract: Fusarium head blight (FHB) is one of the most important diseases in wheat worldwide. Evaluation and identification of effective fungicides are essential for control of FHB. However, traditional methods based on manual disease severity assessment to evaluate the efficacy of fungicides are time-consuming and laborsome. In this study, we developed a new method to rapidly assess the severity of FHB and evaluate the efficacy of fungicide application programs. Enhanced red-green-green (RGG) images were processed from acquired raw red-green-blue (RGB) images of wheat ear samples; the images were transformed in color spaces through K-means clustering for rough segmentation of wheat ears; a random forest classifier was used with features of color, texture, geometry and vegetation index for fine segmentation of disease spots in wheat ears; a newly proposed width mutation counting algorithm was used to count wheat ears; and the disease severity of the wheat ear groups was graded and the efficacy of six fungicides was evaluated. The results show that the segmentation algorithm could segment wheat ears from a complex field background, and the counting algorithm could effectively solve the problem of wheat ear adhesion and occlusion. The average counting accuracies for all wheat ears and diseased wheat ears were 93.00% and 92.64%, respectively, with coefficients of determination (R²) of 0.90 and 0.98 and root mean square errors (RMSE) of 10.56 and 7.52, respectively. The new method could accurately assess the disease levels of wheat ear groups infected by FHB and determine the efficacy of the six fungicides evaluated. The results demonstrate the potential of using digital imaging technology to evaluate and identify effective fungicides for control of FHB in wheat and of other crop diseases.
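The rough-segmentation stage described above (K-means clustering of pixels in a colour space to separate wheat ears from the background) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic image and the "most red cluster" heuristic stand in for the paper's enhanced-RGG transform and its actual cluster selection.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for an RGB field image (H x W x 3), values in [0, 1].
rng = np.random.default_rng(0)
image = rng.random((40, 40, 3))

# Rough segmentation: cluster pixels in colour space with K-means.
pixels = image.reshape(-1, 3)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(40, 40)

# Pick the cluster whose centroid is "most red" as a crude ear mask.
# (The paper's fine segmentation would then train a random forest on
# colour/texture/geometry features of the masked pixels.)
ear_cluster = np.argmax(kmeans.cluster_centers_[:, 0])
ear_mask = labels == ear_cluster
print(ear_mask.sum(), "pixels assigned to the candidate ear cluster")
```

On real imagery the cluster count and the channel used for selection would be tuned to the staining and illumination conditions rather than fixed as here.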

7 citations


Additional excerpts

  • ...Through literature research it was found that image processing alone and its integration with machine learning are commonly used to achieve the two steps [12]–[14]....


Book ChapterDOI
10 Sep 2019
TL;DR: The present article deals with the above-mentioned method of deep learning, and especially with its application when recognizing certain objects and elements during the visual product inspection.
Abstract: Nowadays, when high industrial productivity is connected with high quality and low product faults, it is common practice to use 100% product quality control. Since the quantities of products are high in mass production and inspection time must be as low as possible, the solution may be to use visual inspection of finished parts via camera systems and subsequent image processing using artificial intelligence. Recently, deep learning has shown itself to be the most appropriate and effective method for this purpose. The present article deals with the above-mentioned method of deep learning, and especially with its application when recognizing certain objects and elements during the visual product inspection.

6 citations

Journal ArticleDOI
TL;DR: Pearprocess is a new cost-effective web-application for semi-automated quantification of two-dimensional phenotypic traits from digital imagery using an easy imaging protocol and is a promising new tool for use in evaluating future germplasms for crop breeding programs.
Abstract: The content of stone cells is an important factor for pear breeding, as a high content indicates severely reduced fruit quality in terms of taste. Although the frozen-HCl method is currently the common method used to evaluate stone cell content in pears, it is limited by incomplete separation of stone cells from pulp and is time-consuming and complicated. Computer-aided research is a promising strategy in modern scientific research for phenotypic data collection and is increasingly used in studying crops. Thus far, we lack a quantitative tool that can effectively determine stone cell content in pear fruit. We developed a program, Pearprocess, based on an imaging protocol using computer vision and image processing algorithms applied to digital images. Using photos of hand-cut sections of pear fruit stained with phloroglucinol-HCl (Wiesner's reagent), Pearprocess can extract and analyze image-based data to quantify the stone cell-related traits measured in this study: number, size, area and density of stone cells. We quantified these traits for 395 pear accessions with Pearprocess and revealed large variation across pear varieties and species. The number of stone cells varied greatly, from 138 to 2,866; stone cell density ranged from 0.0019 to 0.0632 cm²/cm²; total stone cell area ranged from 0.06 to 2.02 cm²; and individual stone cell size was between 2×10⁻⁴ and 1×10⁻³ cm². Moreover, trait data were correlated with fruit taste data. We found that stone cell density is likely the most important factor affecting the taste of pear fruit. In summary, Pearprocess is a new cost-effective web application for semi-automated quantification of two-dimensional phenotypic traits from digital imagery using an easy imaging protocol. This simple, feasible and accurate method for evaluating stone cell traits of fruit is a promising new tool for evaluating future germplasm for crop breeding programs.
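The core image-processing step here (counting stained stone cells and measuring their areas) can be sketched with thresholding plus connected-component labelling. This is a hedged illustration of the general technique, not Pearprocess's actual pipeline; the synthetic grayscale image and the fixed threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a stained pear section: dark stained stone cells
# on a lighter pulp background (grayscale, values in [0, 1]).
img = np.ones((50, 50))
img[5:9, 5:9] = 0.2       # one small "stone cell" blob (4 x 4)
img[30:36, 20:27] = 0.15  # a second, larger blob (6 x 7)

# Threshold: stained (dark) regions become foreground.
mask = img < 0.5

# Connected-component labelling gives the number of stone cells;
# summing the mask per component gives each cell's area in pixels.
labeled, n_cells = ndimage.label(mask)
areas = ndimage.sum(mask, labeled, range(1, n_cells + 1))

print(n_cells)         # → 2
print(areas.tolist())  # → [16.0, 42.0]
```

A real pipeline would convert pixel areas to cm² using the imaging scale and would likely use adaptive rather than fixed thresholding.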

2 citations

Journal ArticleDOI
28 Jan 2021
Abstract: Machine learning (ML) and its many applications offer comparative advantages for improving the interpretation of knowledge about different agricultural processes. However, challenges impede proper usage, as can be seen in the phenotypic characterization of germplasm banks. The objective of this research was to test and optimize different ML-based analysis methods for the prioritization and selection of morphological descriptors of Rubus spp. A total of 55 descriptors were evaluated in 26 genotypes, and the weight and discriminating capacity of each were determined. ML methods such as random forest (RF), support vector machines (in linear and radial forms), and neural networks were optimized and compared. Subsequently, the results were validated with two discriminating methods and their variants: hierarchical agglomerative clustering and K-means. The results indicated that RF presented the highest accuracy (0.768) of the methods evaluated, selecting 11 descriptors based on purity (Gini index), importance, number of connected trees, and significance (p value < 0.05). Additionally, the K-means method with descriptors optimized by RF had greater discriminating power over Rubus spp. accessions according to the evaluated statistics. This study presents an application of ML for the optimization of specific morphological variables for plant germplasm bank characterization.
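The descriptor-selection workflow described above (rank descriptors by random forest Gini importance, then re-cluster accessions on the selected subset with K-means) can be sketched as follows. The synthetic data, group labels, and hyperparameters are illustrative assumptions, not the study's actual dataset or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Synthetic stand-in: 26 genotypes x 55 morphological descriptors,
# with hypothetical group labels used to score descriptor importance.
rng = np.random.default_rng(1)
X = rng.random((26, 55))
y = rng.integers(0, 3, size=26)  # three assumed morphological groups

# Rank descriptors by Gini importance from a random forest.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:11]  # keep 11, as in the study

# Re-cluster accessions on the selected descriptors with K-means.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, top])
print("selected descriptors:", sorted(top.tolist()))
print("cluster assignments:", clusters)
```

In practice the number of retained descriptors would be chosen from the importance curve and significance tests, as the study does, rather than fixed up front.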

1 citation


References
Journal ArticleDOI
TL;DR: A survey of 40 research efforts that employ deep learning techniques, applied to various agricultural and food production challenges indicates that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.
Abstract: Deep learning constitutes a recent, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been successfully applied in various domains, it has recently entered the domain of agriculture as well. In this paper, we survey 40 research efforts that apply deep learning techniques to various agricultural and food production challenges. We examine the particular agricultural problems under study, the specific models and frameworks employed, the sources, nature and pre-processing of the data used, and the overall performance achieved according to the metrics used in each work. Moreover, we compare deep learning with other existing popular techniques with respect to differences in classification or regression performance. Our findings indicate that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.

1,128 citations

Journal ArticleDOI
Abstract: In this paper, convolutional neural network models were developed to perform plant disease detection and diagnosis using simple leaves images of healthy and diseased plants, through deep learning methodologies. Training of the models was performed with the use of an open database of 87,848 images, containing 25 different plants in a set of 58 distinct classes of [plant, disease] combinations, including healthy plants. Several model architectures were trained, with the best performance reaching a 99.53% success rate in identifying the corresponding [plant, disease] combination (or healthy plant). The significantly high success rate makes the model a very useful advisory or early warning tool, and an approach that could be further expanded to support an integrated plant disease identification system to operate in real cultivation conditions.

690 citations

Journal ArticleDOI
TL;DR: A procedure for the early detection and differentiation of sugar beet diseases based on Support Vector Machines and spectral vegetation indices to discriminate diseased from non-diseased sugar beet leaves and to identify diseases even before specific symptoms became visible.
Abstract: Automatic methods for the early detection of plant diseases are vital for precision crop protection. The main contribution of this paper is a procedure for the early detection and differentiation of sugar beet diseases based on Support Vector Machines and spectral vegetation indices. The aims were (I) to discriminate diseased from non-diseased sugar beet leaves, (II) to differentiate between the diseases Cercospora leaf spot, leaf rust and powdery mildew, and (III) to identify diseases even before specific symptoms became visible. Hyperspectral data were recorded from healthy leaves and from leaves inoculated with the pathogens Cercospora beticola, Uromyces betae or Erysiphe betae (causing Cercospora leaf spot, sugar beet rust and powdery mildew, respectively) for a period of 21 days after inoculation. Nine spectral vegetation indices related to physiological parameters were used as features for automatic classification. Early differentiation between healthy and inoculated plants, as well as among the specific diseases, can be achieved by a Support Vector Machine with a radial basis function kernel. Discrimination between healthy and diseased sugar beet leaves reached classification accuracies of up to 97%. Multi-class classification between healthy leaves and leaves with symptoms of the three diseases still achieved an accuracy higher than 86%. Furthermore, the potential for presymptomatic detection of the plant diseases was demonstrated. Depending on the type and stage of disease, classification accuracy was between 65% and 90%.
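The classification setup described above (an SVM with a radial basis function kernel on nine vegetation-index features, four classes: healthy plus three diseases) can be sketched as follows. The synthetic class-shifted features are an assumption standing in for the paper's hyperspectral indices.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for nine spectral vegetation indices per leaf,
# with four classes: healthy plus three diseases.
rng = np.random.default_rng(2)
n, k = 200, 9
y = rng.integers(0, 4, size=n)
X = rng.normal(size=(n, k)) + y[:, None]  # class-shifted index values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# SVM with a radial basis function kernel, as in the paper.
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"hold-out accuracy: {acc:.2f}")
```

With real hyperspectral indices the features would be standardized and the kernel width and regularization tuned by cross-validation; the fixed defaults here are for illustration only.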

519 citations

Journal ArticleDOI
04 Sep 2017-Sensors
TL;DR: A deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions, and combines each of these meta-architectures with “deep feature extractors” such as VGG net and Residual Network.
Abstract: Plant diseases and pests are a major challenge in the agriculture sector. Accurate and faster detection of diseases and pests in plants could help develop early treatment techniques while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in place by camera devices with various resolutions. Our goal is to find the most suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called "deep learning meta-architectures". We combine each of these meta-architectures with "deep feature extractors" such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area.

448 citations

Journal ArticleDOI
TL;DR: This work provides a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits.
Abstract: Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires the use of machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. Four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits.

444 citations