scispace - formally typeset
Author

Shang Gao

Bio: Shang Gao is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in the topics of Canopy & Panicle. The author has an h-index of 6 and has co-authored 7 publications receiving 188 citations.

Papers
Journal ArticleDOI
TL;DR: The results showed that the method combining deep learning and regional growth algorithms was promising for individual maize segmentation; the r, p, and F values for the three testing sites with different planting densities were all over 0.9.
Abstract: The rapid development of light detection and ranging (Lidar) provides a promising way to obtain three-dimensional (3D) phenotypic traits, thanks to its ability to record accurate 3D laser points. Recently, Lidar has been widely used alongside other sensors to obtain phenotype data in greenhouses and fields. Individual maize segmentation is the prerequisite for high-throughput phenotype data extraction at the individual-crop or leaf level, and it remains a huge challenge. Deep learning, a state-of-the-art machine learning method, has shown high performance in object detection, classification, and segmentation. In this study, we proposed a method combining deep learning and regional growth algorithms to segment individual maize from terrestrial Lidar data. The scanned 3D points of the training site were sliced row by row with a fixed 3D window. Points within the window were compressed into deep images, which were used to train the Faster R-CNN (region-based convolutional neural network) model to detect maize stems. Three sites with different planting densities were used to test the method. Each site was likewise sliced into many 3D windows, and the testing deep images were generated. The stems detected in the testing images were mapped back to 3D points, which served as seed points for the regional growth algorithm to grow individual maize plants from bottom to top. The results showed that the method combining deep learning and regional growth algorithms was promising for individual maize segmentation, and the r, p, and F values for the three testing sites with different planting densities were all over 0.9. Moreover, the height of the correctly segmented maize was highly correlated with the manually measured height (R² > 0.9). This work shows the possibility of using deep learning to solve the individual maize segmentation problem with Lidar data.
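The second stage of the pipeline, seeded regional growth from the detected stem points, can be sketched as follows. This is an illustrative Python sketch rather than the authors' implementation; the function name `region_grow`, the fixed distance threshold, and the synthetic data are all assumptions.

```python
import numpy as np

def region_grow(points, seeds, radius=0.25):
    """Grow one cluster per seed by repeatedly absorbing unassigned
    points within `radius` of the cluster's current frontier.
    points: (N, 3) array of x, y, z; seeds: list of 3-vectors."""
    labels = np.full(len(points), -1)  # -1 = unassigned
    for k, seed in enumerate(seeds):
        free = np.flatnonzero(labels == -1)
        if free.size == 0:
            break
        # start from the unassigned point nearest the seed
        start = free[np.argmin(np.linalg.norm(points[free] - seed, axis=1))]
        labels[start] = k
        frontier = [start]
        while frontier:
            nxt = []
            for i in frontier:
                free = np.flatnonzero(labels == -1)
                if free.size == 0:
                    return labels
                d = np.linalg.norm(points[free] - points[i], axis=1)
                hit = free[d <= radius]
                labels[hit] = k  # absorb nearby points into cluster k
                nxt.extend(hit.tolist())
            frontier = nxt
    return labels

# Two well-separated synthetic "plants", one seed at the base of each:
pts = np.array([[0, 0, 0], [0, 0, 0.2], [0, 0, 0.4],
                [1, 0, 0], [1, 0, 0.2]])
labels = region_grow(pts, [[0, 0, 0], [1, 0, 0]])
```

With these inputs, each point ends up labeled with the index of the plant it belongs to, growing upward from the seed exactly as the bottom-to-top description above suggests.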

102 citations

Journal ArticleDOI
TL;DR: A median normalized-vector growth (MNVG) algorithm is proposed that segments stem and leaf in four steps, i.e., preprocessing, stem growth, leaf growth, and postprocessing, and may contribute to the study of LiDAR-based plant phenomics and precision agriculture.
Abstract: Accurate and high-throughput extraction of crop phenotypic traits, a crucial step in molecular breeding, is of great importance for increasing yield. However, automatic stem–leaf segmentation, a prerequisite of many precise phenotypic trait extractions, is still a big challenge. Current works focus on 2-D image-based segmentation, which is sensitive to illumination and occlusion. Light detection and ranging (LiDAR) can obtain accurate 3-D information with its active laser scanning and strong penetration ability, which extends phenotyping from 2-D to 3-D. However, few studies have addressed the problem of LiDAR-based stem–leaf segmentation. In this paper, we proposed a median normalized-vector growth (MNVG) algorithm, which segments stem and leaf in four steps, i.e., preprocessing, stem growth, leaf growth, and postprocessing. The MNVG method was tested on 30 maize samples with different heights, compactness, leaf numbers, and densities from three growing stages. Moreover, phenotypic traits at the leaf, stem, and individual levels were extracted from the correctly segmented instances. The mean point-level segmentation accuracies in terms of recall, precision, F-score, and overall accuracy were 0.92, 0.93, 0.92, and 0.93, respectively. The accuracy of phenotypic trait extraction at the leaf, stem, and individual levels ranged from 0.81 to 0.95, 0.64 to 0.97, and 0.96 to 1, respectively. To our knowledge, this paper proposed the first LiDAR-based stem–leaf segmentation and phenotypic trait extraction method for agricultural fields, which may contribute to the study of LiDAR-based plant phenomics and precision agriculture.
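To give a flavor of how traits are read off a segmented instance, here is a minimal sketch that estimates stem height and a crude diameter from segmented stem points. The helper `stem_traits` and the mid-height slice heuristic are hypothetical illustrations, not the MNVG paper's code.

```python
import numpy as np

def stem_traits(stem_pts, slice_half_width=0.02):
    """Estimate stem height and diameter from segmented stem points.
    stem_pts: (N, 3) array of x, y, z coordinates in metres."""
    z = stem_pts[:, 2]
    height = z.max() - z.min()
    # crude diameter: mean horizontal spread of a thin slice at mid-height
    mid = (z.max() + z.min()) / 2.0
    ring = stem_pts[np.abs(z - mid) < slice_half_width][:, :2]
    if len(ring) == 0:
        return height, float("nan")
    diameter = 2.0 * np.linalg.norm(ring - ring.mean(axis=0), axis=1).mean()
    return height, diameter

# Synthetic stem: rings of 8 points (radius 0.01 m) at five heights
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pts = np.array([[0.01 * np.cos(a), 0.01 * np.sin(a), zz]
                for zz in [0.0, 0.25, 0.5, 0.75, 1.0] for a in ang])
h, d = stem_traits(pts)
```

On this idealized cylinder the sketch recovers a height of 1 m and a diameter of 0.02 m; real LiDAR data would of course need the noise handling and validation the paper describes.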

78 citations

Journal ArticleDOI
TL;DR: The proposed voxel-based convolutional neural network demonstrated LiDAR’s ability to separate structural components for crop phenotyping using deep learning, which can be useful for other fields.
Abstract: Separating structural components is important but also challenging for plant phenotyping and precision agriculture. Light detection and ranging (LiDAR) technology can potentially overcome these difficulties by providing high quality data. However, there are difficulties in automatically classifying and segmenting components of interest. Deep learning can extract complex features, but it is mostly used with images. Here, we propose a voxel-based convolutional neural network (VCNN) for maize stem and leaf classification and segmentation. Maize plants at three different growth stages were scanned with a terrestrial LiDAR and the voxelized LiDAR data were used as inputs. A total of 3000 individual plants (22,004 leaves and 3000 stems) were prepared for training through data augmentation, and 103 maize plants were used to evaluate the accuracy of classification and segmentation at both instance and point levels. The VCNN was compared with traditional clustering methods (K-means and density-based spatial clustering of applications with noise), a geometry-based segmentation method, and state-of-the-art deep learning methods (PointNet and PointNet++). The results showed that: 1) at the instance level, the mean accuracy of classification and segmentation (F-score) were 1.00 and 0.96, respectively; 2) at the point level, the mean accuracy of classification and segmentation (F-score) were 0.91 and 0.89, respectively; 3) the VCNN method outperformed traditional clustering methods; and 4) the VCNN was on par with PointNet and PointNet++ in classification, and performed the best in segmentation. The proposed method demonstrated LiDAR’s ability to separate structural components for crop phenotyping using deep learning, which can be useful for other fields.
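The voxelization step that turns LiDAR points into a CNN-ready tensor can be illustrated as below. The function `voxelize`, the grid size, and the binary occupancy encoding are assumptions for this sketch, not the paper's exact preprocessing.

```python
import numpy as np

def voxelize(points, grid=32):
    """Rasterize an (N, 3) point cloud into a binary occupancy volume of
    shape (grid, grid, grid): the kind of input a voxel CNN consumes."""
    mins = points.min(axis=0)
    span = np.maximum(points.max(axis=0) - mins, 1e-9)  # avoid divide-by-zero
    # normalize each axis to [0, 1], scale to grid cells, clip the top edge
    idx = np.clip(((points - mins) / span * grid).astype(int), 0, grid - 1)
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # mark occupied cells
    return vol

# Three points spanning the cloud's bounding box land in three voxels:
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.5, 0.5, 0.5]])
vol = voxelize(pts, grid=4)
```

The fixed-size volume is what makes 3-D convolutions applicable regardless of how many raw points each plant was scanned with.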

51 citations

Journal ArticleDOI
TL;DR: The newly identified trait of LPR should provide a high throughput protocol for breeders to select superior rice cultivars as well as for agronomists to precisely manage field crops that have a good balance of source and sink.
Abstract: Identification and characterization of new traits with a sound physiological foundation is essential for crop breeding and production management. Deep learning has been widely used in image data analysis to explore spatial and temporal information on crop growth and development, thus strengthening the power of identifying physiological traits. Taking advantage of deep learning, this study aims to develop a novel canopy-structure trait that integrates source and sink in japonica rice. We applied a deep learning approach to accurately segment leaf and panicle, and subsequently developed the GvCrop procedure to calculate the leaf-to-panicle ratio (LPR) of the rice canopy during the grain-filling stage. Images in the training dataset were captured in field experiments, with large variations in camera shooting angle, the elevation and azimuth angles of the sun, rice genotype, and plant phenological stage. After the panicle and leaf regions were accurately labeled by manual annotation, the resulting dataset was used to train FPN-Mask (Feature Pyramid Network Mask) models, consisting of a backbone network and a task-specific sub-network. The model with the highest accuracy was then selected to examine variations in LPR among 192 rice germplasms and among agronomic practices. Despite the challenging field conditions, the FPN-Mask models achieved high detection accuracy, with pixel accuracies of 0.99 for panicles and 0.98 for leaves. The calculated LPR displayed large spatial and temporal variations as well as genotypic differences. In addition, it was responsive to agronomic practices such as nitrogen fertilization and spraying of plant growth regulators. Deep learning can achieve high accuracy in the simultaneous detection of panicle and leaf data from complex rice field images. The proposed FPN-Mask model is applicable for detecting and quantifying crop performance under field conditions. The newly identified trait of LPR should provide a high-throughput protocol for breeders to select superior rice cultivars as well as for agronomists to precisely manage field crops that have a good balance of source and sink.
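Once leaf and panicle pixels are segmented, the LPR itself reduces to a pixel-count ratio. A minimal sketch, assuming hypothetical class ids in a per-pixel semantic mask (not the GvCrop code):

```python
import numpy as np

def leaf_panicle_ratio(mask, leaf_id=1, panicle_id=2):
    """LPR from a per-pixel semantic mask: leaf pixel count divided by
    panicle pixel count. Class ids here are illustrative assumptions."""
    leaf = int((mask == leaf_id).sum())
    panicle = int((mask == panicle_id).sum())
    return leaf / panicle if panicle else float("inf")

# Toy 3x3 mask: 6 leaf pixels, 2 panicle pixels, 1 background pixel
mask = np.array([[1, 1, 1],
                 [1, 1, 1],
                 [2, 2, 0]])
lpr = leaf_panicle_ratio(mask)
```

In practice the masks would come from the trained segmentation model, and the ratio would be tracked per plot and per date to capture the spatial and temporal variation the study reports.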

36 citations

Journal ArticleDOI
TL;DR: It has been found that the Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) is systematically higher than the actual land surface in vegetated areas.
Abstract: It has been found that the Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) is systematically higher than the actual land surface in vegetated areas. This study developed a new...

34 citations


Cited by
Journal ArticleDOI
TL;DR: This review introduces the principles of CNNs and distils why they are particularly suitable for vegetation remote sensing, with considerations about spectral resolution, spatial grain, different sensor types, modes of reference data generation, sources of existing reference data, and CNN approaches and architectures.
Abstract: Identifying and characterizing vascular plants in time and space is required in various disciplines, e.g. in forestry, conservation, and agriculture. Remote sensing has emerged as a key technology revealing both spatial and temporal vegetation patterns. Harnessing the ever-growing streams of remote sensing data for the increasing demands of vegetation assessment and monitoring requires efficient, accurate, and flexible methods for data analysis. In this respect, the use of deep learning methods is trend-setting, enabling high predictive accuracy while learning the relevant data features independently in an end-to-end fashion. Very recently, a series of studies have demonstrated that the deep learning method of Convolutional Neural Networks (CNNs) is very effective at representing spatial patterns, enabling the extraction of a wide array of vegetation properties from remote sensing imagery. This review introduces the principles of CNNs and distils why they are particularly suitable for vegetation remote sensing. The main part synthesizes current trends and developments, including considerations about spectral resolution, spatial grain, different sensor types, modes of reference data generation, sources of existing reference data, and CNN approaches and architectures. The literature review showed that CNNs can be applied to various problems, including the detection of individual plants and the pixel-wise segmentation of vegetation classes, and numerous studies have shown that CNNs outperform shallow machine learning methods. Several studies suggest that the ability of CNNs to exploit spatial patterns particularly enhances the value of very high spatial resolution data. The modularity of common deep learning frameworks allows high flexibility in adapting architectures, from which multi-modal and multi-temporal applications can especially benefit.
The increasing availability of techniques for visualizing the features learned by CNNs will not only help interpret such models but also let us learn from them, improving our understanding of remotely sensed signals of vegetation. Although CNNs have not been around for long, it seems clear that they will usher in a new era of vegetation remote sensing.

473 citations

Journal ArticleDOI
09 Apr 2020
TL;DR: The goal of this review is to provide a comprehensive overview of the latest studies using deep convolutional neural networks (CNNs) in plant phenotyping applications, specifically reviewing the use of various CNN architectures for plant stress evaluation, plant development, and postharvest quality assessment.
Abstract: Plant phenotyping has been recognized as a bottleneck for improving the efficiency of breeding programs, understanding plant-environment interactions, and managing agricultural systems. In the past five years, imaging approaches have shown great potential for high-throughput plant phenotyping, resulting in more attention paid to imaging-based plant phenotyping. With this increased amount of image data, it has become urgent to develop robust analytical tools that can extract phenotypic traits accurately and rapidly. The goal of this review is to provide a comprehensive overview of the latest studies using deep convolutional neural networks (CNNs) in plant phenotyping applications. We specifically review the use of various CNN architectures for plant stress evaluation, plant development, and postharvest quality assessment. We systematically organize the studies based on technical developments in image classification, object detection, and image segmentation, thereby identifying state-of-the-art solutions for certain phenotyping applications. Finally, we provide several directions for future research in the use of CNN architectures for plant phenotyping purposes.

159 citations

Journal ArticleDOI
TL;DR: In this article, a review of existing deep learning-based weed detection and classification techniques is presented, covering data acquisition, dataset preparation, the DL techniques employed for the detection, localization, and classification of weeds in crops, and evaluation metrics.

128 citations

Journal ArticleDOI
TL;DR: Applications of LiDAR, thermal imaging, leaf and canopy spectral reflectance, chlorophyll fluorescence, and machine learning are discussed using wheat and sorghum phenotyping as case studies, and a vision of how crop genomics and high-throughput phenotyping could enable the next generation of crop research and breeding is presented.
Abstract: Plant phenotyping forms the core of crop breeding, allowing breeders to build on physiological traits and mechanistic science to inform their selection of material for crossing and genetic gain. Recent rapid progress in high-throughput techniques based on machine vision, robotics, and computing (plant phenomics) enables crop physiologists and breeders to quantitatively measure complex and previously intractable traits. By combining these techniques with affordable genomic sequencing and genotyping, machine learning, and genome selection approaches, breeders have an opportunity to make rapid genetic progress. This review focuses on how field-based plant phenomics can enable next-generation physiological breeding in cereal crops for traits related to radiation use efficiency, photosynthesis, and crop biomass. These traits have previously been regarded as difficult and laborious to measure but have recently become a focus as cereal breeders find that genetic progress from 'Green Revolution' traits such as harvest index has become exhausted. Applications of LiDAR, thermal imaging, leaf and canopy spectral reflectance, chlorophyll fluorescence, and machine learning are discussed using wheat and sorghum phenotyping as case studies. A vision of how crop genomics and high-throughput phenotyping could enable the next generation of crop research and breeding is presented.

124 citations

Journal ArticleDOI
TL;DR: Computer vision-based phenotyping will play significant roles in both the nowcasting and forecasting of plant traits through modeling of genotype/phenotype relationships.
Abstract: Employing computer vision to extract useful information from images and videos is becoming a key technique for identifying phenotypic changes in plants. Here, we review the emerging aspects of computer vision for automated plant phenotyping. Recent advances in image analysis empowered by machine learning-based techniques, including convolutional neural network-based modeling, have expanded their application to assist high-throughput plant phenotyping. Combinatorial use of multiple sensors to acquire various spectra has allowed us to noninvasively obtain a series of datasets, including those related to the development and physiological responses of plants throughout their life. Automated phenotyping platforms accelerate the elucidation of gene functions associated with traits in model plants under controlled conditions. Remote sensing techniques with image collection platforms, such as unmanned vehicles and tractors, are also emerging for large-scale field phenotyping for crop breeding and precision agriculture. Computer vision-based phenotyping will play significant roles in both the nowcasting and forecasting of plant traits through modeling of genotype/phenotype relationships.

108 citations