scispace - formally typeset
Author

Shuxin Pang

Bio: Shuxin Pang is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in topics: Lidar & Segmentation. The author has an h-index of 7 and has co-authored 11 publications receiving 244 citations.

Papers
Journal ArticleDOI
TL;DR: The results showed that the method combining deep learning and regional growth algorithms was promising for individual maize segmentation, and the values of r, p, and F at the three testing sites with different planting densities were all over 0.9.
Abstract: The rapid development of light detection and ranging (Lidar) provides a promising way to obtain three-dimensional (3D) phenotype traits, thanks to its ability to record accurate 3D laser points. Recently, Lidar has been widely used to obtain phenotype data in the greenhouse and field along with other sensors. Individual maize segmentation is the prerequisite for high-throughput phenotype data extraction at the individual crop or leaf level, which is still a huge challenge. Deep learning, a state-of-the-art machine learning method, has shown high performance in object detection, classification, and segmentation. In this study, we proposed a method combining deep learning and regional growth algorithms to segment individual maize plants from terrestrial Lidar data. The scanned 3D points of the training site were sliced row by row with a fixed 3D window. Points within the window were compressed into deep images, which were used to train the Faster R-CNN (region-based convolutional neural network) model to learn to detect maize stems. Three sites of different planting densities were used to test the method. Each site was also sliced into many 3D windows, and the testing deep images were generated. The stems detected in the testing images can be mapped back to 3D points, which were used as seed points for the regional growth algorithm to grow individual maize plants from bottom to top. The results showed that the method combining deep learning and regional growth algorithms was promising for individual maize segmentation, and the values of r, p, and F at the three testing sites with different planting densities were all over 0.9. Moreover, the height of the correctly segmented maize plants was highly correlated with the manually measured height (R² > 0.9). This work shows the possibility of using deep learning to solve the individual maize segmentation problem from Lidar data.
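The seed-driven region-growth step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the point format, the seed list, and the distance threshold are all assumptions.

```python
# Illustrative sketch of bottom-to-top region growth from stem seed points.
# `points` are (x, y, z) tuples; `seeds` are the 3D points mapped back from
# the stems detected by Faster R-CNN. The radius is a made-up threshold.
from math import dist

def region_grow(points, seeds, radius=0.6):
    """Grow one cluster per seed, repeatedly claiming unassigned points
    within `radius` of the cluster's current frontier."""
    labels = [None] * len(points)          # point index -> cluster id
    frontiers = [[s] for s in seeds]       # one frontier per seed cluster
    changed = True
    while changed:
        changed = False
        for cid, frontier in enumerate(frontiers):
            new_frontier = []
            for i, p in enumerate(points):
                if labels[i] is None and any(dist(p, q) <= radius for q in frontier):
                    labels[i] = cid
                    new_frontier.append(p)
                    changed = True
            if new_frontier:
                frontiers[cid] = new_frontier
    return labels
```

With two seeds at the bases of two plants, each point ends up labeled with the cluster of the nearest growing stem, which is the behavior the pipeline relies on.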

102 citations

Journal ArticleDOI
TL;DR: A median normalized-vector growth (MNVG) algorithm is proposed that segments stem and leaf in four steps, i.e., preprocessing, stem growth, leaf growth, and postprocessing, and may contribute to the study of LiDAR-based plant phenomics and precision agriculture.
Abstract: Accurate, high-throughput extraction of crop phenotypic traits, a crucial step in molecular breeding, is of great importance for increasing yield. However, automatic stem–leaf segmentation, a prerequisite of many precise phenotypic trait extractions, is still a big challenge. Current works focus on 2-D image-based segmentation, which is sensitive to illumination and occlusion. Light detection and ranging (LiDAR) can obtain accurate 3-D information with its active laser scanning and strong penetration ability, which extends phenotyping from 2-D to 3-D. However, little research has addressed the problem of LiDAR-based stem–leaf segmentation. In this paper, we proposed a median normalized-vector growth (MNVG) algorithm, which segments stem and leaf in four steps, i.e., preprocessing, stem growth, leaf growth, and postprocessing. The MNVG method was tested on 30 maize samples with different heights, compactness, leaf numbers, and densities from three growing stages. Moreover, phenotypic traits at the leaf, stem, and individual levels were extracted from the correctly segmented instances. The mean segmentation accuracies at the point level in terms of recall, precision, F-score, and overall accuracy were 0.92, 0.93, 0.92, and 0.93, respectively. The accuracy of phenotypic trait extraction at the leaf, stem, and individual levels ranged from 0.81 to 0.95, 0.64 to 0.97, and 0.96 to 1, respectively. To our knowledge, this paper proposed the first LiDAR-based stem–leaf segmentation and phenotypic trait extraction method in the agriculture field, which may contribute to the study of LiDAR-based plant phenomics and precision agriculture.
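The point-level F-score reported above follows directly from the stated recall and precision via the standard formula F = 2PR / (P + R); a quick check:

```python
# Harmonic mean of precision and recall (standard F-score definition).
def f_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Using the reported point-level precision (0.93) and recall (0.92):
round(f_score(0.93, 0.92), 2)  # → 0.92, matching the reported F-score
```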

78 citations

Journal ArticleDOI
TL;DR: The results demonstrate the feasibility of using terrestrial lidar to monitor 3D maize phenotypes under drought stress in the field and may provide new insights into identifying the key phenotypes and growth stages influenced by drought stress.
Abstract: Maize (Zea mays L.) is the third most consumed grain in the world, and improving maize yield is of great importance to world food security, especially under global climate change and more frequent severe droughts. Due to the limitations of phenotyping methods, most current studies have focused only on the responses of phenotypes at certain key growth stages. Although light detection and ranging (lidar) technology has shown great potential in acquiring three-dimensional (3D) vegetation information, it has rarely been used to monitor maize phenotype dynamics at the individual plant level. In this study, we used a terrestrial laser scanner to collect lidar data at six growth stages for 20 maize varieties under drought stress. Three drought-related phenotypes, i.e., plant height, plant area index (PAI) and projected leaf area (PLA), were calculated from the lidar point clouds at the individual plant level. The results showed that terrestrial lidar data can be used to estimate plant height, PAI and PLA at accuracies of 96%, 70% and 92%, respectively. All three phenotypes showed a pattern of first increasing and then decreasing during the growth period. The high drought tolerance group tended to keep lower plant height and PAI without losing PLA during the tasseling stage. Moreover, the high drought tolerance group tended to have lower plant area density in the upper canopy than the low drought tolerance group. The results demonstrate the feasibility of using terrestrial lidar to monitor 3D maize phenotypes under drought stress in the field and may provide new insights into identifying the key phenotypes and growth stages influenced by drought stress.
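Two of the lidar-derived traits above are simple functions of the point cloud. The sketch below is illustrative, not the authors' code: plant height as the z-extent of the cloud, and PLA approximated from the footprint of occupied ground-plane grid cells, with the cell size an assumption.

```python
# Illustrative per-plant trait extraction from an (x, y, z) point cloud.
def plant_height(points):
    """Height as the vertical extent of the point cloud (meters)."""
    zs = [p[2] for p in points]
    return max(zs) - min(zs)

def projected_leaf_area(points, cell=0.01):
    """Approximate PLA as (number of occupied x-y grid cells) * cell area.
    `cell` is an assumed grid resolution in meters."""
    cells = {(int(p[0] // cell), int(p[1] // cell)) for p in points}
    return len(cells) * cell * cell
```

PAI would additionally require a vertical profile of plant area density, which is more involved; these two traits show the flavor of the per-plant computation.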

75 citations

Journal ArticleDOI
TL;DR: A high-throughput crop phenotyping platform, named Crop 3D, was developed, integrating a LiDAR sensor, high-resolution camera, thermal camera and hyperspectral imager.
Abstract: With a growing population and shrinking arable land, breeding has been considered an effective way to solve the food crisis. As an important part of breeding, high-throughput phenotyping can accelerate the breeding process effectively. Light detection and ranging (LiDAR) is an active remote sensing technology that is capable of acquiring three-dimensional (3D) data accurately, and it has great potential in crop phenotyping. Given that crop phenotyping based on LiDAR technology is not common in China, we developed a high-throughput crop phenotyping platform, named Crop 3D, which integrates a LiDAR sensor, high-resolution camera, thermal camera and hyperspectral imager. Compared with traditional crop phenotyping techniques, Crop 3D can acquire multi-source phenotypic data over the whole crop growing period and extract plant height, plant width, leaf length, leaf width, leaf area, leaf inclination angle and other parameters for plant biology and genomics analysis. In this paper, we described the design, functions and testing results of the Crop 3D platform, and briefly discussed the potential applications and future development of the platform in phenotyping. We concluded that platforms integrating LiDAR and traditional remote sensing techniques might be the future trend of crop high-throughput phenotyping.

70 citations

Journal ArticleDOI
TL;DR: The proposed voxel-based convolutional neural network demonstrated LiDAR’s ability to separate structural components for crop phenotyping using deep learning, which can be useful for other fields.
Abstract: Separating structural components is important but also challenging for plant phenotyping and precision agriculture. Light detection and ranging (LiDAR) technology can potentially overcome these difficulties by providing high quality data. However, there are difficulties in automatically classifying and segmenting components of interest. Deep learning can extract complex features, but it is mostly used with images. Here, we propose a voxel-based convolutional neural network (VCNN) for maize stem and leaf classification and segmentation. Maize plants at three different growth stages were scanned with a terrestrial LiDAR and the voxelized LiDAR data were used as inputs. A total of 3000 individual plants (22,004 leaves and 3000 stems) were prepared for training through data augmentation, and 103 maize plants were used to evaluate the accuracy of classification and segmentation at both instance and point levels. The VCNN was compared with traditional clustering methods (K-means and density-based spatial clustering of applications with noise), a geometry-based segmentation method, and state-of-the-art deep learning methods (PointNet and PointNet++). The results showed that: 1) at the instance level, the mean accuracy of classification and segmentation (F-score) were 1.00 and 0.96, respectively; 2) at the point level, the mean accuracy of classification and segmentation (F-score) were 0.91 and 0.89, respectively; 3) the VCNN method outperformed traditional clustering methods; and 4) the VCNN was on par with PointNet and PointNet++ in classification, and performed the best in segmentation. The proposed method demonstrated LiDAR's ability to separate structural components for crop phenotyping using deep learning, which can be useful for other fields.
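The voxelization step that produces the VCNN's inputs can be sketched as an occupancy grid. This is an illustrative sketch, not the authors' code; the voxel edge length and grid dimensions are assumptions.

```python
# Illustrative voxelization of an (x, y, z) point cloud into a sparse
# occupancy grid, the kind of fixed-size input a voxel-based CNN consumes.
def voxelize(points, edge=0.02, dims=(32, 32, 64)):
    """Return a dict mapping occupied voxel indices to point counts.
    Points falling outside the `dims` grid are discarded."""
    grid = {}
    for x, y, z in points:
        idx = (int(x // edge), int(y // edge), int(z // edge))
        if all(0 <= i < d for i, d in zip(idx, dims)):
            grid[idx] = grid.get(idx, 0) + 1
    return grid
```

A dense binary tensor for the network would then just set 1 at each occupied index; the sparse dict form keeps the sketch short.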

51 citations


Cited by
Journal ArticleDOI
TL;DR: This review introduces the principles of CNN and distils why they are particularly suitable for vegetation remote sensing, including considerations about spectral resolution, spatial grain, different sensor types, modes of reference data generation, sources of existing reference data, as well as CNN approaches and architectures.
Abstract: Identifying and characterizing vascular plants in time and space is required in various disciplines, e.g. in forestry, conservation and agriculture. Remote sensing has emerged as a key technology revealing both spatial and temporal vegetation patterns. Harnessing the ever growing streams of remote sensing data for the increasing demands of vegetation assessment and monitoring requires efficient, accurate and flexible methods for data analysis. In this respect, the use of deep learning methods is trend-setting, enabling high predictive accuracy while learning the relevant data features independently in an end-to-end fashion. Very recently, a series of studies have demonstrated that the deep learning method of Convolutional Neural Networks (CNN) is very effective at representing spatial patterns, enabling the extraction of a wide array of vegetation properties from remote sensing imagery. This review introduces the principles of CNN and distils why they are particularly suitable for vegetation remote sensing. The main part synthesizes current trends and developments, including considerations about spectral resolution, spatial grain, different sensor types, modes of reference data generation, sources of existing reference data, as well as CNN approaches and architectures. The literature review showed that CNN can be applied to various problems, including the detection of individual plants or the pixel-wise segmentation of vegetation classes, while numerous studies have shown that CNN outperform shallow machine learning methods. Several studies suggest that the ability of CNN to exploit spatial patterns particularly increases the value of very high spatial resolution data. The modularity of common deep learning frameworks allows high flexibility in the adaptation of architectures, from which especially multi-modal or multi-temporal applications can benefit.
The increasing availability of techniques for visualizing features learned by CNN will not only help interpret such models but also let us learn from them and improve our understanding of remotely sensed signals of vegetation. Although CNN have not been around for long, it seems clear that they will usher in a new era of vegetation remote sensing.

473 citations

Journal ArticleDOI
TL;DR: Main developments in high-throughput phenotyping in controlled environments and field conditions, as well as for post-harvest yield and quality assessment, over the past decades are reviewed, and the latest multi-omics works combining high-throughput phenotyping and genetic studies are described.

349 citations

Journal ArticleDOI
09 Apr 2020
TL;DR: The goal of this review is to provide a comprehensive overview of the latest studies using deep convolutional neural networks (CNNs) in plant phenotyping applications, specifically reviewing the use of various CNN architectures for plant stress evaluation, plant development, and postharvest quality assessment.
Abstract: Plant phenotyping has been recognized as a bottleneck for improving the efficiency of breeding programs, understanding plant-environment interactions, and managing agricultural systems. In the past five years, imaging approaches have shown great potential for high-throughput plant phenotyping, resulting in more attention paid to imaging-based plant phenotyping. With this increased amount of image data, it has become urgent to develop robust analytical tools that can extract phenotypic traits accurately and rapidly. The goal of this review is to provide a comprehensive overview of the latest studies using deep convolutional neural networks (CNNs) in plant phenotyping applications. We specifically review the use of various CNN architectures for plant stress evaluation, plant development, and postharvest quality assessment. We systematically organize the studies based on technical developments resulting from image classification, object detection, and image segmentation, thereby identifying state-of-the-art solutions for certain phenotyping applications. Finally, we provide several directions for future research in the use of CNN architectures for plant phenotyping purposes.

159 citations

Journal ArticleDOI
TL;DR: Crop yields need to be improved in a sustainable manner to meet the expected worldwide increase in population over the coming decades, as well as the effects of anticipated climate change; in this regard, genomics-assisted breeding has become a popular approach to food security.
Abstract: Crop yields need to be improved in a sustainable manner to meet the expected worldwide increase in population over the coming decades as well as the effects of anticipated climate change. Recently, genomics-assisted breeding has become a popular approach to food security; in this regard, the crop breeding community must better link the relationships between the phenotype and the genotype. While high-throughput genotyping is feasible at a low cost, high-throughput crop phenotyping methods and data analytical capacities need to be improved.

129 citations

Journal ArticleDOI
TL;DR: Applications of LiDAR, thermal imaging, leaf and canopy spectral reflectance, Chl fluorescence, and machine learning are discussed using wheat and sorghum phenotyping as case studies, and a vision of how crop genomics and high-throughput phenotyping could enable the next generation of crop research and breeding is presented.
Abstract: Plant phenotyping forms the core of crop breeding, allowing breeders to build on physiological traits and mechanistic science to inform their selection of material for crossing and genetic gain. Recent rapid progress in high-throughput techniques based on machine vision, robotics, and computing (plant phenomics) enables crop physiologists and breeders to quantitatively measure complex and previously intractable traits. By combining these techniques with affordable genomic sequencing and genotyping, machine learning, and genome selection approaches, breeders have an opportunity to make rapid genetic progress. This review focuses on how field-based plant phenomics can enable next-generation physiological breeding in cereal crops for traits related to radiation use efficiency, photosynthesis, and crop biomass. These traits were previously regarded as difficult and laborious to measure but have recently become a focus as cereal breeders find that genetic progress from 'Green Revolution' traits such as harvest index has become exhausted. Applications of LiDAR, thermal imaging, leaf and canopy spectral reflectance, Chl fluorescence, and machine learning are discussed using wheat and sorghum phenotyping as case studies. A vision of how crop genomics and high-throughput phenotyping could enable the next generation of crop research and breeding is presented.

124 citations