Author

Lingbo Liu

Bio: Lingbo Liu is an academic researcher from Huazhong University of Science and Technology. The author has contributed to research in topics: Medicine & Panicle. The author has an h-index of 1 and has co-authored 3 publications receiving 83 citations.

Papers
Journal ArticleDOI
TL;DR: Panicle-SEG is demonstrated to be a robust segmentation algorithm that can be extended to different rice accessions, field environments, camera angles, reproductive stages, and indoor rice images, and it creates a new opportunity for nondestructive yield estimation.
Abstract: Rice panicle phenotyping is important in rice breeding, and rice panicle segmentation is the first and key step for image-based panicle phenotyping. Because of illumination differences, panicle shape deformation, rice accession variation, differing reproductive stages and the field's complex background, rice panicle segmentation in the field is highly challenging. In this paper, we propose a rice panicle segmentation algorithm called Panicle-SEG, which is based on simple linear iterative clustering (SLIC) superpixel region generation, convolutional neural network classification and entropy rate superpixel optimization. To build the Panicle-SEG-CNN model and test the segmentation effects, 684 training images and 48 testing images were randomly selected. Six indicators, including Qseg, Sr, SSIM, Precision, Recall and F-measure, were employed to evaluate segmentation quality, and the average results for the 48 testing samples were 0.626, 0.730, 0.891, 0.821, 0.730, and 76.73%, respectively. Compared with other segmentation approaches, including HSeg, i2 hysteresis thresholding and jointSeg, the proposed Panicle-SEG algorithm achieves better segmentation accuracy. Meanwhile, execution speed is also improved when combined with multithreading and CUDA parallel acceleration. Moreover, Panicle-SEG was demonstrated to be a robust segmentation algorithm that can be extended to different rice accessions, field environments, camera angles, reproductive stages, and indoor rice images. The testing dataset and segmentation software are available online. In conclusion, the results demonstrate that Panicle-SEG is a robust method for panicle segmentation, and it creates a new opportunity for nondestructive yield estimation.
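To make the pipeline concrete, here is a minimal Python sketch of its two core stages, superpixel generation followed by per-superpixel classification. It is not the authors' implementation: the CNN is replaced by a hypothetical classify_patch stub, and the entropy rate superpixel optimization step is omitted.

```python
# Sketch of a SLIC-superpixel + patch-classifier segmentation loop.
import numpy as np
from skimage.io import imread
from skimage.segmentation import slic

def classify_patch(patch: np.ndarray) -> bool:
    """Hypothetical stand-in for the Panicle-SEG-CNN: returns True if the
    patch is predicted to be panicle. Replace with a trained CNN."""
    return patch[..., 1].mean() < 100  # toy heuristic on the green channel

def segment_panicle(image: np.ndarray, n_segments: int = 500) -> np.ndarray:
    labels = slic(image, n_segments=n_segments, compactness=10)
    mask = np.zeros(labels.shape, dtype=bool)
    for region_id in np.unique(labels):
        region = labels == region_id
        ys, xs = np.nonzero(region)  # bounding box of the superpixel
        patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        if classify_patch(patch):
            mask |= region  # keep superpixels classified as panicle
    return mask

# mask = segment_panicle(imread("rice_field.png"))
```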

134 citations

Journal ArticleDOI
TL;DR: In this article, an automatic and nondestructive method for 3D panicle modeling of large numbers of rice plants is presented. Existing 3D modeling techniques do not focus on specified parts of a target object and are therefore poorly suited to modeling large numbers of rice panicles.
Abstract: Self-occlusions are common in rice canopy images and strongly influence the calculation accuracies of panicle traits. Such interference can be largely eliminated if panicles are phenotyped at the 3D level. Research on 3D panicle phenotyping has been limited. Given that existing 3D modeling techniques do not focus on specified parts of a target object, an efficient method for panicle modeling of large numbers of rice plants is lacking. This paper presents an automatic and nondestructive method for 3D panicle modeling. The proposed method integrates shoot rice reconstruction with shape from silhouette, 2D panicle segmentation with a deep convolutional neural network, and 3D panicle segmentation with ray tracing and supervoxel clustering. A multiview imaging system was built to acquire image sequences of rice canopies with an efficiency of approximately 4 min per rice plant. The execution time of panicle modeling per rice plant using 90 images was approximately 26 min. The outputs of the algorithm for a single rice plant are a shoot rice model, surface shoot rice model, panicle model, and surface panicle model, all represented by a list of spatial coordinates. The efficiency and performance were evaluated and compared with the classical structure-from-motion algorithm. The results demonstrated that the proposed method is well qualified to recover the 3D shapes of rice panicles from multiview images and is readily adaptable to rice plants of diverse accessions and growth stages. The proposed algorithm is superior to the structure-from-motion method in terms of texture preservation and computational efficiency. The sample images and implementation of the algorithm are available online. This automatic, cost-efficient, and nondestructive method of 3D panicle modeling may be applied to high-throughput 3D phenotyping of large rice populations.
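As a rough illustration of the shape-from-silhouette stage used for shoot rice reconstruction, the sketch below carves a voxel grid against binary silhouettes. The camera projection matrices and silhouette masks are assumed inputs; the ray tracing and supervoxel clustering steps for 3D panicle segmentation are not shown.

```python
# Voxel carving: keep voxel centers that project inside every silhouette.
import numpy as np

def carve_voxels(silhouettes, projection_matrices, axes):
    """silhouettes: list of binary (H, W) masks; projection_matrices:
    list of 3x4 camera matrices; axes: (xs, ys, zs) 1-D coordinate arrays
    defining the voxel grid."""
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)
    keep = np.ones(len(pts), dtype=bool)
    for sil, P in zip(silhouettes, projection_matrices):
        uvw = pts @ P.T                                  # homogeneous pixels
        u = np.rint(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.rint(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep &= inside                                   # outside any view: carved
        keep[inside] &= sil[v[inside], u[inside]].astype(bool)
    return pts[keep, :3]                                 # surviving voxel centers
```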

7 citations

Book ChapterDOI
27 Sep 2015
TL;DR: A feasible method for rapid identification of rice varieties was developed, and it could contribute new knowledge to the development of computer vision systems for automated rice evaluation.
Abstract: Rice is the major food of approximately half of the world's population, and thousands of rice varieties are planted worldwide. The identification of rice varieties is of great significance, especially to breeders. In this study, a feasible method for rapid identification of rice varieties was developed. For each rice variety, the grains per plant were imaged and analyzed to acquire grain shape features, and a weighing device was used to obtain yield-related parameters. A Support Vector Machine (SVM) classifier was then employed to discriminate the rice varieties based on these features. The average accuracy of grain trait extraction is 98.41%, and the average accuracy of the SVM classifier is 79.74% under cross validation. The results demonstrate that this method can accurately identify rice varieties and could contribute new knowledge to the development of computer vision systems for automated rice evaluation.
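The variety-discrimination step is a standard supervised-classification setup, sketched below with scikit-learn. The feature matrix here is randomly generated as a placeholder for the real image-derived grain traits and yield parameters, and the kernel and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# SVM classification of (placeholder) grain features with cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))     # e.g. grain length, width, area, weight...
y = rng.integers(0, 10, size=200)  # 10 hypothetical rice varieties

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.4f}")
```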

2 citations

Journal ArticleDOI
TL;DR: In this paper, a DeepLabv3+ model with an Xception backbone was used to automatically segment sectional images of coconut fruits and seeds captured by a Micro-CT system, enabling nondestructive acquisition of 3D models and phenotypic traits.
Abstract: With the completion of the coconut gene map and the gradual improvement of related molecular biology tools, molecular marker-assisted breeding has become the next focus of coconut breeding, and accurate measurement of coconut phenotypic traits will provide technical support for screening and identifying the correspondence between genotype and phenotype. A Micro-CT system was developed to measure coconut fruits and seeds automatically and nondestructively and to acquire 3D models and phenotypic traits. A DeepLabv3+ model with an Xception backbone was used to automatically segment the sectional images of coconut fruits and seeds. Compared with structured-light system measurements, the mean absolute percentage errors of the fruit volume and surface area measurements by the Micro-CT system were 1.87% and 2.24%, respectively, and the squared correlation coefficients were 0.977 and 0.964, respectively. In addition, compared with manual measurements, the mean absolute percentage errors of the automatic copra weight and total biomass measurements were 8.85% and 25.19%, respectively, and the adjusted squared correlation coefficients were 0.922 and 0.721, respectively. The Micro-CT system can nondestructively and precisely obtain up to 21 agronomic traits and 57 digital traits.
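The validation arithmetic behind these numbers is simple to reproduce. The sketch below computes the mean absolute percentage error (MAPE) and the squared correlation coefficient between automatic and reference measurements; the sample arrays are illustrative placeholders, not the paper's data.

```python
# MAPE and R^2 between automatic (Micro-CT) and reference measurements.
import numpy as np

def mape(reference: np.ndarray, measured: np.ndarray) -> float:
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((measured - reference) / reference)) * 100)

def r_squared(reference: np.ndarray, measured: np.ndarray) -> float:
    """Square of the Pearson correlation coefficient."""
    r = np.corrcoef(reference, measured)[0, 1]
    return float(r ** 2)

ref = np.array([1510.0, 1742.0, 1388.0, 1623.0])  # e.g. fruit volume, cm^3
ct = np.array([1535.0, 1712.0, 1410.0, 1650.0])   # Micro-CT measurement
print(f"MAPE = {mape(ref, ct):.2f}%, R^2 = {r_squared(ref, ct):.3f}")
```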

Cited by
Journal ArticleDOI
TL;DR: A comparative assessment of DL tools against other existing techniques is provided, with respect to decision accuracy, data size requirements, and applicability in various scenarios.

350 citations

Journal ArticleDOI
TL;DR: The aim of this paper is to review the most recent work on the application of machine vision to agriculture, mainly for crop farming, and to serve as a research guide for researchers and practitioners alike in applying cognitive technology to agriculture.
Abstract: Machine vision for precision agriculture has attracted considerable research interest in recent years. The aim of this paper is to review the most recent work in the application of machine vision to agriculture, mainly for crop farming. This study can serve as a research guide for the researcher and practitioner alike in applying cognitive technology to agriculture. Studies of different agricultural activities that support crop harvesting are reviewed, such as fruit grading, fruit counting, and yield estimation. Moreover, plant health monitoring approaches are addressed, including weed, insect, and disease detection. Finally, recent research efforts considering vehicle guidance systems and agricultural harvesting robots are also reviewed.

129 citations

Journal ArticleDOI
27 Jun 2019
TL;DR: It is demonstrated that it is possible to significantly reduce human labeling effort without compromising final model performance by using a semitrained CNN model (i.e., trained with limited labeled data) to perform synthetic annotation.
Abstract: The yield of cereal crops such as sorghum (Sorghum bicolor L. Moench) depends on the distribution of crop-heads in varying branching arrangements. Therefore, counting the head number per unit area is critical for plant breeders to correlate with the genotypic variation in a specific breeding field. However, measuring such phenotypic traits manually is an extremely labor-intensive process, suffers from low efficiency and human error, and is almost infeasible for large-scale breeding plantations or experiments. Machine learning-based approaches such as deep convolutional neural network (CNN) based object detectors are promising tools for efficient object detection and counting. However, a significant limitation of such deep learning-based approaches is that they typically require massive amounts of hand-labeled images for training, which remains a tedious process. Here, we propose an active learning inspired weakly supervised deep learning framework for sorghum head detection and counting from UAV-based images. We demonstrate that it is possible to significantly reduce human labeling effort without compromising final model performance (the R² between human count and machine count is 0.88) by using a semitrained CNN model (i.e., trained with limited labeled data) to perform synthetic annotation. In addition, we also visualize key features that the network learns, which improves trustworthiness by enabling users to better understand and trust the decisions of the trained deep learning model.
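The synthetic-annotation idea can be sketched as a pseudo-labeling loop: a detector trained on a small hand-labeled seed set labels the unlabeled pool, high-confidence detections are kept as pseudo-labels, and the model is retrained. The Detector class below is a hypothetical interface, not the authors' code, and the threshold and round count are assumptions.

```python
# Pseudo-labeling loop in the spirit of weakly supervised synthetic annotation.
from dataclasses import dataclass, field

@dataclass
class Detector:
    """Hypothetical detector interface (e.g. a CNN-based head detector)."""
    labeled: list = field(default_factory=list)

    def train(self, dataset):
        self.labeled = list(dataset)  # stand-in for real training

    def predict(self, image):
        return []                     # stand-in: list of (box, score) pairs

def synthetic_annotation(seed_set, unlabeled_images, threshold=0.9, rounds=3):
    model = Detector()
    model.train(seed_set)             # semitrained model from limited labels
    dataset = list(seed_set)
    for _ in range(rounds):
        for img in unlabeled_images:
            pseudo = [(box, s) for box, s in model.predict(img) if s >= threshold]
            if pseudo:                # keep only confident detections
                dataset.append((img, [box for box, _ in pseudo]))
        model.train(dataset)          # retrain on seed + pseudo-labels
    return model
```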

123 citations

Journal ArticleDOI
TL;DR: The results showed that the method combining deep learning and region-growing algorithms is promising for individual maize segmentation, and the values of r, p, and F for the three testing sites with different planting densities were all over 0.9.
Abstract: The rapid development of light detection and ranging (Lidar) provides a promising way to obtain three-dimensional (3D) phenotypic traits, owing to its ability to record accurate 3D laser points. Recently, Lidar has been widely used alongside other sensors to obtain phenotype data in the greenhouse and field. Individual maize segmentation is the prerequisite for high-throughput phenotype extraction at the individual-crop or leaf level, which is still a major challenge. Deep learning, a state-of-the-art machine learning method, has shown high performance in object detection, classification, and segmentation. In this study, we propose a method that combines deep learning and region-growing algorithms to segment individual maize plants from terrestrial Lidar data. The scanned 3D points of the training site were sliced row by row with a fixed 3D window. Points within the window were compressed into deep images, which were used to train a Faster R-CNN (region-based convolutional neural network) model to detect maize stems. Three sites with different planting densities were used to test the method. Each site was likewise sliced into 3D windows, from which testing images were generated. The stems detected in the testing images were mapped back to 3D points, which served as seed points for the region-growing algorithm to grow individual maize plants from bottom to top. The results showed that the method combining deep learning and region growing is promising for individual maize segmentation, and the values of r, p, and F for the three testing sites with different planting densities were all over 0.9. Moreover, the height of correctly segmented maize was highly correlated with the manually measured height (R² > 0.9). This work shows the possibility of using deep learning to solve the individual maize segmentation problem from Lidar data.
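The region-growing stage can be illustrated compactly: starting from the detected stem points (seed indices), neighboring Lidar points within a distance threshold are absorbed iteratively until the plant stops growing. The stem-detection step is assumed to have produced the seeds, and the radius is an illustrative assumption.

```python
# Region growing over a point cloud from seed indices, via a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

def grow_region(points: np.ndarray, seeds: list, radius: float = 0.05):
    """Return indices of all points connected to the seeds within `radius`."""
    tree = cKDTree(points)
    grown = set(seeds)
    frontier = list(seeds)
    while frontier:
        # Neighbors of every frontier point, one candidate set per iteration.
        neighbors = tree.query_ball_point(points[frontier], r=radius)
        candidates = {j for nbrs in neighbors for j in nbrs} - grown
        grown.update(candidates)
        frontier = list(candidates)
    return np.fromiter(grown, dtype=int)

# pts = np.loadtxt("maize_plot.xyz")              # N x 3 Lidar points
# plant = pts[grow_region(pts, seeds=[0])]        # grow one plant from a stem seed
```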

102 citations

Journal ArticleDOI
TL;DR: A framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system and a taxonomy of phenotypes that may be derived by image analysis for better understanding of morphological structure and functional processes in plants are provided.
Abstract: The complex interaction between a genotype and its environment controls the biophysical properties of a plant, manifested in observable traits, i.e., the plant's phenome, which influences resource acquisition, performance, and yield. High-throughput automated image-based plant phenotyping refers to sensing and quantifying plant traits nondestructively by analyzing images captured at regular intervals and with precision. While phenomic research has drawn significant attention in the last decade, extracting meaningful and reliable numerical phenotypes from plant images, especially by considering individual components, e.g., leaves, stem, fruit, and flower, remains a critical bottleneck to translating advances in phenotyping technology into genetic insights, owing to various challenges including lighting variations, plant rotations, and self-occlusions. The paper provides (1) a framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system; (2) a taxonomy of phenotypes that may be derived by image analysis for better understanding of morphological structure and functional processes in plants; (3) a brief discussion of publicly available datasets to encourage algorithm development and uniform comparison with state-of-the-art methods; (4) an overview of state-of-the-art image-based high-throughput plant phenotyping methods; and (5) open problems for the advancement of this research field.

94 citations