
Showing papers by "David Rousseau" published in 2020


Journal ArticleDOI
TL;DR: A full pipeline of image processing and machine learning to classify three stages of plant growth plus soil on different accessions of two species, red clover and alfalfa, which could easily be extended to other crops and other stages of development.
Abstract: Monitoring the timing of seedling emergence and early development via high-throughput phenotyping with computer vision is a challenging topic of high interest in plant science. While most studies focus on measurements of leaf area index or detection of specific events such as emergence, little attention has been paid to identifying the kinetics of early seedling development events on a seed-to-seed basis. Imaging systems screened the whole seedling growth process from the top view. Precise annotation of emergence out of the soil, cotyledon opening, and appearance of the first leaf was conducted. This annotated data set served to train deep neural networks. Various strategies to incorporate into the neural networks the prior knowledge of the order of the developmental stages were investigated. Best results were obtained with a deep neural network followed by a long short-term memory (LSTM) cell, which achieves more than 90% accuracy of correct detection. This work provides a full pipeline of image processing and machine learning to classify three stages of plant growth plus soil on different accessions of two species, red clover and alfalfa, which could easily be extended to other crops and other stages of development.
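As a rough illustration of the architecture described above (a convolutional feature extractor followed by an LSTM cell classifying growth stages over an image time series), here is a minimal, hypothetical PyTorch sketch. Layer sizes, the four-class output (soil plus three growth stages), and the input resolution are assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a CNN + LSTM classifier for per-frame growth-stage
# labelling of a seedling image sequence (assumed classes: soil, emergence,
# cotyledon opening, first leaf). Layer sizes are illustrative only.
import torch
import torch.nn as nn

class CNNLSTMStageClassifier(nn.Module):
    def __init__(self, n_classes=4, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Small convolutional encoder applied independently to each frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # The LSTM integrates temporal context so that predicted stages
        # follow the developmental order more consistently.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)  # per-frame logits: (batch, time, n_classes)

if __name__ == "__main__":
    model = CNNLSTMStageClassifier()
    dummy = torch.randn(2, 10, 3, 64, 64)  # 2 sequences of 10 frames
    print(model(dummy).shape)              # torch.Size([2, 10, 4])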

34 citations


Journal ArticleDOI
TL;DR: Research efforts on using thermal imaging systems in seed applications are reviewed, including estimation of seed viability, detection of fungal growth and insect infestations, detection of seed damage and impurities, seed classification, and variety identification.

31 citations


Journal ArticleDOI
TL;DR: The ROSE-X data set is constructed to serve both as training data for supervised learning methods performing organ-level segmentation and as a benchmark to evaluate their performance, and has the potential to become a significant resource for future studies on automatic plant phenotyping.
Abstract: The production and availability of annotated data sets are indispensable for training and evaluation of automatic phenotyping methods. The need for complete 3D models of real plants with organ-level labeling is even more pronounced due to the advances in 3D vision-based phenotyping techniques and the difficulty of full annotation of the intricate 3D plant structure. We introduce the ROSE-X data set of 11 annotated 3D models of real rosebush plants acquired through X-ray tomography and presented both in volumetric form and as point clouds. The annotation is performed manually to provide ground truth data in the form of organ labels for the voxels corresponding to the plant shoot. This data set is constructed to serve both as training data for supervised learning methods performing organ-level segmentation and as a benchmark to evaluate their performance. The rosebush models in the data set are of high quality and complex architecture, with organs frequently touching each other, posing a challenge for current plant organ segmentation methods. We report leaf/stem segmentation results obtained using four baseline methods. The best performance is achieved by the volumetric approach where local features are trained with a random forest classifier, giving Intersection over Union (IoU) values of 97.93% and 86.23% for the leaf and stem classes, respectively. We provided an annotated 3D data set of 11 rosebush plants for training and evaluation of organ segmentation methods. We also reported leaf/stem segmentation results of baseline methods, which are open to improvement. The data set, together with the baseline results, has the potential to become a significant resource for future studies on automatic plant phenotyping.
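The general pattern of the best-performing baseline above (local features classified with a random forest, evaluated with per-class IoU) can be sketched as follows with scikit-learn. This is only an assumed skeleton: the feature vectors and labels below are random placeholders, not the ROSE-X volumes or the authors' feature extraction.

```python
# Hedged sketch: random-forest leaf/stem voxel classification scored with
# per-class IoU, in the spirit of the volumetric baseline described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def iou(pred, gt, cls):
    inter = np.logical_and(pred == cls, gt == cls).sum()
    union = np.logical_or(pred == cls, gt == cls).sum()
    return inter / union if union else float("nan")

# Placeholder data: each plant voxel gets a local feature vector and a label
# (0 = leaf, 1 = stem). Real features would be computed from the X-ray volume.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(5000, 16)), rng.integers(0, 2, 5000)
X_test, y_test = rng.normal(size=(1000, 16)), rng.integers(0, 2, 1000)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("leaf IoU:", iou(pred, y_test, 0), "stem IoU:", iou(pred, y_test, 1))
```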

29 citations


Posted Content
TL;DR: In this article, six deep learning architectures that segment 3D point clouds into semantic parts were adapted and compared on the ROSE-X data set, containing fully annotated 3D models of real rosebush plants.
Abstract: Segmentation of structural parts of 3D models of plants is an important step for plant phenotyping, especially for monitoring architectural and morphological traits. This work introduces a benchmark for assessing the performance of 3D point-based deep learning methods on organ segmentation of 3D plant models, specifically rosebush models. Six recent deep learning architectures that segment 3D point clouds into semantic parts were adapted and compared. The methods were tested on the ROSE-X data set, containing fully annotated 3D models of real rosebush plants. The contribution of incorporating synthetic 3D models generated through Lindenmayer systems into training data was also investigated.

16 citations


Journal ArticleDOI
TL;DR: A fully automatic clustering method is proposed to discriminate the glioma margin from spectroscopic fluorescence measurements acquired with a recently introduced intraoperative setup; this improves margin prediction from healthy tissue compared with the standard biomarker-based prediction.
Abstract: Gliomas are infiltrative brain tumors with a margin that is difficult to identify. 5-ALA-induced PpIX fluorescence measurements are a clinical standard, but expert-based classification models still lack sensitivity and specificity. Here, a fully automatic clustering method is proposed to discriminate the glioma margin. It operates on spectroscopic fluorescence measurements acquired with a recently introduced intraoperative setup. We describe a data-driven selection of the best spectral features and show how this improves margin prediction from healthy tissue compared with the standard biomarker-based prediction. This pilot study, based on 10 patients and 50 samples, shows promising results, with a best accuracy of 77% in predicting healthy tissue from margin tissue.
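A minimal sketch of the kind of pipeline described above — data-driven reduction of the spectra followed by unsupervised clustering — is shown below with scikit-learn. The choice of PCA as the feature-selection step, the number of components, and the two-cluster setting are assumptions for illustration, not the authors' exact method; the spectra are synthetic placeholders.

```python
# Hedged sketch: unsupervised discrimination of tissue classes from
# fluorescence spectra via a data-driven feature reduction (PCA, assumed here)
# followed by k-means clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
spectra = rng.normal(size=(50, 200))  # 50 samples x 200 spectral bands (placeholder)

pipeline = make_pipeline(
    StandardScaler(),          # normalize each spectral band
    PCA(n_components=5),       # data-driven selection of dominant spectral features
    KMeans(n_clusters=2, n_init=10, random_state=0),  # e.g., healthy vs margin
)
labels = pipeline.fit_predict(spectra)
print(labels)
```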

13 citations


Journal ArticleDOI
TL;DR: The problem of final tissue outcome prediction in acute ischemic stroke is assessed from physically realistic simulated perfusion magnetic resonance images, and performance close to the state of the art is obtained with a patient-specific approach.

12 citations


Journal ArticleDOI
27 Jul 2020-Sensors
TL;DR: The value of various egocentric vision approaches for performing joint acquisition and automatic image annotation, rather than the conventional two-step process of acquisition followed by manual annotation, is assessed.
Abstract: Since most computer vision approaches are now driven by machine learning, the current bottleneck is the annotation of images. This time-consuming task is usually performed manually after the acquisition of images. In this article, we assess the value of various egocentric vision approaches for performing joint acquisition and automatic image annotation rather than the conventional two-step process of acquisition followed by manual annotation. This approach is illustrated with apple detection in challenging field conditions. We demonstrate the possibility of high performance in automatic apple segmentation (Dice 0.85), apple counting (88% probability of good detection and a 0.09 true-negative rate), and apple localization (a shift error of fewer than 3 pixels) with eye-tracking systems. This is obtained by simply applying the areas of interest captured by the egocentric devices to standard, non-supervised image segmentation. We especially stress the time savings obtained by using such eye-tracking devices on head-mounted systems to jointly perform image acquisition and automatic annotation. A gain of time of over 10-fold compared with classical image acquisition followed by manual image annotation is demonstrated.
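The core idea above — restricting a standard unsupervised segmentation to a gaze-derived area of interest and scoring the result with a Dice coefficient — can be sketched as follows. Otsu thresholding is used here as one simple stand-in for "standard, non-supervised image segmentation"; the image, fixation point, and ROI size are placeholders, not the paper's data or exact pipeline.

```python
# Hedged sketch: apply an unsupervised segmentation (Otsu thresholding) only
# inside a gaze-derived area of interest, then evaluate with a Dice score.
import numpy as np
import cv2

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-9)

# Synthetic scene: dark background with one bright "apple".
image = np.full((240, 320), 40, np.uint8)
cv2.circle(image, (160, 120), 30, 220, -1)
gt = np.zeros_like(image)
cv2.circle(gt, (160, 120), 30, 1, -1)

gaze_x, gaze_y, half = 160, 120, 50          # hypothetical eye-tracker fixation
roi = image[gaze_y - half:gaze_y + half, gaze_x - half:gaze_x + half]
_, roi_mask = cv2.threshold(roi, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

pred = np.zeros_like(image)
pred[gaze_y - half:gaze_y + half, gaze_x - half:gaze_x + half] = roi_mask
print("Dice:", dice(pred, gt))
```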

9 citations


Journal ArticleDOI
TL;DR: It is shown that it is possible to learn information directly from the CTIS raw output by training a neural network to perform binary classification on such images; this is the first application of compressed learning on a simulated CTIS system.
Abstract: The computed tomography imaging spectrometer (CTIS) is a snapshot hyperspectral imaging system. Its output is a 2D image of multiplexed spatiospectral projections of the hyperspectral cube of the scene. Traditionally, the 3D cube is reconstructed from this image before further analysis. In this paper, we show that it is possible to learn information directly from the CTIS raw output, by training a neural network to perform binary classification on such images. The use case we study is an agricultural one, as snapshot imagery is used substantially in this field: the detection of apple scab lesions on leaves. To train the network appropriately and to study several degrees of scab infection, we simulated CTIS images of scabbed leaves. This was made possible with a novel CTIS simulator, where special care was taken to preserve realistic pixel intensities compared to true images. To the best of our knowledge, this is the first application of compressed learning on a simulated CTIS system.
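The compressed-learning idea above — classifying the raw 2D multiplexed projection directly, without reconstructing the hyperspectral cube — amounts in practice to training an ordinary image classifier on the CTIS output. Below is a minimal, hypothetical PyTorch sketch; the network, image size, and optimizer settings are assumptions, and the inputs are random stand-ins for simulated CTIS frames.

```python
# Hedged sketch: binary classification (scab / no scab) directly on raw 2D CTIS
# images with a small CNN, skipping hyperspectral cube reconstruction.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),            # single logit: scabbed vs healthy
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step on placeholder "raw CTIS" frames.
x = torch.randn(8, 1, 256, 256)              # batch of multiplexed projections
y = torch.randint(0, 2, (8, 1)).float()      # scab presence labels
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```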

9 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used morphometrical descriptors integrating 2D image features from rotating virtual rose bush videos to predict their visual appearance according to different sensory attributes, using real plants cultivated under a shading gradient.
Abstract: Sensory methods applied to ornamental plants enable a more objective study of plant visual quality, a key driver of consumer preferences. However, the management and upkeep of a trained panel for sensory profiling is time-consuming, not flexible, and represents non-negligible costs. The present paper provides a proof of concept for using morphometrical descriptors integrating 2D image features from rotating virtual rose bush videos to predict their visual appearance according to different sensory attributes. Using real plants cultivated under a shading gradient and imaged in rotation during three development stages, acceptable prediction errors of the sensory attributes, ranging from 6.2 to 19.8% (normalized RMSEP), were obtained with simple ordinary least squares (OLS) regression models and linearization. The most accurate model obtained was for the perception of flower quantity. Finally, a secondary analysis highlighted, for most of the studied traits, a significant influence of defoliation, stressing therefore the impact of the leaves on plant architecture, and thus on the visual appearance.
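The modelling step above (OLS regression from morphometrical descriptors to a sensory attribute, evaluated with a normalized RMSEP) can be sketched as follows. The descriptors and scores are synthetic placeholders, and normalizing the RMSEP by the observed range of the attribute is an assumption; other normalization conventions exist.

```python
# Hedged sketch: OLS regression from image-derived descriptors to one sensory
# attribute, evaluated with a normalized RMSEP (range normalization assumed).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 6))                                  # placeholder 2D image descriptors
y = X @ rng.normal(size=6) + rng.normal(scale=0.3, size=90)   # placeholder sensory score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ols = LinearRegression().fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((ols.predict(X_te) - y_te) ** 2))
nrmsep = 100 * rmsep / (y_te.max() - y_te.min())
print(f"normalized RMSEP: {nrmsep:.1f}%")
```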

4 citations


Journal ArticleDOI
22 May 2020
TL;DR: This work proposes to address the detection of changes in spatial density or in spatial clustering with an individual (pointillist) or collective (textural) approach, comparing their performances according to the size of the impulse response of the microscope, and demonstrates that, for difference detection tasks in single-cell microscopy, super-resolved microscopes may not be mandatory and that lower-cost, sub-resolved microscopes can be sufficient.
Abstract: We consider the detection of change in the spatial distribution of fluorescent markers inside cells imaged by single-cell microscopy. Such problems are important in bioimaging since the density of these markers can reflect the healthy or pathological state of cells, the spatial organization of DNA, or the cell cycle stage. With the new super-resolved microscopes and associated microfluidic devices, bio-markers can be detected in single cells individually or collectively as a texture, depending on the quality of the microscope impulse response. In this work, we propose, via numerical simulations, to address detection of changes in spatial density or in spatial clustering with an individual (pointillist) or collective (textural) approach, comparing their performances according to the size of the impulse response of the microscope. Pointillist approaches show good performances for small impulse response sizes only, while all textural approaches are found to outperform pointillist approaches with small as well as with large impulse response sizes. These results are validated with real fluorescence microscopy images at conventional resolution. This a priori non-intuitive result, in the perspective of the quest for super-resolution, demonstrates that, for difference detection tasks in single-cell microscopy, super-resolved microscopes may not be mandatory and that lower-cost, sub-resolved microscopes can be sufficient.
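The textural side of the comparison above can be sketched as follows: simulate marker images at two densities, blur them with a Gaussian stand-in for the microscope impulse response, and extract co-occurrence texture features with scikit-image. The densities, PSF width, and choice of GLCM contrast as the feature are illustrative assumptions, not the paper's exact simulation settings.

```python
# Hedged sketch of the textural approach: simulated marker images at two
# densities, Gaussian blur as a stand-in for the impulse response, and a GLCM
# texture feature (scikit-image) to separate the two conditions.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix, graycoprops

def marker_image(density, psf_sigma, size=128, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    n = int(density * size * size)
    img[rng.integers(0, size, n), rng.integers(0, size, n)] = 1.0
    img = gaussian_filter(img, psf_sigma)   # collective "texture" regime for large sigma
    return np.uint8(255 * img / img.max())

for density in (0.01, 0.05):
    img = marker_image(density, psf_sigma=3.0)
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    print(density, "GLCM contrast:", graycoprops(glcm, "contrast")[0, 0])
```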

3 citations


Book ChapterDOI
04 Jun 2020
TL;DR: The potential of a computer vision approach to perform in vitro plant variety tests in a much faster and more reproducible way is demonstrated, and the benefit of fusing contrasts coming from front and back light is highlighted.
Abstract: Precision agriculture faces challenges related to plant disease detection. Plant phenotyping assesses plant appearance to select the best genotypes that resist varying environmental conditions via plant variety testing. In this process, official plant variety tests are currently performed in vitro by visual inspection of samples placed in a culture medium. In this communication, we demonstrate the potential of a computer vision approach to perform such tests in a much faster and more reproducible way. We highlight the benefit of fusing contrasts coming from front and back light. To the best of our knowledge, this is illustrated for the first time on the classification of the severity of the presence of a fungus, powdery mildew, on melon leaves, with 95% accuracy.
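One simple way to realize the fusion of front-light and back-light contrasts mentioned above is early fusion: stack the two acquisitions as channels of a single input before classification. The sketch below illustrates this with a small PyTorch classifier; the network, the grayscale inputs, and the assumed three severity classes are illustrative only, not the chapter's method.

```python
# Hedged sketch: fuse front-light and back-light images of a sample as channels
# of one tensor before classifying mildew severity (3 classes assumed).
import torch
import torch.nn as nn

front = torch.randn(4, 1, 128, 128)      # placeholder front-light grayscale batch
back = torch.randn(4, 1, 128, 128)       # placeholder back-light grayscale batch
fused = torch.cat([front, back], dim=1)  # 2-channel fused input

classifier = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 3),      # 3 assumed severity classes
)
print(classifier(fused).shape)           # torch.Size([4, 3])
```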

Proceedings ArticleDOI
01 Feb 2020
TL;DR: This fluorescence spectroscopic study compares an expert-based model, assuming that two states of PpIX contribute to the total fluorescence, with machine learning-based models, and shows that machine learning retrieves the main features identified by the expert approach.
Abstract: Gliomas are diffuse brain tumors that remain hardly curable due to the difficulty of identifying their margins. 5-ALA-induced PpIX fluorescence measurements enable a gain in sensitivity but are still limited for discriminating margin from healthy tissue. In this fluorescence spectroscopic study, we compare an expert-based model, which assumes that two states of PpIX contribute to the total fluorescence, with machine learning-based models. We show that machine learning retrieves the main features identified by the expert approach. We also show that the machine learning approach slightly outperforms the expert-based model for the identification of healthy tissues. These results might help improve fluorescence-guided resection of gliomas by discriminating healthy tissues from tumor margins.
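The expert-based side of the comparison above — modelling the measured spectrum as a combination of two PpIX emission states — can be sketched as a non-negative unmixing problem. The basis spectra below are synthetic Gaussian placeholders (not calibrated PpIX references), and the use of scipy's non-negative least squares is an assumption for illustration.

```python
# Hedged sketch of the expert-based model: a measured fluorescence spectrum
# expressed as a non-negative mixture of two PpIX emission states, fitted with
# non-negative least squares. Basis spectra are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(600, 720, 121)                       # emission wavelengths (nm)
state_a = np.exp(-0.5 * ((wl - 620) / 10) ** 2)       # placeholder PpIX state A
state_b = np.exp(-0.5 * ((wl - 634) / 10) ** 2)       # placeholder PpIX state B
A = np.column_stack([state_a, state_b])

# Synthetic measurement: a known mixture plus noise.
true_coeffs = np.array([0.3, 0.7])
measured = A @ true_coeffs + 0.01 * np.random.default_rng(0).normal(size=wl.size)

coeffs, residual = nnls(A, measured)
print("estimated contributions:", coeffs, "residual:", residual)
```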

Proceedings ArticleDOI
09 Nov 2020
TL;DR: This work addresses the detection of spatial organization differences according to fluorescent marker density or distribution using textural features, by simulating 3D images and 2D images of the focal plane with various sizes of the point spread function of the confocal microscope.
Abstract: We consider the problem of sorting cells according to the spatial organization of fluorescent markers via single-cell microscopy. This problem is important in bioimaging since this organization can reflect the healthy or pathological state of cells, chromatin chain configurations, or the spatial organization of DNA during the cell cycle. In this work, we address the detection of spatial organization differences according to fluorescent marker density or distribution using textural features. We compare the performances by simulating 3D images and 2D images of the focal plane with various sizes of the point spread function of the confocal microscope, which is determined by varying the numerical aperture NA of the objective lens used.
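The link stated above between the PSF size and the numerical aperture can be sketched numerically: the lateral resolution scales roughly as 0.61·λ/NA, which fixes the width of a Gaussian blur applied to a simulated marker image. The wavelength, pixel size, and FWHM-to-sigma conversion below are illustrative assumptions, not the paper's simulation parameters.

```python
# Hedged sketch: scale the simulated confocal PSF width with numerical aperture
# (lateral resolution ~ 0.61*lambda/NA) and blur a 2D marker image accordingly.
import numpy as np
from scipy.ndimage import gaussian_filter

def psf_sigma_px(numerical_aperture, wavelength_nm=520, pixel_nm=100):
    fwhm_nm = 0.61 * wavelength_nm / numerical_aperture   # Rayleigh-like lateral limit
    return fwhm_nm / (2.355 * pixel_nm)                   # FWHM -> Gaussian sigma, in pixels

rng = np.random.default_rng(0)
markers = np.zeros((256, 256))
markers[rng.integers(0, 256, 500), rng.integers(0, 256, 500)] = 1.0

for na in (0.3, 0.8, 1.4):
    blurred = gaussian_filter(markers, psf_sigma_px(na))
    print(f"NA={na}: sigma={psf_sigma_px(na):.2f} px, peak={blurred.max():.3f}")
```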

Posted Content
TL;DR: In this article, a 3D color point cloud processing pipeline was proposed to count apples on individual apple trees in trellis structured orchards, where point clouds acquired from the leaf-off orchard in winter period were used to delineate tree crowns.
Abstract: We propose a 3D color point cloud processing pipeline to count apples on individual apple trees in trellis-structured orchards. Fruit counting at the tree level requires separating trees, which is challenging in dense orchards. We employ point clouds acquired from the leaf-off orchard in the winter period, when the branch structure is visible, to delineate tree crowns. We localize apples in point clouds acquired in the harvest period. Alignment of the two point clouds enables mapping apple locations to the delineated winter cloud and assigning each apple to its bearing tree. Our apple assignment method achieves an accuracy rate higher than 95%. In addition to presenting a first proof of feasibility, we also provide suggestions for further improvement of our apple assignment pipeline.
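The assignment step described above reduces, once the two clouds are aligned, to associating each localized apple with its nearest delineated tree. The sketch below illustrates this with a KD-tree nearest-neighbor query; the per-tree reference points and apple centroids are placeholders, and the registration step itself is not shown.

```python
# Hedged sketch of the assignment step: after aligning the harvest-period cloud
# to the leaf-off winter cloud, assign each localized apple to the nearest
# delineated tree (represented here by per-tree reference points).
import numpy as np
from scipy.spatial import cKDTree

tree_centers = np.array([[0.0, 0.0, 1.0],        # tree 0 (e.g., trunk position)
                         [1.8, 0.0, 1.0],        # tree 1
                         [3.6, 0.0, 1.0]])       # tree 2
apples = np.array([[0.2, 0.1, 1.5],
                   [1.7, -0.2, 1.8],
                   [3.5, 0.3, 1.2],
                   [1.9, 0.1, 1.4]])             # apple centroids after alignment

_, assigned_tree = cKDTree(tree_centers).query(apples)
counts = np.bincount(assigned_tree, minlength=len(tree_centers))
print("apples per tree:", counts)                # e.g., [1 2 1]
```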

Book ChapterDOI
04 Nov 2020
TL;DR: A set of sensors dedicated to various architectural measurements on plants, developed in the framework of a partnership with INRA Angers to meet given practical constraints of cost, size, and acquisition time, is presented.
Abstract: We present a set of sensors dedicated to various architectural measurements on plants. These sensors have been developed in the framework of a partnership with INRA Angers to meet given practical constraints of cost, size, and acquisition time. We demonstrate the interest of these automated tools in comparison with the current manual approach used by biologists. Keywords: wireless sensor, Android platform, X-ray tomography, turntable, depth camera.