
Showing papers by "Peter Bajcsy" published in 2019


Proceedings Article · DOI
16 Jun 2019
TL;DR: The goal is to compare the accuracy gains of CNN-based segmentation from using un-annotated images via Generative Adversarial Networks (GANs), annotated out-of-bio-domain images via transfer learning, and a priori knowledge about microscope imaging mapped into geometric augmentations of a small collection of annotated images.
Abstract: We address the problem of segmenting cell contours from microscopy images of human induced pluripotent stem cell-derived Retinal Pigment Epithelial (iRPE) cells using Convolutional Neural Networks (CNNs). Our goal is to compare the accuracy gains of CNN-based segmentation from using (1) un-annotated images via Generative Adversarial Networks (GANs), (2) annotated out-of-bio-domain images via transfer learning, and (3) a priori knowledge about microscope imaging mapped into geometric augmentations of a small collection of annotated images. First, a GAN learns an abstract representation of cell objects; this unsupervised learned representation is then transferred to the CNN segmentation models, which are further fine-tuned on a small number of manually segmented iRPE cell images. Second, transfer learning is applied by pre-training part of the CNN segmentation model on the COCO dataset, which contains semantic segmentation labels; the model is then adapted to the iRPE cell domain using a small set of annotated iRPE cell images. Third, augmentations based on geometric transformations are applied to a small collection of annotated images. All of these training approaches are compared against a baseline CNN model trained only on the small collection of annotated images. For very small annotation counts, the results show accuracy improvements of up to 20% by the best approach over the baseline U-Net model. For larger annotation counts, the approaches asymptotically converge to the same accuracy.
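To make the third approach concrete, below is a minimal sketch of the geometric-augmentation idea under the stated a priori assumption that microscopy images have no preferred orientation, so rotations and flips yield equally valid training pairs. The function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def augment_pair(image: np.ndarray, mask: np.ndarray):
    """Yield the 8 dihedral (rotation/flip) variants of an image/mask pair."""
    for k in range(4):                                # 0, 90, 180, 270 degrees
        rot_img = np.rot90(image, k)
        rot_msk = np.rot90(mask, k)
        yield rot_img, rot_msk                        # rotated variant
        yield np.fliplr(rot_img), np.fliplr(rot_msk)  # mirrored variant

# Usage: expand a small annotated set eightfold before training a U-Net.
image = np.random.rand(256, 256)             # stand-in for an iRPE image
mask = (image > 0.5).astype(np.uint8)        # stand-in for its annotation
augmented = list(augment_pair(image, mask))  # 8 (image, mask) pairs
```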

47 citations


Posted Content
TL;DR: In this paper, the authors provide an overview of the nature of microscopy metadata and its importance for fostering data quality, reproducibility, scientific rigor, and sharing value in light microscopy.
Abstract: The application of microscopy in biomedical research has come a long way since Antonie van Leeuwenhoek discovered unicellular organisms. Countless innovations have positioned light microscopy as a cornerstone of modern biology and a method of choice for connecting omics datasets to their biological and clinical correlates. Still, regardless of how convincing published imaging data looks, it does not always convey meaningful information about the conditions in which it was acquired, processed, and analyzed. Adequate record-keeping, reporting, and quality control are therefore essential to ensure experimental rigor and data fidelity, to allow experiments to be reproducibly repeated, and to promote the proper evaluation, interpretation, comparison, and re-use of the data. To this end, microscopy images should be accompanied by complete descriptions detailing experimental procedures, biological samples, microscope hardware specifications, image acquisition parameters, and image analysis procedures, as well as by metrics accounting for instrument performance and calibration. However, universal, community-accepted Microscopy Metadata standards and reporting specifications that would yield Findable, Accessible, Interoperable, and Reusable (FAIR) microscopy data have not yet been established. To understand this shortcoming and to propose a way forward, we provide here an overview of the nature of microscopy metadata and of its importance for fostering data quality, reproducibility, scientific rigor, and sharing value in light microscopy. Tiered Microscopy Metadata Specifications that extend the OME Data Model, put forth by the 4D Nucleome Initiative and by Bioimaging North America [1-3], together with a suite of three complementary and interoperable tools, are being developed to facilitate image data documentation; they are presented in related manuscripts [4-6].
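As a rough illustration of what structured acquisition metadata might look like in practice, the sketch below records a handful of OME-style fields as a machine-readable sidecar. The field names and values are simplified assumptions for illustration, not the normative tiered specification discussed above.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class MicroscopyMetadata:
    sample_description: str   # biological sample and preparation
    microscope_model: str     # hardware identification
    objective_na: float       # numerical aperture of the objective
    excitation_nm: float      # illumination wavelength (nm)
    exposure_ms: float        # acquisition exposure time (ms)
    pixel_size_um: float      # calibrated physical pixel size (um)

# Hypothetical record; a real tiered specification covers far more fields.
record = MicroscopyMetadata(
    sample_description="fixed cell monolayer, fluorescence",
    microscope_model="example-widefield-01",
    objective_na=0.75,
    excitation_nm=488.0,
    exposure_ms=50.0,
    pixel_size_um=0.325,
)
print(json.dumps(asdict(record), indent=2))  # sidecar record for the image
```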

3 citations


Posted Content · DOI
24 Aug 2019 · bioRxiv
TL;DR: A statistical method for assessing differences in ratings of pluripotent stem cells by two different experts is explored; it helps establish confidence in the ratings and in the rating criteria, even when the experts disagree.
Abstract: Visual inspection of pluripotent stem cell colonies by microscopy is widely used as a primary method to assess the quality of preparations and the degree of pluripotency. The lack of ground truth and the possible inconsistency of evaluations from multiple experts, within and between stem cell laboratories, are sources of uncertainty about the state of the cells, the reproducibility of preparations, and the efficiency of expansion protocols. To examine how to evaluate the level of confidence one can have in disparate ratings from experts, we explored a statistical method for assessing the differences in ratings of pluripotent stem cells by two different experts. Two experts rated phase contrast microscope images of human embryonic stem cell (hESC) colonies on a scale of 1 (poor) to 5 (maximum pluripotency character) but agreed with one another only 48% of the time. To assess whether the experts used similar criteria to rate colonies, we developed custom image feature algorithms based on the stated visual criteria the experts provided for selecting colonies. These features, plus others, were then used to develop pluripotency scoring algorithms trained to reflect the ratings of both experts. We treated expert ratings as inexact indicators of a continuous pluripotency score and accounted for the inconsistency between expert ratings in developing our models. The model suggests that the two experts use somewhat different scales for discriminating colony quality. Covariance analysis indicated that both experts use features that are not included in the model. Two image features, colony perimeter and a texture-based feature, were the most important predictors of the ratings for both experts. Interestingly, colony perimeter was not among the expert-provided criteria for rating colonies, showing that this modeling approach can identify features that the experts were not aware they were using. A linear model based on both experts identified each expert's top-rated colonies as well as, or better than, the ratings of the other expert, as indicated by receiver operating characteristic (ROC) curve analysis. By providing an understanding of the differences and similarities in disparate sets of expert ratings, this analysis helps establish confidence in the ratings and the rating criteria, even when the experts disagree.
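For intuition about the agreement figures involved, the sketch below computes raw percent agreement (the quantity behind the 48% reported above) together with Cohen's kappa, a chance-corrected alternative, on hypothetical 1-to-5 ratings. The paper's actual approach of modeling ratings as inexact indicators of a latent pluripotency score is not reproduced here.

```python
import numpy as np

def percent_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of colonies on which the two raters give the same score."""
    return float(np.mean(a == b))

def cohens_kappa(a: np.ndarray, b: np.ndarray, levels=range(1, 6)) -> float:
    """Chance-corrected agreement for two raters on an ordinal 1-5 scale."""
    po = percent_agreement(a, b)                                   # observed
    pe = sum(float(np.mean(a == k)) * float(np.mean(b == k))       # expected
             for k in levels)                                      # by chance
    return (po - pe) / (1.0 - pe)

# Hypothetical ratings for illustration only (not the study's data).
rng = np.random.default_rng(0)
expert1 = rng.integers(1, 6, size=100)                            # scores 1..5
expert2 = np.clip(expert1 + rng.integers(-1, 2, size=100), 1, 5)  # noisy copy
print(f"agreement = {percent_agreement(expert1, expert2):.2f}, "
      f"kappa = {cohens_kappa(expert1, expert2):.2f}")
```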