Journal ArticleDOI

Cytopathological image analysis using deep-learning networks in microfluidic microscopy.

TL;DR: This paper explores the feasibility of using deep-learning networks for cytopathologic analysis by performing the classification of three important unlabeled, unstained leukemia cell lines and shows that the designed deep belief network as well as the deeply pretrained convolutional neural network outperform the conventionally used decision systems.
Abstract: Cytopathologic testing is one of the most critical steps in the diagnosis of diseases, including cancer. However, the task is laborious and demands skill, and the associated high cost and low throughput have drawn considerable interest in automating the testing process. Several neural network architectures have been designed to bring human-like expertise to machines. In this paper, we explore and demonstrate the feasibility of using deep-learning networks for cytopathologic analysis by classifying three important unlabeled, unstained leukemia cell lines (K562, MOLT, and HL60). The cell images used in the classification are captured with a low-cost, high-throughput cell imaging technique: microfluidics-based imaging flow cytometry. We demonstrate that, without conventional fine segmentation followed by explicit feature extraction, the proposed deep-learning algorithms effectively classify the coarsely localized cell lines. We show that the designed deep belief network as well as the deeply pretrained convolutional neural network outperform conventionally used decision systems, which is important in the medical domain, where labeled data for training are scarce. We hope that our work enables the development of a clinically significant, high-throughput microfluidic microscopy-based tool for disease screening/triaging, especially in resource-limited settings.
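The pipeline the abstract describes, coarse localization followed directly by a learned classifier with no fine segmentation or hand-crafted features, can be illustrated with a small stand-in network. Everything below is an assumption for illustration: the synthetic 16x16 "cell" images, the three blob-position classes standing in for K562/MOLT/HL60, and the one-hidden-layer architecture. The paper's actual DBN and pretrained-CNN models are far larger.

```python
import numpy as np

# Minimal stand-in for the paper's deep classifier: a one-hidden-layer
# softmax network trained on synthetic 16x16 "cell" images from three
# Gaussian-blob classes (hypothetical stand-ins for K562, MOLT, HL60).
rng = np.random.default_rng(0)

def make_data(n_per_class=100, side=16):
    X, y = [], []
    for label, centre in enumerate([4, 8, 12]):   # blob position encodes class
        for _ in range(n_per_class):
            img = rng.normal(0.0, 0.1, (side, side))          # background noise
            img[centre-2:centre+2, centre-2:centre+2] += 1.0  # bright blob
            X.append(img.ravel())
            y.append(label)
    return np.array(X), np.array(y)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

X, y = make_data()
n, d, h, k = X.shape[0], X.shape[1], 32, 3
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, k)); b2 = np.zeros(k)
Y = np.eye(k)[y]                        # one-hot labels

for step in range(800):                 # plain full-batch gradient descent
    H = np.tanh(X @ W1 + b1)
    P = softmax(H @ W2 + b2)
    G2 = (P - Y) / n                    # cross-entropy gradient w.r.t. logits
    G1 = (G2 @ W2.T) * (1.0 - H**2)     # backprop through tanh
    W2 -= 0.5 * (H.T @ G2); b2 -= 0.5 * G2.sum(0)
    W1 -= 0.5 * (X.T @ G1); b1 -= 0.5 * G1.sum(0)

acc = (P.argmax(1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is the absence of any segmentation or feature-engineering stage: raw pixels go straight into the network, mirroring the paper's end-to-end claim.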
Citations
Journal ArticleDOI
TL;DR: A roadmap to integrating deep learning and microfluidics in biotechnology laboratories that matches computational architectures to problem types, and provides an outlook on emerging opportunities is provided.

142 citations

Journal ArticleDOI
TL;DR: This paper applies deep convolutional neural network learning to the challenging problem of determining the in-focus reconstruction depth of Madin-Darby canine kidney cell clusters encoded in digital holograms.
Abstract: Deep artificial neural network learning is an emerging tool in image analysis. We demonstrate its potential in the field of digital holographic microscopy by addressing the challenging problem of determining the in-focus reconstruction depth of Madin-Darby canine kidney cell clusters encoded in digital holograms. A deep convolutional neural network learns the in-focus depths from half a million hologram amplitude images. The trained network correctly determines the in-focus depth of new holograms with high probability, without performing numerical propagation. This paper reports on extensions to preliminary work published earlier as one of the first applications of deep learning in the field of digital holographic microscopy.

68 citations

Journal ArticleDOI
TL;DR: Experiments show that the pre-trained convolutional neural network model outperforms conventionally used detection systems and provides at least a 15% improvement in F-score over other state-of-the-art techniques.

45 citations

Journal ArticleDOI
TL;DR: A non-intrusive method is presented for measuring different fluidic properties in a microfluidic chip by optically monitoring the flow of droplets and can in principle be used to measure other properties of the fluid such as surface tension and viscosity.
Abstract: A non-intrusive method is presented for measuring different fluidic properties in a microfluidic chip by optically monitoring the flow of droplets. A neural network is used to extract the desired information from the images of the droplets. We demonstrate the method in two applications: measurement of the concentration of each component of a water/alcohol mixture, and measurement of the flow rate of the same mixture. A large number of droplet images are recorded and used to train deep neural networks (DNN) to predict the flow rate or the concentration. It is shown that this method can be used to quantify the concentrations of each component with a 0.5% accuracy and the flow rate with a resolution of 0.05 ml/h. The proposed method can in principle be used to measure other properties of the fluid such as surface tension and viscosity.

41 citations

Journal ArticleDOI
TL;DR: The feasibility of applying deep learning techniques to single‐cell optical image analysis is explored, where popular techniques such as transfer learning, multimodal learning, multitask learning, and end‐to‐end learning have been reviewed.
Abstract: Optical imaging technology that has the advantages of high sensitivity and cost-effectiveness greatly promotes the progress of nondestructive single-cell studies. Complex cellular image analysis tasks such as three-dimensional reconstruction call for machine-learning technology in cell optical image research. With the rapid developments of high-throughput imaging flow cytometry, big data cell optical images are always obtained that may require machine learning for data analysis. In recent years, deep learning has been prevalent in the field of machine learning for large-scale image processing and analysis, which brings a new dawn for single-cell optical image studies with an explosive growth of data availability. Popular deep learning techniques offer new ideas for multimodal and multitask single-cell optical image research. This article provides an overview of the basic knowledge of deep learning and its applications in single-cell optical image studies. We explore the feasibility of applying deep learning techniques to single-cell optical image analysis, where popular techniques such as transfer learning, multimodal learning, multitask learning, and end-to-end learning have been reviewed. Image preprocessing and deep learning model training methods are then summarized. Applications based on deep learning techniques in the field of single-cell optical image studies are reviewed, which include image segmentation, super-resolution image reconstruction, cell tracking, cell counting, cross-modal image reconstruction, and design and control of cell imaging systems. In addition, deep learning in popular single-cell optical imaging techniques such as label-free cell optical imaging, high-content screening, and high-throughput optical imaging cytometry are also mentioned. Finally, the perspectives of deep learning technology for single-cell optical image analysis are discussed. © 2020 International Society for Advancement of Cytometry.

31 citations


Cites background from "Cytopathological image analysis usi..."

  • ...For example, there are reports to combine deep learning with imaging flow cytometry for high-throughput single-cell classification and identification (27,28) and for the reconstruction of continuous biological processes (130)....


  • ...Deep learning brings automatic analysis methods for large numbers of microfluidic cell microscopy images, and single-cell optical image big data also make the deep learning model more generalized (27)....


  • ...Deep learning technology has unique advantages in big data analysis, which have been reported to analyze big data from microfluidic microscopy and high-throughput imaging flow cytometry (27,28)....


References
Journal ArticleDOI
TL;DR: An analogy is made between images and statistical mechanics systems; simulated annealing under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations, resulting in a highly parallel "relaxation" algorithm for MAP estimation.
Abstract: We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel "relaxation" algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.
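The annealing scheme this abstract describes can be reproduced in miniature: a binary image corrupted by pixel flips is restored by Gibbs-sampling each pixel under an Ising (MRF) posterior while the temperature decays. This is a minimal sketch, not the paper's experiments; the test image, noise level, and the weights `beta` and `lam` are assumptions chosen for illustration.

```python
import numpy as np

# Toy version of the scheme above: restore a binary image corrupted by
# pixel flips, using Gibbs sampling under an Ising (MRF) posterior while
# the temperature is gradually lowered (annealing toward the MAP estimate).
rng = np.random.default_rng(1)

side = 24
truth = np.zeros((side, side), dtype=int)
truth[6:18, 6:18] = 1                                   # ground-truth square
noisy = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)  # 20% flips

beta, lam = 1.5, 1.0     # smoothness and data-fidelity weights (assumed)
x = noisy.copy()

def local_energy(img, i, j, v):
    """Posterior energy contribution of setting pixel (i, j) to value v."""
    e = lam * (v != noisy[i, j])                        # data term
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighbour prior
        ni, nj = i + di, j + dj
        if 0 <= ni < side and 0 <= nj < side:
            e += beta * (v != img[ni, nj])
    return e

T = 4.0
for sweep in range(40):              # annealing schedule: T decays each sweep
    for i in range(side):
        for j in range(side):
            e0 = local_energy(x, i, j, 0)
            e1 = local_energy(x, i, j, 1)
            arg = np.clip((e1 - e0) / T, -50.0, 50.0)   # avoid exp overflow
            p1 = 1.0 / (1.0 + np.exp(arg))              # Gibbs conditional
            x[i, j] = int(rng.random() < p1)
    T *= 0.85

err_noisy = float((noisy != truth).mean())
err_map = float((x != truth).mean())
print(f"noisy error {err_noisy:.3f} -> restored error {err_map:.3f}")
```

At high temperature the sampler explores; as T falls it freezes into a low-energy (high-posterior-probability) configuration, which is the MAP-estimation idea of the paper.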

18,761 citations

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: This article describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
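The dimensionality-reduction idea above can be shown in its simplest form: an autoencoder with a small central layer trained by gradient descent to reconstruct its input. The toy version below is linear and tiny (sizes, data, and learning rate are assumptions), so it omits the paper's key contribution, the layer-wise pretraining that makes deep nonlinear autoencoders trainable; it only illustrates the encode/decode objective.

```python
import numpy as np

# Toy linear autoencoder with a small central (code) layer, trained by
# gradient descent to reconstruct its input. Sizes and rates are assumed.
rng = np.random.default_rng(2)

n, d, k = 200, 20, 3
Z = rng.normal(size=(n, k))            # data truly lives on a 3-D subspace
A = rng.normal(size=(k, d))
X = Z @ A + 0.05 * rng.normal(size=(n, d))

W_enc = rng.normal(0, 0.1, (d, k))     # encoder: d -> k (the bottleneck)
W_dec = rng.normal(0, 0.1, (k, d))     # decoder: k -> d

def recon_error(We, Wd):
    return float(((X - X @ We @ Wd) ** 2).mean())

err0 = recon_error(W_enc, W_dec)
lr = 0.01
for step in range(2000):
    code = X @ W_enc                   # low-dimensional codes
    R = code @ W_dec - X               # reconstruction residual
    W_dec -= lr * (code.T @ R) / n
    W_enc -= lr * (X.T @ (R @ W_dec.T)) / n

err1 = recon_error(W_enc, W_dec)
print(f"reconstruction MSE: {err0:.3f} -> {err1:.3f}")
```

In this linear case the optimum coincides with the PCA subspace; the paper's point is that deep nonlinear versions beat PCA once the weights are initialized well.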

16,717 citations

Journal ArticleDOI
TL;DR: A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary; because the derivatives of the renormalization term make maximum-likelihood training intractable, a PoE is instead trained with an objective called contrastive divergence, whose derivatives can be approximated accurately and efficiently.
Abstract: It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual "expert" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called "contrastive divergence" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data.
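Contrastive divergence as described above can be demonstrated on a restricted Boltzmann machine, a product of binary experts. The sketch below trains a 6-visible/4-hidden RBM with CD-1 on two noisy prototype patterns; all sizes, learning rates, and data are illustrative assumptions.

```python
import numpy as np

# CD-1 training of a tiny restricted Boltzmann machine: a product of
# binary "experts". All sizes, rates, and data are illustrative.
rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: two binary prototype patterns plus 5% bit-flip noise.
protos = np.array([[1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1]])
X = protos[rng.integers(0, 2, 500)].astype(float)
X = np.where(rng.random(X.shape) < 0.05, 1.0 - X, X)

nv, nh = 6, 4
W = rng.normal(0, 0.1, (nv, nh))
a = np.zeros(nv)                         # visible biases
b = np.zeros(nh)                         # hidden biases

lr, n = 0.1, X.shape[0]
for epoch in range(200):
    ph = sigmoid(X @ W + b)              # positive phase: P(h=1 | data)
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + a)            # negative phase: one Gibbs step
    v1 = (rng.random(pv.shape) < pv).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    W += lr * (X.T @ ph - v1.T @ ph1) / n   # contrastive-divergence update
    a += lr * (X - v1).mean(0)
    b += lr * (ph - ph1).mean(0)

recon = sigmoid(sigmoid(X @ W + b) @ W.T + a)  # mean-field reconstruction
err = float(((recon - X) ** 2).mean())
print(f"mean squared reconstruction error: {err:.3f}")
```

The update contrasts statistics of the data with statistics after a single Gibbs step, sidestepping the intractable renormalization-term derivatives; stacked RBMs trained this way are the building blocks of the deep belief network used in the main paper.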

5,150 citations

Journal ArticleDOI
TL;DR: The papers in this special section focus on the technology and applications supported by deep learning, which have proven to be powerful tools for a broad range of computer vision tasks.
Abstract: The papers in this special section focus on the technology and applications supported by deep learning. Deep learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013. Deep learning is an improvement of artificial neural networks, consisting of more layers that permit higher levels of abstraction and improved predictions from data. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications.

1,428 citations

Journal ArticleDOI
TL;DR: It is argued that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.
Abstract: The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain: the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.

986 citations