scispace - formally typeset
Author

Lucas von Chamier

Bio: Lucas von Chamier is an academic researcher from University College London. The author has contributed to research in the topics of deep learning and tissue homeostasis. The author has an h-index of 6 and has co-authored 8 publications receiving 170 citations.

Papers
Journal ArticleDOI
TL;DR: ZeroCostDL4Mic as discussed by the authors is an entry-level platform simplifying DL access by leveraging the free, cloud-based computational resources of Google Colab, which allows researchers with no coding expertise to train and apply key DL networks to perform tasks including segmentation, object detection, denoising, and image-to-image translation.
Abstract: Deep Learning (DL) methods are powerful analytical tools for microscopy and can outperform conventional image processing pipelines. Despite the enthusiasm and innovations fuelled by DL technology, the need to access powerful and compatible resources to train DL networks leads to an accessibility barrier that novice users often find difficult to overcome. Here, we present ZeroCostDL4Mic, an entry-level platform simplifying DL access by leveraging the free, cloud-based computational resources of Google Colab. ZeroCostDL4Mic allows researchers with no coding expertise to train and apply key DL networks to perform tasks including segmentation (using U-Net and StarDist), object detection (using YOLOv2), denoising (using CARE and Noise2Void), super-resolution microscopy (using Deep-STORM), and image-to-image translation (using Label-free prediction - fnet, pix2pix and CycleGAN). Importantly, we provide suitable quantitative tools for each network to evaluate model performance, allowing model optimisation. We demonstrate the application of the platform to study multiple biological processes.
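The abstract notes that ZeroCostDL4Mic ships quantitative tools to evaluate each trained model. For segmentation networks such as U-Net and StarDist, a standard such metric is Intersection-over-Union (IoU, or the Jaccard index) between predicted and ground-truth masks. The following is a minimal illustrative sketch of the metric itself, not code from the ZeroCostDL4Mic platform; masks are represented here as sets of pixel coordinates for simplicity.

```python
def iou(pred, truth):
    """Intersection-over-Union between two binary masks, each given
    as a set of (row, col) pixel coordinates. Returns 1.0 for two
    empty masks by convention."""
    union = len(pred | truth)
    return len(pred & truth) / union if union else 1.0

# Toy example: a predicted 2x2 "cell" overlapping a shifted ground truth.
pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 1), (0, 2), (1, 2)}
print(iou(pred, truth))  # 2 overlapping pixels / 6 in the union ≈ 0.333
```

In practice such metrics are computed per object and averaged over a held-out test set, which is what allows the model optimisation the authors describe.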

210 citations

Journal ArticleDOI
TL;DR: Discusses how DL shows outstanding potential to push the limits of microscopy, enhancing resolution, signal and information content in acquired data, along with the future directions expected in this field.
Abstract: Artificial Intelligence based on Deep Learning (DL) is opening new horizons in biomedical research and promises to revolutionize the microscopy field. It is now transitioning from the hands of experts in computer sciences to biomedical researchers. Here, we introduce recent developments in DL applied to microscopy, in a manner accessible to non-experts. We give an overview of its concepts, capabilities and limitations, presenting applications in image segmentation, classification and restoration. We discuss how DL shows an outstanding potential to push the limits of microscopy, enhancing resolution, signal and information content in acquired data. Its pitfalls are discussed, along with the future directions expected in this field.

65 citations

Posted ContentDOI
20 Mar 2020-bioRxiv
TL;DR: A platform simplifying access to DL by exploiting the free, cloud-based computational resources of Google Colab is presented, which allows researchers to train, evaluate, and apply key DL networks to perform tasks including segmentation, detection, denoising, restoration, resolution enhancement and image-to-image translation.
Abstract: Deep Learning (DL) methods are increasingly recognised as powerful analytical tools for microscopy. Their potential to outperform conventional image processing pipelines is now well established. Despite the enthusiasm and innovations fuelled by DL technology, the need to access powerful and compatible resources, install multiple computational tools and modify code instructions to train neural networks all lead to an accessibility barrier that novice users often find difficult to cross. Here, we present ZeroCostDL4Mic, an entry-level teaching and deployment DL platform which considerably simplifies access and use of DL for microscopy. It is based on Google Colab which provides the free, cloud-based computational resources needed. ZeroCostDL4Mic allows researchers with little or no coding expertise to quickly test, train and use popular DL networks. In parallel, it guides researchers to acquire more knowledge, to experiment with optimising DL parameters and network architectures. We also highlight the limitations and requirements to use Google Colab. Altogether, ZeroCostDL4Mic accelerates the uptake of DL for new users and promotes their capacity to use increasingly complex DL networks.

48 citations

Journal ArticleDOI
TL;DR: Overall, this work provides the first cellular evidence that mammalian LC3/GABARAP post-translationally modifies proteins akin to ubiquitination, with ATG4 proteases acting like deubiquitinating enzymes to counteract this modification (“deLC3ylation”).

33 citations

Journal ArticleDOI
TL;DR: A detailed analysis pipeline is provided illustrating how the deep learning network StarDist can be combined with the popular tracking software TrackMate to perform 2D automated cell tracking and provide fully quantitative readouts.
Abstract: The ability of cells to migrate is a fundamental physiological process involved in embryonic development, tissue homeostasis, immune surveillance, and wound healing. Therefore, the mechanisms governing cellular locomotion have been under intense scrutiny over the last 50 years. One of the main tools of this scrutiny is live-cell quantitative imaging, where researchers image cells over time to study their migration and quantitatively analyze their dynamics by tracking them using the recorded images. Despite the availability of computational tools, manual tracking remains widely used among researchers due to the difficulty of setting up robust automated cell tracking and large-scale analysis. Here we provide a detailed analysis pipeline illustrating how the deep learning network StarDist can be combined with the popular tracking software TrackMate to perform 2D automated cell tracking and provide fully quantitative readouts. Our proposed protocol is compatible with both fluorescent and widefield images. It only requires freely available and open-source software (ZeroCostDL4Mic and Fiji), and does not require any coding knowledge from the users, making it a versatile and powerful tool for the field. We demonstrate this pipeline's usability by automatically tracking cancer cells and T cells using fluorescent and brightfield images. Importantly, we provide, as supplementary information, a detailed step-by-step protocol to allow researchers to implement it with their images.
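The pipeline above links StarDist detections across frames with TrackMate, which uses LAP-based linking. As a conceptual illustration only (not TrackMate's actual algorithm), the core linking step can be sketched as greedy nearest-neighbour matching of cell centroids between consecutive frames; the distance threshold and centroid values below are hypothetical.

```python
import math

def link_frames(prev, curr, max_dist=20.0):
    """Greedily link cell centroids between two consecutive frames.
    Each centroid is an (x, y) tuple; returns (prev_idx, curr_idx)
    pairs. Detections left unmatched start or terminate a track.
    A simplified stand-in for TrackMate's LAP-based linker."""
    links, used = [], set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)  # Euclidean distance between centroids
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            used.add(best)
    return links

frame1 = [(10.0, 10.0), (50.0, 50.0)]
frame2 = [(12.0, 11.0), (49.0, 53.0), (90.0, 90.0)]
print(link_frames(frame1, frame2))  # [(0, 0), (1, 1)]; (90, 90) starts a new track
```

Real trackers additionally handle gap closing, cell division, and track splitting/merging, which is why the authors rely on TrackMate rather than a greedy matcher.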

32 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes.
Abstract: In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate on a regular basis updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as a specific marker for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.

1,129 citations

Journal ArticleDOI
TL;DR: ZeroCostDL4Mic as discussed by the authors is an entry-level platform simplifying DL access by leveraging the free, cloud-based computational resources of Google Colab, which allows researchers with no coding expertise to train and apply key DL networks to perform tasks including segmentation, object detection, denoising, and image-to-image translation.
Abstract: Deep Learning (DL) methods are powerful analytical tools for microscopy and can outperform conventional image processing pipelines. Despite the enthusiasm and innovations fuelled by DL technology, the need to access powerful and compatible resources to train DL networks leads to an accessibility barrier that novice users often find difficult to overcome. Here, we present ZeroCostDL4Mic, an entry-level platform simplifying DL access by leveraging the free, cloud-based computational resources of Google Colab. ZeroCostDL4Mic allows researchers with no coding expertise to train and apply key DL networks to perform tasks including segmentation (using U-Net and StarDist), object detection (using YOLOv2), denoising (using CARE and Noise2Void), super-resolution microscopy (using Deep-STORM), and image-to-image translation (using Label-free prediction - fnet, pix2pix and CycleGAN). Importantly, we provide suitable quantitative tools for each network to evaluate model performance, allowing model optimisation. We demonstrate the application of the platform to study multiple biological processes.

210 citations

Journal ArticleDOI
TL;DR: TissueNet as mentioned in this paper is a dataset for training segmentation models that contains more than 1 million manually labeled cells, an order of magnitude more than all previously published segmentation training datasets.
Abstract: A principal challenge in the analysis of tissue imaging data is cell segmentation—the task of identifying the precise boundary of every cell in an image. To address this problem we constructed TissueNet, a dataset for training segmentation models that contains more than 1 million manually labeled cells, an order of magnitude more than all previously published segmentation training datasets. We used TissueNet to train Mesmer, a deep-learning-enabled segmentation algorithm. We demonstrated that Mesmer is more accurate than previous methods, generalizes to the full diversity of tissue types and imaging platforms in TissueNet, and achieves human-level performance. Mesmer enabled the automated extraction of key cellular features, such as subcellular localization of protein signal, which was challenging with previous approaches. We then adapted Mesmer to harness cell lineage information in highly multiplexed datasets and used this enhanced version to quantify cell morphology changes during human gestation. All code, data and models are released as a community resource. Deep learning algorithms perform as well as humans in identifying cells in tissue images.

147 citations

Journal ArticleDOI
TL;DR: By using deep learning to augment SIM, the authors obtain a five-fold reduction in the number of raw images required for super-resolution SIM and generate images under extreme low-light conditions.
Abstract: Structured illumination microscopy (SIM) surpasses the optical diffraction limit and offers a two-fold enhancement in resolution over diffraction limited microscopy. However, it requires both intense illumination and multiple acquisitions to produce a single high-resolution image. Using deep learning to augment SIM, we obtain a five-fold reduction in the number of raw images required for super-resolution SIM, and generate images under extreme low light conditions (at least 100× fewer photons). We validate the performance of deep neural networks on different cellular structures and achieve multi-color, live-cell super-resolution imaging with greatly reduced photobleaching. Super-resolution microscopy typically requires high laser powers which can induce photobleaching and degrade image quality. Here the authors augment structured illumination microscopy (SIM) with deep learning to reduce the number of raw images required and boost its performance under low light conditions.

128 citations

Journal ArticleDOI
TL;DR: TrackMate as mentioned in this paper is automated tracking software used to analyze bioimages and is distributed as a Fiji plugin; it is built to address the broad spectrum of modern challenges researchers face by integrating state-of-the-art segmentation algorithms into tracking pipelines.
Abstract: TrackMate is an automated tracking software used to analyze bioimages and is distributed as a Fiji plugin. Here, we introduce a new version of TrackMate. TrackMate 7 is built to address the broad spectrum of modern challenges researchers face by integrating state-of-the-art segmentation algorithms into tracking pipelines. We illustrate qualitatively and quantitatively that these new capabilities function effectively across a wide range of bio-imaging experiments.

120 citations