Author

Zoe Jäckel

Bio: Zoe Jäckel is an academic researcher from the University of Freiburg. The author has contributed to research in topics: Prefrontal cortex & Motor control. The author has an h-index of 3 and has co-authored 5 publications receiving 811 citations. Previous affiliations of Zoe Jäckel include the University of Freiburg Faculty of Biology.

Papers
Journal ArticleDOI
TL;DR: An ImageJ plugin is presented that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service.
Abstract: U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows for U-Net to be adapted to new tasks on the basis of a few annotated samples.

1,222 citations

Journal ArticleDOI
TL;DR: Optogenetic and electrophysiological techniques support the concept of opposing roles of IL and PL in directing proactive behavior and argue for an involvement of OFC in predominantly reactive movement control.

88 citations

Journal ArticleDOI
TL;DR: Corrections have been made in the PDF and HTML versions of the article, as well as in any cover sheets for associated Supplementary Information.
Abstract: In the version of this paper originally published, one of the affiliations for Dominic Mai was incorrect: "Center for Biological Systems Analysis (ZBSA), Albert-Ludwigs-University, Freiburg, Germany" should have been "Life Imaging Center, Center for Biological Systems Analysis, Albert-Ludwigs-University, Freiburg, Germany." This change required some renumbering of subsequent author affiliations. These corrections have been made in the PDF and HTML versions of the article, as well as in any cover sheets for associated Supplementary Information.

53 citations

Journal ArticleDOI
TL;DR: In this article, a triple-modal optrode was proposed for single-step optogenetic surgery, which combines a silicon-based neural probe with an integrated microfluidic channel, and an optical glass fiber in a compact assembly.
Abstract: Objective. Optogenetics involves delivery of light-sensitive opsins to the target brain region, as well as introduction of optical and electrical devices to manipulate and record neural activity, respectively, from the targeted neural population. Combining these functionalities in a single implantable device is of great importance for a precise investigation of neural networks while minimizing tissue damage. Approach. We report on the development, characterization, and in vivo validation of a multifunctional optrode that combines a silicon-based neural probe with an integrated microfluidic channel and an optical glass fiber in a compact assembly. The silicon probe comprises an 11-µm-wide fluidic channel and 32 recording electrodes (diameter 30 µm) on a tapered probe shank with a length, thickness, and maximum width of 7.5 mm, 50 µm, and 150 µm, respectively. The size and position of fluidic channels, electrodes, and optical fiber can be precisely tuned according to the in vivo application. Main results. With a total system weight of 0.97 g, our multifunctional optrode is suitable for chronic in vivo experiments requiring simultaneous drug delivery, optical stimulation, and neural recording. We demonstrate the utility of our device in optogenetics by injecting a viral vector carrying a ChR2 construct in the prefrontal cortex and subsequent photostimulation of the transduced neurons while recording neural activity from both the target and adjacent regions in a freely moving rat for up to 9 weeks post-implantation. Additionally, we demonstrate a pharmacological application of our device by injecting the GABA antagonist bicuculline in an anesthetized rat brain and simultaneously recording the electrophysiological response. Significance. Our triple-modality device enables a single-step optogenetic surgery. In comparison to conventional multi-step surgeries, our approach achieves higher spatial specificity while minimizing tissue damage.

6 citations

Book ChapterDOI
TL;DR: Hardung et al. discuss models of prefrontal motor interactions, the impact of the behavioral paradigm, evidence for mPFC involvement in action control, and the anatomical connections between the medial prefrontal cortex and motor cortex.
Abstract: The rodent medial prefrontal cortex (mPFC) is typically considered to be involved in cognitive aspects of action control, e.g., decision making, rule learning and application, working memory, and generally guiding adaptive behavior (Euston, Gruber, & McNaughton, 2012). These cognitive aspects often occur on relatively slow time scales, i.e., on the order of several trials within a block structure (Murakami, Shteingart, Loewenstein, & Mainen, 2017). In this way, the mPFC is able to set up a representational memory (Goldman-Rakic, 1987). On the other hand, the mPFC can also impact action control more directly (i.e., more on the motor and less on the cognitive side). This impact on motor control manifests on faster time scales, i.e., at the single-trial level (Hardung et al., 2017). While the more cognitive aspects have been reviewed previously as well as in other subchapters of this book, we explicitly focus on the latter aspect in this chapter, particularly on movement inhibition. We discuss models of prefrontal motor interactions, the impact of the behavioral paradigm, evidence for mPFC involvement in action control, and the anatomical connections between mPFC and motor cortex.

1 citation


Cited by
Journal ArticleDOI
TL;DR: nnU-Net as mentioned in this paper is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
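The abstract describes nnU-Net's self-configuration as a set of fixed parameters, interdependent rules, and empirical decisions derived from dataset properties. The sketch below is a toy illustration of that rule-based idea in pure Python; the specific heuristics (spacing from the dataset median, patch size capped and rounded to a multiple of 8) are illustrative assumptions, not nnU-Net's published rules:

```python
def configure(median_spacing, median_shape):
    """Toy rule-based configurator in the spirit of nnU-Net.

    Derives a resampling spacing and a patch size from two dataset
    "fingerprint" statistics. The heuristics here are illustrative
    assumptions for exposition, not nnU-Net's actual configuration.
    """
    # Rule: resample every case to the dataset's median voxel spacing.
    target_spacing = list(median_spacing)
    # Rule: patch size starts from the median image shape, capped per
    # axis and rounded down to a multiple of 8 so repeated 2x pooling
    # in an encoder-decoder network stays valid.
    patch_size = [min(s, 128) // 8 * 8 for s in median_shape]
    return {"target_spacing": target_spacing, "patch_size": patch_size}

# Example: anisotropic CT-like dataset with median shape 160x190x40.
print(configure((1.0, 1.0, 3.0), (160, 190, 40)))
# {'target_spacing': [1.0, 1.0, 3.0], 'patch_size': [128, 128, 40]}
```

The point of the design, as the paper argues, is that once such rules are fixed, no per-dataset manual tuning is needed.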

2,040 citations

19 Nov 2012

1,653 citations

Journal ArticleDOI
TL;DR: UNet++ as mentioned in this paper proposes an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision, leading to a highly flexible feature fusion scheme.
Abstract: The state-of-the-art models for medical image segmentation are variants of U-Net and fully convolutional networks (FCN). Despite their success, these models have two limitations: (1) their optimal depth is apriori unknown, requiring extensive architecture search or inefficient ensemble of models of varying depths; and (2) their skip connections impose an unnecessarily restrictive fusion scheme, forcing aggregation only at the same-scale feature maps of the encoder and decoder sub-networks. To overcome these two limitations, we propose UNet++, a new neural architecture for semantic and instance segmentation, by (1) alleviating the unknown network depth with an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision; (2) redesigning skip connections to aggregate features of varying semantic scales at the decoder sub-networks, leading to a highly flexible feature fusion scheme; and (3) devising a pruning scheme to accelerate the inference speed of UNet++. We have evaluated UNet++ using six different medical image segmentation datasets, covering multiple imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and electron microscopy (EM), and demonstrating that (1) UNet++ consistently outperforms the baseline models for the task of semantic segmentation across different datasets and backbone architectures; (2) UNet++ enhances segmentation quality of varying-size objects—an improvement over the fixed-depth U-Net; (3) Mask RCNN++ (Mask R-CNN with UNet++ design) outperforms the original Mask R-CNN for the task of instance segmentation; and (4) pruned UNet++ models achieve significant speedup while showing only modest performance degradation. Our implementation and pre-trained models are available at https://github.com/MrGiovanni/UNetPlusPlus .
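The redesigned skip connections described above can be pictured as a grid of nodes X[i][j] (depth i, decoder layer j), where each non-encoder node aggregates all earlier same-depth nodes plus one upsampled deeper node. A minimal pure-Python sketch of that dependency structure (illustrative only, not the authors' implementation):

```python
def unetpp_inputs(i, j):
    """Return the nodes feeding node X[i][j] in the UNet++ grid.

    X[i][0] are the plain encoder nodes (no skip inputs; their input
    comes from downsampling X[i-1][0]). For j > 0, node X[i][j]
    concatenates all earlier same-depth nodes X[i][0..j-1] with the
    upsampled deeper node X[i+1][j-1] -- the dense skip pathways that
    give UNet++ its flexible feature fusion scheme.
    """
    if j == 0:
        return []  # encoder node: no skip connections
    same_depth = [(i, k) for k in range(j)]
    upsampled = [(i + 1, j - 1)]
    return same_depth + upsampled

# In a depth-4 UNet++, the final decoder node X[0][3] fuses all three
# earlier top-level nodes plus the upsampled node X[1][2]:
print(unetpp_inputs(0, 3))  # [(0, 0), (0, 1), (0, 2), (1, 2)]
```

This nesting is also what makes the pruning scheme mentioned in the abstract possible: each column j of the top row yields a valid segmentation output, so deeper columns can be dropped at inference time.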

1,487 citations

Journal ArticleDOI
TL;DR: An ImageJ plugin is presented that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service.
Abstract: U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows for U-Net to be adapted to new tasks on the basis of a few annotated samples.

1,222 citations

Journal ArticleDOI
TL;DR: The intersection between deep learning and cellular image analysis is reviewed and an overview of both the mathematical mechanics and the programming frameworks of deep learning that are pertinent to life scientists are provided.
Abstract: Recent advances in computer vision and machine learning underpin a collection of algorithms with an impressive ability to decipher the content of images. These deep learning algorithms are being applied to biological images and are transforming the analysis and interpretation of imaging data. These advances are positioned to render difficult analyses routine and to enable researchers to carry out new, previously impossible experiments. Here we review the intersection between deep learning and cellular image analysis and provide an overview of both the mathematical mechanics and the programming frameworks of deep learning that are pertinent to life scientists. We survey the field's progress in four key applications: image classification, image segmentation, object tracking, and augmented microscopy. Last, we relay our labs' experience with three key aspects of implementing deep learning in the laboratory: annotating training data, selecting and training a range of neural network architectures, and deploying solutions. We also highlight existing datasets and implementations for each surveyed application.

714 citations