Author

Joshua D. Warner

Bio: Joshua D. Warner is an academic researcher from Mayo Clinic. The author has contributed to research in the topics of Image segmentation and Autosomal dominant polycystic kidney disease. The author has an h-index of 6 and has co-authored 9 publications receiving 2,519 citations.

Papers
Journal ArticleDOI
19 Jun 2014-PeerJ
TL;DR: The advantages of open source in achieving the goals of the scikit-image library are highlighted, and several real-world image processing applications that use scikit-image are showcased.
Abstract: scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
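
For readers new to the library, here is a minimal sketch of the kind of well-documented Python API the abstract describes, using scikit-image's bundled sample data. The particular workflow (Otsu thresholding plus connected-component labeling) is an illustrative choice, not one drawn from the paper.

```python
# Minimal sketch: a typical scikit-image workflow (threshold + label),
# using the library's bundled sample data.
from skimage import data, filters, measure

image = data.coins()                    # 8-bit grayscale sample image
thresh = filters.threshold_otsu(image)  # global Otsu threshold
binary = image > thresh                 # boolean foreground mask
labels = measure.label(binary)          # connected-component labeling
print(f"Found {labels.max()} connected regions")
```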

3,903 citations

Journal ArticleDOI
TL;DR: The results raise the possibility that subcortical structure atrophy is not independent in AD patients, and this complex structural information proved useful in the detailed interpretation of the AD-related neurodegenerative process, as the multilevel approach showed both global and local atrophy on cortical and subcortical levels.
Abstract: Brain atrophy is a key imaging hallmark of Alzheimer disease (AD). In this study, we carried out an integrative evaluation of AD-related atrophy. Twelve patients with AD and 13 healthy controls were enrolled. We conducted a cross-sectional analysis of total brain tissue volumes with SIENAX. Localized gray matter atrophy was identified with optimized voxel-wise morphometry (FSL-VBM), and subcortical atrophy was evaluated by an active shape model implemented in FMRIB’s Integrated Registration Segmentation Toolkit. SIENAX analysis demonstrated total brain atrophy in AD patients; voxel-based morphometry analysis showed atrophy in the bilateral mediotemporal regions and in the posterior brain regions. In addition, regarding the diminished volumes of the thalami and hippocampi in AD patients, subsequent vertex analysis of the segmented structures indicated shrinkage of the bilateral anterior thalami and the left medial hippocampus. Interestingly, the volumes of the thalami and hippocampi were highly correlated with the volumes of the thalami and amygdalae on both sides in AD patients, but not in healthy controls. This complex structural information proved useful in the detailed interpretation of the AD-related neurodegenerative process, as the multilevel approach showed both global and local atrophy on cortical and subcortical levels. Most importantly, our results raise the possibility that subcortical structure atrophy is not independent in AD patients.
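
The reported volume relationships rest on standard correlation analysis. A hypothetical sketch of such a computation with SciPy follows; the volume values are placeholders invented for illustration, not data from the study.

```python
# Hypothetical sketch: correlating per-subject subcortical volumes,
# as in the reported thalamus/hippocampus volume relationships.
# The volume values below are placeholders, not data from the study.
import numpy as np
from scipy import stats

thalamus_vol = np.array([6.1, 5.8, 5.5, 6.3, 5.9, 5.4])    # mL, hypothetical
hippocampus_vol = np.array([3.2, 3.0, 2.7, 3.4, 3.1, 2.6])  # mL, hypothetical

r, p = stats.pearsonr(thalamus_vol, hippocampus_vol)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```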

54 citations

Journal ArticleDOI
TL;DR: An image processing approach that enables fast, cost-effective and reproducible quantification of ADPKD progression that will facilitate and lower the costs of clinical trials in ADPKD and other disorders requiring accurate, longitudinal kidney quantification.
Abstract: Background Renal imaging examinations provide high-resolution information about the anatomic structure of the kidneys and are used to measure total kidney volume (TKV) in autosomal dominant polycystic kidney disease (ADPKD) patients. TKV has become the gold-standard image biomarker for ADPKD progression at early stages of the disease and is used in clinical trials to characterize treatment efficacy. Automated methods to segment the kidneys and measure TKV are desirable because of the long time requirement for manual approaches such as stereology or planimetry tracings. However, ADPKD kidney segmentation is complicated by a number of factors, including irregular kidney shapes and variable tissue signal at the kidney borders. Methods We describe an image processing approach that overcomes these problems by using a baseline segmentation initialization to provide automatic segmentation of follow-up scans obtained years apart. We validated our approach using 20 patients with complete baseline and follow-up T1-weighted magnetic resonance images. Both manual tracing and stereology were used to calculate TKV, with two observers performing manual tracings and one observer performing repeat tracings. Linear correlation and Bland-Altman analysis were performed to compare the different approaches. Results Our automated approach measured TKV at a level of accuracy (mean difference ± standard error = 0.99 ± 0.79%) on par with both intraobserver (0.77 ± 0.46%) and interobserver variability (1.34 ± 0.70%) of manual tracings. All approaches had excellent agreement and compared favorably with ground-truth manual tracing, with the interobserver, stereological and automated approaches having 95% confidence intervals of approximately ±100 mL. Conclusions Our method enables fast, cost-effective and reproducible quantification of ADPKD progression that will facilitate and lower the costs of clinical trials in ADPKD and other disorders requiring accurate, longitudinal kidney quantification. In addition, it will hasten the routine use of TKV as a prognostic biomarker in ADPKD.
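
The agreement analysis described above follows the standard Bland-Altman approach. Below is a hedged sketch of how such an analysis might look; the paired TKV values are hypothetical placeholders, not measurements from the study.

```python
# Hypothetical sketch of a Bland-Altman agreement analysis between two
# TKV measurement methods; the paired volumes below are placeholders.
import numpy as np

tkv_manual = np.array([1510.0, 980.0, 2230.0, 1745.0, 1290.0])     # mL
tkv_automated = np.array([1498.0, 992.0, 2251.0, 1730.0, 1302.0])  # mL

diff = tkv_automated - tkv_manual
bias = diff.mean()              # mean difference between methods
loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement
print(f"bias = {bias:.1f} mL, limits of agreement = ±{loa:.1f} mL")
```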

50 citations

Journal ArticleDOI
TL;DR: In this article, magnetization transfer (MT) imaging, which has successfully characterized various tissue remodeling pathologies, was tested on a murine model of autosomal dominant polycystic kidney disease (PKD).
Abstract: Purpose Noninvasive imaging techniques that quantify renal tissue composition are needed to more accurately ascertain prognosis and monitor disease progression in polycystic kidney disease (PKD). Given the success of magnetization transfer (MT) imaging in characterizing various tissue remodeling pathologies, it was tested on a murine model of autosomal dominant PKD. Methods C57Bl/6 Pkd1 R3277C mice at 9, 12, and 15 months were imaged with a 16.4T MR imaging system. Images were acquired without and with RF saturation in order to calculate MT ratio (MTR) maps. Following imaging, the mice were euthanized and kidney sections were analyzed for cystic and fibrotic indices, which were compared with statistical parameters of the MTR maps. Results The MTR-derived mean, median, 25th percentile, skewness, and kurtosis were all closely related to indices of renal pathology, including kidney weight/body weight, cystic index, and percent of remaining parenchyma. The correlation between MTR and histology-derived cystic and fibrotic changes was R2 = 0.84 and R2 = 0.70, respectively. Conclusion MT imaging provides a new, noninvasive means of measuring tissue remodeling changes in PKD and may be better suited for characterizing renal impairment compared with conventional MR techniques. Magn Reson Med, 2015.
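
The MTR map is conventionally computed as (S0 - Ssat) / S0 from the unsaturated and saturated acquisitions. The sketch below shows that standard computation together with the histogram statistics named in the results; the image arrays are random stand-ins, and the paper's actual processing pipeline is not reproduced.

```python
# Sketch of the standard MTR computation, MTR = (S0 - Ssat) / S0, from
# images acquired without (S0) and with (Ssat) RF saturation.
# Random arrays stand in for the acquired images.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
s0 = rng.uniform(100, 200, size=(128, 128))         # no-saturation image (a.u.)
s_sat = s0 * rng.uniform(0.4, 0.9, size=s0.shape)   # saturated image

mtr = (s0 - s_sat) / s0  # MTR map: fraction of signal lost to saturation

# Histogram statistics of the kind compared against histology in the paper
print("mean:", mtr.mean())
print("median:", np.median(mtr))
print("25th percentile:", np.percentile(mtr, 25))
print("skewness:", stats.skew(mtr.ravel()))
print("kurtosis:", stats.kurtosis(mtr.ravel()))
```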

33 citations

Journal ArticleDOI
TL;DR: An accurate, robust, efficient, and reproducible segmentation method for pre-operative LGGs that relies only on T2-weighted (T2W) and optionally post-contrast T1-weighted (T1W) images; the automated results are comparable with the experts’ manual segmentation results.
Abstract: Segmentation of pre-operative low-grade gliomas (LGGs) from magnetic resonance imaging is a crucial step for studying imaging biomarkers. However, segmentation of LGGs is particularly challenging because they rarely enhance after gadolinium administration. Like other gliomas, they have irregular tumor shape, heterogeneous composition, ill-defined tumor boundaries, and limited number of image types. To overcome these challenges we propose a semi-automated segmentation method that relies only on T2-weighted (T2W) and optionally post-contrast T1-weighted (T1W) images. First, the user draws a region-of-interest (ROI) that completely encloses the tumor and some normal tissue. Second, a normal brain atlas and post-contrast T1W images are registered to T2W images. Third, the posterior probability of each pixel/voxel belonging to normal and abnormal tissues is calculated based on information derived from the atlas and ROI. Finally, geodesic active contours use the probability map of the tumor to shrink the ROI until optimal tumor boundaries are found. This method was validated against the true segmentation (TS) of 30 LGG patients for both 2D (1 slice) and 3D. The TS was obtained from manual segmentations of three experts using the Simultaneous Truth and Performance Level Estimation (STAPLE) software. Dice and Jaccard indices and other descriptive statistics were computed for the proposed method, as well as the experts’ segmentation versus the TS. We also tested the method with the BraTS datasets, which supply expert segmentations. For 2D segmentation vs. TS, the mean Dice index was 0.90 ± 0.06 (standard deviation), sensitivity was 0.92, and specificity was 0.99. For 3D segmentation vs. TS, the mean Dice index was 0.89 ± 0.06, sensitivity was 0.91, and specificity was 0.99. The automated results are comparable with the experts’ manual segmentation results. We present an accurate, robust, efficient, and reproducible segmentation method for pre-operative LGGs.
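
The validation metrics named above have simple closed forms: Dice = 2|A∩B| / (|A| + |B|) and Jaccard = |A∩B| / |A∪B|. A minimal, self-contained sketch follows; the toy masks are invented for illustration.

```python
# Minimal sketch: Dice and Jaccard overlap indices between two binary
# segmentation masks, the validation metrics reported in the paper.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice index: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Toy example: two overlapping square "tumor" masks
pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[15:45, 15:45] = True
print(f"Dice = {dice(pred, truth):.3f}, Jaccard = {jaccard(pred, truth):.3f}")
```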

31 citations


Cited by
Journal ArticleDOI
TL;DR: SciPy, as discussed by the authors, is an open source scientific computing library for the Python programming language; it includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics.
Abstract: SciPy is an open source scientific computing library for the Python programming language. SciPy 1.0 was released in late 2017, about 16 years after the original version 0.1 release. SciPy has become a de facto standard for leveraging scientific algorithms in the Python programming language, with more than 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories, and millions of downloads per year. This includes usage of SciPy in almost half of all machine learning projects on GitHub, and usage by high profile projects including LIGO gravitational wave analysis and creation of the first-ever image of a black hole (M87). The library includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics. In this work, we provide an overview of the capabilities and development practices of the SciPy library and highlight some recent technical developments.
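
As a quick orientation to the breadth described above, here is a minimal sketch touching a few of the named subpackages (integration, interpolation, statistics). It is an illustrative sampler, not an excerpt from the paper.

```python
# Minimal sketch exercising a few SciPy subpackages named in the abstract.
import numpy as np
from scipy import integrate, interpolate, stats

# Numerical integration of sin(x) over [0, pi] (exact answer: 2)
value, err = integrate.quad(np.sin, 0, np.pi)

# Cubic interpolation through a coarse set of samples
x = np.linspace(0, 10, 11)
f = interpolate.interp1d(x, np.cos(x), kind="cubic")

# A two-sample t-test on random data
t, p = stats.ttest_ind(np.random.normal(0, 1, 50),
                       np.random.normal(0.5, 1, 50))

print(value, f(2.5), p)
```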

12,774 citations

Journal ArticleDOI
16 Sep 2020-Nature
TL;DR: In this paper, the authors review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data, and discuss NumPy's evolution into a flexible interoperability layer between increasingly specialized computational libraries.
Abstract: Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves1 and in the first imaging of a black hole2. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis. NumPy is the primary array programming library for Python; here its fundamental concepts are reviewed and its evolution into a flexible interoperability layer between increasingly specialized computational libraries is discussed.
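
A small sketch of the array-programming paradigm the paper reviews is given below: a vectorized expression and a broadcast operation each replace an explicit element-wise loop. The arrays are invented for illustration.

```python
# Minimal sketch of array programming with NumPy: vectorization and
# broadcasting replace explicit element-wise loops.
import numpy as np

signal = np.random.default_rng(1).normal(size=1_000_000)

# Vectorized: normalize the whole array in a single expression
normalized = (signal - signal.mean()) / signal.std()

# Broadcasting: subtract a per-column mean from a 2-D array at once
data = np.arange(12, dtype=float).reshape(3, 4)
centered = data - data.mean(axis=0)

print(normalized.std(), centered.mean(axis=0))
```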

7,624 citations

Journal ArticleDOI
TL;DR: SciPy, as discussed by the authors, is an open-source scientific computing library for the Python programming language; it has become a de facto standard for leveraging scientific algorithms in Python, with over 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories and millions of downloads per year.
Abstract: SciPy is an open-source scientific computing library for the Python programming language. Since its initial release in 2001, SciPy has become a de facto standard for leveraging scientific algorithms in Python, with over 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories and millions of downloads per year. In this work, we provide an overview of the capabilities and development practices of SciPy 1.0 and highlight some recent technical developments.

6,244 citations

Journal ArticleDOI
TL;DR: How a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data is reviewed.
Abstract: Array programming provides a powerful, compact, expressive syntax for accessing, manipulating, and operating on data in vectors, matrices, and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It plays an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance, and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves and the first imaging of a black hole. Here we show how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring, and analyzing scientific data. NumPy is the foundation upon which the entire scientific Python universe is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Because of its central position in the ecosystem, NumPy increasingly plays the role of an interoperability layer between these new array computation libraries.

4,342 citations

Posted Content
TL;DR: The entire ImageJ codebase was rewritten, engineering a redesigned plugin mechanism intended to facilitate extensibility at every level, with the goal of creating a more powerful tool that continues to serve the existing community while addressing a wider range of scientific requirements.
Abstract: ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms, to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats, to scripting languages, to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.

2,156 citations