Journal of Digital Imaging
Springer Science+Business Media
About: Journal of Digital Imaging is an academic journal published by Springer Science+Business Media. The journal publishes primarily in the areas of medicine and computer science. It has the ISSN identifier 0897-1889. Over its lifetime, the journal has published 2813 papers, which have received 55266 citations. The journal is also known as: Journal of digital imaging (Print).
Topics: Medicine, Computer science, Picture archiving and communication system, DICOM, Artificial intelligence
TL;DR: The management tasks and user support model are described for The Cancer Imaging Archive (TCIA), an open-source, open-access information resource that supports research, development, and educational initiatives utilizing advanced medical imaging of cancer.
Abstract: The National Institutes of Health have placed significant emphasis on sharing of research data to support secondary research. Investigators have been encouraged to publish their clinical and imaging data as part of fulfilling their grant obligations. Realizing it was not sufficient to merely ask investigators to publish their collections of imaging and clinical data, the National Cancer Institute (NCI) created the open source National Biomedical Image Archive software package as a mechanism for centralized hosting of cancer-related imaging. NCI has contracted with Washington University in Saint Louis to create The Cancer Imaging Archive (TCIA)—an open-source, open-access information resource to support research, development, and educational initiatives utilizing advanced medical imaging of cancer. In its first year of operation, TCIA accumulated 23 collections (3.3 million images). Operating and maintaining a high-availability image archive is a complex challenge involving varied archive-specific resources and driven by the needs of both image submitters and image consumers. Quality archives of any type (traditional library, PubMed, refereed journals) require management and customer service. This paper describes the management tasks and user support model for TCIA.
TL;DR: OsiriX was designed for the display and interpretation of large sets of multidimensional and multimodality images, such as combined PET-CT studies; because its processing and rendering tools are built on the open-source ITK and VTK libraries, new image-processing developments from other academic institutions using these libraries can be ported directly to OsiriX.
Abstract: Multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard, widely used for computer games and optimized to take advantage of any available hardware graphics accelerator board. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device widely used in the video and movie industry was implemented to allow users to navigate the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program’s toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.
TL;DR: A critical appraisal of popular methods that have employed deep-learning techniques for medical image segmentation is presented; the most common challenges are summarized and possible solutions suggested.
Abstract: Deep learning is by now firmly established as a robust tool for image segmentation. It has been widely used to separate homogeneous areas as the first and critical component of diagnosis and treatment pipelines. In this article, we present a critical appraisal of popular methods that have employed deep-learning techniques for medical image segmentation. Moreover, we summarize the most common challenges incurred and suggest possible solutions.
TL;DR: An overview of current deep learning-based segmentation approaches for quantitative brain MRI is provided, together with a critical assessment of the current state and likely future developments and trends.
Abstract: Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First, we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
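The encoder-decoder architectures this review surveys share a common forward-pass structure: downsample to aggregate context, upsample to recover resolution, and carry skip connections between matching levels. The toy sketch below illustrates only that data flow, in plain NumPy with random, untrained weights and a single channel; all function names here are our own, not from any segmentation library.

```python
import numpy as np

def conv3x3(x, w, relu=True):
    """'Same' 3x3 convolution (one channel in/out) via zero padding."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += w[dy, dx] * p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return np.maximum(out, 0.0) if relu else out  # optional ReLU

def down(x):
    """2x2 max pooling (halves each spatial dimension)."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def up(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def tiny_unet_forward(img, rng):
    """One encoder level, a bottleneck, and one decoder level with an
    additive skip connection, followed by a sigmoid that yields a
    per-pixel foreground probability map."""
    w1, w2, w3 = (rng.normal(size=(3, 3)) for _ in range(3))
    e = conv3x3(img, w1)                       # encoder features
    b = conv3x3(down(e), w2)                   # bottleneck at half resolution
    d = up(b)[:e.shape[0], :e.shape[1]] + e    # skip connection
    logits = conv3x3(d, w3, relu=False)
    return 1.0 / (1.0 + np.exp(-logits))       # sigmoid probabilities
```

Real brain-MRI networks stack many such levels with learned multi-channel filters and concatenated (rather than added) skips, but the resolution bookkeeping is the same.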
TL;DR: The selected CLAHE settings should be tested in the clinic with digital mammograms to determine whether detection of spiculations associated with masses detected at mammography can be improved.
Abstract: The purpose of this project was to determine whether Contrast Limited Adaptive Histogram Equalization (CLAHE) improves detection of simulated spiculations in dense mammograms. Lines simulating the appearance of spiculations, a common marker of malignancy when visualized with masses, were embedded in dense mammograms digitized at 50 micron pixels, 12 bits deep. Film images with no CLAHE applied were compared to film images with nine different combinations of clip levels and region sizes applied. A simulated spiculation was embedded in a background of dense breast tissue, with the orientation of the spiculation varied. The key variables involved in each trial included the orientation of the spiculation, the contrast level of the spiculation, and the CLAHE settings applied to the image. Combining the 10 CLAHE conditions, 4 contrast levels, and 4 orientations gave 160 combinations. The trials were constructed by pairing the 160 combinations of key variables with 40 backgrounds. Twenty student observers were asked to detect the orientation of the spiculation in the image. There was a statistically significant improvement in detection performance for spiculations with CLAHE over unenhanced images when the region size was set at 32 with a clip level of 2, and when the region size was set at 32 with a clip level of 4. The selected CLAHE settings should be tested in the clinic with digital mammograms to determine whether detection of spiculations associated with masses detected at mammography can be improved.
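The two CLAHE parameters varied in this study, region size and clip level, can be made concrete with a simplified sketch. The plain-NumPy version below applies a clipped histogram equalization independently per tile; it omits the bilinear interpolation between neighbouring tile mappings that full CLAHE implementations (e.g. OpenCV's `cv2.createCLAHE`) perform, so tile borders may remain visible. The function name and the interpretation of `clip_level` as a multiple of the mean histogram bin count are our own assumptions, not taken from the paper.

```python
import numpy as np

def clahe_simplified(img, region_size=32, clip_level=2.0, n_bins=256):
    """Tile-wise contrast-limited histogram equalization for uint8 images.

    Simplified sketch: each region_size x region_size tile is equalized
    independently, with its histogram clipped at clip_level times the
    mean bin count and the clipped excess redistributed uniformly.
    """
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(0, h, region_size):
        for x in range(0, w, region_size):
            tile = img[y:y + region_size, x:x + region_size]
            hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
            # Clip the histogram and redistribute the excess uniformly:
            # this is what limits local contrast amplification.
            limit = clip_level * hist.mean()
            excess = np.maximum(hist - limit, 0).sum()
            hist = np.minimum(hist, limit) + excess / n_bins
            # Map grey levels through the tile's cumulative distribution.
            cdf = hist.cumsum()
            cdf = (n_bins - 1) * cdf / cdf[-1]
            out[y:y + region_size, x:x + region_size] = cdf[tile]
    return out
```

With `region_size=32, clip_level=2.0` (the paper's best-performing setting), a low-contrast tile has its grey levels spread over a wider output range, while the clip level keeps near-uniform regions from being amplified into noise.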