About: DICOM is a research topic. Over its lifetime, 3375 publications have been published within this topic, receiving 35658 citations. The topic is also known as: Digital Imaging and Communications in Medicine.
29 Jun 2004-Journal of Digital Imaging
TL;DR: OsiriX was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies and ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program.
Abstract: A multidimensional image navigation and display software package was designed for the display and interpretation of large sets of multidimensional and multimodality images, such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast, optimized 3D graphics capabilities of the OpenGL standard, which is widely used in computer games and takes advantage of any available hardware graphics accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device, widely used in the video and movie industry, was implemented to allow users to navigate through the different dimensions of an image set much faster than with a traditional mouse or with on-screen cursors and sliders. The program can easily be adapted to very specific tasks that require a limited number of functions by adding and removing tools from the program’s toolbar, avoiding an overwhelming number of unnecessary tools and functions. The processing and image-rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that new developments in image processing emerging from other academic institutions using these libraries can be ported directly to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.
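The jog-wheel idea above, stepping quickly along one dimension of a multidimensional image set, can be sketched in a few lines. This is a hypothetical illustration (the `ImageSet` class and its axis names are invented here), not OsiriX's actual API:

```python
# Hypothetical sketch: navigating a multidimensional image set the way a
# jog-wheel does, stepping along one dimension at a time. ImageSet and its
# axis names are illustrative, not OsiriX's actual interface.

class ImageSet:
    """A stack of images indexed along named dimensions (e.g. slice, time)."""

    def __init__(self, shape):
        # shape maps dimension name -> number of positions, e.g.
        # {"slice": 40, "time": 20} for a 4D cardiac series.
        self.shape = dict(shape)
        self.position = {dim: 0 for dim in shape}

    def jog(self, dim, steps):
        """Move `steps` positions along `dim`, clamping at both ends,
        the way turning a jog-wheel scrolls through a series."""
        if dim not in self.shape:
            raise KeyError(f"unknown dimension: {dim}")
        new = self.position[dim] + steps
        self.position[dim] = max(0, min(self.shape[dim] - 1, new))
        return self.position[dim]


series = ImageSet({"slice": 40, "time": 20})
series.jog("slice", 5)   # scroll five slices forward
series.jog("time", -3)   # clamped at 0: cannot move before the first frame
```

The point of the clamped, per-dimension step is that a physical wheel can be mapped to any one axis at a time, which is what makes navigation faster than dragging on-screen sliders.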
01 Feb 2004-International Journal of Medical Informatics
TL;DR: The goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools.
Abstract: Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet, underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large, varied databases containing documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors, or objective image interpretations remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (∼1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can produce equally enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set, and patient information can be stored with the actual image(s), although a few problems with standardization still prevail.
In several articles, content-based access to medical images has been proposed to support clinical decision-making and ease the management of clinical data, and scenarios for the integration of content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of the available literature in the field of content-based access to medical image data and of the technologies used in the field. Section 1 gives an introduction to generic content-based image retrieval and the technologies used. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches; example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets and their evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education. New research directions are being defined that can prove useful. This article also offers explanations for some of the problems outlined in the field: many propositions for systems come from the medical domain, while research prototypes are developed in computer science departments using medical datasets, yet very few systems seem to be in use in clinical practice. It should be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools.
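The query-by-example loop the article surveys can be reduced to a minimal sketch: describe each image by a feature vector (here a gray-level histogram, a deliberately simple stand-in for the far richer descriptors real CBIR systems use) and rank the collection by similarity to the query. The helper names and toy "images" below are invented for illustration:

```python
# Minimal content-based retrieval sketch: gray-level histogram features
# ranked by histogram intersection. Real CBIR systems use much richer
# features; this only illustrates the query-by-example loop.

def histogram(pixels, bins=8, max_val=256):
    """Normalized gray-level histogram of a flat pixel list."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // max_val, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def intersection(h1, h2):
    """Histogram-intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def retrieve(query_pixels, collection):
    """Rank (name, pixels) pairs by visual similarity to the query."""
    q = histogram(query_pixels)
    scored = [(intersection(q, histogram(px)), name) for name, px in collection]
    return [name for score, name in sorted(scored, reverse=True)]

# Toy 'images': a dark one, a bright one, and one close to the mid-gray query.
dark, bright = [10] * 64, [240] * 64
ranking = retrieve([120] * 64,
                   [("dark", dark), ("bright", bright), ("mid", [125] * 64)])
```

In a medical PACS setting the feature extraction step is where domain knowledge enters (modality, anatomy, texture), while the ranking loop stays essentially this simple.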
11 Jan 2010-BMC Medical Imaging
TL;DR: The design and validation of a cardiovascular image analysis software package (Segment) is presented and its release in a source code format is announced and made freely available for research purposes.
Abstract: Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se . 
Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
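The test-script idea the authors describe, running each analysis on fixed inputs and comparing outputs against stored reference values within a tolerance, can be sketched as follows. The ejection-fraction formula is the standard one; the case names and reference numbers are invented for illustration and are not Segment's actual test data:

```python
# Sketch of regression-style validation: recompute a measurement on fixed
# input and flag any case that drifts beyond a tolerance from its stored
# reference. Case names and reference values are invented for illustration.

def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricular ejection fraction (%) from end-diastolic and
    end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# References a test script would keep alongside its fixed test images.
REFERENCE = {"case01": 60.0, "case02": 37.5}
MEASURED = {"case01": ejection_fraction(150.0, 60.0),
            "case02": ejection_fraction(160.0, 100.0)}

def validate(measured, reference, tol=0.5):
    """Return the cases whose output drifted beyond the tolerance."""
    return [case for case in reference
            if abs(measured[case] - reference[case]) > tol]

failures = validate(MEASURED, REFERENCE)
```

Rerunning such a script after every code change is what lets an extensible research package keep its validated status as outside contributors add modules.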
01 Jan 2012-
TL;DR: The SR SOP Classes allow users to link text and other data to particular images and/or waveforms and to store the coordinates of findings so that users can see exactly what is being described in a report.
Abstract: Foreword: This Supplement to the DICOM Standard introduces the SR SOP Classes for transmission and storage of documents that describe or refer to any number of images or waveforms, or to the specific features that they contain. The SR SOP Classes fully support conventional free-text reports and provide the capability to record structured information that enhances the precision, clarity and value of clinical documents. The SR SOP Classes allow users to link text and other data to particular images and/or waveforms and to store the coordinates of findings so that users can see exactly what is being described in a report. In addition, users can label, index and retrieve clinically relevant information using codes. SR SOP Classes can be used in a variety of clinical contexts. For example: in CT or MRI, to convey the interpretation text, to record the DICOM identifiers of selected images and to denote the spatial coordinates of significant findings; in ultrasound, to transmit measurements; and in cardiac catheterization laboratories, to record a procedure log that time-stamps and describes significant measurements and interventions and links together all of the related images, waveforms, interpretation reports and related information into a convenient unit-record.
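The structure the Supplement describes, a tree of content items that pairs coded concepts with text, measurements, and spatial coordinates tied to images, can be modeled illustratively. This is not a conformant DICOM encoder (real SR objects require full attribute-level encoding); the concept names, finding text, and UID below are invented examples:

```python
# Illustrative model of an SR content tree: nested content items pairing
# coded concepts with values, including spatial coordinates linked to an
# image. Not a conformant DICOM encoder; names and the UID are examples.

def item(value_type, concept, value=None, children=None):
    """One SR content item: a value type (TEXT, SCOORD, CONTAINER, ...),
    a concept name, a value, and optional nested items."""
    return {"type": value_type, "concept": concept,
            "value": value, "children": children or []}

report = item("CONTAINER", "Imaging Report", children=[
    item("TEXT", "Finding", "2 cm nodule in the right upper lobe"),
    # SCOORD ties the finding to a point on a specific referenced image,
    # so a viewer can show exactly what the report describes.
    item("SCOORD", "Finding location", {"graphic_type": "POINT",
                                        "coords": [(122.5, 78.0)],
                                        "image_uid": "1.2.3.4.5"}),
])

def count_items(node):
    """Walk the tree and count content items, as a viewer would."""
    return 1 + sum(count_items(c) for c in node["children"])
```

The key design point mirrored here is that every piece of content, free text included, hangs off a coded concept, which is what makes SR documents indexable and retrievable by code.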
21 Feb 2013-Frontiers in Human Neuroscience
TL;DR: A MATLAB toolbox named “Pipeline for Analyzing braiN Diffusion imAges” (PANDA) is developed, expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
Abstract: Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named “Pipeline for Analyzing braiN Diffusion imAges” (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI into diffusion metrics (e.g., FA and MD) that are ready for statistical analysis at the voxel level, the atlas level and the Tract-Based Spatial Statistics (TBSS) level, and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to interactively adjust the input/output settings as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
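The per-subject parallelism described above, the same pipeline applied to many subjects concurrently, can be sketched with a worker pool. Here `process_subject` is a placeholder for the real DICOM/NIfTI-to-metrics pipeline, which this sketch does not implement, and the subject IDs are invented:

```python
# Sketch of PANDA-style per-subject parallelism: one processing function
# mapped over many subjects concurrently. process_subject is a stand-in
# for the real conversion/correction/metric pipeline.

from concurrent.futures import ThreadPoolExecutor

def process_subject(subject_id):
    """Placeholder for one subject's pipeline (convert DICOM/NIfTI,
    correct, compute FA/MD maps); here it only reports completion."""
    return subject_id, "done"

subjects = [f"sub-{i:02d}" for i in range(1, 9)]

# Each subject's pipeline is independent, so a pool of workers can run
# them side by side; on a cluster the same map would be distributed.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(process_subject, subjects))
```

Because subjects share no state, this is an embarrassingly parallel workload: the speedup scales with the number of cores (or cluster nodes) until I/O becomes the bottleneck.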