
Showing papers in "Journal of Digital Imaging in 2011"


Journal ArticleDOI
TL;DR: By integrating pixel-based and metadata-based image feature analysis, substantial advances of CBIR in medicine could ensue, with CBIR systems becoming an important tool in radiology practice.
Abstract: Diagnostic radiology requires accurate interpretation of complex signals in medical images. Content-based image retrieval (CBIR) techniques could be valuable to radiologists in assessing medical images by identifying similar images in large archives that could assist with decision support. Many advances have occurred in CBIR, and a variety of systems have appeared in nonmedical domains; however, permeation of these methods into radiology has been limited. Our goal in this review is to survey CBIR methods and systems from the perspective of application to radiology and to identify approaches developed in nonmedical applications that could be translated to radiology. Radiology images pose specific challenges compared with images in the consumer domain; they contain varied, rich, and often subtle features that need to be recognized in assessing image similarity. Radiology images also provide rich opportunities for CBIR: rich metadata about image semantics are provided by radiologists, and this information is not yet being used to its fullest advantage in CBIR systems. By integrating pixel-based and metadata-based image feature analysis, substantial advances of CBIR in medicine could ensue, with CBIR systems becoming an important tool in radiology practice.

360 citations


Journal ArticleDOI
TL;DR: The experimental results show the ability to hide patient data with very good visual quality, while the ROI, the most important area for diagnosis, is retrieved exactly at the receiver side; the scheme also shows some robustness against certain levels of salt-and-pepper noise and cropping.
Abstract: Authenticating medical images using watermarking techniques has become a very popular area of research, and several works in this area have been reported worldwide recently. Besides authentication, many data-hiding techniques have been proposed to conceal patients’ data in medical images, aiming to reduce the cost of storing data and the time needed to transmit it when required. In this paper, we present a new hybrid watermarking scheme for DICOM images. In our scheme, two well-known techniques are combined to gain the advantages of both and fulfill the requirements of authentication and data hiding. The scheme divides an image into two parts, the region of interest (ROI) and the region of non-interest (RONI). Patients’ data are embedded into the ROI using a reversible technique based on difference expansion, while tamper detection and recovery data are embedded into the RONI using a robust technique based on the discrete wavelet transform. The experimental results show the ability to hide patients’ data with very good visual quality, while the ROI, the most important area for diagnosis, is retrieved exactly at the receiver side. The scheme also shows some robustness against certain levels of salt-and-pepper noise and cropping.

132 citations
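The reversible ROI embedding described above is based on difference expansion. The core idea can be sketched for a single pixel pair (a minimal illustration of Tian-style difference expansion, not the authors' implementation; overflow/underflow handling and the location map are omitted):

```python
def embed_pair(x, y, bit):
    """Embed one bit into a pixel pair (x, y) via difference expansion."""
    l = (x + y) // 2          # integer average of the pair (preserved)
    h = x - y                 # difference of the pair
    h2 = 2 * h + bit          # expand the difference and append the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_pair(x2, y2):
    """Recover the embedded bit and the original pixel pair exactly."""
    l = (x2 + y2) // 2        # the average survives embedding unchanged
    h2 = x2 - y2
    bit = h2 & 1              # payload bit is the LSB of the expanded difference
    h = h2 >> 1               # floor halving, correct for negative differences too
    return (l + (h + 1) // 2, l - h // 2), bit
```

Because the integer average of the pair is preserved and the payload bit rides in the expanded difference, extraction recovers both the bit and the original pixels bit-exactly, which is what makes the ROI embedding reversible.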


Journal ArticleDOI
TL;DR: Qualitatively and quantitatively demonstrated on 41 breast DCE-MRI studies that textural kinetic features outperform signal intensity kinetics and lesion morphology features in distinguishing benign from malignant lesions.
Abstract: Dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) of the breast has emerged as an adjunct imaging tool to conventional X-ray mammography due to its high detection sensitivity. Despite the increasing use of breast DCE-MRI, specificity in distinguishing malignant from benign breast lesions is low, and interobserver variability in lesion classification is high. The novel contribution of this paper is in the definition of a new DCE-MRI descriptor that we call textural kinetics, which attempts to capture spatiotemporal changes in breast lesion texture in order to distinguish malignant from benign lesions. We qualitatively and quantitatively demonstrated on 41 breast DCE-MRI studies that textural kinetic features outperform signal intensity kinetics and lesion morphology features in distinguishing benign from malignant lesions. A probabilistic boosting tree (PBT) classifier in conjunction with textural kinetic descriptors yielded an accuracy of 90%, sensitivity of 95%, specificity of 82%, and an area under the curve (AUC) of 0.92. Graph embedding, used for qualitative visualization of a low-dimensional representation of the data, showed the best separation between benign and malignant lesions when using textural kinetic features. The PBT classifier results and trends were also corroborated via a support vector machine classifier which showed that textural kinetic features outperformed the morphological, static texture, and signal intensity kinetics descriptors. When textural kinetic attributes were combined with morphologic descriptors, the resulting PBT classifier yielded 89% accuracy, 99% sensitivity, 76% specificity, and an AUC of 0.91.

129 citations


Journal ArticleDOI
TL;DR: A fully reversible, dual-layer watermarking scheme with tamper detection capability for medical images that utilizes concepts of public-key cryptography and reversible data-hiding technique is presented.
Abstract: Teleradiology applications and universal availability of patient records using web-based technology are rapidly gaining importance. Consequently, digital medical image security has become an important issue when images and their pertinent patient information are transmitted across public networks, such as the Internet. Health mandates such as the Health Insurance Portability and Accountability Act require healthcare providers to adhere to security measures in order to protect sensitive patient information. This paper presents a fully reversible, dual-layer watermarking scheme with tamper detection capability for medical images. The scheme utilizes concepts of public-key cryptography and reversible data-hiding technique. The scheme was tested using medical images in DICOM format. The results show that the scheme is able to ensure image authenticity and integrity, and to locate tampered regions in the images.

114 citations


Journal ArticleDOI
TL;DR: A fully automated, three-dimensional segmentation method for identifying the pulmonary parenchyma in thorax X-ray computed tomography (CT) datasets is proposed, which proved fit for use in the framework of a CAD system for malignant lung nodule detection.
Abstract: A fully automated, three-dimensional (3D) segmentation method for the identification of the pulmonary parenchyma in thorax X-ray computed tomography (CT) datasets is proposed. It is meant to be used as a pre-processing step in the computer-assisted detection (CAD) system for malignant lung nodule detection being developed by the Medical Applications in a Grid Infrastructure Connection (MAGIC-5) Project. In this new approach, the segmentation of the external airways (trachea and bronchi) is obtained by 3D region growing with wavefront simulation and suitable stop conditions, allowing accurate handling of the hilar region, which is notoriously difficult to segment. Particular attention was also devoted to detecting and resolving the apparent ‘fusion’ between the lungs caused by partial-volume effects, while 3D morphology operations ensure the accurate inclusion of all the nodules (internal, pleural, and vascular) in the segmented volume. The new algorithm was initially developed and tested on a dataset of 130 CT scans from the Italung-CT trial, and was then applied to the ANODE09-competition images (55 scans) and to the LIDC database (84 scans), giving very satisfactory results. In particular, the lung contour was adequately located in 96% of the CT scans, with incorrect segmentation of the external airways in the remaining cases. Segmentation metrics were calculated that quantitatively express the consistency between automatic and manual segmentations: the mean overlap degree of the segmentation masks is 0.96 ± 0.02, and the mean and the maximum distance between the mask borders (averaged over the whole dataset) are 0.74 ± 0.05 and 4.5 ± 1.5, respectively, which confirms that the automatic segmentations closely reproduce the borders traced by the radiologist.
Moreover, no tissue containing internal and pleural nodules was removed in the segmentation process, so the method proved fit for use in the framework of a CAD system. Finally, in a comparison with a two-dimensional segmentation procedure, inter-slice smoothness was calculated, showing that the masks created by the 3D algorithm are significantly smoother than those produced by the 2D-only procedure.

93 citations
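The wavefront region growing used above for the airway segmentation can be pictured, in two dimensions, as a breadth-first flood fill with an intensity-based stop condition (a hypothetical, simplified sketch; the actual algorithm works in 3D and adds wavefront-shape stop conditions):

```python
from collections import deque

def region_grow(image, seed, keep):
    """Grow a region from `seed`, adding 4-connected neighbours that satisfy
    the `keep` predicate (here, an intensity test). Wavefronts are processed
    breadth-first, mimicking a simple wavefront simulation."""
    rows, cols = len(image), len(image[0])
    grown = {seed}
    front = deque([seed])
    while front:
        r, c = front.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in grown and keep(image[nr][nc])):
                grown.add((nr, nc))
                front.append((nr, nc))
    return grown
```

On CT data the predicate would typically keep air-like voxels (strongly negative Hounsfield units), so the front expands through the airway lumen and halts at the wall.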


Journal ArticleDOI
TL;DR: This paper reviews volumetric image visualization pipelines, algorithms, and medical applications, and integrates research results relating to new visualization, classification, enhancement, and multimodal data dynamic rendering.
Abstract: With the increasing availability of high-resolution isotropic three- or four-dimensional medical datasets from sources such as magnetic resonance imaging, computed tomography, and ultrasound, volumetric image visualization techniques have grown in importance. Over the past two decades, a number of new algorithms and improvements have been developed for practical clinical image display. More recently, further efficiencies have been attained by designing and implementing volume-rendering algorithms on graphics processing units (GPUs). In this paper, we review volumetric image visualization pipelines, algorithms, and medical applications. We also illustrate our algorithm implementation and evaluation results, and address the advantages and drawbacks of each algorithm in terms of image quality and efficiency. Within the outlined literature review, we have integrated our research results relating to new visualization, classification, enhancement, and multimodal data dynamic rendering. Finally, we discuss issues related to modern GPU working pipelines and their applications in the volume visualization domain.

83 citations
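At the core of most volume-rendering pipelines the review covers is front-to-back alpha compositing along each viewing ray. A minimal sketch for one ray of pre-classified scalar samples (an illustration of the standard compositing recurrence, not code from the paper):

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (colour, opacity) samples along one
    ray, with early ray termination once the ray is effectively opaque."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # attenuate by accumulated transparency
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha
```

Early ray termination is one of the classic acceleration techniques that, together with empty-space skipping, maps well onto GPU implementations.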


Journal ArticleDOI
TL;DR: The traditional watershed transformation is applied between automatically determined internal and external markers to obtain the lesion boundary, and the results confirm the potential of the proposed algorithm for reliable segmentation and quantification of breast lesions in mammograms.
Abstract: Lesion segmentation, a critical step in computer-aided diagnosis systems, is a challenging task because lesion boundaries are usually obscured, irregular, and of low contrast. In this paper, an accurate and robust algorithm for the automatic segmentation of breast lesions in mammograms is proposed. The traditional watershed transformation is applied to the morphological gradient image, smoothed by morphological reconstruction, to obtain the lesion boundary in the belt between the internal and external markers. To determine the internal and external markers automatically, the rough region of the lesion is identified by template matching and thresholding. The internal marker is then determined by a distance transform and the external marker by morphological dilation. The proposed algorithm is quantitatively compared to the dynamic programming boundary tracing method and the plane fitting and dynamic programming method on a set of 363 lesions (size range, 5–42 mm in diameter; mean, 15 mm), using the area overlap metric (AOM), Hausdorff distance (HD), and average minimum Euclidean distance (AMED). The mean ± SD values of AOM, HD, and AMED for our method were 0.72 ± 0.13, 5.69 ± 2.85 mm, and 1.76 ± 1.04 mm, respectively, a better performance than the two other segmentation methods. The results also confirm the potential of the proposed algorithm for reliable segmentation and quantification of breast lesions in mammograms.

82 citations
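The automatic marker construction described above (distance transform for the internal marker, dilation for the external one) can be sketched on a binary mask as follows; a simplified illustration using city-block distances, not the authors' code:

```python
from collections import deque

def distance_transform(mask):
    """City-block distance of each foreground cell to the nearest background
    cell, computed by multi-source breadth-first search."""
    rows, cols = len(mask), len(mask[0])
    dist = [[0 if not mask[r][c] else None for c in range(cols)]
            for r in range(rows)]
    queue = deque((r, c) for r in range(rows) for c in range(cols)
                  if not mask[r][c])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def internal_marker(mask):
    """Internal marker: foreground cells at the peak of the distance transform."""
    dist = distance_transform(mask)
    peak = max(d for row in dist for d in row)
    return {(r, c) for r, row in enumerate(dist)
            for c, d in enumerate(row) if d == peak}

def external_marker(mask, steps=1):
    """External marker: background cells left after dilating the mask."""
    rows, cols = len(mask), len(mask[0])
    grown = {(r, c) for r in range(rows) for c in range(cols) if mask[r][c]}
    for _ in range(steps):
        grown |= {(r + dr, c + dc) for r, c in grown
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))}
    return {(r, c) for r in range(rows) for c in range(cols)
            if (r, c) not in grown}
```

The watershed transform then floods the gradient image from these two marker sets, so the lesion boundary is forced to lie in the belt between them.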


Journal ArticleDOI
TL;DR: This paper presents a fast and efficient method for classifying X-ray images using random forests with a proposed wavelet-based local binary pattern (LBP) descriptor to improve image classification performance and reduce training and testing time.
Abstract: This paper presents a fast and efficient method for classifying X-ray images using random forests with a proposed wavelet-based local binary pattern (LBP) descriptor to improve image classification performance and reduce training and testing time. Most studies on local binary patterns and their modifications, including the centre-symmetric LBP (CS-LBP), focus on using raw image pixels as descriptors. To classify X-ray images, we first extract wavelet-based CS-LBP (WCS-LBP) descriptors from local parts of the images to describe their wavelet-based texture characteristics. We then feed the extracted feature vectors to decision trees to construct random forests, which are ensembles of random decision trees. Using the random forests with local WCS-LBP, each test image is classified into the category with the maximum posterior probability. Compared with other feature descriptors and classifiers, the proposed method shows both improved performance and faster processing time.

79 citations
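The centre-symmetric LBP mentioned above compares the four opposite neighbour pairs of a pixel rather than each neighbour against the centre, halving the code length (4 bits instead of 8). A minimal sketch for one 3×3 patch (in the paper the descriptor is computed on wavelet subbands; plain pixel values are used here purely for illustration):

```python
def cs_lbp_code(patch, t=0.0):
    """Centre-symmetric LBP code of a 3x3 patch: threshold the differences of
    the four diametrically opposite neighbour pairs."""
    # neighbours listed clockwise starting from the top-left corner
    n = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > t:   # compare neighbour i with its opposite
            code |= 1 << i
    return code                    # 4-bit code in [0, 15]
```

A histogram of these codes over an image region gives a compact 16-bin texture descriptor, which is what gets fed to the random forest.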


Journal ArticleDOI
TL;DR: Dicoogle is a PACS archive supported by a document-based indexing system and by peer-to-peer (P2P) protocols, which permits gathering and indexing data from file-based repositories, which allows searching the archive through free text queries.
Abstract: Picture Archiving and Communication Systems (PACS) have been widely deployed in healthcare institutions, and they now constitute a normal commodity for practitioners. However, their installation, maintenance, and utilization are still a burden due to their heavy structures, typically supported by centralized computational solutions. In this paper, we present Dicoogle, a PACS archive supported by a document-based indexing system and by peer-to-peer (P2P) protocols. Replacing traditional relational database storage (RDBMS) with a document-based organization permits gathering and indexing data from file-based repositories, which allows searching the archive through free-text queries. As a direct result of this strategy, more information can be extracted from medical imaging repositories, which clearly increases flexibility compared with current DICOM query and retrieval services. The inclusion of P2P features allows PACS internetworking without the need for a central management framework. Moreover, Dicoogle is easy to install, manage, and use, and it maintains full interoperability with standard DICOM services.

77 citations
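Dicoogle's document-based indexing can be pictured as an inverted index over textual dumps of DICOM metadata, which is what makes free-text queries possible where classic DICOM query/retrieve only matches predefined keys. A toy sketch (hypothetical field names, no ranking or persistence):

```python
import re

def build_index(documents):
    """Build an inverted index mapping each token to the ids of the
    documents (e.g. dumps of DICOM metadata) that contain it."""
    index = {}
    for doc_id, text in documents.items():
        for token in set(re.findall(r"\w+", text.lower())):
            index.setdefault(token, set()).add(doc_id)
    return index

def search(index, query):
    """Free-text AND query: ids of documents containing every query token."""
    tokens = [t.lower() for t in query.split()]
    if not tokens:
        return set()
    result = index.get(tokens[0], set()).copy()
    for t in tokens[1:]:
        result &= index.get(t, set())
    return result
```
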


Journal ArticleDOI
TL;DR: This approach provides cardiac radiologists with a practical method for accurate segmentation of the left ventricle, using image processing and analysis techniques including thresholding, edge detection, mathematical morphology, and image filtering to build an efficient process flow.
Abstract: Segmentation of the left ventricle is important in the assessment of cardiac functional parameters. Manual segmentation of cardiac cine MR images for acquiring these parameters is time-consuming. Accuracy and automation are the two main criteria in improving cardiac image segmentation methods. In this paper, we present a comprehensive approach to automatically segment the left ventricle from short-axis cine cardiac MR images. Our method incorporates a number of image processing and analysis techniques, including thresholding, edge detection, mathematical morphology, and image filtering, to build an efficient process flow. This process flow exploits various features of cardiac MR images to achieve highly accurate segmentation results. Our method was tested on 45 clinical short-axis cine cardiac images, and the results were compared with manually delineated ground truth (average perpendicular distance of contours near 2 mm and mean myocardium mass overlap over 90%). This approach provides cardiac radiologists with a practical method for accurate segmentation of the left ventricle.

76 citations
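A standard way to automate the thresholding step in a pipeline like the one above is Otsu's method, which picks the grey level that maximises the between-class variance of the histogram. A compact sketch (illustrative only; the abstract does not state which thresholding rule the authors use):

```python
def otsu_threshold(hist):
    """Otsu's method on a grey-level histogram: return the threshold that
    maximises between-class variance."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h                      # pixels in class 0 (values <= t)
        sum0 += t * h
        w1 = total - w0              # pixels in class 1 (values > t)
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum0 / w0               # class means
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t                    # pixels <= best_t form one class
```
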


Journal ArticleDOI
TL;DR: This research applied hue-saturation-value brightness correction and contrast-limited adaptive histogram equalization to fundus images, extracted hemorrhages using template matching with normalized cross-correlation, and analyzed the causes of false positives (FPs) and false negatives in the detection of retinal hemorrhage.
Abstract: Image processing of fundus images is performed for the early detection of diabetic retinopathy. Several recent studies have proposed that a morphological filter may help extract hemorrhages from the fundus image; however, extraction of hemorrhages using template matching with templates of various shapes has not been reported. In our study, we applied hue-saturation-value brightness correction and contrast-limited adaptive histogram equalization to fundus images. Then, using template matching with normalized cross-correlation, candidate hemorrhages were extracted. Region growing thereafter reconstructed the shape of the hemorrhages, which enabled us to calculate their size. To reduce the number of false positives, compactness and the ratio of bounding boxes were used. We also used the 5 × 5 kernel value of the hemorrhage and a foveal filter as further false-positive reduction methods. In addition, we analyzed the causes of false positives (FPs) and false negatives in the detection of retinal hemorrhage. Combining template matching in various ways, our program achieved a sensitivity of 85% at 4.0 FPs per image. The results of our research may help clinicians diagnose diabetic retinopathy and might be a useful tool for the early detection of diabetic retinopathy progression, especially in telemedicine.
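Template matching with normalized cross-correlation, as used above for candidate extraction, scores each image window by its correlation with the template after removing mean and scale, so the score is invariant to local brightness and contrast. A small brute-force sketch with a single template (the study sweeps templates of various shapes):

```python
def ncc(window, template):
    """Normalized cross-correlation between two equal-length flattened patches."""
    n = len(window)
    mw = sum(window) / n
    mt = sum(template) / n
    num = sum((w - mw) * (t - mt) for w, t in zip(window, template))
    den = (sum((w - mw) ** 2 for w in window)
           * sum((t - mt) ** 2 for t in template)) ** 0.5
    return num / den if den else 0.0   # flat windows score 0

def match_template(image, template):
    """Slide the template over the image; return the (row, col) offset with
    the highest NCC score, i.e. the strongest candidate."""
    th, tw = len(template), len(template[0])
    flat_t = [v for row in template for v in row]
    best, best_score = (0, 0), -2.0
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            flat_w = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            score = ncc(flat_w, flat_t)
            if score > best_score:
                best, best_score = (r, c), score
    return best, best_score
```

In practice every offset whose score exceeds a threshold would be kept as a candidate, then pruned by the shape-based false-positive reduction steps described above.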

Journal ArticleDOI
TL;DR: OsiriX mobile, a new Digital Imaging and Communications in Medicine viewing program, is available for the iPhone/iPod touch platform, raising the possibility of mobile review of diagnostic medical images to expedite diagnosis and treatment planning using a commercial off-the-shelf solution.
Abstract: Medical imaging is commonly used to diagnose many emergent conditions, as well as to plan treatment. Digital images can be reviewed on almost any computing platform. Modern mobile phones and handheld devices are portable computing platforms with robust software programming interfaces, powerful processors, and high-resolution displays. OsiriX mobile, a new Digital Imaging and Communications in Medicine viewing program, is available for the iPhone/iPod touch platform. This raises the possibility of mobile review of diagnostic medical images to expedite diagnosis and treatment planning using a commercial off-the-shelf solution, facilitating communication among radiologists and referring clinicians.

Journal ArticleDOI
TL;DR: Comparison with other databases currently available has shown that the presented database has a sufficient number of images, is of high quality, and is the only one to include a functional search system.
Abstract: Considering the difficulty of finding good-quality images for the development and testing of computer-aided diagnosis (CAD) schemes, this paper presents a public online database of mammographic images, free for all interested viewers and aimed at helping develop and evaluate CAD schemes. The mammographic images are digitized with contrast and spatial resolution suitable for processing purposes. A comprehensive retrieval system allows the user to search for different images, exams, or patient characteristics. Comparison with other currently available databases has shown that the presented database has a sufficient number of images, is of high quality, and is the only one to include a functional search system.

Journal ArticleDOI
TL;DR: The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels and performs better than other known algorithms in terms of accuracy.
Abstract: This paper focuses on the detection of retinal blood vessels, which plays a vital role in reducing proliferative diabetic retinopathy and preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is well suited for fast processing applications.

Journal ArticleDOI
TL;DR: A fully automated computer-aided detection (CAD) scheme for detecting aneurysms on 3D time-of-flight (TOF) MRA images showed good accuracy and may have application in improving the sensitivity of aneurysm detection on MR images.
Abstract: Intracranial aneurysms represent a significant cause of morbidity and mortality. While the risk factors for aneurysm formation are known, the detection of aneurysms remains challenging. Magnetic resonance angiography (MRA) has recently emerged as a useful non-invasive method for aneurysm detection. However, even for experienced neuroradiologists, the sensitivity to small (<5 mm) aneurysms in MRA images is poor, on the order of 30–60% in recent, large series. We describe a fully automated computer-aided detection (CAD) scheme for detecting aneurysms on 3D time-of-flight (TOF) MRA images. The scheme locates points of interest (POIs) on individual MRA datasets by combining two complementary techniques. The first technique segments the intracranial arteries automatically and finds POIs from the segmented vessels. The second technique identifies POIs directly from the raw, unsegmented image dataset. This latter technique is useful in cases of incomplete segmentation. Following a series of feature calculations, a small fraction of POIs are retained as candidate aneurysms from the collected POIs according to predetermined rules. The CAD scheme was evaluated on 287 datasets containing 147 aneurysms that were verified with digital subtraction angiography, the accepted standard of reference for aneurysm detection. For two different operating points, the CAD scheme achieved a sensitivity of 80% (71% for aneurysms less than 5 mm) with three mean false positives per case, and 95% (91% for aneurysms less than 5 mm) with nine mean false positives per case. In conclusion, the CAD scheme showed good accuracy and may have application in improving the sensitivity of aneurysm detection on MR images.

Journal ArticleDOI
TL;DR: The accuracy and reproducibility of cone-beam computed tomography (CBCT) measurements of a human dry skull were assessed against direct digital caliper measurements; the accuracy of distance measurements obtained from different CBCT units and image types proved comparable to that of digital caliper measurements.
Abstract: The purpose of this study is to assess the accuracy and reproducibility of cone-beam computed tomography (CBCT) measurements of a human dry skull by comparing them to direct digital caliper measurements. Heated gutta-percha was used to mark 13 specific distances on a human skull, and the distances were directly measured using a digital caliper and on CBCT images obtained with Iluma (3M Imtec, OK, USA) and 3D Accuitomo 170 (3D Accuitomo; J Morita Mfg. Corp., Kyoto, Japan) CBCT imaging systems. Iluma images were obtained at 120 kVp and 3.8 mA and reconstructed using voxel sizes of 0.2 and 0.3 mm³. Accuitomo images were obtained at 60 kVp and 2 mA and a voxel size of 0.250 mm³. In addition, 3-D reconstructions were produced from images obtained from both systems. All measurements were made independently by three trained observers and were repeated after an interval of 1 week. Agreement between observers and image types was assessed by calculating Pearson correlation coefficients, with the level of significance set at p < 0.05. Pearson correlation coefficients between readings ranged from 0.995 to 1 for all image types. Correlations among observers were also very high, ranging from 0.992 to 1 for both the first and the second readings across the different image types. All CBCT image measurements were identical and highly correlated with digital caliper measurements. Accuracy of measurements of various distances on a human skull obtained from different CBCT units and image types is comparable to that of digital caliper measurements.
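The inter-observer and inter-modality agreement reported above is quantified with the Pearson correlation coefficient. For reference, a minimal sketch of the computation on two series of caliper-style measurements:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Values near 1 (as in the 0.992-1 range reported) indicate near-perfect linear agreement between the two sets of readings.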

Journal ArticleDOI
Steve G. Langer1
TL;DR: This paper outlines an approach to managing the main objectives faced by medical imaging scientists whose work includes processing and data mining of non-standard file formats, and relating those files to their DICOM-standard descendants.
Abstract: Researchers in medical imaging face multiple challenges in storing, indexing, maintaining the viability of, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served by an outsourcing strategy for some management aspects. This paper outlines an approach to managing the main objectives faced by medical imaging scientists whose work includes processing and data mining of non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales with the researcher’s needs by leveraging the on-demand provisioning ability of cloud computing.

Journal ArticleDOI
TL;DR: The next generation of picture archiving and communication systems will be leveraging cloud technology, providing massively scalable applications as well as highly managed remote services.
Abstract: Cloud computing has gathered significant attention from information technology (IT) vendors in providing massively scalable applications as well as highly managed remote services. What is cloud computing and how will it impact the medical IT market? Will the next generation of picture archiving and communication systems be leveraging cloud technology?

Journal ArticleDOI
TL;DR: A fast, fully automatic segmentation algorithm based on statistical model analysis and improved curve evolution for extracting the 3-D cerebral vessels from a time-of-flight (TOF) MRA dataset is presented and its accuracy and speed make this novel algorithm more suitable for a clinical computer-aided diagnosis system.
Abstract: The precise three-dimensional (3-D) segmentation of cerebral vessels from magnetic resonance angiography (MRA) images is essential for the detection of cerebrovascular diseases (e.g., occlusion, aneurysm). The complex 3-D structure of cerebral vessels and the low contrast of thin vessels in MRA images make precise segmentation difficult. We present a fast, fully automatic segmentation algorithm based on statistical model analysis and improved curve evolution for extracting the 3-D cerebral vessels from a time-of-flight (TOF) MRA dataset. Cerebral vessels and other tissue (brain tissue, CSF, and bone) in the TOF MRA dataset are modeled by a Gaussian distribution and by a combination of a Rayleigh distribution with several Gaussian distributions, respectively. The region distribution, combined with gradient information, is used in the edge-strength function of the curve evolution as one novel model. This edge-strength function is able to determine the boundary of thin, low-contrast vessels around brain tissue accurately and robustly. Moreover, a fast level set method is developed to implement the curve evolution and ensure high efficiency of the cerebrovascular segmentation. Quantitative comparisons with 10 sets of manual segmentation results showed that the average volume sensitivity, the average branch sensitivity, and the average mean absolute distance error are 93.6%, 95.98%, and 0.333 mm, respectively. By applying the algorithm to 200 clinical datasets from three hospitals, it is demonstrated that the proposed algorithm can provide good-quality segmentation capable of extracting a vessel with a one-voxel diameter in less than 2 min. Its accuracy and speed make this novel algorithm well suited for a clinical computer-aided diagnosis system.
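The intensity model above assigns vessel voxels a Gaussian distribution and background a Rayleigh-plus-Gaussians mixture. The resulting per-voxel decision can be stripped down to a two-hypothesis likelihood comparison (illustrative only: a single Rayleigh stands in for the full background mixture, and the distribution parameters and prior below are invented, not fitted to data as in the paper):

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def rayleigh_pdf(x, sigma):
    return (x / sigma ** 2) * math.exp(-x ** 2 / (2 * sigma ** 2)) if x >= 0 else 0.0

def classify_voxel(intensity, vessel=(220.0, 20.0), background_sigma=60.0,
                   prior_vessel=0.05):
    """Label a voxel 'vessel' when its posterior under the bright Gaussian
    vessel model exceeds the posterior under the Rayleigh background model.
    All parameters here are hypothetical placeholders."""
    p_vessel = prior_vessel * gaussian_pdf(intensity, *vessel)
    p_background = (1.0 - prior_vessel) * rayleigh_pdf(intensity, background_sigma)
    return "vessel" if p_vessel > p_background else "background"
```

In the actual algorithm this region statistic is not used as a hard classifier but is folded, together with gradient information, into the edge-strength function that drives the level set evolution.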

Journal ArticleDOI
TL;DR: This paper presents an automatic computer-aided detection scheme on digital chest radiographs to detect pneumoconiosis that has higher accuracy and more convenient interaction, and is very helpful for mass screening of pneumoconiosis in the clinic.
Abstract: This paper presents an automatic computer-aided detection scheme for pneumoconiosis on digital chest radiographs. First, the lung fields are segmented from a digital chest X-ray image using the active shape model method. The lung fields are then subdivided into six non-overlapping regions, according to the Chinese diagnostic criteria for pneumoconiosis. A multi-scale difference filter bank is applied to the chest image to enhance the details of the small opacities, and texture features are calculated from each region of the original and the processed images, respectively. After the most relevant features are selected from the feature sets, support vector machine classifiers are used to separate the samples into normal and abnormal sets. Finally, the overall classification is performed from the chest-based report and the classification probability values of the six regions. Experiments were conducted on randomly selected images from our chest database; both the training and the testing sets contain 300 normal and 125 pneumoconiosis cases. In the training phase, training models and weighting factors for each region are derived. We evaluated the scheme using either the full or the selected feature vectors of the testing set, and the results show high classification performance. Compared with previous methods, our fully automated scheme achieves higher accuracy with more convenient interaction, and is very helpful for mass screening of pneumoconiosis in the clinic.

Journal ArticleDOI
TL;DR: This paper presents an edge following technique for boundary extraction in carpal bone images and applies it to assess bone age in young children, and shows that the SVR is able to provide more accurate bone age assessment results than the NNR.
Abstract: Boundary extraction of carpal bone images is a critical operation in an automatic bone age assessment system, since the contrast between the bony structure and the soft tissue is very poor. In this paper, we present an edge-following technique for boundary extraction in carpal bone images and apply it to assess bone age in young children. Our proposed technique can detect the boundaries of carpal bones in X-ray images using information from the vector image model and the edge map. Feature analysis of the carpal bones can reveal important information for bone age assessment. Five features for bone age assessment are calculated from the boundary extraction result of each carpal bone. All features are taken as input to support vector regression (SVR), which assesses the bone age. We compare the SVR with neural network regression (NNR). We use 180 carpal bone images from a digital hand atlas to assess the bone age of young children from 0 to 6 years old. Leave-one-out cross-validation is used to test the efficiency of the techniques. The opinions of the skilled radiologists provided in the atlas are used as the ground truth in bone age assessment. The SVR is able to provide more accurate bone age assessment results than the NNR, and the experimental results from the SVR are very close to the bone age assessments by skilled radiologists.
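Leave-one-out cross-validation, used above to compare the SVR and NNR models, refits the model on all-but-one sample and tests it on the held-out one, cycling through the whole dataset. A sketch with a k-nearest-neighbour regressor standing in for the SVR (a hypothetical stand-in chosen only to keep the example dependency-free):

```python
def knn_regress(train, query, k=3):
    """k-nearest-neighbour regression: average the targets of the k training
    samples closest (in squared Euclidean distance) to the query features."""
    ranked = sorted(train, key=lambda s: sum((a - b) ** 2
                                             for a, b in zip(s[0], query)))
    return sum(target for _, target in ranked[:k]) / k

def leave_one_out_mae(samples, k=3):
    """Mean absolute error under leave-one-out cross-validation.
    `samples` is a list of (feature_tuple, target) pairs."""
    errors = []
    for i, (features, target) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]   # hold out sample i
        errors.append(abs(knn_regress(train, features, k) - target))
    return sum(errors) / len(samples)
```

With only 180 atlas images, leave-one-out makes maximal use of the data, which is why it is the natural validation choice in the study.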

Journal ArticleDOI
TL;DR: The “DICOM Index Tracker©” (DIT) transparently captures desired digital imaging and communications in medicine (DICom) tags from CT, nuclear imaging equipment, and other DICOM devices across an enterprise and is standardized for international comparisons.
Abstract: The U.S. national press has brought concerns regarding the use of medical radiation, specifically x-ray computed tomography (CT), in diagnosis into full public discussion. A need exists for methods that give assurance that all diagnostic medical radiation use is properly prescribed and that all patients’ radiation exposure is monitored. The “DICOM Index Tracker©” (DIT) transparently captures desired Digital Imaging and Communications in Medicine (DICOM) tags from CT, nuclear imaging equipment, and other DICOM devices across an enterprise. Its initial use is recording, monitoring, and providing automatic alerts to medical professionals of excursions beyond internally determined trigger action levels of radiation. A flexible knowledge base, aware of the equipment in use, automatically alerts system administrators to newly identified equipment models or software versions so that DIT can be adapted to the new equipment or software. A dosimetry module accepts mammography breast organ dose, skin air kerma values from XA modalities, exposure indices from computed radiography, etc. upon receipt. Effective dose calculations follow the methodology recommended by the American Association of Physicists in Medicine and are performed for CT units that provide DICOM structured dose reports. Web interface reporting provides real-time access to the database. DIT is DICOM-compliant and is thus standardized for international comparisons. Automatic alerts currently in use include email, cell phone text messages, and internal pager text messages. This system extends the utility of DICOM for standardizing the capture and computation of radiation dose as well as other quality measures.
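A minimal sketch of DIT-style tag capture and trigger-level alerting follows, assuming a plain dictionary stands in for a parsed DICOM header; the tag names kept, the dropped patient identifier, and the 80 mGy action level are all hypothetical.

```python
# Tags the tracker records (hypothetical subset) and a locally set
# trigger action level for dose excursions.
TAGS_OF_INTEREST = {"StationName", "StudyDate", "CTDIvol", "DLP"}
TRIGGER_CTDIVOL_MGY = 80.0

def capture(header):
    """Keep only the tags of interest from an incoming header (a dict
    standing in for a parsed DICOM dataset) and flag dose excursions."""
    record = {k: v for k, v in header.items() if k in TAGS_OF_INTEREST}
    alerts = []
    if float(record.get("CTDIvol", 0)) > TRIGGER_CTDIVOL_MGY:
        alerts.append("CTDIvol above trigger level")
    return record, alerts

record, alerts = capture({"StationName": "CT01", "StudyDate": "20110301",
                          "CTDIvol": 95.2, "DLP": 1200.0,
                          "PatientName": "ignored"})
```

In a real deployment the header would come from a DICOM listener (e.g., via pydicom), and the alerts would fan out to the email/pager channels the abstract describes.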

Journal ArticleDOI
TL;DR: This approach fully addresses current limitations in meeting accreditation criteria, eliminates the need for paper logs at an XA console, and enables automated ALARA monitoring, including email and pager alerts.
Abstract: This software tool locates and computes the intensity of radiation skin dose resulting from fluoroscopically guided interventional procedures. It comprises multiple modules. Using standardized, body-specific geometric values, a software module defines a set of male and female patients arbitrarily positioned on a fluoroscopy table. Simulated X-ray angiographic (XA) equipment includes XRII and digital detectors, with or without bi-plane configurations, and left- and right-facing tables. Skin dose estimates are localized by computing the exposure to each 0.01 × 0.01 m² patch on the surface of a patient irradiated by the X-ray beam. A modular dosimetry database automatically extracts, from incoming Digital Imaging and Communications in Medicine (DICOM) Structured Report dose data, the 11 XA tags necessary for peak skin dose computation. The skin dose calculation software uses these tags (gantry angles, air kerma at the patient entrance reference point, etc.) and applies appropriate corrections of exposure and beam location for each irradiation event (fluoroscopy and acquisitions). A physicist screen records, once per system, the initial validation of accuracy, patient and equipment geometry, DICOM compliance, exposure output calibration, backscatter factor, and table and pad attenuation. A technologist screen specifies patient positioning, patient height and weight, and physician user. Peak skin dose is computed and localized; additionally, fluoroscopy duration and kerma area product values are electronically recorded and sent to the XA database. This approach fully addresses current limitations in meeting accreditation criteria, eliminates the need for paper logs at an XA console, and enables automated ALARA monitoring, including email and pager alerts.
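The per-patch dose bookkeeping can be sketched as follows; the grid size, event geometry, and kerma values are illustrative, and real use would first apply the gantry-angle, backscatter, and attenuation corrections described above.

```python
import numpy as np

# Illustrative peak-skin-dose accumulation: the patient surface is a grid
# of 0.01 x 0.01 m patches; each irradiation event deposits its corrected
# air kerma onto the patches inside the beam footprint.
GRID = np.zeros((180, 60))   # ~1.8 m x 0.6 m surface, 1-cm patches

def add_event(grid, row, col, half_rows, half_cols, kerma_mgy):
    """Add one fluoroscopy run or acquisition as a rectangular footprint
    centered at (row, col); overlapping fields accumulate."""
    r0, r1 = max(0, row - half_rows), min(grid.shape[0], row + half_rows)
    c0, c1 = max(0, col - half_cols), min(grid.shape[1], col + half_cols)
    grid[r0:r1, c0:c1] += kerma_mgy

add_event(GRID, 90, 30, 5, 5, 120.0)   # fluoroscopy run
add_event(GRID, 92, 30, 5, 5, 200.0)   # overlapping acquisition
peak_skin_dose = float(GRID.max())     # mGy at the hottest patch
```

The peak is the maximum over patches, not the sum of event doses, which is why localization on a grid matters when fields only partially overlap.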

Journal ArticleDOI
TL;DR: A systematic review of the types of CBCT-based DICOM images that have been used to evaluate the fate of bone grafts in humans, together with software suggested in the literature to test DICOM-based data sets, exemplifying the effect of variation in selected parameters on the final image characteristics.
Abstract: Previous studies suggest that cone beam computerized tomography (CBCT) images could provide reliable information regarding the fate of bone grafts in the maxillofacial region, but no systematic information regarding the standardization of CBCT settings and properties is available, i.e., there is a lack of information on how the images were generated, exported, and analyzed when bone grafts were evaluated. The aims of this study were to (1) systematically review which types of CBCT-based DICOM images have been used for the evaluation of the fate of bone grafts in humans and (2) use software suggested in the literature to test DICOM-based data sets, exemplifying the effect of variation in selected parameters (windowing/contrast control, plane definition, slice thickness, and number of measured slices) on the final image characteristics. The review identified three publications that used CBCT to evaluate maxillofacial bone grafts in humans and whose methodology/results comprised at least one of the expected outcomes (image acquisition protocol, image reconstruction, and image generation information). The experimental part shows how the information missing from the retrieved papers can influence the reproducibility and validity of image measurements. Although the use of CBCT-based images for the evaluation of bone grafts in humans has become more common, this has not been matched by better standardization of the studies performed. Parameters regarding image acquisition and reconstruction, though important, are not properly addressed in the literature, compromising the reproducibility and scientific impact of the studies.

Journal ArticleDOI
TL;DR: Micro-CT was found to be the best imaging method for the ex vivo measurement of occlusal caries depth and both CBCT units performed similarly and better than intra-oral modalities.
Abstract: The study aimed to assess the accuracy and reproducibility of occlusal caries depth measurements obtained from different imaging modalities. The study comprised 21 human mandibular molar teeth with occlusal caries. Teeth were imaged using film, a CCD sensor, two different cone-beam computerized tomography (CBCT) units, and a micro-computed tomography (micro-CT) unit. Thereafter, each tooth was serially sectioned, and the section with the deepest carious lesion was scanned using a high-resolution scanner. Each image set was viewed separately by three oral radiologists. Images were viewed randomly, and each set was viewed twice. Lesion depth was measured on film images using a digital caliper, on CCD and CBCT images using built-in measurement tools, on micro-CT images using the Mimics software program, and on histological images using AxioVision Rel. 4.7. Intra- and inter-rater reliabilities were assessed according to the Bland–Altman method and by calculating intraclass correlation coefficients (ICCs). Mean/median values obtained with intraoral systems were lower than those obtained with 3-D and histological images for all observers and both readings. Intra-observer ICC values for all observers were highest for histology and micro-CT. In addition, intra-observer ICC values were higher for histology and CBCT than for histology and intra-oral methods. Inter-observer ICC values for the first and second readings were high for all observers. No differences in repeatability were found between Accuitomo and Iluma CBCT images or between intra-oral film and CCD images. Micro-CT was found to be the best imaging method for the ex vivo measurement of occlusal caries depth. In addition, both CBCT units performed similarly and better than the intra-oral modalities.
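The reliability analysis can be illustrated with a simple one-way random-effects ICC(1,1); the abstract does not state which ICC form was used, and the depth readings below are invented.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n_subjects x k_readings
    matrix: (MSB - MSW) / (MSB + (k-1) * MSW). A simplified stand-in for
    the paper's reliability analysis."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    # Between-subject and within-subject mean squares.
    msb = k * ((r.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((r - r.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two nearly identical caries-depth readings (mm) -> ICC close to 1.
depths = [[1.2, 1.3], [2.0, 2.1], [0.8, 0.8], [3.1, 3.0], [1.5, 1.6]]
icc = icc_oneway(depths)
```

High ICC here means the between-tooth variance dominates the disagreement between readings, which is the sense in which micro-CT and histology "agreed best" above.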

Journal ArticleDOI
TL;DR: A centralized, server-based solution for the collection, archival, and distribution of rejected image and exposure indicator data that automates the data collection process and demonstrates that reject analysis is still necessary and useful in the era of digital imaging.
Abstract: Rejected images represent both unnecessary radiation exposure to patients and inefficiency in the imaging operation. Rejected images are inherent to projection radiography, where patient positioning and alignment are integral components of image quality. Patient motion and artifacts unique to digital image receptor technology can also result in rejected images. We present a centralized, server-based solution for the collection, archival, and distribution of rejected image and exposure indicator data that automates the data collection process. Reject analysis program (RAP) and exposure indicator data were collected and analyzed during a 1-year period. RAP data were sorted both by reason for repetition and by body part examined. Data were also stratified by clinical area for further investigation. The monthly composite reject rate for our institution fluctuated between 8% and 10%. Positioning errors were the main cause of repeated images (77.3%). Stratification of the data by clinical area revealed that areas where computed radiography (CR) is seldom used suffer from higher reject rates than areas where it is used frequently. S values were log-normally distributed for examinations performed under either manual or automatic exposure control; the distributions were positively skewed and leptokurtic. Decreases in S values associated with radiologic technology student rotations and with CR plate reader calibrations were observed. Our data demonstrate that reject analysis is still necessary and useful in the era of digital imaging. It is vital, though, that it be combined with exposure indicator analysis, as digital radiography is not self-policing in terms of exposure. When combined, the two programs are a powerful tool for quality assurance.
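The two bookkeeping steps, a composite reject rate and log-domain summaries of the S values, can be sketched as follows; the counts and exposure indices are hypothetical.

```python
import math
import statistics

def reject_rate(rejected, total):
    """Composite reject rate as a percentage of all acquired images."""
    return 100.0 * rejected / total

monthly = reject_rate(rejected=412, total=4580)   # hypothetical month

# S values are reported log-normally distributed, so summary statistics
# are better taken on log(S); the geometric mean is the natural center.
s_values = [180, 200, 220, 260, 300, 340, 410, 520, 700, 950]
log_mean = statistics.mean(math.log(s) for s in s_values)
geometric_mean_s = math.exp(log_mean)
```

Trending `geometric_mean_s` per clinical area is one way the drifts attributed above to student rotations and plate reader calibrations would surface.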

Journal ArticleDOI
TL;DR: The BIMM system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies, and is evaluated on a set of annotated liver lesion images.
Abstract: Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations (“semantic” metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM), to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files via a Web service and parses and stores the metadata in a relational database, enabling efficient data query and maintenance. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in the search results. The system’s “match observations” function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics such as disease prevalence. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.
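A "match observations"-style query can be illustrated as a simple overlap ranking of semantic feature sets; the Jaccard measure and the IOC vocabulary below are illustrative assumptions, not the system's actual similarity metric.

```python
def ioc_similarity(a, b):
    """Jaccard overlap between two sets of imaging observation
    characteristics (IOCs): shared terms over all terms."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical stored annotations (IOC terms invented for illustration).
database = {
    "img1": {"hypervascular", "circumscribed margin", "homogeneous"},
    "img2": {"rim enhancement", "irregular margin", "heterogeneous"},
    "img3": {"hypervascular", "irregular margin", "heterogeneous"},
}

# Rank stored images by IOC overlap with the query image's annotations.
query = {"hypervascular", "circumscribed margin", "homogeneous"}
ranked = sorted(database, key=lambda k: ioc_similarity(query, database[k]),
                reverse=True)
```

In the real system the IOC terms come from the standards-based annotation files, and the ranking is served through the Web application alongside the rendered ROIs.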

Journal ArticleDOI
TL;DR: A semi-automated segmentation method for magnetic resonance images of the quadriceps muscles using an anatomically anchored, template-based initialization of the level set-based segmentation approach, which captures individual anatomical variations in the image to be segmented.
Abstract: In this paper, we present a semi-automated segmentation method for magnetic resonance images of the quadriceps muscles. Our method uses an anatomically anchored, template-based initialization of the level set-based segmentation approach. The method only requires the input of a single point from the user inside the rectus femoris. The templates are quantitatively selected from a set of images based on modes in the patient population, namely, sex and body type. For a given image to be segmented, a template is selected based on the smallest Kullback–Leibler divergence between the histograms of that image and the set of templates. The chosen template is then employed as an initialization for a level set segmentation, which captures individual anatomical variations in the image to be segmented. Images from 103 subjects were analyzed using the developed method. The algorithm was trained on a randomly selected subset of 50 subjects (25 men and 25 women) and tested on the remaining 53 subjects. The performance of the algorithm on the test set was compared against the ground truth using the Zijdenbos similarity index (ZSI). The average ZSI means and standard deviations against two different manual readers were as follows: rectus femoris, 0.78 ± 0.12; vastus intermedius, 0.79 ± 0.10; vastus lateralis, 0.82 ± 0.08; and vastus medialis, 0.69 ± 0.16.
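The template-selection step follows directly from the description: pick the template whose intensity histogram minimizes the Kullback–Leibler divergence from the image's histogram. The three-bin histograms and template names below are toy stand-ins.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D(p||q) between two histograms; a small
    epsilon avoids log(0) for empty bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def select_template(image_hist, template_hists):
    """Return the template key with the smallest KL divergence from the
    image to be segmented."""
    return min(template_hists,
               key=lambda k: kl_divergence(image_hist, template_hists[k]))

# Toy templates keyed by population mode (sex/body type in the paper).
templates = {"male_lean": [0.5, 0.3, 0.2], "female_lean": [0.2, 0.3, 0.5]}
best = select_template([0.45, 0.35, 0.20], templates)
```

The selected template then seeds the level set, which adapts the generic shape to the individual anatomy.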

Journal ArticleDOI
TL;DR: The purpose of this study was to ascertain the error rates of using a voice recognition (VR) dictation system, compare the results with several other articles, and discuss the pros and cons of using such a system.
Abstract: The purpose of this study is to ascertain the error rates of using a voice recognition (VR) dictation system. We compared our results with several other articles and discuss the pros and cons of using such a system. The study was performed at the Southern Health Department of Diagnostic Imaging, Melbourne, Victoria, using the GE RIS with the Powerscribe 3.5 VR system. Fifty random finalized reports from 19 radiologists obtained between June 2008 and November 2008 were scrutinized for errors in six categories, namely wrong word substitution, deletion, punctuation, other, and nonsense phrase. Reports were also divided into two categories: computed radiography (CR = plain film) and non-CR (ultrasound, computed tomography, magnetic resonance imaging, nuclear medicine, and angiographic examinations). Errors were divided into two categories: significant but not likely to alter patient management, and very significant, with the meaning of the report affected and patient management thus potentially affected (nonsense phrase). Three hundred seventy-nine finalized CR reports and 631 finalized non-CR reports were examined. Eleven percent of the reports in the CR group had errors; 2% of these reports contained nonsense phrases. Thirty-six percent of the reports in the non-CR group had errors, and of these, 5% contained nonsense phrases. A VR dictation system is a double-edged sword: while there are many benefits, there are also many pitfalls. We hope that raising awareness of the error rates will help in our efforts to reduce them and to strike a balance between the quality and the speed of the reports generated.

Journal ArticleDOI
TL;DR: A completely user-independent algorithm, which automatically extracts the far double line (lumen–intima and media–adventitia) in the carotid artery using an Edge Flow technique based on directional probability maps using the attributes of intensity and texture.
Abstract: Evaluation of the carotid artery wall is essential for the diagnosis of cardiovascular pathologies and for the assessment of a patient’s cardiovascular risk. This paper presents a completely user-independent algorithm that automatically extracts the far double line (lumen–intima and media–adventitia) in the carotid artery using an edge flow technique based on directional probability maps built from intensity and texture attributes. Specifically, the algorithm traces the boundary between the lumen and the intima layer (line one) and that between the media and the adventitia layer (line two). The Carotid Automated Ultrasound Double Line Extraction System based on Edge-Flow (CAUDLES-EF) is characterized and validated by comparing the output of the algorithm with manual tracings carried out by three experts. We also benchmark the new technique against the two other completely automatic techniques we previously published (CALEXia and CULEXsa). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images of normal and pathologic arteries. The mean ± standard deviation IMT error for CALEXia, CULEXsa, and CAUDLES-EF was 0.134 ± 0.088, 0.074 ± 0.092, and 0.043 ± 0.097 mm, respectively. Our IMT was slightly underestimated with respect to the ground truth IMT but showed uniform behavior over the entire database. Regarding the figure of merit (FoM), CALEXia and CULEXsa showed values of 84.7% and 91.5%, respectively, while our new approach, CAUDLES-EF, performed best at 94.8%, a good improvement over the previous methods.
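The benchmarking metrics can be illustrated as follows; the FoM form used here (percent agreement of the mean IMT with the ground-truth mean) is one common definition and may differ in detail from the paper's, and the IMT values are invented.

```python
import statistics

def imt_bias(auto, gt):
    """Mean and standard deviation of the per-image absolute IMT error."""
    errors = [abs(a - g) for a, g in zip(auto, gt)]
    return statistics.mean(errors), statistics.stdev(errors)

def figure_of_merit(auto, gt):
    """FoM as percent agreement between mean automated and mean
    ground-truth IMT (one common definition, assumed here)."""
    mu_a, mu_g = statistics.mean(auto), statistics.mean(gt)
    return 100.0 * (1.0 - abs(mu_a - mu_g) / mu_g)

gt   = [0.60, 0.72, 0.85, 0.95, 1.10]   # ground-truth IMT, mm
auto = [0.58, 0.70, 0.80, 0.93, 1.05]   # automated IMT, mm (underestimates)
bias = imt_bias(auto, gt)
fom = figure_of_merit(auto, gt)
```

A systematic underestimation like the one above lowers the FoM even when the per-image errors are uniform, matching the behavior the abstract reports.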