
Showing papers presented at "Computer Assisted Radiology and Surgery in 2013"


Journal ArticleDOI
16 Apr 2013
TL;DR: The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control.
Abstract: The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today’s and tomorrow’s clinically motivated research.
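
As an aside, the modular, service-oriented design described above can be illustrated with a minimal service-registry sketch. The names used here (ServiceRegistry, ImageIOService, NiftiReader) are hypothetical and do not correspond to MITK's actual C++/OSGi API; the sketch only conveys the registration-and-lookup pattern.

```python
from typing import Dict, List


class ServiceRegistry:
    """Maps a service interface name to registered providers (hypothetical API)."""

    def __init__(self) -> None:
        self._services: Dict[str, List[object]] = {}

    def register(self, interface: str, provider: object) -> None:
        self._services.setdefault(interface, []).append(provider)

    def get(self, interface: str) -> object:
        providers = self._services.get(interface, [])
        if not providers:
            raise LookupError(f"no provider registered for {interface}")
        return providers[0]  # a real framework would rank/filter providers


class NiftiReader:
    def read(self, path: str) -> None:
        print(f"reading image from {path}")


registry = ServiceRegistry()
registry.register("ImageIOService", NiftiReader())
registry.get("ImageIOService").read("/data/ct_volume.nii.gz")
```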

359 citations


Journal ArticleDOI
13 Apr 2013
TL;DR: A new descriptor based on the divergence of the gradient, histograms of gradient divergence (HGD), proved to be a feasible predictor for the diagnosis of breast masses and showed promising capability for describing such round-shaped lesions.
Abstract: Breast cancer computer-aided diagnosis (CADx) may utilize image descriptors, demographics, clinical observations, or a combination. CADx performance was compared for several image features, clinical descriptors (e.g. age and radiologist’s observations), and combinations of both kinds of data. A novel descriptor invariant to rotation, histograms of gradient divergence (HGD), was developed to deal with round-shaped objects, such as masses. HGD was compared with conventional CADx features. HGD and 11 conventional image descriptors were evaluated using cases from two publicly available mammography data sets, the digital database for screening mammography (DDSM) and the breast cancer digital repository (BCDR), with 1,762 and 362 instances, respectively. Three experiments were done for each data set according to the type of lesion (i.e., all lesions, masses, and calcifications), resulting in six scenarios. For each scenario, 100 training and test sets were generated via resampling without replacement, and five machine learning classifiers were used to assess the diagnostic performance of the descriptors. Clinical descriptors outperformed image descriptors in the DDSM sample (three out of six scenarios), and combining the two kinds of descriptors was advantageous in five out of six scenarios. HGD was the best descriptor (or comparable to the best) in 8 out of 12 scenarios, demonstrating promising capabilities for describing masses. The combination of clinical data and image descriptors was advantageous in most mammography CADx scenarios. The new descriptor based on the divergence of the gradient (HGD) thus proved to be a feasible predictor for the diagnosis of breast masses.
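
The abstract does not spell out the exact HGD formulation, so the following is only a rough sketch of the idea: compute the divergence of the normalized gradient field and histogram it inside a region of interest. The bin count, value range, and normalization are assumptions.

```python
import numpy as np


def gradient_divergence_histogram(image, mask, n_bins=16):
    """Histogram of the divergence of the normalized image gradient field.

    Rough sketch in the spirit of HGD: gradients of round masses tend to
    converge toward (or diverge from) a centre, which the divergence captures.
    Bin edges and normalization are assumptions, not the paper's exact recipe.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy) + 1e-8
    nx, ny = gx / mag, gy / mag                      # unit gradient field
    div = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    values = div[mask > 0]
    hist, _ = np.histogram(values, bins=n_bins, range=(-2.0, 2.0))
    return hist / max(hist.sum(), 1)                 # normalized descriptor


# Example on a synthetic round "mass"
yy, xx = np.mgrid[0:128, 0:128]
blob = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 15.0 ** 2))
roi = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
print(gradient_divergence_histogram(blob, roi))
```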

116 citations


Journal ArticleDOI
01 Jan 2013
TL;DR: Detection performance and speed indicate that the proposed scheme, based on a cylindrical nodule-enhancement filter, may be useful for the fast detection of lung nodules in chest CT images.
Abstract: Purpose Existing computer-aided detection schemes for lung nodule detection require a large number of calculations and tens of minutes per case; there is a large gap between image acquisition time and nodule detection time. In this study, we propose a fast detection scheme of lung nodule in chest CT images using cylindrical nodule-enhancement filter with the aim of improving the workflow for diagnosis in CT examinations.
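
The exact cylindrical nodule-enhancement filter is not specified in the abstract; the sketch below is a stand-in matched filter that convolves the CT volume with a small normalized cylindrical kernel, assuming isotropic voxels and illustrative kernel sizes.

```python
import numpy as np
from scipy.ndimage import convolve


def cylindrical_kernel(radius_vox=3, height_vox=3):
    """Binary cylinder (axis along z), normalized to unit sum."""
    r = radius_vox
    zz, yy, xx = np.mgrid[-height_vox // 2 + 1: height_vox // 2 + 1,
                          -r: r + 1, -r: r + 1]
    cyl = ((xx ** 2 + yy ** 2) <= r ** 2).astype(float)   # replicated along z
    return cyl / cyl.sum()


def enhance_nodules(ct_volume, radius_vox=3, height_vox=3):
    """Matched-filter style enhancement: bright compact blobs score high."""
    kernel = cylindrical_kernel(radius_vox, height_vox)
    return convolve(ct_volume.astype(float), kernel, mode="nearest")


# Usage: enhanced = enhance_nodules(hu_volume); candidates = enhanced > threshold
```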

100 citations


Journal ArticleDOI
23 Mar 2013
TL;DR: The proposed tablet computer-based AR system has proven helpful in assisting percutaneous interventions such as PCNL and shows benefits compared to other state-of-the-art assistance systems.
Abstract: Purpose Percutaneous nephrolithotomy (PCNL) plays an integral role in treatment of renal stones. Creating percutaneous renal access is the most important and challenging step in the procedure. To facilitate this step, we evaluated our novel mobile augmented reality (AR) system for its feasibility of use for PCNL.

86 citations


Journal ArticleDOI
01 Mar 2013
TL;DR: It is feasible to apply the proposed AR-aided design system for noninvasive implant contouring for unilateral fracture reduction and internal fixation surgery and enables a patient-specific surgical planning procedure with potentially improved efficiency.
Abstract: The objective of this work is to develop a preoperative reconstruction plate design system for unilateral pelvic and acetabular fracture reduction and internal fixation surgery, using computer graphics and augmented reality (AR) techniques, in order to respect the patient-specific morphology and to reduce surgical invasiveness, as well as to simplify the surgical procedure. Our AR-aided implant design and contouring system is composed of two subsystems: a semi-automatic 3D virtual fracture reduction system to establish the patient-specific anatomical model and a preoperative templating system to create the virtual and real surgical implants. Preoperative 3D CT data are taken as input. The virtual fracture reduction system exploits the symmetric nature of the skeletal system to build a “repaired” pelvis model, on which reconstruction plates are planned interactively. A lightweight AR environment is set up to allow surgeons to match the actual implants to the digital ones intuitively. The effectiveness of this system is qualitatively demonstrated with 6 clinical cases. Its reliability was assessed based on the inter-observer reproducibility of the resulting virtual implants. The implants designed with the proposed system were successfully applied to all cases through minimally invasive surgeries. After the treatments, no further complications were reported. The inter-observer variability of the virtual implant geometry is 0.63 mm on average with a standard deviation of 0.49 mm. The time required for implant creation with our system is 10 min on average. It is feasible to apply the proposed AR-aided design system for noninvasive implant contouring for unilateral fracture reduction and internal fixation surgery. It also enables a patient-specific surgical planning procedure with potentially improved efficiency.
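
The virtual fracture reduction step exploits pelvic symmetry. A minimal sketch of that idea, assuming corresponding landmark points are already available, is to mirror surface samples of the intact hemipelvis across the sagittal plane and rigidly align them with a Kabsch/SVD fit; the paper's interactive, mesh-based workflow is considerably richer.

```python
import numpy as np


def rigid_fit(source, target):
    """Least-squares rigid transform (Kabsch/SVD) mapping source -> target.

    Assumes source[i] corresponds to target[i]; a real pipeline would use
    ICP or interactive landmark picking to obtain correspondences.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t


# Mirror the intact hemipelvis across the sagittal (x = 0) plane ...
intact = np.random.rand(200, 3)              # surface samples (toy data)
mirrored = intact * np.array([-1.0, 1.0, 1.0])
# ... then align the mirrored template to landmarks on the fractured side.
fractured_landmarks = mirrored[:20] + 0.01 * np.random.randn(20, 3)
R, t = rigid_fit(mirrored[:20], fractured_landmarks)
repaired_template = mirrored @ R.T + t
```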

71 citations


Journal ArticleDOI
01 Jan 2013
TL;DR: Although template-based approaches that warp an atlas onto MR images are the most widely used techniques for segmenting the basal ganglia, the patch-based method provided similar results and was less time-consuming.
Abstract: Purpose Template-based segmentation techniques have been developed to facilitate the accurate targeting of deep brain structures in patients with movement disorders. Three template-based brain MRI segmentation techniques were compared to determine the best strategy for segmenting the deep brain structures of patients with Parkinson’s disease.

69 citations


Journal ArticleDOI
01 Jan 2013
TL;DR: A new method for the automatic detection of low-level surgical tasks, that is, the sequence of activities in a surgical procedure, from microscope video images only, based on the hypothesis that most activities occur in one or two phases only.
Abstract: Surgical process models (SPMs) have recently been created for situation-aware computer-assisted systems in the operating room. One important challenge in this area is the automatic acquisition of SPMs. The purpose of this study is to present a new method for the automatic detection of low-level surgical tasks, that is, the sequence of activities in a surgical procedure, from microscope video images only. The level of granularity addressed in this work is that of activities formalized as triplets. Using the results of our latest work on the recognition of surgical phases in cataract surgeries, and based on the hypothesis that most activities occur in one or two phases only, we created a lightweight ontology, formalized as a hierarchical decomposition into phases and activities. The surgical tools, the areas where the tools are used, and three other visual cues were detected through an image-based approach and combined with information on the current surgical phase within a knowledge-based recognition system. Knowing the surgical phase before recognizing the activity allows the supervised classification to be adapted to the phase. Multiclass Support Vector Machines were chosen as the classification algorithm. Using a dataset of 20 cataract surgeries, and identifying 25 possible pairs of activities, a frame-by-frame recognition rate of 64.5 % was achieved with the proposed system. The addition of human knowledge to traditional bottom-up approaches based on image analysis appears to be promising for low-level task detection. The results of this work could be used for the automatic indexation of post-operative videos.
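
The phase-conditioned classification step can be sketched as one multiclass SVM per surgical phase. The scikit-learn setup and the random frame descriptors below are illustrative assumptions; the paper's actual features come from tool and visual-cue detection.

```python
import numpy as np
from sklearn.svm import SVC


def train_phase_conditioned_svms(features, activities, phases):
    """One multiclass SVM per surgical phase (frame-level features assumed)."""
    models = {}
    for phase in np.unique(phases):
        idx = phases == phase
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        clf.fit(features[idx], activities[idx])
        models[phase] = clf
    return models


def predict_activity(models, phase, feature_vector):
    """Knowing the current phase restricts the candidate activities."""
    return models[phase].predict(feature_vector.reshape(1, -1))[0]


# Toy usage with random frame descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y_activity = rng.integers(0, 5, size=300)
y_phase = rng.integers(0, 3, size=300)
models = train_phase_conditioned_svms(X, y_activity, y_phase)
print(predict_activity(models, 1, X[0]))
```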

64 citations


Journal ArticleDOI
01 Mar 2013
TL;DR: MTVWB and TLGWB as metabolic tumor burden measurements in 18F-FDG-PET/CT are independent prognostic markers and are significantly better than SUVmaxWB and SUVmeanWB at prognostication.
Abstract: Purpose To determine whether whole-body metabolic tumor burden, measured as either metabolic tumor volume (MTVWB) or total lesion glycolysis (TLGWB), using FDG-PET/CT is an independent prognostic marker in non-small cell lung cancer (NSCLC).
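
MTVWB and TLGWB can be computed from an SUV volume and a lesion segmentation as segmented volume and segmented volume times mean SUV, respectively. The fixed-fraction-of-SUVmax segmentation in this sketch is an assumption for illustration, not the paper's delineation method.

```python
import numpy as np


def metabolic_tumor_burden(suv_volume, voxel_volume_ml, threshold_frac=0.5):
    """Whole-body MTV (ml) and TLG (ml * SUVmean) from an SUV image.

    Lesions are segmented here with a simple fixed fraction of SUVmax;
    the threshold choice is an assumption for illustration only.
    """
    mask = suv_volume >= threshold_frac * suv_volume.max()
    mtv_ml = mask.sum() * voxel_volume_ml
    suv_mean = suv_volume[mask].mean() if mask.any() else 0.0
    tlg = mtv_ml * suv_mean
    return mtv_ml, tlg


# Usage: mtv, tlg = metabolic_tumor_burden(suv, voxel_volume_ml=0.064)
```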

58 citations


Journal ArticleDOI
21 Apr 2013
TL;DR: Preliminary testing with one surgeon indicates that the surgery planning system, which combines stereo visualization with sophisticated haptics, has the potential to become a powerful tool for CMF surgery planning.
Abstract: Cranio-maxillofacial (CMF) surgery to restore normal skeletal anatomy in patients with serious trauma to the face can be both complex and time-consuming. But it is generally accepted that careful p ...

53 citations


Journal ArticleDOI
01 Nov 2013
TL;DR: A computer-aided diagnosis scheme that automatically measures MCW was developed to assist dentists in describing a possible osteoporotic risk and suggesting further examinations; this method has the potential to identify asymptomatic osteoporotic patients.
Abstract: Purpose Mandibular cortical width (MCW) measured on dental panoramic radiographs (DPRs) was significantly correlated with bone mineral density. We developed a computer-aided diagnosis scheme that automatically measures MCW to assist dentists in describing a possible osteoporotic risk and suggesting further examinations. Methods In our approach, potential mandible edges are detected by a modified Canny edge detector. On the basis of the edge information, a contour model is selected from the reference data and fitted to the test case using the active contour model. The reference mental foramina positions of the model are employed as the MCW measurement locations. The MCW is measured on the basis of the grayscale profiles obtained along the lines perpendicular to the fitted mandible contour. One hundred DPRs, including 26 DPRs from osteoporotic cases, were used to evaluate our proposed scheme. Results Experimental results showed that the average MCWs for osteoporotic and control cases were 2.2 and 3.9 mm, respectively. When a threshold of 2.7 mm was applied, the sensitivity and specificity for identifying osteoporotic patients were 88.5 and 97.3 %, respectively. Conclusion An automated MCW measurement technique is feasible using DPRs, and this method has the potential to identify asymptomatic osteoporotic patients.
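
A minimal sketch of the width-measurement step, assuming the mandibular contour and its normals have already been fitted: sample a grayscale profile perpendicular to the contour and measure the extent of the bright cortical band. The simple thresholding rule is an assumption, not the paper's exact profile analysis.

```python
import numpy as np
from scipy.ndimage import map_coordinates


def cortical_width_mm(image, point, normal, pixel_mm, length_mm=10.0,
                      threshold=None):
    """Width of the bright cortical band along a profile normal to the contour.

    `point` is (row, col) on the fitted lower mandibular border and `normal`
    a unit vector pointing into the bone; the half-range threshold is an
    assumption for illustration only.
    """
    n_samples = int(length_mm / pixel_mm)
    steps = np.arange(n_samples)                       # in pixels
    rows = point[0] + normal[0] * steps
    cols = point[1] + normal[1] * steps
    profile = map_coordinates(image.astype(float), [rows, cols], order=1)
    if threshold is None:
        threshold = 0.5 * (profile.min() + profile.max())
    above = profile >= threshold
    return above.sum() * pixel_mm                      # cortical segment length


# Usage: width = cortical_width_mm(dpr, (520, 310), (-0.2, -0.98), pixel_mm=0.1)
```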

52 citations


Journal ArticleDOI
30 Apr 2013
TL;DR: Three-dimensional histology will improve the experimental evaluation and determination of intra-cochlear trauma after the insertion of an electrode array of a cochlear implant system and facilitates the creation of detailed and spatially correct 3D anatomical models.
Abstract: This paper presents a highly accurate cross-sectional preparation technique. The research aim was to develop an adequate imaging modality for both soft and bony tissue structures featuring high contrast and high resolution. Therefore, the advancement of an already existing microgrinding procedure was pursued. The central objectives were to preserve spatial relations and to ensure the accurate three-dimensional reconstruction of histological sections. Twelve human temporal bone specimens including middle and inner ear structures were utilized. They were embedded in epoxy resin, then dissected by serial grinding and finally digitalized. The actual abrasion of each grinding slice was measured using a tactile length gauge with an accuracy of one micrometre. The cross-sectional images were aligned with the aid of artificial markers and by applying a feature-based, custom-made auto-registration algorithm. To determine the accuracy of the overall reconstruction procedure, a well-known reference object was used for comparison. To ensure the compatibility of the histological data with conventional clinical image data, the image stacks were finally converted into the DICOM standard. The image fusion of data from temporal bone specimens and from non-destructive flat-panel-based volume computed tomography confirmed the spatial accuracy achieved by the procedure, as did the evaluation using the reference object. This systematic and easy-to-follow preparation technique enables the three-dimensional (3D) histological reconstruction of complex soft and bony tissue structures. It facilitates the creation of detailed and spatially correct 3D anatomical models. Such models are of great benefit for image-based segmentation and planning in the field of computer-assisted surgery as well as in finite element analysis. In the context of human inner ear surgery, three-dimensional histology will improve the experimental evaluation and determination of intra-cochlear trauma after the insertion of an electrode array of a cochlear implant system.

Journal ArticleDOI
01 Mar 2013
TL;DR: Analytical models of fluoroscopic noise to express the variance of noise as a function of gray level, a practical method to estimate the parameters of the models and a possible application to improve the performance of noise filtering are presented.
Abstract: Purpose Fluoroscopy is an invaluable tool in various medical practices such as catheterization or image-guided surgery. Screening patients for prolonged periods requires a substantial reduction in X-ray exposure: the limited number of photons generates significant quantum noise. Denoising is essential to enhance fluoroscopic image quality and can be considerably improved by taking the peculiar noise characteristics into account. This study presents analytical models of fluoroscopic noise that express the variance of the noise as a function of gray level, a practical method to estimate the parameters of the models, and a possible application to improve the performance of noise filtering.
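
A commonly assumed signal-dependent (Poisson-Gaussian-like) model expresses the noise variance as an affine function of gray level, var ≈ a·I + b; the sketch below fits (a, b) from local mean/variance statistics over a static frame sequence. The paper's analytical models may differ in form.

```python
import numpy as np


def estimate_noise_model(frames, patch=8):
    """Fit var = a * mean + b from local statistics of repeated static frames.

    Assumes an affine (Poisson-Gaussian-like) variance model; the actual
    analytical models of the paper may have a different form.
    """
    stack = np.asarray(frames, dtype=float)        # shape (T, H, W)
    mean_img = stack.mean(axis=0)
    var_img = stack.var(axis=0, ddof=1)
    h, w = mean_img.shape
    means, variances = [], []
    for r in range(0, h - patch, patch):
        for c in range(0, w - patch, patch):
            means.append(mean_img[r:r + patch, c:c + patch].mean())
            variances.append(var_img[r:r + patch, c:c + patch].mean())
    a, b = np.polyfit(means, variances, deg=1)
    return a, b


# Usage: a, b = estimate_noise_model(static_sequence); sigma2 = a * gray + b
```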

Journal ArticleDOI
01 May 2013
TL;DR: A surgical navigation system that warns the surgeon with auditory and visual feedback to protect the facial nerve with real-time monitoring of the safe region during drilling with the feasibility of the system in comparison with conventional facial nerve monitoring is developed.
Abstract: Purpose We developed a surgical navigation system that warns the surgeon with auditory and visual feedback to protect the facial nerve with real-time monitoring of the safe region during drilling. Methods Warning navigation modules were developed and integrated into a free open source software platform. To obtain high registration accuracy, we used a high-precision laser-sintered template of the patient’s bone surface to register the computed tomography (CT) images. We calculated the closest distance between the drill tip and the surface of the facial nerve during drilling. When the drill tip entered the safe regions, the navigation system provided an auditory and visual signal which differed in each safe region. To evaluate the effectiveness of the system, we performed phantom experiments for maintaining a given safe margin from the facial nerve when drilling bone models, with and without the navigation system. The error of the safe margin was measured on postoperative CT images. In real surgery, we evaluated the feasibility of the system in comparison with conventional facial nerve monitoring. Results The navigation accuracy was submillimeter for the target registration error. In the phantom study, the task with navigation ( $$0.7 \pm 0.25$$ mm) was more successful with smaller error, than the task without navigation ( $$1.37 \pm 0.39$$ mm, $$P<0.05$$ ). The clinical feasibility of the system was confirmed in three real surgeries. Conclusions This system could assist surgeons in preserving the facial nerve and potentially contribute to enhanced patient safety in the surgery.
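
The distance-monitoring step can be sketched with a KD-tree query from the drill tip to facial-nerve surface points, mapped to graded warning zones. The zone radii used here are example values, not the safe margins of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree


class NerveProximityMonitor:
    """Warns when the drill tip enters graded safe regions around the nerve."""

    def __init__(self, nerve_surface_points, zones_mm=(2.0, 1.0, 0.5)):
        self.tree = cKDTree(nerve_surface_points)
        self.zones_mm = sorted(zones_mm, reverse=True)   # outer to inner

    def check(self, drill_tip_mm):
        dist, _ = self.tree.query(drill_tip_mm)
        level = sum(dist <= z for z in self.zones_mm)    # 0 = safe, 3 = closest
        return dist, level


# Usage
nerve = np.random.rand(5000, 3) * 30.0                   # surface samples (toy data)
monitor = NerveProximityMonitor(nerve)
distance, warning_level = monitor.check(np.array([15.0, 15.0, 15.0]))
```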

Journal ArticleDOI
Eric M. Moult, Tamas Ungi, Mattea Welch, J. Lu, Robert McGraw, Gabor Fichtinger
18 Jan 2013
TL;DR: In a pilot study of 26 pre-medical undergraduate students, Perk Tutor, an augmented reality training system for US-guided needle insertions, provided an improved training environment for US-guided facet joint injections of the lumbar spine on a synthetic model compared with traditional training.
Abstract: Facet syndrome is a condition that may cause 15–45 % of chronic lower back pain. It is commonly diagnosed and treated using facet joint injections. This needle technique demands high accuracy, and ultrasound (US) is a potentially useful modality to guide the needle. US-guided injections, however, require physicians to interpret 2-D sonographic images while simultaneously manipulating an US probe and needle. Therefore, US-guidance for facet joint injections needs advanced training methodologies that will equip physicians with the requisite skills. We used Perk Tutor—an augmented reality training system for US-guided needle insertions—in a configuration for percutaneous procedures of the lumbar spine. In a pilot study of 26 pre-medical undergraduate students, we evaluated the efficacy of Perk Tutor training compared to traditional training. The Perk Tutor Trained group, which had access to Perk Tutor during training, had a mean success rate of 61.5 %, while the Control group, which received traditional training, had a mean success rate of 38.5 % ( $$p = 0.031$$ ). No significant differences in procedure times or needle path lengths were observed between the two groups. The results of this pilot study suggest that Perk Tutor provides an improved training environment for US-guided facet joint injections on a synthetic model.

Journal ArticleDOI
03 Feb 2013
TL;DR: SIRIO proved to be a reliable and effective tool for performing CT-guided PLBs; it was more accurate than standard CT guidance for small-sized lesions (<20 mm) and especially useful for sampling them.
Abstract: Percutaneous lung biopsies (PLBs) performed for the evaluation of pulmonary masses require image guidance to avoid critical structures. A new CT navigation system (SIRIO, “Sistema robotizzato assistito per il puntamento intraoperatorio”) for PLBs was validated. The local Institutional Review Board approved this retrospective study. Image-guided PLBs in 197 patients were performed with a CT navigation system (SIRIO). The procedures were reviewed based on the number of CT scans, patients’ radiation exposure and procedural time recorded. Comparison was performed with a group of 72 patients undergoing standard CT-guided PLBs. Sensitivity, specificity and overall diagnostic accuracy were assessed in both groups. SIRIO-guided PLBs showed a significant reduction in procedure time, number of required CT scans and the radiation dose administered to patients ( $$p<0.001$$ ). In terms of diagnostic accuracy, SIRIO proved to be more accurate for small-sized lesions ( $$<$$ 20 mm) than standard CT-guidance. SIRIO proved to be a reliable and effective tool when performing CT-guided PLBs and was especially useful for sampling small ( $$<$$ 20 mm) lesions.

Journal ArticleDOI
28 Jul 2013
TL;DR: Precision and accuracy achieved with the N-wire phantom and a shallow probe are at least comparable to those obtained with other methods traditionally considered more precise.
Abstract: Freehand tracked ultrasound imaging is an inexpensive non-invasive technique used in several guided interventions. This technique requires spatial calibration between the tracker and the ultrasound image plane. Several calibration devices (a.k.a. phantoms) use N-wires that are convenient for automatic procedures since the segmentation of fiducials in the images and the localization of the middle wires in space are straightforward and can be performed in real time. The procedures reported in the literature consider only the spatial position of the middle wire. We investigate whether better results can be achieved when the information from all the wires is taken into account equally. We also evaluated the precision and accuracy of the implemented methods to allow comparison with other methods. We consider a cost function based on the in-plane errors between the intersection of all the wires with the image plane and their respective segmented points in the image. This cost function is minimized iteratively starting from a seed computed with a closed-form solution based on the middle wires. Mean calibration precision achieved with the N-wire phantom was about 0.5 mm using a shallow probe, and mean accuracy was around 1.4 mm with all implemented methods. Precision was about 2.0 mm using a deep probe. Precision and accuracy achieved with the N-wire phantom and a shallow probe are at least comparable to those obtained with other methods traditionally considered more precise. Calibration using N-wires can be done more consistently if the parameters are optimized with the proposed cost function.
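
A minimal sketch of such an iterative refinement, with the in-plane error simplified to a 3D point-to-line distance: parameterize the image-to-probe transform, map each segmented fiducial into phantom space using the tracked probe pose, and minimize its distance to the corresponding wire with scipy's least_squares, seeded by the closed-form middle-wire solution. All variable names and the simplified residual are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def point_to_line_distance(p, line_point, line_dir):
    """Distance from point p to the infinite wire line (unit direction)."""
    v = p - line_point
    return np.linalg.norm(v - np.dot(v, line_dir) * line_dir)


def residuals(params, image_pts_mm, probe_to_phantom, wires):
    """params = 3 rotation-vector + 3 translation components of image->probe."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for (u, v), T_pp, (line_point, line_dir) in zip(
            image_pts_mm, probe_to_phantom, wires):
        p_probe = R @ np.array([u, v, 0.0]) + t            # image plane at z = 0
        p_phantom = T_pp[:3, :3] @ p_probe + T_pp[:3, 3]
        res.append(point_to_line_distance(p_phantom, line_point, line_dir))
    return res


# seed would come from the closed-form middle-wire solution described above:
# result = least_squares(residuals, x0=seed,
#                        args=(image_pts_mm, probe_to_phantom, wires))
```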

Journal ArticleDOI
01 May 2013
TL;DR: A fully automated Computer-Aided Diagnosis System (CAD) for the diagnosis of vertebra wedge compression fracture from CT images that integrates within the clinical routine is presented.
Abstract: Purpose Lower back pain affects 80–90 % of all people at some point during their lifetime, and it is considered the second most common neurological ailment after headache. It is caused by defects in the discs, vertebrae, or the soft tissues. Radiologists perform diagnosis mainly from X-ray radiographs, MRI, or CT depending on the target organ. Vertebra fracture is usually diagnosed from X-ray radiographs or CT depending on the available technology. In this paper, we propose a fully automated Computer-Aided Diagnosis System (CAD) for the diagnosis of vertebra wedge compression fracture from CT images that integrates into the clinical routine. Methods We perform vertebrae localization and labeling, segment the vertebrae, and then diagnose each vertebra. We perform labeling and segmentation via a coordinated system that consists of an Active Shape Model and Gradient Vector Flow Active Contours (GVF-Snake). We propose a set of clinically motivated features that distinguish the fractured vertebra. We provide two machine learning solutions that utilize our features: a supervised learner (Neural Networks, NN) and an unsupervised learner (K-Means). Results We validate our method on a set of fifty (thirty abnormal) Computed Tomography (CT) cases obtained from our collaborating radiology center. Our diagnosis detection accuracy using NN is 93.2 % on average, while we obtained 98 % diagnosis accuracy using K-Means. Our K-Means resulted in a specificity of 87.5 % and sensitivity over 99 %. Conclusions We presented a fully automated CAD system that seamlessly integrates into the clinical workflow of the radiologist. Our clinically motivated features resulted in strong performance for both the supervised and unsupervised learners used to validate our CAD system. Our CAD system results are promising to serve in clinical applications after extensive validation.
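
The unsupervised branch can be sketched as two-cluster K-Means over the clinically motivated feature vectors, with the fracture cluster identified afterwards from a few labeled examples; the feature values themselves are placeholders here.

```python
import numpy as np
from sklearn.cluster import KMeans


def kmeans_fracture_grouping(features, known_labels=None):
    """Group vertebrae into two clusters; map clusters to diagnoses afterwards.

    `features` holds one row of shape descriptors per segmented vertebra
    (placeholder values in practice here). If a few labeled cases are given,
    they are used only to decide which cluster corresponds to 'fractured'.
    """
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    assignments = km.labels_
    if known_labels is not None:
        # the cluster holding most cases labeled 'fractured' (1) is the fracture cluster
        frac_cluster = int(np.mean(assignments[known_labels == 1]) >= 0.5)
        return (assignments == frac_cluster).astype(int)
    return assignments


# Usage: diagnoses = kmeans_fracture_grouping(vertebra_features, labels)
```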

Journal ArticleDOI
01 May 2013
TL;DR: Cloud computing was introduced to augment enterprise PACS by providing standard medical imaging services across different institutions, offering communication privacy and enabling creation of wider PACS scenarios with suitable technical solutions.
Abstract: Purpose Healthcare institutions worldwide have adopted picture archiving and communication system (PACS) for enterprise access to images, relying on Digital Imaging Communication in Medicine (DICOM) standards for data exchange. However, communication over a wider domain of independent medical institutions is not well standardized. A DICOM-compliant bridge was developed for extending and sharing DICOM services across healthcare institutions without requiring complex network setups or dedicated communication channels.

Journal ArticleDOI
19 Jan 2013
TL;DR: The application of pattern recognition techniques using 3T MR-based perfusion and metabolic features may provide incremental diagnostic value in the differentiation of common intraaxial brain tumors, such as glioblastoma versus metastasis.
Abstract: Purpose Differentiation of glioblastomas from metastases is clinically important, but may be difficult even for expert observers. To investigate the contribution of machine learning algorithms in the differentiation of glioblastomas multiforme (GB) from metastases, we developed and tested a pattern recognition system based on 3T magnetic resonance (MR) data. Materials and Methods Single and multi-voxel proton magnetic resonance spectroscopy (1H-MRS) and dynamic susceptibility contrast (DSC) MRI scans were performed on 49 patients with solitary brain tumors (35 glioblastoma multiforme and 14 metastases). Metabolic (NAA/Cr, Cho/Cr, (Lip $$+$$ Lac)/Cr) and perfusion (rCBV) parameters were measured in both intratumoral and peritumoral regions. The statistical significance of these parameters was evaluated. For the classification procedure, three datasets were created to find the optimum combination of parameters that provides maximum differentiation. Three machine learning methods were utilized: Naive-Bayes, Support Vector Machine (SVM) and $$k$$ -nearest neighbor (KNN). The discrimination ability of each classifier was evaluated with quantitative performance metrics. Results Glioblastoma and metastases were differentiable only in the peritumoral region of these lesions ( $$p<0.05$$ ). SVM achieved the highest overall performance (accuracy 98 %) for both the intratumoral and peritumoral areas. Naive-Bayes and KNN presented greater variations in performance. The proper selection of datasets plays a very significant role as they are closely correlated to the underlying pathophysiology. Conclusion The application of pattern recognition techniques using 3T MR-based perfusion and metabolic features may provide incremental diagnostic value in the differentiation of common intraaxial brain tumors, such as glioblastoma versus metastasis.
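
The classifier comparison can be sketched with scikit-learn cross-validation over SVM, Naive-Bayes, and KNN. The feature matrix below is a random placeholder standing in for the peritumoral metabolic and perfusion measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Rows: patients; columns stand in for peritumoral NAA/Cr, Cho/Cr, (Lip+Lac)/Cr, rCBV.
rng = np.random.default_rng(1)
X = rng.normal(size=(49, 4))                 # placeholder feature matrix
y = np.r_[np.zeros(35), np.ones(14)]         # 0 = glioblastoma, 1 = metastasis

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale")),
    "Naive-Bayes": GaussianNB(),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```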

Journal ArticleDOI
17 Feb 2013
TL;DR: The virtual anatomic atlas can improve the preprocessing of skull CT scans for computer-assisted craniomaxillofacial surgery planning, with automatically segmented objects reaching modal scores of “good” to “moderate” in most areas.
Abstract: Purpose Manual segmentation of CT datasets for preoperative planning and intraoperative navigation is a time-consuming procedure. The purpose of this study was to develop an automated segmentation procedure for the facial skeleton based on a virtual anatomic atlas of the skull, to test its practicability, and to evaluate the accuracy of the segmented objects. Materials and methods The atlas skull was created by manually segmenting an unaffected skull CT dataset. For automated segmentation of cases via IPlan cranial (BrainLAB, Germany), the atlas skull underwent projection, controlled deformation, and a facultative threshold segmentation within the individual datasets; 16 routine CT datasets (13 with pathologies, 3 without) were processed. The variations of the no-threshold versus threshold segmentation results compared to the original were determined. The clinical usability of the results was assessed in a multicentre evaluation. Results Compared to the original dataset, the mean accuracy was $$\le 0.6$$ mm for the threshold segmentation and 0.6–1.4 mm for the no-threshold segmentation. Comparing both methods together, the deviation was $$\le 0.2$$ mm. An isolated no-threshold segmentation of the orbital cavity alone resulted in a mean accuracy of $$\le 0.6$$ mm. With regard to clinical usability, the no-threshold method was clearly preferred, reaching modal scores of “good” to “moderate” in most areas. Limitations were seen in segmenting the TMJ, mandibular fractures, and thin bone in general. Conclusion The feasibility of automated skull segmentation was demonstrated. The virtual anatomic atlas can improve the preprocessing of skull CT scans for computer-assisted craniomaxillofacial surgery planning.

Journal ArticleDOI
01 Jan 2013
TL;DR: The proposed methods can detect architectural distortion in prior mammograms taken 15 months (on the average) before clinical diagnosis of breast cancer, with a high sensitivity and a moderate number of FPs per patient.
Abstract: Purpose Architectural distortion is an important sign of early breast cancer. We present methods for computer-aided detection of architectural distortion in mammograms acquired prior to the diagnosis of breast cancer in the interval between scheduled screening sessions.

Journal ArticleDOI
01 May 2013
TL;DR: A 4D statistical model of the left ventricle is constructed from human cardiac short-axis MR images; it achieves substantially better specificity than PCA- and ICA-based models.
Abstract: Purpose Statistical shape models have shown improved reliability and consistency in cardiac image segmentation. They incorporate a sufficient amount of a priori knowledge from the training datasets and solve some major problems such as noise and image artifacts or partial volume effect. In this paper, we construct a 4D statistical model of the left ventricle using human cardiac short-axis MR images.
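
For context, a baseline point-distribution (PCA) shape model of the kind the paper compares against can be built as follows; the proposed 4D model and the ICA variants are not reproduced here, and the aligned landmark input is assumed.

```python
import numpy as np


def build_pca_shape_model(shapes, variance_kept=0.95):
    """shapes: (n_samples, n_points * 3) aligned landmark coordinates."""
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept)) + 1
    return mean_shape, Vt[:n_modes], var[:n_modes]


def synthesize(mean_shape, modes, variances, b):
    """New shape = mean + sum_i b_i * sqrt(var_i) * mode_i."""
    return mean_shape + (b * np.sqrt(variances)) @ modes


# Usage: mean, modes, var = build_pca_shape_model(aligned_shapes)
#        shape = synthesize(mean, modes, var, b=np.zeros(len(var)))
```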

Journal ArticleDOI
21 Mar 2013
TL;DR: A novel approach for the registration of pre-operative magnetic resonance images to intra-operative ultrasound images for the context of image-guided neurosurgery by relying on the maximization of gradient orientation alignment in a reduced set of high confidence locations of interest and allowing for fast, accurate, and robust registration.
Abstract: We present a novel approach for the registration of pre-operative magnetic resonance images to intra-operative ultrasound images for the context of image-guided neurosurgery. Our technique relies on the maximization of gradient orientation alignment in a reduced set of high confidence locations of interest and allows for fast, accurate, and robust registration. Performance is compared with multiple state-of-the-art techniques including conventional intensity-based multi-modal registration strategies, as well as other context-specific approaches. All methods were evaluated on fourteen clinical neurosurgical cases with brain tumors, including low-grade and high-grade gliomas, from the publicly available MNI BITE dataset. Registration accuracy of each method is evaluated as the mean distance between homologous landmarks identified by two or three experts. We provide an analysis of the landmarks used and expose some of the limitations in validation brought forward by expert disagreement and uncertainty in identifying corresponding points. The proposed approach yields a mean error of 2.57 mm across all cases (the smallest among all evaluated techniques). Additionally, it is the only evaluated technique that resolves all cases with a mean distance of less than 1 mm larger than the theoretical minimal mean distance when using a rigid transformation. Finally, our proposed method provides reduced processing times with an average registration time of 0.76 s in a GPU-based implementation, thereby facilitating its integration into the operating room.
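
The similarity measure can be sketched as the mean squared cosine between gradient orientations at a sparse set of high-confidence MR locations mapped into the ultrasound volume by a candidate rigid transform. Interpolation details are simplified and, for brevity, gradient vectors are not reoriented by the rotation; isotropic voxels are assumed.

```python
import numpy as np
from scipy.ndimage import map_coordinates


def orientation_alignment(mr_grad, us_grad, points, R, t):
    """Mean squared cosine between MR and US gradient orientations.

    mr_grad / us_grad: the 3 gradient-component volumes from np.gradient;
    points: (N, 3) voxel coordinates of high-confidence MR locations;
    (R, t): candidate rigid transform from MR to US voxel space.
    """
    g_mr = np.stack([g[tuple(points.T.astype(int))] for g in mr_grad], axis=1)
    warped = points @ R.T + t
    g_us = np.stack([map_coordinates(g, warped.T, order=1) for g in us_grad],
                    axis=1)
    g_mr /= np.linalg.norm(g_mr, axis=1, keepdims=True) + 1e-8
    g_us /= np.linalg.norm(g_us, axis=1, keepdims=True) + 1e-8
    cos = np.sum(g_mr * g_us, axis=1)
    return np.mean(cos ** 2)        # squared cosine ignores contrast inversion


# Usage: score = orientation_alignment(np.gradient(mr), np.gradient(us),
#                                      pts, R_candidate, t_candidate)
```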

Journal ArticleDOI
13 Jan 2013
TL;DR: A prototype of the force feedback in the microgripping manipulator system has been developed and will be useful for removing deep-seated brain tumors in future master–slave-type robotic neurosurgery.
Abstract: Purpose For the application of less invasive robotic neurosurgery to the resection of deep-seated tumors, a prototype system of a force-detecting gripper with a flexible micromanipulator and force feedback to the operating unit will be developed. Methods Gripping force applied on the gripper is detected by strain gauges attached to the gripper clip. The signal is transmitted to the amplifier by wires running through the inner tube of the manipulator. Proportional force is applied on the finger lever of the operating unit by the surgeon using a bilateral control program. A pulling force experienced by the gripper is also detected at the gripper clip. The signal for the pulling force is transmitted in a manner identical to that mentioned previously, and the proportional torque is applied on the touching roller of the finger lever of the operating unit. The surgeon can feel the gripping force as the resistance of the operating force of the finger and can feel the pulling force as the friction at the finger surface. Results A basic operation test showed that both the gripping force and pulling force were clearly detected in the gripping of soft material and that the operator could feel the gripping force and pulling force at the finger lever of the operating unit. Conclusions A prototype of the force feedback in the microgripping manipulator system has been developed. The system will be useful for removing deep-seated brain tumors in future master–slave-type robotic neurosurgery.

Journal ArticleDOI
28 Mar 2013
TL;DR: Identification of structures and navigation of the arthroscope were ranked as highly important skills for trainee surgeons to possess before performing in the operating room, and the survey identified the components of an optimal simulator.
Abstract: Our purpose was to identify what surgical skills trainees consider important to possess before performing in the operating room and the components of an optimal simulator. An online survey composed of 35 questions was completed by 67 orthopedic residents from across Canada. The questions examined the opinions of residents for their perspective on what constitutes an optimal design of an arthroscopic simulator. The average year of residency of the respondents was 3.2, and the average number of arthroscopies assisted on was 66.1 with a range of 0–300. Identification of structures and navigation of the arthroscope were ranked highly in terms of importance for trainee surgeons to possess before performing in the operating room. Higher fidelity simulation models such as cadaveric specimens or the use of synthetic knees were preferred over lower fidelity simulation models such as virtual reality simulators or bench top models. The information from trainees can be used in the development of a simulator for medical education as well as program and curriculum design. The report also highlights the importance of the pre-RCT phases leading to the development of the most effective simulation programs.

Journal ArticleDOI
09 Apr 2013
TL;DR: It is shown that the entrance point of the endoscope into the nostril cannot be considered a fixed point but rather a fixed region whose location and dimensions depend on the targeted sinus, and that the best solution would be a co-manipulated standard 6-degree-of-freedom robot to which a gimbal-like passive remote manipulator holding the endoscope is attached.
Abstract: Design a compact, ergonomic, and safe endoscope positioner dedicated to the sino-nasal tract, and the anterior and middle-stage skull base. A motion and force analysis of the surgeon’s movement was performed on cadaver heads to gather objective data for specification purposes. An experimental comparative study was then performed with three different kinematics, again on cadaver heads, in order to define the best architecture satisfying the motion and force requirements. We quantified the maximal forces applied on the endoscope when traversing the sino-nasal tract in order to evaluate the forces that the robot should be able to overcome. We also quantified the minimal forces that should not be exceeded in order to avoid damaging vital structures. We showed that the entrance point of the endoscope into the nostril could not be considered, as in laparoscopic surgery, as a fixed point but rather as a fixed region whose location and dimensions depend on the targeted sinus. From the safety and ergonomic points of view, the best solution would be a co-manipulated standard 6-degree of freedom robot to which is attached a gimbal-like passive remote manipulator holding the endoscope.

Journal ArticleDOI
01 Jan 2013
TL;DR: The augmented reality fluoroscope achieves an accurate video and X-ray overlay when applying the optimal homography calculated from distortion correction using X-ray calibration together with the VDP.
Abstract: Purpose The camera-augmented mobile C-arm (CamC) augments any mobile C-arm by a video camera and mirror construction and provides a co-registration of X-ray with video images. The accurate overlay between these images is crucial to high-quality surgical outcomes. In this work, we propose a practical solution that improves the overlay accuracy for any C-arm orientation by: (i) improving the existing CamC calibration, (ii) removing distortion effects, and (iii) accounting for the mechanical sagging of the C-arm gantry due to gravity.
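
The core video/X-ray overlay can be sketched with OpenCV: estimate a homography from matched fiducials and blend the warped X-ray into the video frame. Distortion correction and the orientation-dependent (sagging) correction described in the paper are omitted, and both images are assumed to have the same number of channels.

```python
import cv2
import numpy as np


def overlay_xray_on_video(video_img, xray_img, video_pts, xray_pts, alpha=0.5):
    """Warp the X-ray image into the video frame using matched marker points.

    video_pts / xray_pts: (N, 2) pixel coordinates of corresponding fiducials
    (N >= 4). The distortion and sagging corrections of the paper are omitted.
    """
    H, _ = cv2.findHomography(np.float32(xray_pts), np.float32(video_pts),
                              cv2.RANSAC)
    h, w = video_img.shape[:2]
    warped = cv2.warpPerspective(xray_img, H, (w, h))
    return cv2.addWeighted(video_img, 1.0 - alpha, warped, alpha, 0)


# Usage: fused = overlay_xray_on_video(cam_frame, xray_frame, pts_video, pts_xray)
```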

Journal ArticleDOI
09 Jan 2013
TL;DR: The experimental results demonstrated that this quantification tool could reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients.
Abstract: Purpose Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semiautomated system to quantify MRI biomarkers of GRMD. Methods Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using a semiautomated full muscle segmentation method. We then performed preprocessing, including intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted and T2-weighted fat-suppressed images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume, intensity statistics over MRI biomarker maps, and statistical image texture features. Results The muscle volume and the mean intensities in T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation showed significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. Conclusion The experimental results demonstrated that this quantification tool could reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients.
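
The biomarker-extraction step can be sketched as per-muscle volume and intensity statistics computed from a label map and an MRI biomarker map; the run-length and histogram texture features of the paper are omitted, and the label dictionary is a placeholder.

```python
import numpy as np


def muscle_biomarkers(label_map, t2_map, voxel_volume_ml, labels):
    """Volume and T2 intensity statistics for each segmented muscle label.

    `labels` maps muscle names to integer labels (placeholder values); the
    texture features described in the paper are not included.
    """
    stats = {}
    for name, lab in labels.items():
        mask = label_map == lab
        values = t2_map[mask]
        stats[name] = {
            "volume_ml": mask.sum() * voxel_volume_ml,
            "t2_mean": float(values.mean()) if values.size else np.nan,
            "t2_std": float(values.std()) if values.size else np.nan,
        }
    return stats


# Usage: biomarkers = muscle_biomarkers(seg, t2, 0.008,
#                                       {"biceps_femoris": 1, "gracilis": 2})
```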

Journal ArticleDOI
07 May 2013
TL;DR: An AR guidance mechanism with a projector-camera system to provide the surgeon with direct visual feedback for supervision of robotic needle insertion in radiofrequency (RF) ablation treatment and the feasibility of augmented interaction with a surgical robot using the proposed open AR interface with active visual feedback was demonstrated.
Abstract: The use of projector-based augmented reality (AR) in surgery may enable surgeons to directly view anatomical models and surgical data from the patient’s surface (skin). It has the advantages of a consistent viewing focus on the patient, an extended field of view and augmented interaction. This paper presents an AR guidance mechanism with a projector-camera system to provide the surgeon with direct visual feedback for supervision of robotic needle insertion in radiofrequency (RF) ablation treatment. The registration of target organ models to specific positions on the patient body is performed using a surface-matching algorithm and point-based registration. An algorithm based on the extended Kalman filter and spatial transformation is used to intraoperatively compute the virtual needle’s depth in the patient’s body for AR display. Experiments of this AR system on a mannequin were conducted to evaluate AR visualization and accuracy of virtual RF needle insertion. The average accuracy of 1.86 mm for virtual needle insertion met the clinical requirement of 2 mm or better. The feasibility of augmented interaction with a surgical robot using the proposed open AR interface with active visual feedback was demonstrated. The experimental results demonstrate that this guidance system is effective in assisting a surgeon to perform a robot-assisted radiofrequency ablation procedure. The novelty of the work lies in establishing a navigational procedure for percutaneous surgical augmented intervention integrating a projection-based AR guidance and robotic implementation for surgical needle insertion.

Journal ArticleDOI
01 Mar 2013
TL;DR: Implementation of an automated operating room light and touch-less control using an RGBD camera for gesture tracking is feasible, the remaining tracking error does not affect smooth control, and the use of the system is intuitive even for inexperienced users.
Abstract: Purpose Today’s highly technical operating rooms lead to fairly complex surgical workflows where the surgeon has to interact with a number of devices, including the operating room light. Hence, ideally, the surgeon could direct the light without major disruption of his work. We studied whether a gesture tracking–based control of an automated operating room light is feasible.