
Showing papers presented at "Computer Assisted Radiology and Surgery in 2016"


Journal ArticleDOI
11 Jan 2016
TL;DR: The current role of machine learning techniques in the context of surgery with a focus on surgical robotics (SR) is reviewed and a perspective on the future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room is provided.
Abstract: Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on the future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. The review is focused on ML techniques directly applied to surgery, surgical robotics, and surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive. Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is growing interest in using ML to develop tools for understanding and modeling surgical skill and competence, or for extracting surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. ML is an expanding field. It is popular because it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, ML is believed likely to play an important role in surgery and interventional treatments as well. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would also assist the surgical team on a cognitive level, for example by lowering the team's mental load. For instance, ML could help extract surgical skill, learned through demonstration by human experts, and transfer it to robotic skills.
Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics. Current devices possess no intelligence whatsoever and are merely advanced and expensive instruments.

194 citations


Journal ArticleDOI
01 Jan 2016
TL;DR: The proposed algorithm was developed and tested by combining shape and texture features to classify CXRs into two categories: TB and non-TB cases, and was able to increase the overall performance by 2.4 % over the previous work.
Abstract: Purpose To improve detection of pulmonary and pleural abnormalities caused by pneumonia or tuberculosis (TB) in digital chest X-rays (CXRs). Methods A method was developed and tested by combining shape and texture features to classify CXRs into two categories: TB and non-TB cases. Based on the observation that radiologist interpretation is typically comparative between the left and right lung fields, the algorithm uses shape features to describe the overall geometrical characteristics of the lung fields and texture features to represent image characteristics inside them. Results Our algorithm was evaluated on two different datasets containing tuberculosis and pneumonia cases. Conclusions Using our proposed algorithm, we were able to increase the overall performance, measured as area under the ROC curve (AUC), by 2.4 % over our previous work.

108 citations


Journal ArticleDOI
19 Mar 2016
TL;DR: A fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge is proposed.
Abstract: With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.
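The third stage of the pipeline, one binary classifier per surgical phase whose responses feed the temporal model, can be sketched as below. This is an illustrative stand-in, not the authors' code: the features, phase count, and data are synthetic replacements for the paper's visual cues and instrument signals.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-in for the visual-cue + instrument-signal descriptors:
# 3 "phases", 100 samples each, separated by their feature means.
rng = np.random.default_rng(0)
n_phases = 3
X = rng.normal(0.0, 0.3, (300, 5)) + np.repeat(np.arange(n_phases), 100)[:, None]
y = np.repeat(np.arange(n_phases), 100)

# One one-vs-rest AdaBoost classifier per phase (stage three of the pipeline).
classifiers = [
    AdaBoostClassifier(n_estimators=25, random_state=0).fit(X, (y == p).astype(int))
    for p in range(n_phases)
]

def phase_responses(sample):
    """Per-phase classifier responses for one data sample; in the paper
    these responses are the input to the Hidden semi-Markov Model."""
    return np.array([c.predict_proba(sample[None, :])[0, 1] for c in classifiers])
```

In the paper the responses are then decoded by a Hidden semi-Markov Model to enforce a plausible phase order; here they are just raw per-phase scores.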

106 citations


Journal ArticleDOI
16 Jun 2016
TL;DR: The results indicated that k-means has the potential to classify the BCW dataset and provided an extensive understanding of the computational parameters that can be used with k-means.
Abstract: Breast cancer is one of the most common cancers worldwide and the one most frequently found in women. Early detection of breast cancer provides the possibility of cure; therefore, a large number of studies are under way to identify methods that can detect breast cancer in its early stages. This study aimed to determine the effects of the k-means clustering algorithm under different computational settings, such as centroid initialization, distance measure, split method, epoch, attribute count, and iteration count, and to identify the combination of settings with the potential for the highest clustering accuracy. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for centroid initialization: the foggy centroid was calculated from random values, while the random centroid was initialized as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2–9), and iteration (4–10). Approximately 92 % average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using the Euclidean and Manhattan distances were better than those with the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means. The results indicated that k-means has the potential to classify the BCW dataset.
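The clustering core of such an experiment can be sketched as follows. This is a minimal illustrative implementation, not the study's code: it supports the Euclidean and Manhattan distances the study compares, but replaces the foggy/random centroid schemes with a simple random pick from the data.

```python
import numpy as np

def kmeans(X, k, distance="euclidean", iters=10, seed=0):
    """Minimal k-means supporting Euclidean or Manhattan distance.

    Centroids are initialized from random data points -- a simplified
    stand-in for the paper's foggy/random centroid schemes.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        if distance == "euclidean":
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        else:  # "manhattan"
            d = np.abs(X[:, None, :] - centroids[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)          # assign each point to nearest centroid
        for j in range(k):
            if np.any(labels == j):        # move centroid to mean of its cluster
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

On the actual BCW dataset one would then compare the cluster assignments against the benign/malignant labels to obtain the prediction accuracy the study reports.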

85 citations


Journal ArticleDOI
27 Aug 2016
TL;DR: An automated system for surgical skills assessment that analyzes video data of surgical activities is presented; results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective at capturing the skill-relevant information in surgical videos.
Abstract: Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still very time-consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment that analyzes video data of surgical activities. We compare different techniques for video-based surgical skill evaluation: techniques that capture motion information at a coarser granularity using symbols or words, extract motion dynamics using textural patterns in a frame kernel matrix, and analyze fine-grained motion information using frequency analysis. We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective at capturing the skill-relevant information in surgical videos. Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol-/word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity, as demonstrated by our results on two challenging video datasets.
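The fine-grained frequency analysis can be illustrated with a toy 1-D motion signal. The feature definitions below are hypothetical simplifications, not the paper's descriptors; the underlying intuition is that fine hand tremor shows up as spectral energy at higher frequencies.

```python
import numpy as np

def frequency_features(trajectory, n_coeffs=8):
    """Magnitudes of the lowest FFT coefficients of a 1-D motion signal
    (mean removed) -- a simplified frequency-domain skill descriptor."""
    spectrum = np.abs(np.fft.rfft(trajectory - trajectory.mean()))
    return spectrum[:n_coeffs]

def high_freq_ratio(trajectory):
    """Fraction of spectral energy in the upper half of the spectrum;
    a shaky (novice-like) motion yields a larger value than a smooth one."""
    spectrum = np.abs(np.fft.rfft(trajectory - trajectory.mean()))
    total = spectrum.sum()
    return spectrum[len(spectrum) // 2:].sum() / total if total else 0.0
```

In a full system, such per-trajectory features would feed a standard classifier to separate skill levels.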

81 citations


Journal ArticleDOI
01 Apr 2016
TL;DR: CustusX is presented as a robust, accurate, and extensible platform with full access to data and algorithms; it is now a mature research platform for intraoperative navigation and ultrasound imaging, ready for use by the IGT research community.
Abstract: Purpose CustusX is an image-guided therapy (IGT) research platform dedicated to intraoperative navigation and ultrasound imaging. In this paper, we present CustusX as a robust, accurate, and extensible platform with full access to data and algorithms and show examples of application in technological and clinical IGT research.

79 citations


Journal ArticleDOI
19 Mar 2016
TL;DR: An automatic method for screening pulmonary abnormalities using thoracic edge map in CXR images that outperforms previously reported state-of-the-art results is presented.
Abstract: Our particular motivator is the need for screening HIV+ populations in resource-constrained regions for evidence of tuberculosis, using posteroanterior chest radiographs (CXRs). The proposed method is motivated by the observation that abnormal CXRs tend to exhibit corrupted and/or deformed thoracic edge maps. We study histograms of thoracic edges for all possible orientations of gradients in the range $$[0, 2\pi )$$ at different numbers of bins and different pyramid levels, using five different region-of-interest selections. We have used two CXR benchmark collections made available by the U.S. National Library of Medicine and have achieved a maximum abnormality detection accuracy (ACC) of 86.36 % and area under the ROC curve (AUC) of 0.93 at 1 s per image, on average. We have presented an automatic method for screening pulmonary abnormalities using the thoracic edge map in CXR images. The proposed method outperforms previously reported state-of-the-art results.
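The core descriptor, a histogram of gradient orientations over [0, 2π), can be sketched as follows. This is a simplified illustration of the general technique, not the paper's exact pipeline (which adds pyramid levels and region-of-interest selection):

```python
import numpy as np

def edge_orientation_histogram(image, n_bins=16):
    """Magnitude-weighted histogram of gradient orientations over
    [0, 2*pi) -- a simplified sketch of the thoracic-edge descriptors
    used for abnormality screening."""
    gy, gx = np.gradient(image.astype(float))     # image gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.arctan2(gy, gx) % (2 * np.pi)        # orientation in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 2 * np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist    # normalize to unit mass
```

A corrupted or deformed edge map changes the shape of this histogram, which is what allows a classifier to separate normal from abnormal CXRs.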

76 citations


Journal ArticleDOI
01 Feb 2016
TL;DR: In simulated periacetabular pelvic tumor resections, the PSI technique enabled surgeons to reproduce the virtual surgical plan with similar accuracy but less bone resection time when compared with navigation assistance.
Abstract: Purpose Inaccurate resection in pelvic tumors can result in compromised margins with increased local recurrence. Navigation-assisted and patient-specific instrument (PSI) techniques have recently been reported to assist in pelvic tumor surgery, with a tendency toward improved surgical accuracy. We examined and compared the accuracy of transferring a virtual pelvic resection plan to actual surgery using the navigation-assisted or PSI technique in a cadaver study.

72 citations


Journal ArticleDOI
01 Jan 2016
TL;DR: All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming.
Abstract: Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated ( $$n = 60$$ eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based $$(26.63 \pm 3.15\,\hbox {cm}^{3})$$ nor model-based $$(26.87 \pm 2.99\,\hbox {cm}^{3})$$ measurements were significantly different from manual volume measurements $$(26.65 \pm 4.0\,\hbox {cm}^{3})$$ . However, the time required to determine orbital volume was significantly longer for manual measurements ( $$10.24 \pm 1.21$$ min) than for atlas-based ( $$6.96 \pm 2.62$$ min, $$p < 0.001$$ ) or model-based ( $$5.73 \pm 1.12$$ min, $$p < 0.001$$ ) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although the atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations required at least manual adjustment of the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

70 citations


Journal ArticleDOI
01 Apr 2016
TL;DR: A new tendon-driven continuum robot, designed to fit existing neuroendoscopes, is presented with kinematic mapping for hysteresis operation; the extended FKM, including friction in the tendons, can improve prediction accuracy of the postures in the hysteresis operation.
Abstract: Purpose The hysteresis operation is an outstanding issue in tendon-driven actuation—which is used in robot-assisted surgery—as it is incompatible with kinematic mapping for control and trajectory planning. Here, a new tendon-driven continuum robot, designed to fit existing neuroendoscopes, is presented with kinematic mapping for hysteresis operation.

63 citations


Journal ArticleDOI
Hao Liu1, Weikai Chen1, Zongyi Wang1, Jun Lin1, Bin Meng1, Huilin Yang1 
22 Jun 2016
TL;DR: There was no difference in accuracy between robot-assisted and conventional freehand pedicle screw placement at the 0 mm grading criteria, and there was also no significant difference at the 0 mm grading criteria among the percutaneous robot-assisted, open robot-assisted, and conventional freehand techniques.
Abstract: To perform a systematic review and meta-analysis investigating the difference in accuracy between robot-assisted and conventional freehand pedicle screw placement. The electronic databases PubMed, Ovid MEDLINE, EMBASE, and Web of Science were searched for literature published up to January 2016. Statistical analysis was performed using Review Manager 5.3. The dichotomous data for the pedicle violation rate were summarized using relative risk (RR) and 95 % confidence intervals (CIs). The level of significance was set at $$P<0.05$$ . A total of 257 patients and 1105 screws from five studies were included in this meta-analysis. The results revealed no difference in accuracy between robot-assisted and conventional freehand pedicle screw placement at the 0 mm grading criteria (RR 1.08, 95 % CI 0.86, 1.35, $$I^{2}=28\,\%$$ , $$P=0.52$$ ) or at the 2 mm grading criteria (RR 1.02, 95 % CI 0.68, 1.51, $$I^{2}=28\,\%$$ , $$P=0.93$$ ). Among the percutaneous robot-assisted (RO(P)), open robot-assisted (RO(O)), and conventional freehand (FH) techniques, there was also no significant difference at the 0 mm grading criteria (RO(P) vs. FH: RR 1.10, 95 % CI 0.87, 1.40, $$I^{2}=34\,\%$$ , $$P=0.43$$ ; RO(O) vs. FH: RR 0.87, 95 % CI 0.55, 1.38, $$I^{2}=9\,\%$$ , $$P=0.55$$ ; RO(P) vs. RO(O): RR 1.20, 95 % CI 0.65, 2.24, $$P=0.56$$ ) or at the 2 mm grading criteria (RO(P) vs. FH: RR 1.07, 95 % CI 0.43, 2.67, $$I^{2}=55\,\%$$ , $$P=0.88$$ ; RO(O) vs. FH: RR 0.71, 95 % CI 0.36, 1.39, $$I^{2}=0\,\%$$ , $$P=0.32$$ ; RO(P) vs. RO(O): RR 0.84, 95 % CI 0.36, 1.94, $$P=0.68$$ ). Further high-quality studies are required to unequivocally recommend one surgical technique over the other. With wider application of robot-assisted navigation systems, the accuracy and clinical benefit of the technique are expected to gradually improve.
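The pooled relative risk computation behind such a meta-analysis can be sketched as a fixed-effect (inverse-variance) model. This is a generic illustration of the statistic, not a reproduction of the Review Manager analysis, which may use a different weighting model:

```python
import numpy as np

def pooled_relative_risk(studies):
    """Fixed-effect (inverse-variance) pooled relative risk with 95 % CI.

    `studies` is a list of (a, n1, c, n2) tuples: events/total in the
    treatment and control arms of each study.
    """
    log_rr, weights = [], []
    for a, n1, c, n2 in studies:
        rr = (a / n1) / (c / n2)
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of log(RR)
        log_rr.append(np.log(rr))
        weights.append(1.0 / var)
    log_rr, weights = np.array(log_rr), np.array(weights)
    pooled = (weights * log_rr).sum() / weights.sum()
    se = 1.0 / np.sqrt(weights.sum())
    ci = (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
    return np.exp(pooled), ci
```

A CI that straddles 1.0, as in the results above, corresponds to no significant difference between the two arms.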

Journal ArticleDOI
01 Jul 2016
TL;DR: Individual treatment of small and large tumors and boundary correction using the proposed sigmoid edge model can be used to develop a robust tumor segmentation algorithm that handles a wide range of tumor types.
Abstract: The intensity profile of an image in the vicinity of a tissue's boundary is modeled by a step/ramp function. However, this assumption does not hold for low-contrast images, heterogeneous tissue textures, and regions where the partial volume effect exists. We propose a hybrid algorithm for segmentation of CT/MR tumors in low-contrast, noisy images with heterogeneous/homogeneous or hyper-/hypo-intense abnormalities. We also model a smoothed noisy intensity profile by a sigmoid function and employ it to find the true location of the boundary more accurately. A novel combination of the SVM, watershed, and scattered data approximation algorithms is employed to initially segment a tumor. Small and large abnormalities are treated distinctly. Next, the proposed sigmoid edge model is fitted to the normal profile of the border. The estimated parameters of the model are then utilized to find the true boundary of a tissue. We extensively evaluated our method using synthetic images (contaminated with varying levels of noise) and clinical CT/MR data. Clinical images included 57 CT/MR volumes consisting of small/large tumors, very low-/high-contrast images, liver/brain tumors, and hyper-/hypo-intense abnormalities. We achieved a Dice measure of $$0.83\,(\pm 0.07)$$ and an average symmetric surface distance of $$2.56\,(\pm 6.31)$$ mm. On the IBSR dataset, we achieved a Jaccard index of $$0.85\,(\pm 0.07)$$ . The average run-time of our code was $$154\,(\pm 71)$$ s. Individual treatment of small and large tumors and boundary correction using the proposed sigmoid edge model can be used to develop a robust tumor segmentation algorithm that handles a wide range of tumor types.
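The boundary-correction idea, fitting a sigmoid to the intensity profile across a border and reading off its center, can be sketched as follows. The parameterization below is an assumption for illustration; the paper's exact model may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, lo, hi, center, width):
    """Sigmoid model of the intensity profile across a tissue boundary:
    intensity rises from `lo` to `hi` around `center` over scale `width`."""
    return lo + (hi - lo) / (1.0 + np.exp(-(x - center) / width))

def locate_boundary(profile):
    """Fit the sigmoid model to a 1-D intensity profile and return the
    estimated boundary position (the sigmoid's center parameter)."""
    x = np.arange(len(profile), dtype=float)
    p0 = (profile.min(), profile.max(), len(profile) / 2.0, 1.0)  # initial guess
    params, _ = curve_fit(sigmoid, x, profile, p0=p0, maxfev=5000)
    return params[2]
```

Unlike a step/ramp model, the fitted center gives a sub-pixel boundary estimate even when the transition is smeared by noise or partial volume effects.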

Journal ArticleDOI
02 Apr 2016
TL;DR: A 2D tracker based on a Generalized Hough Transform using SIFT features, which can both handle complex environmental changes and recover from tracking failure, is presented; it yields an improvement over 3D tracking alone, suggesting that combining 2D and 3D tracking is a promising solution to challenges in surgical instrument tracking.
Abstract: Purpose Computer-assisted interventions for enhanced minimally invasive surgery (MIS) require tracking of the surgical instruments. Instrument tracking is a challenging problem in both conventional and robotic-assisted MIS, but vision-based approaches are a promising solution with minimal hardware integration requirements. However, vision-based methods suffer from drift, and in the case of occlusions, shadows and fast motion, they can be subject to complete tracking failure.

Journal ArticleDOI
01 May 2016
TL;DR: This study suggests that the proposed framework, which segments the liver in challenging abdominal CT cases with low contrast to adjacent organs and the presence of pathologies, can be good enough to replace the time-consuming and tedious manual segmentation approach.
Abstract: We propose a fully automatic 3D segmentation framework to segment the liver in challenging cases with low contrast to adjacent organs and the presence of pathologies in abdominal CT images. First, all atlases in the selected training datasets are weighted by calculating the similarities between the atlases and the test image to dynamically generate a subject-specific probabilistic atlas for the test image. The most likely liver region of the test image is further determined based on the generated atlas. A rough segmentation is obtained by a maximum a posteriori classification of the probability map, and the final liver segmentation is produced by a shape–intensity prior level set in the most likely liver region. Our method is evaluated and demonstrated on 25 test CT datasets from our partner site, and its results are compared with two state-of-the-art liver segmentation methods. Moreover, our results on 10 MICCAI test datasets were submitted to the organizers for comparison with the other automatic algorithms. Using the 25 test CT datasets, the average symmetric surface distance is $$1.09 \pm 0.34$$ mm (range 0.62–2.12 mm), the root mean square symmetric surface distance error is $$1.72 \pm 0.46$$ mm (range 0.97–3.01 mm), and the maximum symmetric surface distance error is $$18.04 \pm 3.51$$ mm (range 12.73–26.67 mm). On the 10 MICCAI test datasets, our method ranks 10th among all 47 automatic algorithms on the site as of July 2015. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our method is a promising tool. The applicability of the proposed method to some challenging clinical problems and to segmentation of the liver is demonstrated with good results in both quantitative and qualitative experiments. 
This study suggests that the proposed framework can be good enough to replace the time-consuming and tedious slice-by-slice manual segmentation approach.
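The average symmetric surface distance (ASSD) reported above can be computed from the two segmentation surfaces represented as point sets. This brute-force sketch illustrates the metric's definition; production code would use a KD-tree or distance transform for speed:

```python
import numpy as np

def assd(points_a, points_b):
    """Average symmetric surface distance between two surfaces given as
    (n, 3) arrays of boundary points: the mean of nearest-neighbour
    distances taken in both directions."""
    # Pairwise distances between every point of A and every point of B.
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Taking the mean in both directions makes the measure symmetric; replacing the means with maxima gives the maximum symmetric surface distance also reported above.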

Journal ArticleDOI
01 Jan 2016
TL;DR: The built-in automatic method A is quick, but suboptimal for clinical use and the newly developed method SA appears to be accurate, reproducible, quick and easy to use.
Abstract: Purpose The purpose of this study was to validate a quick, accurate and reproducible (semi-) automatic software segmentation method to measure orbital volume in the unaffected bony orbit. Precise volume measurement of the orbital cavity is a useful addition to pre-operative planning and intraoperative navigation in orbital reconstruction.

Journal ArticleDOI
03 May 2016
TL;DR: This work develops a method to estimate physiological parameters in an accurate and rapid manner suited for modern high-resolution laparoscopic images by training random forest regressors using reflectance spectra generated with Monte Carlo simulations.
Abstract: Purpose Multispectral imaging can provide reflectance measurements at multiple spectral bands for each image pixel. These measurements can be used for estimation of important physiological parameters, such as oxygenation, which can provide indicators for the success of surgical treatment or the presence of abnormal tissue. The goal of this work was to develop a method to estimate physiological parameters in an accurate and rapid manner suited for modern high-resolution laparoscopic images.
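The regression setup, learning a mapping from simulated spectra to a physiological parameter, can be sketched as below. The forward model, band count, and "oxygenation" target here are invented stand-ins; the paper trains on Monte Carlo-simulated reflectance spectra.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the Monte Carlo simulation: a made-up "oxygenation"
# parameter modulates synthetic 8-band reflectance spectra.
rng = np.random.default_rng(0)
oxygenation = rng.uniform(0.0, 1.0, 500)          # target parameter
bands = np.linspace(0.0, 1.0, 8)                  # 8 spectral bands
spectra = np.exp(-np.outer(oxygenation, bands)) \
          + rng.normal(0.0, 0.01, (500, 8))       # toy forward model + noise

# Train a random forest regressor from spectra to the parameter; at test
# time it is applied per pixel of the multispectral laparoscopic image.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(spectra, oxygenation)
pred = model.predict(np.exp(-np.outer([0.5], bands)))
```

Because evaluating a trained forest is cheap, this inversion can run fast enough for high-resolution intraoperative images, which is the point of the approach.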

Journal ArticleDOI
01 May 2016
TL;DR: A semi-automatic method that segments a brain tumor by training and generalizing within that brain only, based on some minimum user interaction is proposed, which is the second most accurate compared to published methods, while using significantly less memory and processing power than most state-of-the-art methods.
Abstract: In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem of interactive brain tumor segmentation as a machine learning problem. This method has an advantage over typical machine learning methods for this task where generalization is made across brains. The problem with these methods is that they need to deal with intensity bias correction and other MRI-specific noise. In this paper, we avoid these issues by approaching the problem as one of within brain generalization. Specifically, we propose a semi-automatic method that segments a brain tumor by training and generalizing within that brain only, based on some minimum user interaction. We investigate how adding spatial feature coordinates (i.e., i, j, k) to the intensity features can significantly improve the performance of different classification methods such as SVM, kNN and random forests. This would only be possible within an interactive framework. We also investigate the use of a more appropriate kernel and the adaptation of hyper-parameters specifically for each brain. As a result of these experiments, we obtain an interactive method whose results reported on the MICCAI-BRATS 2013 dataset are the second most accurate compared to published methods, while using significantly less memory and processing power than most state-of-the-art methods.
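The feature augmentation at the heart of the method, appending spatial (i, j, k) coordinates to each voxel's intensity, can be sketched as below. The 1-nearest-neighbour classifier is a hypothetical stand-in for the SVM/kNN/random-forest classifiers the paper compares:

```python
import numpy as np

def voxel_features(volume):
    """Per-voxel feature vectors: intensity stacked with the voxel's
    spatial (i, j, k) coordinates -- the augmentation investigated
    in the interactive within-brain segmentation framework."""
    coords = np.indices(volume.shape).reshape(3, -1).T.astype(float)
    intensity = volume.reshape(-1, 1).astype(float)
    return np.hstack([intensity, coords])

def one_nn(train_X, train_y, test_X):
    """Classify each test vector by its nearest labelled seed vector."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[d.argmin(axis=1)]
```

With uniform intensities, only the coordinates discriminate between user-labelled seed regions, which illustrates why spatial features help when training and testing within a single brain.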

Journal ArticleDOI
19 Mar 2016
TL;DR: The 3D visualization of patient, tool, and DRR shows clear advantages over the conventional X-ray imaging and provides intuitive feedback to place the medical tools correctly and efficiently.
Abstract: In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm, the medical instruments, and the patient, which dramatically increases in complexity during pelvic surgeries. Current solutions include the continuous acquisition of many intra-operative X-ray images from various views, which will result in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon’s view to assist accurate placement of tools. We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placements. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization. The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and the surgical task load, which are observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that the mixed reality visualization leads to a significantly improved efficiency. The 3D visualization of patient, tool, and DRR shows clear advantages over the conventional X-ray imaging and provides intuitive feedback to place the medical tools correctly and efficiently.

Journal ArticleDOI
01 Aug 2016
TL;DR: To the authors' knowledge, this study represents the most comprehensive clinical evaluation of a deformation correction pipeline for image-guided neurosurgery.
Abstract: Purpose Brain shift during neurosurgical procedures must be corrected for in order to reestablish accurate alignment for successful image-guided tumor resection. Sparse-data-driven biomechanical models that predict physiological brain shift by accounting for typical deformation-inducing events such as cerebrospinal fluid drainage, hyperosmotic drugs, swelling, retraction, resection, and tumor cavity collapse are an inexpensive solution. This study evaluated the robustness and accuracy of a biomechanical model-based brain shift correction system to assist with tumor resection surgery in 16 clinical cases.

Journal ArticleDOI
01 Jul 2016
TL;DR: Cephalometric measurements computed from automatic detection of landmarks on 3D CBCT image were as accurate as those computed from manual identification.
Abstract: To evaluate the accuracy of three-dimensional cephalometric measurements obtained through an automatic landmark detection algorithm compared to those obtained through manual identification. The study demonstrates a comparison of 51 cephalometric measurements (28 linear, 16 angles and 7 ratios) on 30 CBCT (cone beam computed tomography) images. The analysis was performed to compare measurements based on 21 cephalometric landmarks detected automatically and those identified manually by three observers. Inter-observer ICC for each landmark was found to be excellent ( $${>}0.9$$ ) among three observers. The unpaired t-test revealed that there was no statistically significant difference in the measurements based on automatically detected and manually identified landmarks. The difference between the manual and automatic observation for each measurement was reported as an error. The highest mean error in the linear and angular measurements was found to be 2.63 mm ( $$\hbox {Or}_{\mathrm{L}}\hbox {-Or}_{\mathrm{R}}$$ distance) and $$2.12^{\circ }$$ ( $$\hbox {Co}_{\mathrm{L}}\hbox {-Go}_{\mathrm{L}}$$ -Me angle), respectively. The highest mean error in the group of distance ratios was 0.03 (for N-Me/N-ANS and $$\hbox {Go}_{\mathrm{R}}\hbox {-Gn/S-N}$$ ). Cephalometric measurements computed from automatic detection of landmarks on 3D CBCT image were as accurate as those computed from manual identification.

Journal ArticleDOI
19 Mar 2016
TL;DR: Initial experience has shown that the method provides visual feedback and satisfactory accuracy and can be performed during surgery; it is also shown that an EM sensor placed near the camera would significantly improve image overlay accuracy.
Abstract: Purpose Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate “hand–eye” calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers.

Journal ArticleDOI
25 Feb 2016
TL;DR: The stronger small-world property in the tumor patients supports the existence of a compensatory mechanism, and the changes in the regional properties, especially betweenness centrality and vulnerability, aid in understanding brain structural plasticity.
Abstract: Brain tumor patients are usually accompanied by impairments in cognitive functions, and these dysfunctions arise from the altered diffusion tensor of water molecules and disrupted neuronal conduction in white matter. Diffusion tensor imaging (DTI) is a powerful noninvasive imaging technique that can reflect diffusion anisotropy of water and brain white matter neural connectivity in vivo. This study was aimed to analyze the topological properties and connection densities of the brain anatomical networks in brain tumor patients based on DTI and provide new insights into the investigation of the structural plasticity and compensatory mechanism of tumor patient’s brain. In this study, the brain anatomical networks of tumor patients and healthy controls were constructed using the tracking of white matter fiber bundles based on DTI and the topological properties of these networks were described quantitatively. The statistical comparisons were performed between two groups with six DTI parameters: degree, regional efficiency, local efficiency, clustering coefficient, vulnerability, and betweenness centrality. In order to localize changes in structural connectivity to specific brain regions, a network-based statistic approach was utilized. By comparing the edge connection density of brain network between two groups, the edges with greater difference in connection density were associated with three functional systems. Compared with controls, tumor patients show a significant increase in small-world feature of cerebral structural network. Two-sample two-tailed t test indicates that the regional properties are altered in 17 regions ( $$p<0.05$$ ). Study reveals that the positive and negative changes in vulnerability take place in the 14 brain areas. In addition, tumor patients lose 3 hub regions and add 2 new hubs when compared to normal controls. Eleven edges show much significantly greater connection density in the patients than in the controls. 
Most of the edges with greater connection density link regions located in the limbic/subcortical and other systems. Moreover, most of these edges connect the two hemispheres. The stronger small-world property in the tumor patients proves the existence of a compensatory mechanism. The changes in the regional properties, especially betweenness centrality and vulnerability, aid in understanding brain structural plasticity. The increased connection density in the tumor group suggests that tumors may induce reorganization in the structural network.
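The regional metrics compared in the study (degree, clustering coefficient, and so on) are standard graph measures computed on the binary anatomical network. As a minimal illustration, not the authors' pipeline, two of them can be computed directly from an adjacency matrix; the 4-node toy graph below is purely hypothetical:

```python
import numpy as np

def degree(adj):
    """Node degree of a binary, undirected adjacency matrix."""
    return adj.sum(axis=1)

def clustering_coefficient(adj):
    """Fraction of each node's neighbour pairs that are themselves connected."""
    k = adj.sum(axis=1)
    # closed triangles through each node: diag(A^3) counts each one twice
    triangles = np.diag(adj @ adj @ adj) / 2.0
    possible = k * (k - 1) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        c = np.where(possible > 0, triangles / possible, 0.0)
    return c

# Toy network: a triangle (nodes 0, 1, 2) plus a pendant node 3 attached to 0.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]], dtype=float)
print(degree(adj))                  # [3. 2. 2. 1.]
print(clustering_coefficient(adj))  # nodes 1 and 2 are fully clustered
```

Small-worldness is then assessed by comparing the mean clustering coefficient and characteristic path length against matched random networks.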

Journal ArticleDOI
20 Jun 2016
TL;DR: Overall, the current study finds that volBrain is superior to FSL, FreeSurfer and SPM in thalamus and hippocampus segmentation, and the choice of segmentation technique and training library affects quantitative results from diffusivity measures in thalamus and hippocampus.
Abstract: In both structural and functional MRI, there is a need for accurate and reliable automatic segmentation of brain regions. Inconsistent segmentation reduces sensitivity and may bias results in clinical studies. The current study compares the performance of publicly available segmentation tools and their impact on diffusion quantification, emphasizing the importance of using recently developed segmentation algorithms and imaging techniques. Four publicly available, automatic segmentation methods (volBrain, FSL, FreeSurfer and SPM) are compared to manual segmentation of the thalamus and hippocampus imaged with a recently proposed T1-weighted MRI sequence (MP2RAGE). We evaluate morphometric accuracy on 22 healthy subjects and impact on diffusivity measurements obtained from aligned diffusion-weighted images on a subset of 10 subjects. Compared to manual segmentation, the highest Dice similarity index of the thalamus is obtained with volBrain using a local library ( $$M=0.913$$ , $$\hbox {SD}=0.014$$ ) followed by volBrain using an external library ( $$M=0.868$$ , $$\hbox {SD}=0.024$$ ), FSL ( $${M}=0.806$$ , $$\mathrm{SD}=0.034$$ ), FreeSurfer ( $${M}=0.798$$ , $$\mathrm{SD}=0.049$$ ) and SPM ( $${M}=0.787$$ , $$\mathrm{SD}=0.031$$ ). The same order is found for hippocampus with volBrain local ( $${M}=0.892$$ , $$\mathrm{SD}=0.016$$ ), volBrain external ( $${M}=0.859$$ , $$\mathrm{SD}=0.014$$ ), FSL ( $${M}=0.808$$ , $$\mathrm{SD}=0.017$$ ), FreeSurfer ( $${M}=0.771$$ , $$\mathrm{SD}=0.023$$ ) and SPM ( $${M}=0.735$$ , $$\mathrm{SD}=0.038$$ ). For diffusivity measurements, volBrain provides values closest to those obtained from manual segmentations. volBrain is the only method where FA values do not differ significantly from manual segmentation of the thalamus. Overall we find that volBrain is superior in thalamus and hippocampus segmentation compared to FSL, FreeSurfer and SPM. 
Furthermore, the choice of segmentation technique and training library affects quantitative results from diffusivity measures in thalamus and hippocampus.
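The Dice similarity index used to score each method against manual segmentation is simple to compute from binary masks; a minimal sketch (the 3×3 masks below are hypothetical stand-ins for thalamus/hippocampus labels, not study data):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity index: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

auto   = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])  # automatic label
manual = np.array([[0, 1, 1], [0, 1, 0], [0, 1, 0]])  # manual reference
print(dice(auto, manual))  # 2*3 / (4+4) = 0.75
```

A value of 1.0 indicates perfect overlap; the reported means (e.g. 0.913 for volBrain-local on thalamus) are averages of this index across subjects.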

Journal ArticleDOI
01 Apr 2016
TL;DR: Gradually increasing latency has a growing impact on performance; surgeons showed the potential to adapt to delay and may be trained to improve their telesurgical performance at lower latency levels.
Abstract: To determine the impact of communication latency on telesurgical performance using the robotic simulator dV-Trainer $$^{\textregistered }$$ , surgeons were enrolled during three robotic congresses. They were randomly assigned to a delay group (ranging from 100 to 1000 ms). Each group performed a set of four exercises on the simulator three times: the first attempt without delay (Base) and the last two attempts with delay (Warm-up and Test). The impact of different levels of latency was evaluated. Thirty-seven surgeons were involved. The different latency groups achieved similar baseline performance with a mean task completion time of 207.2 s ( $$p>0.05$$ ). In the Test stage, the task duration increased gradually from 156.4 to 310.7 s as latency increased from 100 to 500 ms. In separate groups, the task duration deteriorated from Base at delays $$\ge $$ 300 ms, and errors increased at 500 ms and above ( $$p\,<$$ 0.05). The subjects’ performance tended to improve from the Warm-up to the Test period. Few subjects completed the tasks with a delay higher than 700 ms. Gradually increasing latency has a growing impact on performance. Measurable deterioration of performance begins at 300 ms. Delays higher than 700 ms are difficult to manage, especially in more complex tasks. Surgeons showed the potential to adapt to delay and may be trained to improve their telesurgical performance at lower latency levels.
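Group comparisons of task duration of the kind described above are typically made with a two-sample t test. A minimal sketch of the Welch t statistic (the completion times below are invented for illustration, not the study's data):

```python
import numpy as np

def two_sample_t(a, b):
    """Welch two-sample t statistic (unequal variances) for mean comparison."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    var_a, var_b = a.var(ddof=1), b.var(ddof=1)  # sample variances
    return (a.mean() - b.mean()) / np.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical task-completion times (s): baseline vs. a high-latency stage.
base = [150, 160, 155, 165, 158]
delayed = [300, 320, 310, 305, 315]
t = two_sample_t(base, delayed)
print(t)  # large negative t: the delayed group is much slower
```

In practice one would obtain the p-value from the t distribution (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`) rather than thresholding the statistic directly.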

Journal ArticleDOI
20 Apr 2016
TL;DR: A novel dual-robot framework for robotic needle insertions under robotic ultrasound guidance that allows robotic needle insertion to target a preoperatively defined region of interest while enabling real-time visualization and adaptive trajectory planning to provide safe and quick interactions.
Abstract: Precise needle placement is an important task during several medical procedures. Ultrasound imaging is often used to guide the needle toward the target region in soft tissue. This task remains challenging due to the user’s dependence on image quality, limited field of view, moving target, and moving needle. In this paper, we present a novel dual-robot framework for robotic needle insertions under robotic ultrasound guidance. We integrated force-controlled ultrasound image acquisition, registration of preoperative and intraoperative images, vision-based robot control, and target localization, in combination with a novel needle tracking algorithm. The framework allows robotic needle insertion to target a preoperatively defined region of interest while enabling real-time visualization and adaptive trajectory planning to provide safe and quick interactions. We assessed the framework by considering both static and moving targets embedded in water and tissue-mimicking gelatin. The presented dual-robot tracking algorithms allow for accurate needle placement, namely to target the region of interest with an error around 1 mm. To the best of our knowledge, we show the first use of two independent robots, one for imaging, the other for needle insertion, that are simultaneously controlled using image processing algorithms. Experimental results show the feasibility and demonstrate the accuracy and robustness of the process.

Journal ArticleDOI
08 Apr 2016
TL;DR: Using temporal ultrasound data in a fusion prostate biopsy study, a high classification accuracy was achieved specifically for moderately scored mp-MRI targets, which are clinically common and contribute to the high false-positive rates associated with mp-MRI for prostate cancer detection.
Abstract: This paper presents the results of a large study involving fusion prostate biopsies to demonstrate that temporal ultrasound can be used to accurately classify tissue labels identified in multi-parametric magnetic resonance imaging (mp-MRI) as suspicious for cancer. We use deep learning to analyze temporal ultrasound data obtained from 255 cancer foci identified in mp-MRI. Each target is sampled in axial and sagittal planes. A deep belief network is trained to automatically learn the high-level latent features of temporal ultrasound data. A support vector machine classifier is then applied to differentiate cancerous versus benign tissue, verified by histopathology. Data from 32 targets are used for the training, while the remaining 223 targets are used for testing. Our results indicate that the distance between the biopsy target and the prostate boundary, and the agreement between axial and sagittal histopathology of each target impact the classification accuracy. In 84 test cores that are 5 mm or farther to the prostate boundary, and have consistent pathology outcomes in axial and sagittal biopsy planes, we achieve an area under the curve of 0.80. In contrast, all of these targets were labeled as moderately suspicious in mp-MRI. Using temporal ultrasound data in a fusion prostate biopsy study, we achieved a high classification accuracy specifically for moderately scored mp-MRI targets. These targets are clinically common and contribute to the high false-positive rates associated with mp-MRI for prostate cancer detection. Temporal ultrasound data combined with mp-MRI have the potential to reduce the number of unnecessary biopsies in fusion biopsy settings.
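The area under the ROC curve reported above (0.80) equals the probability that a randomly chosen cancerous core receives a higher classifier score than a randomly chosen benign one. A minimal numpy sketch of that rank-statistic formulation, with hypothetical scores rather than the study's outputs:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos = scores[labels]    # scores of cancerous cores
    neg = scores[~labels]   # scores of benign cores
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # hypothetical classifier outputs
labels = [1, 1, 0, 1, 0, 0]              # 1 = cancer on histopathology
print(auc(scores, labels))  # 8 of 9 pairs ranked correctly, about 0.889
```

Libraries such as scikit-learn (`roc_auc_score`) compute the same quantity; the pairwise form above makes the probabilistic interpretation explicit.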

Journal ArticleDOI
19 Mar 2016
TL;DR: A novel position-based dynamics implementation of soft tissue deformation has been shown to facilitate several desirable simulation characteristics: real-time performance, unconditional stability, rapid model construction enabling patient-specific behaviour and accuracy with respect to reference CT images.
Abstract: Purpose: To assist the rehearsal and planning of robot-assisted partial nephrectomy, a real-time simulation platform is presented that allows surgeons to visualise and interact with rapidly constructed patient-specific biomechanical models of the anatomical regions of interest. Coupled to a framework for volumetric deformation, the platform furthermore simulates intracorporeal 2D ultrasound image acquisition, using preoperative imaging as the data source. This not only facilitates the planning of optimal transducer trajectories and viewpoints, but can also act as a validation context for manually operated freehand 3D acquisitions and reconstructions.
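Position-based dynamics achieves its unconditional stability by projecting particle positions directly onto constraints instead of integrating forces. A minimal sketch of the core distance-constraint projection (an illustration of the general PBD technique, not the platform's implementation):

```python
import numpy as np

def project_distance(p1, p2, w1, w2, rest_len, stiffness=1.0):
    """PBD projection of one distance constraint between two particles.

    Moves the particles along their connecting axis toward rest_len,
    weighted by inverse masses w1 and w2 (w = 0 pins a particle)."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist == 0.0 or (w1 + w2) == 0.0:
        return p1, p2
    n = d / dist  # unit axis from p1 to p2
    correction = stiffness * (dist - rest_len) / (w1 + w2)
    return p1 + w1 * correction * n, p2 - w2 * correction * n

# Two equal-mass particles stretched to distance 2 with rest length 1.
a, b = project_distance(np.zeros(3), np.array([2.0, 0.0, 0.0]), 1.0, 1.0, 1.0)
print(a, b)  # pulled together symmetrically to distance 1
```

A full solver iterates such projections over all constraints (distance, volume, etc.) each time step, which is what makes interactive rates and patient-specific tuning practical.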

Journal ArticleDOI
13 Jan 2016
TL;DR: A tractography algorithm using a two-tensor unscented Kalman filter (UKF) to improve the modeling of the corticospinal tract (CST) by tracking through regions of peritumoral edema and crossing fibers in brain tumor patients is presented.
Abstract: The aim of this study was to present a tractography algorithm using a two-tensor unscented Kalman filter (UKF) to improve the modeling of the corticospinal tract (CST) by tracking through regions of peritumoral edema and crossing fibers. Ten patients with brain tumors in the vicinity of motor cortex and evidence of significant peritumoral edema were retrospectively selected for the study. All patients underwent 3-T magnetic resonance imaging (MRI) including functional MRI (fMRI) and a diffusion-weighted data set with 31 directions. Fiber tracking was performed using both single-tensor streamline and two-tensor UKF tractography methods. A two-region-of-interest approach was used to delineate the CST. Results from the two tractography methods were compared visually and quantitatively. fMRI was applied to identify the functional fiber tracts. Single-tensor streamline tractography underestimated the extent of tracts running through the edematous areas and could only track the medial projections of the CST. In contrast, two-tensor UKF tractography tracked fanning projections of the CST despite peritumoral edema and crossing fibers. Based on visual inspection, the two-tensor UKF tractography delineated tracts that were closer to motor fMRI activations, and it was apparently more sensitive than single-tensor streamline tractography to define the tracts directed to the motor sites. The volume of the CST was significantly larger on two-tensor UKF than on single-tensor streamline tractography ( $$p < 0.001$$ ). Two-tensor UKF tractography tracks a larger volume CST than single-tensor streamline tractography in the setting of peritumoral edema and crossing fibers in brain tumor patients.

Journal ArticleDOI
01 May 2016
TL;DR: The proposed surgical navigation system can provide CT-derived patient anatomy aligned to the laparoscopic view in real time during surgery and enables accurate identification of vascular anatomy as a guide to vessel clamping prior to total or partial gastrectomy.
Abstract: Knowledge of the specific anatomical information of a patient is important when planning and undertaking laparoscopic surgery due to the restricted field of view and lack of tactile feedback compared to open surgery. To assist this type of surgery, we have developed a surgical navigation system that presents the patient’s anatomical information synchronized with the laparoscope position. This paper presents the surgical navigation system and its clinical application to laparoscopic gastrectomy for gastric cancer. The proposed surgical navigation system generates virtual laparoscopic views corresponding to the laparoscope position recorded with a three-dimensional (3D) positional tracker. The virtual laparoscopic views are generated from preoperative CT images. A point-based registration aligns coordinate systems between the patient’s anatomy and image coordinates. The proposed navigation system is able to display the virtual laparoscopic views using the registration result during surgery. We performed surgical navigation during laparoscopic gastrectomy in 23 cases. The navigation system was able to present the virtual laparoscopic views in synchronization with the laparoscopic position. The fiducial registration error was calculated in all 23 cases, and the average was 14.0 mm (range 6.1–29.8). The proposed surgical navigation system can provide CT-derived patient anatomy aligned to the laparoscopic view in real time during surgery. This system enables accurate identification of vascular anatomy as a guide to vessel clamping prior to total or partial gastrectomy.
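Point-based rigid registration of the kind used here is commonly solved in closed form with the Kabsch/SVD method, and the fiducial registration error (FRE) is the RMS residual at the fiducials after alignment. A sketch under those standard formulations (synthetic fiducials, not the system's actual implementation):

```python
import numpy as np

def point_register(src, dst):
    """Rigid point-based registration (Kabsch): find R, t minimizing
    the least-squares error of R @ src_i + t against dst_i."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

def fre(src, dst, r, t):
    """Fiducial registration error: RMS residual over the fiducials."""
    residuals = (r @ src.T).T + t - dst
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

# Synthetic fiducials: apply a known 30-degree rotation plus a translation.
rng = np.random.default_rng(0)
src = rng.random((6, 3)) * 100.0
th = np.deg2rad(30.0)
r_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
dst = (r_true @ src.T).T + np.array([10.0, -5.0, 2.0])
r, t = point_register(src, dst)
print(fre(src, dst, r, t))  # ~0 for noise-free fiducials
```

In the clinical setting the residual is nonzero (14.0 mm on average here) because the fiducial localizations in patient and image space carry measurement error.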

Journal ArticleDOI
01 Jan 2016
TL;DR: A gesture set is proposed to control basic functions of intervention software, including gestures for 2D image exploration, 3D object manipulation and selection; the gesture-controlled projection display is well suited to become an integral part of future interventional suites.
Abstract: The interaction with interventional imaging systems within a sterile environment is a challenging task for physicians. Direct physician–machine interaction during an intervention is rather limited because of sterility and workspace restrictions. We present a gesture-controlled projection display that enables a direct and natural physician–machine interaction during computed tomography (CT)-based interventions. Therefore, a graphical user interface is projected on a radiation shield located in front of the physician. Hand gestures in front of this display are captured and classified using a leap motion controller. We propose a gesture set to control basic functions of intervention software such as gestures for 2D image exploration, 3D object manipulation and selection. Our methods were evaluated in a clinically oriented user study with 12 participants. The results of the performed user study confirm that the display and the underlying interaction concept are accepted by clinical users. The recognition of the gestures is robust, although there is potential for improvements. The gesture training times are less than 10 min, but vary heavily between the participants of the study. The developed gestures are connected logically to the intervention software and intuitive to use. The proposed gesture-controlled projection display counters current thinking, namely it gives the radiologist complete control of the intervention software. It opens new possibilities for direct physician–machine interaction during CT-based interventions and is well suited to become an integral part of future interventional suites.