
Showing papers by "Septimiu E. Salcudean published in 2021"


Journal ArticleDOI
TL;DR: A robust technique is needed to enhance ultrasonic needle visibility, especially for steeply inserted hand-held needles, while maintaining clinical utility requirements.
Abstract: This scoping review covers needle visualization and localization techniques in ultrasound, where localization-based approaches mostly aim to compute the needle shaft (and tip) location while potentially enhancing its visibility too. A literature review is conducted on the state-of-the-art techniques, which can be divided into five categories: (1) signal and image processing-based techniques to augment the needle, (2) modifications to the needle and insertion to help with needle-transducer alignment and visibility, (3) changes to ultrasound image formation, (4) motion-based analysis and (5) machine learning. Advantages, limitations and challenges of representative examples in each of the categories are discussed. Evaluation techniques performed in ex vivo, phantom and in vivo studies are discussed and summarized. The greatest limitation of the majority of the literature is reliance on the original visibility of the needle in the static image. The need for additional or improved apparatus is the greatest limitation to clinical utility in practice. Ultrasound-guided needle placement is performed in many clinical applications, including biopsies, treatment injections and anesthesia. Despite the wide range and long history of this technique, an ongoing challenge is needle visibility in ultrasound. A robust technique to enhance ultrasonic needle visibility, especially for steeply inserted hand-held needles, while maintaining clinical utility requirements, is needed.

19 citations


Journal ArticleDOI
TL;DR: 3D Shear Wave Absolute Vibro-Elastography with matrix array has the potential to deliver a similar assessment of liver fibrosis as MRE in a more accessible, inexpensive way, to a broader set of patients.
Abstract: Magnetic resonance elastography (MRE) is commonly regarded as the imaging-based gold standard for liver fibrosis staging, comparable to biopsy. While ultrasound-based elastography methods for liver fibrosis staging have been developed, they are confined to a 1D or a 2D region of interest and to a limited depth. 3D Shear Wave Absolute Vibro-Elastography (S-WAVE) is a steady-state, external-excitation, volumetric elastography technique that is similar to MRE, but has the additional advantage of multi-frequency excitation. We present a novel ultrasound matrix array implementation of S-WAVE that takes advantage of 3D imaging. We use a matrix array transducer to sample axial multi-frequency steady-state tissue motion over a volume, using a Color Power Angiography sequence. Tissue motion with the frequency components {40, 50, 60} Hz and {45, 55, 65} Hz is acquired over a (90° lateral) × (40° elevational) × (16 cm depth) sector with an acquisition time of 12 seconds. We compute the elasticity map in 3D using local spatial frequency estimation. We characterize this new approach in tissue phantoms against measurements obtained with transient elastography (TE) and MRE. Six healthy volunteers and eight patients with chronic liver disease were imaged. Their MRE and S-WAVE volumes were aligned using T1 to B-mode registration for direct comparison in common regions of interest. S-WAVE and MRE results are correlated with R² = 0.92, while MRE and TE results are correlated with R² = 0.71. Our findings show that S-WAVE with a matrix array has the potential to deliver a similar assessment of liver fibrosis as MRE in a more accessible, inexpensive way, to a broader set of patients.

16 citations
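
As a rough illustration of the local spatial frequency idea, the sketch below recovers shear wave speed from the phase gradient of a synthetic 1D phasor field and converts it to elasticity. It is a minimal sketch, not the paper's method: the density value, excitation frequency, and plane-wave data are assumptions, and the actual implementation uses a 3D filter-bank local frequency estimator.

```python
import numpy as np

rho = 1000.0   # assumed tissue density [kg/m^3]
f = 50.0       # one of the excitation frequencies [Hz]
dx = 1e-3      # spatial sampling [m]

# Synthetic 1D phasor profile of a shear wave travelling at 4 m/s
c_true = 4.0
k = 2 * np.pi * f / c_true                   # angular wavenumber [rad/m]
x = np.arange(512) * dx
u = np.exp(1j * k * x)                       # complex (phasor) displacement

# Local spatial frequency from the gradient of the unwrapped phase
k_est = np.abs(np.gradient(np.unwrap(np.angle(u)), dx))
c_est = 2 * np.pi * f / np.maximum(k_est, 1e-6)  # local shear wave speed [m/s]
mu = rho * c_est**2                          # shear modulus [Pa]
E = 3.0 * mu                                 # Young's modulus (incompressibility)

print(f"median E = {np.median(E) / 1e3:.1f} kPa")  # ~48 kPa for c = 4 m/s
```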


Journal ArticleDOI
TL;DR: Instrument–tissue interaction forces in minimally invasive surgery (MIS) provide valuable information that can be used to provide haptic perception, monitor tissue trauma, and develop training guidelines, as reviewed in this paper.
Abstract: Instrument–tissue interaction forces in minimally invasive surgery (MIS) provide valuable information that can be used to provide haptic perception, monitor tissue trauma, develop training guidelines, and evaluate the skill level of novice and expert surgeons.

10 citations


Journal ArticleDOI
18 Feb 2021
TL;DR: The notion of “automation for surgical manual execution” is proposed where it is argued that autonomous robotic surgery research can be used as a tool for surgeons to discover novel manual execution models that can significantly improve their surgical practice.
Abstract: Robots can perform multiple tasks in parallel. This work is about leveraging this capability to automate multilateral surgical subtasks. In particular, we explore, in a simulation study, the benefits of considering this parallelism capability in developing execution models for autonomous robotic surgery. We apply our work to two surgical subtask categories: (i) coupled-motion subtasks, where multiple robot arms share the same resources to perform the subtask, and (ii) decoupled-motion subtasks, where each robot arm executes its part of the task independently from the others. We propose and develop parallel execution models for the surgical debridement subtask, a representative of the first category, and the multi-throw suturing subtask, a representative of the second. Comparing these parallel execution models to the state-of-the-art ones shows significant reductions in subtask completion times of at least 40%. In 20 trials, our results show that our proposed model for the surgical debridement subtask, which uses hierarchical concurrent state machines, provides a parallel execution framework that is efficient while greatly reducing collisions between the arms compared to a naive parallel execution model without coordination. We also show how applying parallelism can lead to execution models that go beyond the normal practice of human surgeons. We finally propose the notion of “automation for surgical manual execution”, where we argue that autonomous robotic surgery research can be used as a tool for surgeons to discover novel manual execution models that can significantly improve their surgical practice.

7 citations
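
To make the coupled/decoupled distinction concrete, here is an illustrative Python threading sketch (not the paper's hierarchical-concurrent-state-machine framework): decoupled-motion arms run fully in parallel, while coupled-motion arms serialize access to a shared workspace through a lock. All motions are sleep placeholders.

```python
import threading
import time

workspace_lock = threading.Lock()  # shared resource for the coupled-motion subtask

def throw_suture(arm_id: int, n_throws: int) -> None:
    # Decoupled motion: no coordination with the other arm is needed
    for i in range(n_throws):
        time.sleep(0.01)           # placeholder for an independent arm motion
        print(f"arm {arm_id}: throw {i} done")

def debride(arm_id: int, n_fragments: int) -> None:
    # Coupled motion: only one arm may occupy the shared region at a time
    for i in range(n_fragments):
        with workspace_lock:
            time.sleep(0.01)       # placeholder for a grasp-and-retract motion
        print(f"arm {arm_id}: fragment {i} removed")

threads = [threading.Thread(target=throw_suture, args=(k, 3)) for k in (1, 2)]
threads += [threading.Thread(target=debride, args=(k, 2)) for k in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```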


Journal ArticleDOI
TL;DR: Generative adversarial networks are trained to mimic both the effect of temporal averaging and of singular value decomposition (SVD) denoising, which effectively removes noise and acquisition artifacts and improves signal-to-noise ratio (SNR) in both the radio-frequency (RF) data and in the corresponding photoacoustic reconstructions.
Abstract: We have trained generative adversarial networks (GANs) to mimic both the effect of temporal averaging and of singular value decomposition (SVD) denoising. This effectively removes noise and acquisition artifacts and improves signal-to-noise ratio (SNR) in both the radio-frequency (RF) data and in the corresponding photoacoustic reconstructions. The method allows a single frame acquisition instead of averaging multiple frames, reducing scan time and total laser dose significantly. We have tested this method on experimental data, and quantified the improvement over using either SVD denoising or frame averaging individually for both the RF data and the reconstructed images. We achieve a mean squared error (MSE) of 0.05%, structural similarity index measure (SSIM) of 0.78, and a feature similarity index measure (FSIM) of 0.85 compared to our ground-truth RF results. In the subsequent reconstructions using the denoised data we achieve a MSE of 0.05%, SSIM of 0.80, and a FSIM of 0.80 compared to our ground-truth reconstructions.

7 citations
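
For readers reproducing the evaluation, the snippet below shows how two of the three reported metrics (MSE and SSIM) can be computed with scikit-image; FSIM is omitted, as it is not part of scikit-image. The arrays are random stand-ins for a ground-truth frame and a denoiser output.

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.standard_normal((256, 256))                    # e.g. averaged RF frame
denoised = ground_truth + 0.1 * rng.standard_normal((256, 256))   # e.g. GAN output

mse = mean_squared_error(ground_truth, denoised)
ssim = structural_similarity(
    ground_truth, denoised,
    data_range=ground_truth.max() - ground_truth.min(),
)
print(f"MSE = {mse:.4f}, SSIM = {ssim:.3f}")
```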


Book ChapterDOI
27 Sep 2021
TL;DR: In this article, a joint generation and segmentation strategy was proposed to learn a segmentation model with better generalization capability to domains that have no labelled data, which leverages the availability of labeled data in a different domain.
Abstract: Surgical instrument segmentation for robot-assisted surgery is needed for accurate instrument tracking and augmented reality overlays. Therefore, the topic has been the subject of a number of recent papers in the CAI community. Deep learning-based methods have shown state-of-the-art performance for surgical instrument segmentation, but their results depend on labelled data. However, labelled surgical data is of limited availability and is a bottleneck in surgical translation of these methods. In this paper, we demonstrate the limited generalizability of these methods on different datasets, including robot-assisted surgeries on human subjects. We then propose a novel joint generation and segmentation strategy to learn a segmentation model with better generalization capability to domains that have no labelled data. The method leverages the availability of labelled data in a different domain. The generator does the domain translation from the labelled domain to the unlabelled domain and simultaneously, the segmentation model learns using the generated data while regularizing the generative model. We compared our method with state-of-the-art methods and showed its generalizability on publicly available datasets and on our own recorded video frames from robot-assisted prostatectomies. Our method shows consistently high mean Dice scores on both labelled and unlabelled domains when data is available only for one of the domains.

6 citations
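
A rough PyTorch sketch of the joint objective follows; the single-layer networks, loss weights, and random tensors are placeholders (not the paper's architectures), and only the coupling matters: the segmentation loss on generator-translated images trains the segmenter while regularizing the generator.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))   # labelled -> unlabelled domain
segmenter = nn.Sequential(nn.Conv2d(1, 2, 3, padding=1))   # 2-class instrument head
disc = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1))

opt = torch.optim.Adam(
    list(generator.parameters()) + list(segmenter.parameters()), lr=1e-4)

x_labelled = torch.randn(4, 1, 64, 64)        # labelled-domain frames
y = torch.randint(0, 2, (4, 64, 64))          # instrument masks for x_labelled

fake_target = generator(x_labelled)           # translate into the unlabelled domain
seg_loss = nn.functional.cross_entropy(segmenter(fake_target), y)
adv_loss = -disc(fake_target).mean()          # discriminator not updated in this snippet

opt.zero_grad()
(seg_loss + 0.1 * adv_loss).backward()        # seg loss also regularizes the generator
opt.step()
print(f"seg loss: {seg_loss.item():.3f}")
```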


Book ChapterDOI
27 Sep 2021
TL;DR: In this article, a conditional generative adversarial network (GAN) is used to generate consistent treatment plans from a large pool of successful clinical data (961 patients) for low-dose-rate prostate brachytherapy (LDR-PB).
Abstract: Treatment planning in low-dose-rate prostate brachytherapy (LDR-PB) aims to produce an arrangement of implantable radioactive seeds that delivers a minimum prescribed dose to the prostate whilst minimizing toxicity to healthy tissues. There can be multiple seed arrangements that satisfy this dosimetric criterion, not all deemed ‘acceptable’ for implant from a physician’s perspective. This leads to subjective plans whose quality depends on the expertise of the planner. We propose a method that learns to generate consistent treatment plans from a large pool of successful clinical data (961 patients). Our model is based on conditional generative adversarial networks that use a novel loss function for penalizing the model on spatial constraints of the seeds. An optional optimizer based on a simulated annealing (SA) algorithm can be used to further fine-tune the plans if necessary (determined by the treating physician). Performance analysis was conducted on 150 test cases, demonstrating results comparable to those of the manual plans. On average, the clinical target volume covered by 100% of the prescribed dose was 98.9% for our method compared to 99.4% for manual plans. Moreover, using our model, the planning time was significantly reduced to an average of 3 s/plan (2.5 min/plan with the optional SA). Compared to this, manual planning at our centre takes around 20 min/plan.

5 citations
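
The optional SA stage can be pictured with the toy Metropolis loop below; the quadratic cost is a stand-in for the real dosimetric objective, and the grid moves and cooling schedule are assumptions.

```python
import math
import random

random.seed(0)

def cost(plan):
    # Placeholder for the dosimetric objective (coverage, overdose, constraints)
    return sum((x - 5) ** 2 + (y - 5) ** 2 + (z - 5) ** 2 for x, y, z in plan)

current = [(i % 3, i % 4, i) for i in range(10)]   # toy initial seed arrangement
cur_cost = cost(current)
T = 10.0                                           # initial temperature
for _ in range(5000):
    cand = list(current)
    i = random.randrange(len(cand))                # perturb one seed to a neighbor slot
    x, y, z = cand[i]
    dx, dy, dz = random.choice([(-1, 0, 0), (1, 0, 0), (0, -1, 0),
                                (0, 1, 0), (0, 0, -1), (0, 0, 1)])
    cand[i] = (x + dx, y + dy, z + dz)
    delta = cost(cand) - cur_cost
    if delta < 0 or random.random() < math.exp(-delta / T):   # Metropolis acceptance
        current, cur_cost = cand, cost(cand)
    T *= 0.999                                     # geometric cooling
print(f"final cost: {cur_cost}")
```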


Journal ArticleDOI
TL;DR: The proposed configurable, modular, and compact electronics lead to performance characteristics that cannot be reached by currently available sensors: ultra-low noise with an average noise power spectral density of 15 nV/√Hz, and a hardware latency of less than 100 µs.
Abstract: In this article, we present the novel hardware and software architecture of a smart optical force–torque sensor. The proposed configurable, modular, and compact electronics lead to performance characteristics that cannot be reached by currently available sensors: ultra-low noise with average noise power spectral density of 15 nV/√Hz over a signal bandwidth of 500 Hz, a resolution of 0.0001% full scale at a 95% confidence level, and a hardware latency of less than 100 µs. Performance is achieved by local synchronized oversampling of the sensor's optical transducers and parallel hardware processing of the sensor data using a field-programmable gate array (FPGA). The FPGA's reconfigurability provides for easy customization and updates; for example, by increasing the FPGA system clock rate to a maximum of 160 MHz, latency can be decreased to 50 µs, limited by the current analog-to-digital converter. Furthermore, the approach is generic and could be duplicated with other types of transducers. An inertial measurement unit and a temperature sensor are integrated into the sensor electronics for gravity, inertia, and temperature compensations. Two software development kits that allow for the use of the sensor and its integration into the robot operating system have been developed and are discussed.

5 citations
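
A figure like 15 nV/√Hz is typically verified by estimating the power spectral density of a recorded noise trace; the sketch below does this with Welch's method on synthetic white noise scaled to that density (the sample rate and record length are assumptions).

```python
import numpy as np
from scipy.signal import welch

fs = 10_000.0                                   # assumed sample rate [Hz]
rng = np.random.default_rng(0)
# White noise whose one-sided PSD is (15 nV)^2 per Hz
noise = 15e-9 * np.sqrt(fs / 2) * rng.standard_normal(200_000)

f, psd = welch(noise, fs=fs, nperseg=4096)      # one-sided PSD [V^2/Hz]
band = (f > 0) & (f <= 500)                     # the 500 Hz signal bandwidth
asd = np.sqrt(psd[band].mean())                 # amplitude spectral density
print(f"average noise density ~ {asd * 1e9:.1f} nV/sqrt(Hz)")
```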


Journal ArticleDOI
TL;DR: In this paper, a markerless AR guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP) is presented, which transforms medical data from transrectal ultrasound (TRUS) to endoscope camera image.
Abstract: Intra-operative augmented reality (AR) during surgery can mitigate incomplete cancer removal by overlaying the anatomical boundaries extracted from medical imaging data onto the camera image. In this paper, we present the first such completely markerless AR guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP) that transforms medical data from transrectal ultrasound (TRUS) to the endoscope camera image. Moreover, we reduce the total number of transformations by combining the hand–eye and camera calibrations in a single step. Our proposed solution requires two transformations: the TRUS-to-robot transformation, T_TRUS^DV, and the camera projection matrix, M (i.e., the transformation from the endoscope to the camera image frame). T_TRUS^DV is estimated by the method proposed in Mohareri et al. (J Urol 193(1):302–312, 2015). M is estimated by selecting corresponding 3D–2D data points in the endoscope and the image coordinate frames, respectively, using a CAD model of the surgical instrument and a preoperative camera intrinsic matrix under a projective camera assumption. The parameters are estimated using the Levenberg–Marquardt algorithm. Overall mean re-projection errors (MRE) are reported using simulated data and real data from a water bath. We show that M can be re-estimated if the focus is changed during surgery. Using simulated data, we obtained an overall MRE in the range of 11.69–13.32 pixels for the monoscopic and stereo left and right cameras. For the water bath experiment, the overall MRE is in the range of 26.04–30.59 pixels for the monoscopic and stereo cameras. The overall system error from TRUS to camera world frame is 4.05 mm. Details of the procedure are given in the supplementary material. We demonstrate a markerless AR guidance system for RALRP that does not need calibration markers and thus has the capability to re-estimate the camera projection matrix if it changes during surgery, e.g., due to a focus change.

5 citations
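
A simplified version of the projection-matrix estimation can be sketched with SciPy's Levenberg–Marquardt solver: fit a 3×4 matrix M to 3D–2D correspondences by minimizing re-projection error. The camera, points, and noise level below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from scipy.optimize import least_squares

# Ground-truth projective camera used only to synthesize correspondences
M_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, -5.0],
                   [0.0, 0.0, 1.0, 4.0]])
rng = np.random.default_rng(1)
X = np.hstack([rng.uniform(-1, 1, (20, 3)), np.ones((20, 1))])  # homogeneous 3D points
P = X @ M_true.T
uv = P[:, :2] / P[:, 2:3] + 0.5 * rng.standard_normal((20, 2))  # noisy 2D observations

def residuals(m):
    Q = X @ m.reshape(3, 4).T
    return (Q[:, :2] / Q[:, 2:3] - uv).ravel()   # re-projection residuals [px]

m0 = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 0, 1.0]]).ravel()
sol = least_squares(residuals, m0, method="lm")  # Levenberg-Marquardt
err = np.linalg.norm(sol.fun.reshape(-1, 2), axis=1)
print(f"mean re-projection error: {err.mean():.2f} px")
```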


Journal ArticleDOI
TL;DR: In this article, the authors present a six-axis optical sensor that employs pairs of light-emitting diodes (LEDs) and bicell photodetectors, and corresponding slits that modulate the projected LED light onto the photodets in response to external forces.
Abstract: In this article, we present a novel six-axis optical sensor that employs pairs of light-emitting diodes (LEDs) and bicell photodetectors, and corresponding slits that modulate the projected LED light onto the photodetectors in response to external forces. The sensor can be clamped on and off a structure and relies upon the compliance of the structure for force estimation; it has no flexible components and, therefore, is robust to overload. The mechatronic design features low-noise, wide-dynamic-range opto-electronics and signal conditioning, coupled with colocated digital electronics based on an FPGA that samples all sensing channels synchronously, enabling very low-noise displacement sensing with a resolution of 1.62 nm, a low measurement signal latency of 100 µs, a high measurement bandwidth of 500 Hz, and high data transfer rates in excess of 11.5 kHz for transmission of six-axis transducer data to a host computer. The transducer's resolution is better than 0.0001% of full scale. A sensor model has been derived and can be used to explore design tradeoffs. A calibration approach based on an external reference sensor and an approach to temperature compensation are presented and validated. A video is attached.

4 citations


Book ChapterDOI
27 Sep 2021
TL;DR: Real-Time Rotated (ReTRo) as mentioned in this paper is a convolutional feature descriptor that learns a sampling pattern as part of the network, in addition to being the first real-time learned descriptor for surgery.
Abstract: Many descriptors exist that are usable in real time and tailored for indoor and outdoor tracking and mapping, with a small subset of these being learned descriptors. In order to enable the same in deformable surgical environments without ground truth data, we propose a Real-Time Rotated descriptor, ReTRo, that can be trained in a weakly-supervised manner using stereo images. We propose a novel network that creates these fast, high-quality descriptors, which have the option to be binary-valued. ReTRo is the first convolutional feature descriptor to learn a sampling pattern as part of the network, in addition to being the first real-time learned descriptor for surgery. ReTRo runs on multiple scales and has a large receptive field while only requiring small patches as input, affording it great speed. We quantify ReTRo by using it for pose estimation and tissue tracking, demonstrating its efficacy and real-time speed. ReTRo outperforms classical descriptors used in surgery, and it will enable surgical tracking and mapping frameworks.
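
ReTRo itself is not reproduced here, but since it can be binary-valued, it plugs into the same Hamming-distance matching pipeline as other binary descriptors. As a stand-in, the sketch below runs that pipeline with OpenCV's ORB on a synthetic image pair.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
img1 = rng.uniform(0, 255, (240, 320)).astype(np.uint8)  # synthetic textured frame
img2 = np.roll(img1, 5, axis=1)                          # shifted copy as a second view

orb = cv2.ORB_create(nfeatures=500)                      # binary-descriptor stand-in
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the natural metric for binary descriptors like ReTRo
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
print(f"{len(matches)} cross-checked matches")
```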

Journal ArticleDOI
TL;DR: In this paper, a multiparametric 3D weighted QUS (3D QUS) method was proposed, involving the reconstruction of three QUS parameters: attenuation coefficient estimate (ACE), integrated backscatter coefficient (IBC), and effective scatterer diameter (ESD).

Journal ArticleDOI
TL;DR: In this paper, a conditional generative adversarial network (GAN) is used to learn a center's planning strategy and automatically reproduce rapid clinically acceptable plans for low-dose-rate prostate brachytherapy (LDR-PB).
Abstract: In low-dose-rate prostate brachytherapy (LDR-PB), treatment planning is the process of determining the arrangement of implantable radioactive sources that radiates the prostate while sparing healthy surrounding tissues. Currently, these plans are prepared manually by experts incorporating the centre’s planning style and guidelines. In this article, we develop a novel framework that can learn a centre’s planning strategy and automatically reproduce rapid clinically acceptable plans. The proposed framework is based on conditional generative adversarial networks that learn our centre’s planning style using a pool of 931 historical LDR-PB planning data. Two additional losses that help constrain prohibited needle patterns and produce similar-looking plans are also proposed. Once trained, this model generates an initial distribution of needles which is passed to a planner. The planner then initializes the sources based on the predicted needles and uses a simulated annealing algorithm to optimize their locations further. Quantitative analysis was carried out on 170 cases, which showed the generated plans having similar dosimetry to that of the manual plans but with significantly lower planning durations. Indeed, on the test cases, the clinical target volume achieving 100% of the prescribed dose for the generated plans was on average 98.98% (99.36% for manual plans), with an average planning time of 3.04 ± 1.1 min (20 ± 10 min for manual plans). Further qualitative analysis was conducted by an expert planner who accepted 90% of the plans with some changes (60% requiring minor changes & 30% requiring major changes). The proposed framework demonstrated the ability to rapidly generate quality treatment plans that not only fulfil the dosimetric requirements but also take into account the centre’s planning style. Adoption of such a framework would save a significant amount of time and resources spent on every patient, boosting the overall operational efficiency of this treatment.

Journal ArticleDOI
TL;DR: In this paper, an iterative model-based algorithm based on the variance-reduced stochastic gradient descent (VR-SGD) method is implemented for photoacoustic tomography (PAT) imaging with linear arrays.
Abstract: Significance: As linear array transducers are widely used in clinical ultrasound imaging, photoacoustic tomography (PAT) with linear arrays is similarly suitable for clinical applications. However, due to the limited-view problem, a linear array has limited performance and leads to artifacts and blurring, which has hindered its broader application. There is a need to address the limited-view problem in PAT imaging with linear arrays. Aim: We investigate potential approaches for improving PAT reconstruction from linear arrays by optimizing the detection geometry and implementing iterative reconstruction. Approach: PAT imaging with a single array, with dual-probe configurations in parallel and L shapes, and with a square-shape configuration is compared in simulations and phantom experiments. An iterative model-based algorithm based on the variance-reduced stochastic gradient descent (VR-SGD) method is implemented. The optimum configuration found in simulation is validated in phantom experiments. Results: PAT imaging with dual-probe detection and the VR-SGD algorithm is found to alleviate the limited-view problem compared to a single probe and to provide performance comparable to full-view geometry in simulation. This configuration is validated in experiments, where more complete structure is obtained with reduced artifacts compared with a single array. Conclusions: PAT with dual-probe detection and iterative reconstruction is a promising solution to the limited-view problem of linear arrays.
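
The variance-reduction idea behind VR-SGD can be shown on a toy least-squares problem; in the sketch below a random matrix stands in for the PAT forward model, and the update subtracts the stochastic gradient at a snapshot and adds back the snapshot's full gradient (an SVRG-style rule, assumed here as representative of the family).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((400, 100))    # stand-in for the PAT forward model
x_true = rng.standard_normal(100)
b = A @ x_true                         # simulated measurements

x = np.zeros(100)
lr, epochs, n = 1e-3, 30, A.shape[0]
for _ in range(epochs):
    x_snap = x.copy()
    full_grad = 2 * A.T @ (A @ x_snap - b) / n           # full gradient at snapshot
    for _ in range(n):
        i = rng.integers(n)
        gi = 2 * A[i] * (A[i] @ x - b[i])                # stochastic gradient at x
        gi_snap = 2 * A[i] * (A[i] @ x_snap - b[i])      # same sample at snapshot
        x -= lr * (gi - gi_snap + full_grad)             # variance-reduced step
print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.2e}")
```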

Posted Content
TL;DR: In this article, a multiparametric 3D weighted QUS (3D QUS) imaging system is presented, involving the reconstruction of three QUS parameters: attenuation coefficient estimate (ACE), integrated backscatter coefficient (IBC), and effective scatterer diameter (ESD).
Abstract: Quantitative ultrasound (QUS) offers a non-invasive and objective way to quantify tissue health. We recently presented a spatially adaptive regularization method for reconstruction of a single QUS parameter, limited to a two-dimensional region. That proof-of-concept study showed that regularization using a homogeneity prior improves the fundamental precision-resolution trade-off in QUS estimation. Based on the weighted regularization scheme, we now present a multiparametric 3D weighted QUS (3D QUS) imaging system, involving the reconstruction of three QUS parameters: attenuation coefficient estimate (ACE), integrated backscatter coefficient (IBC) and effective scatterer diameter (ESD). With the phantom studies, we demonstrate that our proposed method accurately reconstructs QUS parameters, resulting in high reconstruction contrast and therefore improved diagnostic utility. Additionally, the proposed method offers the ability to analyze the spatial distribution of QUS parameters in 3D, which allows for superior tissue characterization. We apply a three-dimensional total variation regularization method for the volumetric QUS reconstruction. The 3D regularization involving N planes results in a high QUS estimation precision, with an improvement in standard deviation over the theoretical rate achievable by compounding N independent realizations. In the in vivo liver study, we demonstrate the advantage of adopting a multiparametric approach over the single-parametric counterpart, where a simple quadratic discriminant classifier using a feature combination of three QUS parameters was able to attain perfect classification performance in distinguishing between normal and fatty liver cases.
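
The volumetric TV step can be pictured with scikit-image's Chambolle solver, used below on a synthetic piecewise-constant parameter volume; this is only a stand-in for the paper's weighted multiparametric reconstruction.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0                         # piecewise-constant toy "ACE" volume
noisy = vol + 0.3 * rng.standard_normal(vol.shape)

smoothed = denoise_tv_chambolle(noisy, weight=0.2)  # 3D total variation regularization
inner = smoothed[8:24, 8:24, 8:24]
print(f"std inside inclusion before/after: {noisy[8:24, 8:24, 8:24].std():.2f} / {inner.std():.2f}")
```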

Proceedings Article
24 Mar 2021
TL;DR: Wang et al. as discussed by the authors reformulated the multi-view 3D reconstruction as a sequence-to-sequence prediction problem and proposed a new framework named 3D Volume Transformer (VolT) for such a task.
Abstract: Deep CNN-based methods have so far achieved state-of-the-art results in multi-view 3D object reconstruction. Despite the considerable progress, the two core modules of these methods, multi-view feature extraction and fusion, are usually investigated separately, and the object relations in different views are rarely explored. In this paper, inspired by the recent great success of self-attention-based Transformer models, we reformulate multi-view 3D reconstruction as a sequence-to-sequence prediction problem and propose a new framework named 3D Volume Transformer (VolT) for such a task. Unlike previous CNN-based methods using a separate design, we unify the feature extraction and view fusion in a single Transformer network. A natural advantage of our design lies in the exploration of view-to-view relationships using self-attention among multiple unordered inputs. On ShapeNet, a large-scale 3D reconstruction benchmark dataset, our method achieves a new state-of-the-art accuracy in multi-view reconstruction with 70% fewer parameters than other CNN-based methods. Experimental results also suggest the strong scaling capability of our method. Our code will be made publicly available.
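
The sequence-to-sequence framing reduces to treating each view as a token and letting self-attention fuse them; the minimal PyTorch sketch below shows only that step, with the token dimension, depth, and the crude linear occupancy head all assumed (the real VolT has patch embeddings and a volume decoder).

```python
import torch
import torch.nn as nn

n_views, d = 8, 256
view_tokens = torch.randn(1, n_views, d)           # one feature token per input view

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True),
    num_layers=4,
)
fused = encoder(view_tokens)                       # view-to-view self-attention
head = nn.Linear(d, 32 ** 3)                       # crude occupancy head (assumed)
volume = head(fused.mean(dim=1)).reshape(1, 32, 32, 32)
print(volume.shape)                                # torch.Size([1, 32, 32, 32])
```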

Posted Content
TL;DR: Kalia et al. as mentioned in this paper proposed a joint generation and segmentation strategy to learn a segmentation model with better generalization capability to domains that have no labelled data, which leverages the availability of labelled data in a different domain.
Abstract: Surgical instrument segmentation for robot-assisted surgery is needed for accurate instrument tracking and augmented reality overlays. Therefore, the topic has been the subject of a number of recent papers in the CAI community. Deep learning-based methods have shown state-of-the-art performance for surgical instrument segmentation, but their results depend on labelled data. However, labelled surgical data is of limited availability and is a bottleneck in surgical translation of these methods. In this paper, we demonstrate the limited generalizability of these methods on different datasets, including human robot-assisted surgeries. We then propose a novel joint generation and segmentation strategy to learn a segmentation model with better generalization capability to domains that have no labelled data. The method leverages the availability of labelled data in a different domain. The generator does the domain translation from the labelled domain to the unlabelled domain and simultaneously, the segmentation model learns using the generated data while regularizing the generative model. We compared our method with state-of-the-art methods and showed its generalizability on publicly available datasets and on our own recorded video frames from robot-assisted prostatectomies. Our method shows consistently high mean Dice scores on both labelled and unlabelled domains when data is available only for one of the domains. *M. Kalia and T. Aleef contributed equally to the manuscript

Posted Content
TL;DR: In this article, the prostate is segmented on a challenging dataset of trans-rectal ultrasound (TRUS) images using convolutional neural networks (CNNs) and statistical shape models (SSMs).
Abstract: In this work we propose to segment the prostate on a challenging dataset of trans-rectal ultrasound (TRUS) images using convolutional neural networks (CNNs) and statistical shape models (SSMs). TRUS is commonly used for a number of image-guided interventions on the prostate. Fast and accurate segmentation of the organ in these images is crucial to planning and to fusion with other modalities such as magnetic resonance images (MRIs). However, TRUS has limited soft tissue contrast and signal-to-noise ratio, which makes the task of segmenting the prostate challenging and subject to inter-observer and intra-observer variability. This is especially problematic at the base and apex, where the gland boundary is hard to define. In this paper, we aim to tackle this problem by taking advantage of shape priors learnt on an MR dataset, which has higher soft tissue contrast, allowing the prostate to be contoured more accurately. We use this shape prior in combination with a prostate tissue probability map computed by a CNN for segmentation.
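
One simple way to picture combining a CNN probability map with a shape prior is a product-of-experts fusion, sketched below on synthetic maps; the paper's actual SSM fitting is more sophisticated than this.

```python
import numpy as np

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
# Gaussian blob as a stand-in for the MR-learnt shape prior
prior = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 25.0 ** 2))
# Noisy version as a stand-in for the CNN tissue probability map
cnn_prob = np.clip(prior + 0.2 * rng.standard_normal(prior.shape), 0.0, 1.0)

posterior = cnn_prob * prior                    # agreement of appearance and prior
mask = posterior > 0.5 * posterior.max()        # final binary prostate mask
print(f"segmented area: {mask.sum()} px")
```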

Posted Content
TL;DR: Wang et al. as discussed by the authors reformulated the multi-view 3D reconstruction as a sequence-to-sequence prediction problem and proposed a new framework named 3D Volume Transformer (VolT) for such a task.
Abstract: Deep CNN-based methods have so far achieved state-of-the-art results in multi-view 3D object reconstruction. Despite the considerable progress, the two core modules of these methods, multi-view feature extraction and fusion, are usually investigated separately, and the object relations in different views are rarely explored. In this paper, inspired by the recent great success of self-attention-based Transformer models, we reformulate multi-view 3D reconstruction as a sequence-to-sequence prediction problem and propose a new framework named 3D Volume Transformer (VolT) for such a task. Unlike previous CNN-based methods using a separate design, we unify the feature extraction and view fusion in a single Transformer network. A natural advantage of our design lies in the exploration of view-to-view relationships using self-attention among multiple unordered inputs. On ShapeNet, a large-scale 3D reconstruction benchmark dataset, our method achieves a new state-of-the-art accuracy in multi-view reconstruction with 70% fewer parameters than other CNN-based methods. Experimental results also suggest the strong scaling capability of our method. Our code will be made publicly available.

Posted Content
TL;DR: A comprehensive systematic review of the current force sensing research aimed at RAS and, more generally, keyhole endoscopy, in which instruments enter the body through small incisions, is provided in this paper.
Abstract: Instrument-tissue interaction forces in Minimally Invasive Surgery (MIS) provide valuable information that can be used to provide haptic perception, monitor tissue trauma, develop training guidelines, and evaluate the skill level of novice and expert surgeons. Force and tactile sensing is lost in many Robot-Assisted Surgery (RAS) systems. Therefore, many researchers have focused on recovering this information through sensing systems and estimation algorithms. This article provides a comprehensive systematic review of the current force sensing research aimed at RAS and, more generally, keyhole endoscopy, in which instruments enter the body through small incisions. Articles published between January 2011 and May 2020 are considered, following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines. The literature search resulted in 110 papers on different force estimation algorithms and sensing technologies, sensor design specifications, and fabrication techniques.

Posted Content
TL;DR: In this article, a model-based iterative method to obtain shear modulus images of tissue using magnetic resonance elastography was proposed, which jointly finds the displacement field that best fits multifrequency tissue displacement data and the corresponding shear modulation.
Abstract: We introduce a model-based iterative method to obtain shear modulus images of tissue using magnetic resonance elastography. The method jointly finds the displacement field that best fits multifrequency tissue displacement data and the corresponding shear modulus. The displacement satisfies a viscoelastic wave equation constraint, discretized using the finite element method. Sparsifying regularization terms in both the shear modulus and the displacement are used in the cost function minimized for the best fit. The formulated problem is bi-convex. Its solution can be obtained iteratively by using the alternating direction method of multipliers. The sparsifying regularizations and the wave equation constraint filter out sensor noise and compressional waves. Our method does not require bandpass filtering as a preprocessing step and converges quickly irrespective of the initialization. We evaluate our new method in multiple in silico and phantom experiments, with comparisons with existing methods, and we show improvements in contrast-to-noise and signal-to-noise ratios. Results from an in vivo liver imaging study show elastograms with mean elasticity comparable to other values reported in the literature.
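
The bi-convex alternation can be stripped to a 1D, dimensionless toy (ρω² = 1, unit grid spacing): alternately fit a displacement to noisy data under the Helmholtz constraint μu″ + u = 0 for the current μ, then re-estimate μ from that displacement. Everything below (regularization weight, noise level, algebraic μ update) is an illustrative assumption; the paper works in 3D with FEM, sparsity terms, and ADMM.

```python
import numpy as np

n = 600
k = np.where(np.arange(n) < n // 2, 0.6, 1.0)    # local wavenumber [rad/sample]
d = np.sin(np.cumsum(k)) + 0.02 * np.random.default_rng(0).standard_normal(n)

D2 = np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1)
mu, lam = np.full(n, 2.0), 1e-3
for _ in range(10):
    A = mu[:, None] * D2 + np.eye(n)             # Helmholtz operator for current mu
    u = np.linalg.solve(np.eye(n) + lam * A.T @ A, d)   # data fit + wave constraint
    upp = D2 @ u
    good = np.abs(upp) > 0.05                    # avoid zero crossings of u''
    mu[good] = np.clip(-u[good] / upp[good], 0.2, 20.0)  # mu = -u / u''
print(f"median mu left/right: {np.median(mu[:n//2]):.2f} / {np.median(mu[n//2:]):.2f}")
print(f"target 1/k^2 left/right: {1/0.6**2:.2f} / {1/1.0**2:.2f}")
```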

Posted Content
TL;DR: In this paper, a conditional generative adversarial network (GAN) is used to penalize the model on spatial constraints of the seeds, and an optional optimizer based on simulated annealing (SA) algorithm can be used to further fine-tune the plans if necessary (determined by the treating physician).
Abstract: Treatment planning in low-dose-rate prostate brachytherapy (LDR-PB) aims to produce an arrangement of implantable radioactive seeds that delivers a minimum prescribed dose to the prostate whilst minimizing toxicity to healthy tissues. There can be multiple seed arrangements that satisfy this dosimetric criterion, not all deemed 'acceptable' for implant from a physician's perspective. This leads to plans that are subject to the physician's/centre's preference, planning style, and expertise. We propose a method that aims to reduce this variability by training a model to learn from a large pool of successful retrospective LDR-PB data (961 patients) and create consistent plans that mimic the high-quality manual plans. Our model is based on conditional generative adversarial networks that use a novel loss function for penalizing the model on spatial constraints of the seeds. An optional optimizer based on a simulated annealing (SA) algorithm can be used to further fine-tune the plans if necessary (determined by the treating physician). Performance analysis was conducted on 150 test cases, demonstrating results comparable to those of the manual historical plans. On average, the clinical target volume covered by 100% of the prescribed dose was 98.9% for our method compared to 99.4% for manual plans. Moreover, using our model, the planning time was significantly reduced to an average of 2.5 min/plan with SA, and less than 3 seconds without SA. Compared to this, manual planning at our centre takes around 20 min/plan.

Posted Content
TL;DR: In this article, a 6-axis optical force sensor with local signal conditioning and digital electronics was mounted on the proximal shaft of a da Vinci EndoWrist instrument to measure the lateral forces and moments and axial torque applied to the instrument's distal end within the desired resolution, accuracy, and range requirements.
Abstract: This paper presents a novel multi-axis force-sensing approach in robotic minimally invasive surgery with no modification to the surgical instrument. Thus, it is adaptable to different surgical instruments. A novel 6-axis optical force sensor, with local signal conditioning and digital electronics, was mounted onto the proximal shaft of a da Vinci EndoWrist instrument. A new cannula design comprising an inner tube and an outer tube was proposed. The inner tube is attached to the cannula interface to the robot base through a compliant leaf spring with adjustable stiffness. It allows bending of the instrument shaft due to the tip forces. The outer tube mechanically filters out the body forces so that they do not affect the instrument's bending behavior. A mathematical model of the sensing principle was developed and used for model-based calibration. A data-driven calibration based on a shallow neural network architecture, comprising a single 5-node hidden layer and a 5×1 output layer, is discussed. Extensive testing was conducted to validate that the sensor can successfully measure the lateral forces and moments and the axial torque applied to the instrument's distal end within the desired resolution, accuracy, and range requirements.
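
The data-driven calibration amounts to regressing force/torque targets from raw channel readings with a small network; the sketch below mirrors the stated single 5-node hidden layer and five outputs (two lateral forces, two moments, axial torque), but the synthetic data, tanh activation, and scikit-learn solver are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
raw = rng.standard_normal((2000, 6))                 # raw optical channel readings
W1, W2 = rng.standard_normal((6, 5)), rng.standard_normal((5, 5))
# Toy nonlinear sensor map producing 5 force/torque components plus noise
wrench = np.tanh(raw @ W1) @ W2 + 0.01 * rng.standard_normal((2000, 5))

model = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(raw[:1500], wrench[:1500])                 # calibrate on a training split
print(f"held-out R^2: {model.score(raw[1500:], wrench[1500:]):.3f}")
```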