
Showing papers in "Computer methods in biomechanics and biomedical engineering. Imaging & visualization in 2016"


Journal ArticleDOI
TL;DR: The results indicate that when there is an underlying structure in the set of existing solutions, the proposed method can successfully predict the optimal topologies in novel loading configurations and can be used as effective initial conditions for conventional topology optimisation routines, resulting in substantial performance gains.
Abstract: Topology optimisation problems involving structural mechanics are highly dependent on the design constraints and boundary conditions. Thus, even small alterations in such parameters require a new application of the optimisation routine. To address this problem, we examine the use of known solutions for predicting optimal topologies under a new set of design constraints. In this context, we explore the feasibility and performance of a data-driven approach to structural topology optimisation problems. Our approach takes as input a set of images representing optimal 2D topologies, each resulting from a random loading configuration applied to a common boundary support condition. These images, represented in a high-dimensional feature space, are projected into a lower-dimensional space using component analysis. Using the resulting components, a mapping between the loading configurations and the optimal topologies is learned. From this mapping, we estimate the optimal topologies for novel loading configurations. ...
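
As an illustration of the data-driven idea described above, the sketch below flattens each optimal topology image, projects the set with PCA and fits a regression from loading parameters to the component coefficients. It is a minimal sketch with hypothetical random data; the specific component analysis and mapping used by the authors are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Hypothetical training data: each row of X_load encodes a loading
# configuration; each row of Y_img is a flattened optimal 2D topology image.
rng = np.random.default_rng(0)
X_load = rng.uniform(size=(200, 4))          # e.g. load position/magnitude/angle
Y_img = rng.uniform(size=(200, 64 * 64))     # 64x64 density maps

# Project topologies into a lower-dimensional component space.
pca = PCA(n_components=20)
Y_comp = pca.fit_transform(Y_img)

# Learn a mapping from loading configuration to component coefficients.
mapper = Ridge(alpha=1.0).fit(X_load, Y_comp)

# Predict the topology for a novel loading configuration and reconstruct it;
# the result can seed a conventional topology-optimisation routine.
x_new = rng.uniform(size=(1, 4))
y_pred = pca.inverse_transform(mapper.predict(x_new)).reshape(64, 64)
```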

70 citations


Journal ArticleDOI
TL;DR: A new method for rib cage 3D reconstruction from biplanar radiographs, using a statistical parametric model approach, will improve developments of rib cage finite element modelling and evaluation of clinical outcomes.
Abstract: Rib cage 3D reconstruction is an important prerequisite for thoracic spine modelling, particularly for studies of the deformed thorax in adolescent idiopathic scoliosis. This study proposes a new method for rib cage 3D reconstruction from biplanar radiographs, using a statistical parametric model approach. Simplified parametric models were defined at the hierarchical levels of rib cage surface, rib midline and rib surface, and applied on a database of 86 trunks. The resulting parameter database served to train statistical models which were used to quickly provide a first estimate of the reconstruction from identifications on both radiographs. This solution was then refined by manual adjustments in order to improve the matching between model and image. Accuracy was assessed by comparison with 29 rib cages from CT scans in terms of geometrical parameter differences and in terms of line-to-line error distance between the rib midlines. Intra- and inter-observer reproducibility was determined for 20 scoliotic p...

41 citations


Journal ArticleDOI
TL;DR: This paper compiles and compares well-known techniques mostly used in the smoothing or suppression of speckle noise in ultrasound images, based on spatial filtering, diffusion filtering and wavelet filtering, with 15 qualitative metrics estimation.
Abstract: Over the last three decades, several despeckling filters have been developed by researchers to reduce the speckle noise inherently present in ultrasound B-scan images without losing the diagnostic information. This paper compiles and compares well-known techniques mostly used in the smoothing or suppression of speckle noise in ultrasound images. A comparison of the methods studied is done based on an experiment, using quality metrics, texture analysis and interpretation of profiles to evaluate their performance and show the benefits each one can contribute to denoising and feature preservation. To test the methods, a noise-free image of a kidney is used and then the Field II program simulates a B-mode ultrasound image. In this way, the smoothing techniques can be compared using numeric metrics, taking the noise-free image as a reference. In this study, a total of 17 different speckle reduction algorithms have been documented based on spatial filtering, diffusion filtering and wavelet filtering, with 15 qu
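
The comparison protocol described above (filtering a simulated speckled image and scoring it against the noise-free reference) can be sketched as follows. The median filter and the PSNR/SSIM metrics below are generic stand-ins, not the 17 algorithms or 15 metrics actually evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical noise-free reference and a multiplicatively speckled version.
rng = np.random.default_rng(1)
reference = rng.uniform(0.2, 0.8, size=(256, 256))
speckled = np.clip(reference * rng.gamma(shape=4.0, scale=0.25,
                                         size=reference.shape), 0.0, 1.0)

# One generic despeckling filter as a placeholder for the surveyed methods.
despeckled = median_filter(speckled, size=5)

# Score each candidate against the noise-free reference.
for name, img in [("noisy", speckled), ("median 5x5", despeckled)]:
    psnr = peak_signal_noise_ratio(reference, img, data_range=1.0)
    ssim = structural_similarity(reference, img, data_range=1.0)
    print(f"{name:>10}: PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```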

12 citations


Journal ArticleDOI
TL;DR: The proposed technology provides the finest MRI alignments among all the methods, and brings the end-to-end execution within the real-time constraints imposed by the neurosurgical procedure.
Abstract: We present a parallel adaptive physics-based non-rigid registration framework for aligning pre-operative to intra-operative brain magnetic resonance images (MRI) of patients who have undergone a tumor resection. This framework extends our earlier work on the physics-based methods by using an adaptive, multi-material, parallel finite element biomechanical model to describe the physical deformations of the brain. Our registration technology incorporates fast image-to-mesh convertors for remeshing the brain model in real time, eliminating poor-quality elements; various linear solvers to accurately estimate the volumetric deformations; and efficient block-matching techniques to compensate for the missing/unrealistic matches induced by the tumor resection. Our evaluation is based on six clinical volume MRI data-sets, including (i) isotropic and anisotropic image spacings, and (ii) partial and complete tumor resections. We compare our framework with four methods: a rigid and BSpline deformable registration implem...

12 citations


Journal ArticleDOI
TL;DR: A robust and accurate method for the reconstruction of standard 12-lead (S12) system from Frank vectorcardiographic (FV) system using personalised transformation (PT) matrices targeting personalised remote health monitoring applications is proposed.
Abstract: In this article, we propose a robust and accurate method for the reconstruction of the standard 12-lead (S12) system from the Frank vectorcardiographic (FV) system using personalised transformation (PT) matrices, targeting personalised remote health monitoring applications. The FV system is used in the 3D visualisation of the heart and in the diagnosis and prognosis of many cardiologic disorders, including myocardial infarction and Brugada syndrome. However, cardiologists are accustomed to the S12 system owing to its decades-old usage and widespread acceptability; hence, it is generally used as the primary ECG acquisition system, and the state-of-the-art inverse Dower transform (DT) and affine transform (AT) are used to obtain the FV system from the S12 system. Here, we propose acquiring the FV system and using PT to reconstruct the S12 system from the FV system. PhysioNet's PTB database, after wavelet-based preprocessing to remove baseline wandering and noise, has been used in this investigation. The personalised coefficients have been obtai...
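
The core of the personalised-transformation idea is a per-patient linear map from the three Frank leads to the independent leads of the standard 12-lead system, fitted on a simultaneous recording. A minimal sketch under that assumption is shown below, using synthetic signals and a plain least-squares fit; the paper's exact fitting procedure and preprocessing are not reproduced.

```python
import numpy as np

# Hypothetical simultaneous recordings from one patient:
# vcg: (n_samples, 3) Frank X, Y, Z leads
# ecg: (n_samples, 8) independent 12-lead signals (I, II, V1-V6);
# the remaining four limb leads follow algebraically from I and II.
rng = np.random.default_rng(2)
n = 5000
vcg = rng.standard_normal((n, 3))
true_T = rng.standard_normal((3, 8))
ecg = vcg @ true_T + 0.01 * rng.standard_normal((n, 8))

# Personalised transformation matrix: least-squares solution of vcg @ T = ecg.
T_personal, *_ = np.linalg.lstsq(vcg, ecg, rcond=None)

# Reconstruct the 12-lead set from a Frank recording of the same patient.
ecg_hat = vcg @ T_personal
lead_I, lead_II = ecg_hat[:, 0], ecg_hat[:, 1]
lead_III = lead_II - lead_I              # Einthoven relation
lead_aVR = -(lead_I + lead_II) / 2.0
```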

10 citations


Journal ArticleDOI
TL;DR: The results indicate the difference between the role of endocardial and epicardial principal strain lines, at the systolic peak, and lay the basis for possible future investigations aimed at analysing the onset of specific cardiac diseases through noninvasive analysis of LV fibre architecture.
Abstract: We present and discuss a method to noninvasively infer information on the fibre architecture in real human left ventricle walls. The method post-processes the echocardiographic data acquired by three-dimensional speckle tracking echocardiography through a MATLAB-based protocol, already presented, discussed and validated. Our results indicate the difference between the role of endocardial and epicardial principal strain lines, at the systolic peak, and lay the basis for possible future investigations aimed at analysing the onset of specific cardiac diseases through noninvasive analysis of LV fibre architecture. Moreover, we set up a rational statistical analysis to investigate the reliability of the proposed method.
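
Principal strain directions at each wall location are the eigenvectors of the local strain tensor; the sketch below shows that post-processing step with a generic symmetric tensor eigendecomposition on a hypothetical strain value, not the authors' MATLAB protocol.

```python
import numpy as np

# Hypothetical 2D Green-Lagrange strain tensor at one endocardial location,
# e.g. assembled from speckle-tracking displacement gradients.
E = np.array([[0.12, 0.04],
              [0.04, -0.08]])

# Principal strains and principal strain directions: eigendecomposition of
# the symmetric strain tensor. The dominant eigenvector gives the local
# orientation of the principal strain line.
values, vectors = np.linalg.eigh(E)
principal_strain = values[-1]
principal_direction = vectors[:, -1]
angle_deg = np.degrees(np.arctan2(principal_direction[1], principal_direction[0]))
```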

10 citations


Journal ArticleDOI
TL;DR: The meshfree reproducing kernel particle method for 3D image-based modelling of skeletal muscles allows for the construction of a simulation model based on pixel data obtained from medical images and for the representation of material heterogeneity with smooth transitions.
Abstract: This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) in conjunction with a stabilized conforming nodal integration for 3D image-based modeling of skeletal muscles. This approach allows for the construction of a simulation model based on pixel data obtained from medical images. The model consists of different materials, and the muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) is input at each pixel point. The reproducing kernel (RK) approximation also allows a representation of material heterogeneity with smooth transitions. A multiphase multichannel level-set-based segmentation using Magnetic Resonance Images (MRI) and DTI, formulated under a modified functional, has been integrated into the RKPM framework. The use of the proposed methods for modeling the human lower leg is demonstrated.

10 citations


Journal ArticleDOI
TL;DR: The use of image morphometry and pattern recognition techniques for subtyping leukaemic lymphoblasts as per French–American–British classification is investigated and it is observed that the classification rate is improved with the use of multiple classifier ensemble.
Abstract: Leukaemias are neoplastic proliferations of haemopoietic cells which affect both children and adults and remain one of the leading causes of death around the world. Early diagnosis and classification of such malignant disorders are necessary, and it has always been a challenge in the field of haematology and laboratory medicine. Accurate and authentic diagnosis of leukaemia is essential for the confirmation of the disease, prognostic classification and effective treatment planning. Visual microscopic examination of blood slides is considered as an indispensable diagnostic technique for the screening and classification of leukaemia across continents. However, manual investigation is often slow and limited by subjective assimilation and reduced diagnostic precision. In this study, we investigate the use of image morphometry and pattern recognition techniques for subtyping leukaemic lymphoblasts as per French–American–British classification. Reliable classification results were obtained using the robust segm...
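
The multiple-classifier ensemble mentioned in the TL;DR can be sketched with generic morphometric features and a soft-voting ensemble; the feature dimensions and classifiers below are illustrative placeholders rather than the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical morphometric features per lymphoblast (area, perimeter,
# nucleus/cytoplasm ratio, ...) with FAB subtype labels L1/L2/L3.
rng = np.random.default_rng(3)
X = rng.standard_normal((300, 6))
y = rng.integers(0, 3, size=300)

# Majority/soft-vote ensemble of three base classifiers.
ensemble = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("svm", SVC(probability=True)),
                ("tree", DecisionTreeClassifier(max_depth=5))],
    voting="soft",
)
scores = cross_val_score(ensemble, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```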

9 citations


Journal ArticleDOI
TL;DR: This paper proposes an automatic method for constructing an active shape model (ASM) to segment the complete cardiac left ventricle in 3D ultrasound (3DUS) images, which avoids costly manual landmarking.
Abstract: In this paper, we propose an automatic method for constructing an active shape model (ASM) to segment the complete cardiac left ventricle in 3D ultrasound (3DUS) images, which avoids costly manual landmarking. The automatic construction of the ASM has already been addressed in the literature; however, the direct application of these methods to 3DUS is hampered by a high level of noise and artefacts. Therefore, we propose to construct the ASM by fusing the multidetector computed tomography data, to learn the shape, with the artificially generated 3DUS, in order to learn the neighbourhood of the boundaries. Our artificial images were generated by two approaches: a faster one that does not take into account the geometry of the transducer, and a more comprehensive one, implemented in Field II toolbox. The segmentation accuracy of our ASM was evaluated on 20 patients with left-ventricular asynchrony, demonstrating plausibility of the approach.

8 citations


Journal ArticleDOI
TL;DR: A novel behavioural comparison strategy specifically oriented to accuracy assessment in MRI glial tumour segmentation studies, designed to merge individual labels and produce a common segmentation based on fuzzy connectedness principles.
Abstract: In this work, we propose a novel behavioural comparison strategy specifically oriented to accuracy assessment in MRI glial tumour segmentation studies. A salient aspect of the proposed strategy is the use of the fuzzy set framework in modelling visual inspection and interpretation processes. In particular, a reference estimation strategy based on fuzzy connectedness principles is designed to merge individual labels and produce a common segmentation. The estimation is based exclusively on highly reliable partial information provided by experts. Interaction is then drastically limited compared with a complete manual tracing, leaving the estimation of the complete segmentation to the fuzzy connectedness method. A set of experiments was conceived and conducted to evaluate the contribution of the solutions proposed in the process of truth label collection and reference data estimation. A comparison analysis was also developed to see whether our method could constitute a worthy alternative to well-known and sta...

8 citations


Journal ArticleDOI
TL;DR: The HEWCVT algorithm can not only overcome the sensitivity to the seed point initialisation and noise, but also improve the accuracy and stability of clustering results, as verified in several types of images.
Abstract: In this paper, we extend the basic edge-weighted centroidal Voronoi tessellation (EWCVT) for image segmentation to a new advanced model, namely harmonic edge-weighted centroidal Voronoi tessellation (HEWCVT). This extended model introduces a harmonic form of clustering energy by combining the image intensity with cluster boundary information. Improving upon the classic centroidal Voronoi tessellation (CVT) and EWCVT methods, the HEWCVT algorithm can not only overcome the sensitivity to the seed point initialisation and noise, but also improve the accuracy and stability of clustering results, as verified in several types of images. We then present an adaptive superpixel generation algorithm based on HEWCVT. First, an innovative initial seed sampling method based on quadtree decomposition is introduced, and the image is divided into small adaptive segments according to a density function. Then, the local HEWCVT algorithm is applied to generate adaptive superpixels. The presented algorithm is capable of gene...

Journal ArticleDOI
TL;DR: The DM-POD workflow is quantitatively shown to provide more anatomically consistent representations of the RV, while in general, the features produced by the two workflows are shown to be distinctly different.
Abstract: Two contemporary statistical shape analysis workflows are presented and compared with respect to analysis of the human right ventricle (RV). The methods examined include an approach that directly applies proper orthogonal decomposition to harmonically mapped surfaces (DM-POD) and an approach that expands the harmonically mapped surfaces onto spherical harmonic functions prior to further analysis (SPHARM). The structure of both workflows is elaborated upon and compared, particularly regarding the details of several key sub-steps, including the shape parameterisation, alignment and statistical decomposition. The performance is evaluated for the components of each framework at the various analysis stages, as well as for the output of the complete workflows in terms of the potential to assess right ventricular function through application to a set of RV endocardial surfaces with varying levels of pulmonary hypertension. In addition, DM-POD and SPHARM are examined with respect to different methods of utilising...

Journal ArticleDOI
TL;DR: The experimental results show that the quality of the edge detection process improves when the proposed algorithm in the smoothing AD is used instead of the traditional techniques.
Abstract: The aim of this contribution is to make an efficient smoothing algorithm that preserves edges and provides valuable information for any segmentation process. The non-linear anisotropic diffusion (AD) model of Perona–Malik is considered to enhance the edges in the process of diffusion through a variable diffusion coefficient. However, the diffusion coefficient is very sensitive to the so-called contrast or gradient threshold parameter. This article proposes a novel methodology for the estimation of this contrast parameter based on a partition of the image using the K-means algorithm and a least-squares fit to approximate the diffusion coefficient (KMLS). The experimental results show that the quality of the edge detection process improves when the proposed algorithm is used for smoothing in the AD model instead of the traditional techniques. The comparison is performed using two objective edge detection performance measures, the so-called Pratt's figure of merit and the symmetric average distance. Both measures show ...
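
The Perona–Malik model referenced above updates each pixel with a diffusion coefficient that decays with the local gradient magnitude relative to the contrast threshold K. The sketch below uses a fixed, user-supplied K for illustration, whereas the paper estimates K via K-means partitioning and a least-squares fit (KMLS), which is not reproduced here.

```python
import numpy as np

def perona_malik(img, n_iter=20, K=0.1, dt=0.2):
    """Basic Perona-Malik anisotropic diffusion with the exponential
    edge-stopping function g(|grad|) = exp(-(|grad|/K)^2)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences towards the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Gradient-dependent diffusion coefficients (small near edges).
        cn, cs = np.exp(-(dn / K) ** 2), np.exp(-(ds / K) ** 2)
        ce, cw = np.exp(-(de / K) ** 2), np.exp(-(dw / K) ** 2)
        # Explicit update step.
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# Usage: smoothed = perona_malik(noisy_image, K=estimated_contrast_threshold)
```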

Journal ArticleDOI
TL;DR: The obtained results confirm that T2-w signal intensity, together with other imaging biomarkers, may represent a new non-invasive approach to assess cancer aggressiveness, potentially helping to plan personalised treatments, and thus dramatically limiting over-diagnosis and over-treatment risks, and reducing the costs for the national healthcare system.
Abstract: Prostate cancer (PCa) is the most common solid neoplasm in males and a major cause of cancer-related death. The behaviour of PCa is dichotomous, as patients may either have an indolent clinical course or rapidly progress towards metastatic disease. Unfortunately, biopsy Gleason score (GS) may fail to predict cancer aggressiveness; tumour heterogeneity and inaccurate sampling during biopsy are major causes of underestimation. As a consequence, this frequently results in over-treatment, i.e. low-risk patients overcautiously undergo radical prostatectomy or radiotherapy, frequently with devastating side-effects. Some patients with PCa could be offered a more conservative approach if it were possible to predict patient risk confidently, especially in subjects lying in the grey zone of intermediate risk (i.e. GS = 7), who are in the majority. Recent studies have demonstrated that magnetic resonance (MR) imaging may help improve risk stratification in patients with PCa, providing imaging biomarkers of cancer aggr...

Journal ArticleDOI
TL;DR: A finite element model that is both simple and fast while preserving the accuracy of the analysis is proposed that can be generated and combined in a statistical discriminant analysis in order to study the mechanical behaviour of selected trabecular bone samples.
Abstract: The form primitives of trabecular bone can be modelled as a complex structure of rods and plates. The classification of these primitives helps in simulating the physical or mechanical properties which are usually determined not only by the porosity of the object, but also by the arrangement of structure primitives in the 3D space. Based on the classified trabecular bone primitives, we propose in this work a finite element model that is both simple and fast while preserving the accuracy of the analysis. Rods are modelled as beams with circular cross sections. For the plate-like primitives, a triangulation method is used to characterise the behaviour of the corresponding shell elements. Using the proposed method, multiple features can be generated and combined in a statistical discriminant analysis in order to study the mechanical behaviour of selected trabecular bone samples. A clinical study was conducted on two populations of arthritic and osteoporotic bone samples. The results show the ability of our beam/shell model to discriminate the two populations.

Journal ArticleDOI
TL;DR: A probabilistic model based on physiological assumption that time–activity curves (TACs) arise as a convolution of an IF and tissue-specific kernels and their related spatial distributions (images) is proposed.
Abstract: Image-based definition of input function (IF) and organ function is a prerequisite for kinetic analysis of dynamic scintigraphy or positron emission tomography. This task is typically done manually by a human operator and suffers from low accuracy and reproducibility. We propose a probabilistic model based on physiological assumption that time–activity curves (TACs) arise as a convolution of an IF and tissue-specific kernels. The model is solved via the Variational Bayes estimation procedure and provides estimates of the IF, tissue-specific TACs and their related spatial distributions (images) as its results. The algorithm was tested with data of dynamic renal scintigraphy. The method was applied to the problem of differential renal function estimation and the IF estimation and the results are compared with competing techniques on data-sets with 99 and 19 patients. The MATLAB implementation of the algorithm is available for download.
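
The physiological assumption stated above, that each tissue time–activity curve is the convolution of a common input function with a tissue-specific kernel, can be written compactly as a forward model. The sketch below only forms that forward model with toy curves; the paper's Variational Bayes inversion is not reproduced.

```python
import numpy as np

# Hypothetical sampling grid and input function (IF).
t = np.arange(0, 60.0, 1.0)                       # minutes
input_function = t * np.exp(-t / 5.0)             # toy bolus shape

# Tissue-specific kernels, e.g. mono-exponential retention functions.
kernels = [np.exp(-t / tau) for tau in (2.0, 10.0, 30.0)]

# Forward model: each tissue TAC is the convolution of the IF with its kernel.
dt = t[1] - t[0]
tacs = [np.convolve(input_function, k)[: len(t)] * dt for k in kernels]

# A pixel's curve is then a spatial mixture of the tissue TACs.
weights = np.array([0.6, 0.3, 0.1])
pixel_tac = sum(w * tac for w, tac in zip(weights, tacs))
```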

Journal ArticleDOI
TL;DR: This paper presents a reconstruction method in which a Gaussian scale mixture model constraint in the wavelet domain is combined with a total variation constraint for use as a regularisation prior for faster acquisition of magnetic resonance imaging data.
Abstract: In magnetic resonance imaging, the long acquisition time required to capture k-space data according to the Nyquist sampling rule is a major limitation. Methods for reducing the scan time for these types of imaging procedures have attracted considerable research interest. Compressed sensing approaches have recently been applied to allow faster acquisition by undersampling the k-space data. However, random undersampling introduces noise-like artefacts. To address this issue, a number of nonlinear reconstruction methods have been proposed that use norm regularisation with a sparsifying transform. In this paper, we present a reconstruction method in which a Gaussian scale mixture model constraint in the wavelet domain is combined with a total variation constraint for use as a regularisation prior. A series of experimental evaluations are conducted to validate our method using synthetic and real multi-slice MRI data for the purposes of faster acquisition. Our results show that the volume reconstructed by our m...

Journal ArticleDOI
TL;DR: The spin-image algorithm was chosen for the registration of partial bone geometry in 4DCT of the wrist using complete and partial capitate meshes generated by cropping complete meshes to assess relative accuracy.
Abstract: In image-based biomechanical analyses, registration transformations are the data of interest. In dynamic 4DCT imaging, the capitate is often partially imaged. While alignment of incomplete objects poses a significant registration challenge, the established spin-image surface-matching algorithm can be utilized to align two surfaces representing disparate but overlapping portions of an object. For this reason, the spin-image algorithm was chosen for the registration of partial bone geometry in 4DCT of the wrist. Registrations were performed on eleven 4DCT datasets using complete and partial capitate meshes generated by cropping complete meshes. Relative accuracy was assessed as the difference between partial- and complete-geometry registrations. Accurate registration of partial capitate geometry was achieved with 55% of the proximal capitate geometry on average, and in some cases as little as 35%. Requisite geometry depends on feature salience and imaging resolution; however, the spin-image algorithm should be considered a valuable tool for biomechanists and image analysts.

Journal ArticleDOI
TL;DR: The keyhole model, tested in this study, proved to be a promising technique to automatically track individual RBCs in microchannels and measure the DI along the microchannel.
Abstract: This study aimed to assess the motion and its deformation index (DI) of red blood cells (RBCs) flowing through a microchannel with a microstenosis using an image analysis-based method. For this purpose, a microchannel having a smooth contraction was used and the images were captured by a standard high-speed microscopy system. An automatic image-processing and analysing method was developed in a MATLAB environment to not only track the motion of RBCs but also measure the DI along the microchannel. The keyhole model, tested in this study, proved to be a promising technique to automatically track individual RBCs in microchannels.
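
The deformation index of a tracked cell is commonly computed from the major and minor axes of an ellipse fitted to its segmented outline, DI = (L_major − L_minor)/(L_major + L_minor). The sketch below shows that measurement step using generic region properties on a toy mask, not the authors' MATLAB routine or the keyhole tracking model.

```python
import numpy as np
from skimage.measure import label, regionprops

def deformation_index(binary_cell_mask):
    """DI = (major - minor) / (major + minor) of the largest labelled region."""
    regions = regionprops(label(binary_cell_mask.astype(int)))
    cell = max(regions, key=lambda r: r.area)
    major, minor = cell.major_axis_length, cell.minor_axis_length
    return (major - minor) / (major + minor)

# Usage on a toy elongated cell mask standing in for a segmented RBC.
mask = np.zeros((60, 60), dtype=bool)
mask[25:35, 10:50] = True
print(f"DI = {deformation_index(mask):.2f}")
```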

Journal ArticleDOI
TL;DR: Visual inspection of both synthetic and real CT images, as well as computation of similarity indexes, suggests that the strategy to perform metal artefact reduction (MAR) that relies on the total variation-H⁻¹ inpainting, a variational approach based on a fourth-order total variation (TV) flow, outperforms the others considered here.
Abstract: Permanent metallic implants, such as dental fillings and cardiac devices, generate streak-like artefacts in computed tomography (CT) images. In this article, we propose a strategy to perform metal artefact reduction (MAR) that relies on the total variation-H⁻¹ inpainting, a variational approach based on a fourth-order total variation (TV) flow. This approach has never been used to perform MAR, although it has been profitably employed in other branches of image processing. A systematic evaluation of the performance is carried out. Comparisons are made with the results obtained using classical linear interpolation and two other partial differential equation-based approaches relying, respectively, on the Fourier heat equation and on a second-order TV flow. Visual inspection of both synthetic and real CT images, as well as computation of similarity indexes, suggests that our strategy for MAR outperforms the others considered here, as it provides the best image restoration, the highest similarity indexes and for b...

Journal ArticleDOI
TL;DR: A powerful combination of software and hardware methods which deliver a much faster response in contrast with existing solutions coming from conventional programming on multicore CPU platforms, thus offering a high performance alternative in clinical practice for real-time imaging.
Abstract: We investigate the use of Legendre moments as biomarkers for an efficient and accurate classification of bone tissue on images coming from stem cell regeneration studies. Legendre moments are analysed from three different perspectives: (1) their discriminant properties in a wide set of preselected vectors of features based on our clinical and computational experience, providing solutions whose accuracy exceeds 90%; (2) the amount of information to be retained when using principal component analysis to reduce the dimensionality of the problem to either 2, 3, 4, 5 or 6 dimensions; and (3) the use of the k-feature set problem to identify k = 4 features which are more relevant to our analysis from a combinatorial optimisation approach. These techniques are compared in terms of computational complexity and classification accuracy to assess the strengths and limitations of the use of Legendre moments. The second contribution of this work is to reduce the computational complexity by using graphics ...
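
Legendre moments of order (p, q) are obtained by projecting the image, mapped onto [−1, 1] × [−1, 1], onto products of Legendre polynomials with the normalisation (2p+1)(2q+1)/4. The sketch below covers only that feature-extraction step on a toy patch; the classification and GPU acceleration discussed in the paper are not shown.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def legendre_moments(img, max_order=6):
    """Matrix of 2D Legendre moments lambda_{pq} for p, q <= max_order."""
    n_rows, n_cols = img.shape
    x = np.linspace(-1.0, 1.0, n_cols)
    y = np.linspace(-1.0, 1.0, n_rows)
    Px = legvander(x, max_order)               # (n_cols, max_order+1), P_q(x)
    Py = legvander(y, max_order)               # (n_rows, max_order+1), P_p(y)
    p = np.arange(max_order + 1)
    norm = np.outer(2 * p + 1, 2 * p + 1) / 4.0     # (2p+1)(2q+1)/4
    dx, dy = 2.0 / (n_cols - 1), 2.0 / (n_rows - 1)
    # lambda_{pq} ~ norm * sum_ij P_p(y_i) f(i, j) P_q(x_j) * pixel area
    return norm * (Py.T @ img @ Px) * dx * dy

# Usage on a toy binary patch standing in for a bone-tissue image.
patch = np.zeros((64, 64))
patch[20:44, 20:44] = 1.0
features = legendre_moments(patch, max_order=4).ravel()
```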

Journal ArticleDOI
TL;DR: An adaptive filter with a Brown–Forsythe statistical criterion has been implemented to avoid excessive filtering near edges during the denoising procedure, and it showed improved image contrast with edge-preservation ability.
Abstract: MR imaging offers better soft-tissue contrast, which makes it advantageous for cartilage diagnosis in patients suffering from osteoarthritis. However, magnitude MR images are susceptible to Rician noise w...

Journal ArticleDOI
TL;DR: From results, it is observed that the performance of LER is superior in terms of accuracy and robustness in the presence of weak boundaries and strong noise.
Abstract: This paper reports three techniques for automated intima–media thickness measurement in the B-mode common carotid artery ultrasound images. These are the gradient based (GB), localising region-based active contour (LAC) and level-set evolution without re-initialisation (LER). Authors have reported results with quantitative and qualitative comparisons. From results, it is observed that the performance of LER is superior in terms of accuracy and robustness in the presence of weak boundaries and strong noise. The correlation coefficients between automated measurements and manually obtained reference values were 0.63, 0.76 and 0.98 for GB, LAC and LER techniques, respectively (N = 66). The figure-of-merit of LER was 99.59%, better than GB (88.86%) and LAC (98.52%).

Journal ArticleDOI
TL;DR: Good spectral matching efficacy for pairs of geometries is demonstrated, which differ only in terms of affine transformations relative to each other, but additionally illustrate the method's potential for biomedical applications such as shape-based patient information retrieval.
Abstract: The eigen-modes of the Laplace–Beltrami operator (LBO) applied to triangulated three-dimensional (3D) surface geometries have been shown to be effective parametric representations of overall shape and structural detail, which may constitute promising feature spaces for statistical shape comparison. The objective of this study is to explore a Laplace spectral-matching approach to compare pairs of similar 3D surface geometries using their respective Laplace spectral representations, obtained from the LBO or the graph Laplacian of the distributed points over each surface manifold. We demonstrate the efficacy of a greedy algorithm for appropriate selection of a set of Laplacian eigen-mode shape descriptors, while resolving their respective sign ambiguities, for optimal shape matching. We test our algorithm on three pairs of experimental shapes, as well as three sets of clinically relevant test geometries, i.e. surface models of two similar patient-specific models of the left atrial appendage of the atrial cha...
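
The spectral representation described above can be sketched via the graph Laplacian route: build a neighbourhood graph over the surface points, take the smallest non-trivial Laplacian eigenvalues and eigenvectors as shape descriptors, and compare two shapes by matching those spectra. The k-nearest-neighbour graph below is a generic choice, not the paper's construction, and the greedy sign-resolving matching step is omitted.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.spatial import cKDTree

def laplacian_spectrum(points, k=8, n_modes=10):
    """Eigenvalues/eigenvectors of the normalised graph Laplacian of a kNN
    graph over the surface points (a proxy for the Laplace-Beltrami spectrum)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)      # first neighbour is the point itself
    n = len(points)
    W = np.zeros((n, n))
    for i, neighbours in enumerate(idx[:, 1:]):
        W[i, neighbours] = 1.0
    W = np.maximum(W, W.T)                    # symmetrise the adjacency matrix
    L = laplacian(W, normed=True)
    vals, vecs = np.linalg.eigh(L)
    return vals[1:n_modes + 1], vecs[:, 1:n_modes + 1]   # skip the trivial mode

# Two similar shapes can then be compared via the distance between their spectra.
pts = np.random.default_rng(4).standard_normal((200, 3))
eigvals, eigvecs = laplacian_spectrum(pts)
```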

Journal ArticleDOI
TL;DR: The promising results indicate that image segmentation by the proposed approach gives good results and it can be used as an efficient method to validate other existing approaches to flow measurement instrumentation.
Abstract: In this paper, an extension of the digital image segmentation approach based on active contours, or snakes, and an extension of the shape model are unified in order to develop a free representation for the level set method with a priori knowledge. The proposed method freely improves on the previously trained average shape, using the active contour method for the level set. In this case, there is no restriction on the interface evolution, as occurs in other approaches that combine active contours and prior knowledge. This approach is used for the correct identification of gas bubble shape in gas–liquid two-phase flow. The main objective of this work is to provide a validation system to support the development of other flow measurement tools based on widely used instrumentation. The promising results indicate that image segmentation by the proposed approach gives good results, and it can be used as an efficient method to validate other existing approaches to flow measurement instrumentation.

Journal ArticleDOI
TL;DR: This study explores the potential ability of a 3D-skeleton coupled with a statistical tensor analysis to locally describe the trabecular structure for binary images and proposes a strategy using inertia tensors based on the skeleton ensuring the feasibility of the entire process.
Abstract: The trabecular bone is a complex random network of interconnected rods and plates. Its trabecular structure is constantly remodelling to ensure a maintenance function. A simulated bone remodelling process was discussed in a previous study based on a BMU germ-grain model where type and orientation of local structure related to mechanical stress were not considered. In this study, we explore the potential ability of a 3D-skeleton coupled with a statistical tensor analysis to locally describe the trabecular structure for binary images. In order to add new constraints for BMU validation and BMU-shape characterisation in the simulator, we propose a strategy using inertia tensors based on the skeleton ensuring the feasibility of the entire process.
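
The local tensor descriptor mentioned above reduces, for a set of skeleton voxel coordinates, to a second-moment (inertia-like) tensor about their centroid; its eigenvalues and eigenvectors summarise whether the local structure is rod-like or plate-like and how it is oriented. The sketch below is a minimal illustration under that assumption, not the BMU germ-grain simulator itself.

```python
import numpy as np

def inertia_tensor(voxel_coords):
    """Second-moment (inertia-like) tensor of a local set of skeleton voxels."""
    centred = voxel_coords - voxel_coords.mean(axis=0)
    return centred.T @ centred / len(centred)

# Hypothetical skeleton branch: voxels spread mainly along one axis (rod-like).
rng = np.random.default_rng(5)
voxels = rng.standard_normal((100, 3)) * np.array([5.0, 0.5, 0.5])
eigvals, eigvecs = np.linalg.eigh(inertia_tensor(voxels))

# One dominant eigenvalue suggests a rod; two comparable large ones, a plate.
anisotropy = eigvals[-1] / eigvals.sum()
principal_axis = eigvecs[:, -1]
```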

Journal ArticleDOI
TL;DR: A manufactured human-model fat phantom is used to measure the radiological dose and to propose proper examination conditions; the body fat area of the phantom was measured for each examination condition.
Abstract: This study used a manufactured human-model fat phantom to measure the radiological dose and to propose proper examination conditions. Here, 64-multidetector computed tomography was used, and scanning was performed with various tube voltages of 80, 100 and 120 kVp with the human body phantom. The dose-length product (DLP) value, computed tomography (CT) value and the body fat area value of the research phantom were measured after scanning. The DLP value was measured for each examination condition; the highest value, 182.8 mGy cm, was obtained at 120 kVp with 250 mA. The highest CT value was −114.4 at 120 kVp and 250 mA. The body fat area was measured for each examination condition; the highest value was 179 cm² at 120 kVp and 250 mA. However, the body fat area values for 100 and 120 kVp did not show a large difference. The data may be valuable for the diagnosis without changing the abdominal body fat and decrease in the...

Journal ArticleDOI
TL;DR: The effect of low-velocity impact on the osteoporotic hip in ageing people will be studied in LS-DYNA, and the critical impulse loading of the hip will be the benchmark to improve the design of safety instruments and, consequently, the well-being of elderly people.
Abstract: Hip fractures due to sideways falls are a worldwide health problem, especially amongst elderly people. The force experienced by the proximal femur during a fall, and therefore hip fracture, is significantly dependent on density, thickness and stiffness of the body during impact. The process of fracture and healing can only be understood in terms of the structure and composition of the bone and also its mechanical properties. Bone fracture analysis investigates the prediction of various failure mechanisms under different loading conditions. An accurate explicit finite element method will assist scientists and researchers to predict the impact damage response of bone structures. In this paper, the effect of low-velocity impact on the osteoporotic hip in ageing people will be studied in LS-DYNA. The first part aims to create a three-dimensional (3D) reconstruction and registration of semi-transparent computed tomography scan image data using Simpleware software. In the second part, the effect of cortical thic...

Journal ArticleDOI
TL;DR: The identification of fractured bone from computed tomographic (CT) images is a helpful task in medical visualisation and simulation and the utilisation of models reconstructed from CT images of patients allows customisation of the simulation, because the result of the segmentation can be used to perform a reconstruction that provides a 3D model of the patient anatomy.
Abstract: The identification of fractured bone from computed tomographic (CT) images is a helpful task in medical visualisation and simulation. In many cases, specialists need to manually revise 2D and 3D CT images and detect bone fragments in order to check a fracture. The automation of this process would allow them to save time. In visualisation, it allows the reduction of image noise and the removal of undesirable parts. In simulation, the utilisation of models reconstructed from CT images of patients allows customisation of the simulation, because the result of the segmentation can be used to perform a reconstruction that provides a 3D model of the patient anatomy. In this paper, the main issues to be considered in order to identify both healthy and fractured bones are described. The identification of fractured bone requires not only to segment bone tissue but also to label bone fragments. Moreover, some fragments can appear together after the segmentation process, hence additional processing can be required. C...

Journal ArticleDOI
TL;DR: DTI, T1 and T2 are investigated for detecting changes in the nucleus pulposus of the intervertebral disc (IVD) during enzyme digestion, highlighting the effect of hydration on the ability to detect tissue-level changes when hydration is present.
Abstract: Given that diffusion tensor imaging (DTI) may be able to provide additional information beyond T1 and T2 relaxation times regarding tissue anisotropy and microstructure, the aim of this study was to investigate DTI, T1 and T2 in detecting changes in the nucleus pulposus of the intervertebral disc (IVD) during enzyme digestion and highlight the effect of hydration. Fifty-nine bovine caudal discs were separated into in situ, hydrated and trypsin-digested groups, and subjected to a multi-parametric magnetic resonance imaging (MRI) acquisition followed by confined compression test and biochemical assays. The effects of trypsin digestion versus hydration and of treatment versus duration on the MRI parameters were quantified by ANOVAs. T1 and T2 decreased progressively between the in situ and hydrated groups, increased progressively between the hydrated and digested groups, and did not change between the in situ and digested groups. Fractional anisotropy (FA) increased in the digested groups. The treatment had ...