
Showing papers in "International Journal of Biomedical Engineering and Technology in 2016"


Journal ArticleDOI
TL;DR: This paper summarises and compares various techniques implemented for the classification of medical diabetes diagnoses on various datasets, and concludes that, based on the identified issues, a new and more efficient technique for the classification of diabetes patients can be developed.
Abstract: Classification is an efficient and widely used technique in many applications, such as the medical diagnosis of diabetes patients. Various techniques have been implemented for the classification of diabetes patients, such as the supervised learning approach of the Support Vector Machine (SVM). This technique provides not only high classification accuracy but also a high true positive rate when applied to popular diabetes datasets such as the Pima Indian Diabetes Dataset. This paper summarises and compares various techniques that have been implemented for the classification of medical diabetes diagnoses on various datasets. The techniques are analysed and compared on the basis of their advantages, issues and classification accuracy, so that, based on those issues, a new and more efficient technique for the classification of diabetes patients can be developed.
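
As a concrete illustration of the SVM-based approach discussed above, the following is a minimal sketch (not any specific paper's implementation); the CSV file name, column layout and hyperparameters are assumptions.

```python
# Minimal sketch of SVM classification on the Pima Indian Diabetes Dataset.
# The CSV path and the "Outcome" column name are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

df = pd.read_csv("pima_indians_diabetes.csv")            # hypothetical file name
X, y = df.drop(columns="Outcome").values, df["Outcome"].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)                       # SVMs are scale-sensitive
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(scaler.transform(X_tr), y_tr)

y_pred = clf.predict(scaler.transform(X_te))
print("accuracy:", accuracy_score(y_te, y_pred))
print("true positive rate:", recall_score(y_te, y_pred))  # sensitivity on positives
```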

30 citations


Journal ArticleDOI
TL;DR: The results show that the Kaiser window-based FIR filter is better at removing power-line noise from the EEG signal.
Abstract: The small-amplitude (μV) Electroencephalography (EEG) signal is contaminated by various artefacts during recording, which alter the original signal. The most common disturbance among them is power-line frequency noise at 50 Hz. This makes clinical analysis and information retrieval difficult, so such disturbances must be removed from EEG signals for proper diagnosis. In this study, a performance analysis of Finite Impulse Response (FIR) filters based on various windows and of Infinite Impulse Response (IIR) filters for noise reduction in EEG signals was carried out. Digital FIR and IIR filters of 100th order were applied to signal epochs, and performance was analysed by computing the fast Fourier transform and the signal-to-noise ratio. The results show that the Kaiser window-based FIR filter is better at removing power-line noise from the EEG signal.
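
For reference, here is a minimal sketch of a 100th-order Kaiser-window FIR band-stop filter targeting 50 Hz power-line noise; the sampling rate, band edges and Kaiser beta are assumed values, not taken from the paper.

```python
# Sketch: Kaiser-window FIR band-stop filter around 50 Hz for an EEG epoch.
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 256.0        # assumed EEG sampling rate (Hz)
order = 100       # filter order -> order + 1 taps (odd, as required for band-stop)
beta = 5.0        # assumed Kaiser beta (main-lobe width vs. side-lobe trade-off)

# Two cutoffs with pass_zero=True give a band-stop response around 50 Hz.
taps = firwin(order + 1, [48.0, 52.0], window=("kaiser", beta),
              pass_zero=True, fs=fs)

def remove_powerline(eeg_epoch: np.ndarray) -> np.ndarray:
    """Zero-phase filtering of one 1-D EEG epoch (in microvolts)."""
    return filtfilt(taps, [1.0], eeg_epoch)
```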

17 citations


Journal ArticleDOI
TL;DR: An extension of matched filter based on the second derivative of Gaussian (SDOG-MF) is proposed which has higher TPR, FPR, and accuracy as compared with other available retinal blood vessel segmentation approaches in literature and is better for the segmentation of pathological retinal images.
Abstract: Accurate retinal blood vessel segmentation is a prominent task in the computer-aided diagnosis of various retinal pathologies such as hypertension, diabetes and glaucoma. Matched filter based retinal blood vessel segmentation approaches are simple yet effective. However, a matched filter based approach detects both vessel and non-vessel edges, which leads to false vessel (i.e., non-vessel) detection. To overcome this problem, we propose an extension of the matched filter based on the second derivative of Gaussian (SDOG-MF). The proposed approach is simple and effective for the segmentation of both thin and thick retinal blood vessels. The experimental results obtained on the DRIVE and STARE databases confirm that the proposed method has higher TPR, FPR and accuracy compared with other retinal blood vessel segmentation approaches available in the literature. Further, the performance of the proposed method is also better for the segmentation of pathological retinal images.
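
To make the matched-filter idea concrete, here is an illustrative sketch of a filter whose cross-vessel profile is the second derivative of a Gaussian, applied at several orientations; the kernel parameters, orientation step and Otsu thresholding are assumptions, and this is not the authors' exact SDOG-MF.

```python
# Illustrative sketch of a second-derivative-of-Gaussian matched filter.
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import threshold_otsu

def sdog_kernel(sigma=1.5, length=9, angle_deg=0.0):
    s = max(length // 2, int(np.ceil(3 * sigma)))
    y, x = np.mgrid[-s:s + 1, -s:s + 1].astype(float)
    theta = np.deg2rad(angle_deg)
    u = x * np.cos(theta) + y * np.sin(theta)     # across the vessel
    v = -x * np.sin(theta) + y * np.cos(theta)    # along the vessel
    profile = (u**2 / sigma**2 - 1.0) * np.exp(-u**2 / (2 * sigma**2))
    kernel = np.where(np.abs(v) <= length / 2, profile, 0.0)
    return kernel - kernel.mean()                 # zero mean suppresses flat background

def segment_vessels(green_channel: np.ndarray) -> np.ndarray:
    responses = [convolve(green_channel.astype(float), sdog_kernel(angle_deg=a))
                 for a in range(0, 180, 15)]      # 12 orientations
    response = np.max(responses, axis=0)          # best orientation per pixel
    return response > threshold_otsu(response)
```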

11 citations


Journal ArticleDOI
TL;DR: A novel approach has been proposed in this paper to detect Melanoma from dermoscopic images by using K-nearest neighbour, support vector machine, random forest, and Naive Bayes classifier.
Abstract: Melanoma causes the majority of skin-cancer-related deaths if not detected and treated at an early stage. It is considered one of the most dangerous types of cancer, since it quickly spreads to other parts of the body. A novel approach is proposed in this paper to detect melanoma in dermoscopic images. Pre-processing is done to remove hair and noise in the image. Initial segmentation is carried out with the watershed transform, followed by a Maximal Similarity Region Merging process. After pre-processing and segmentation, wavelet-based energy features are extracted using Daubechies (DB3) and reverse biorthogonal (RBIO3.3, RBIO3.5 and RBIO3.7) wavelet filters. Twelve features are extracted using the four wavelet filters. Using the Gain Ratio feature selection method, the six most discriminative features are selected for classification. Classification is then performed using K-nearest neighbour, support vector machine, random forest and Naive Bayes classifiers. The highest sensitivity, 97.5%, is achieved with the support vector machine.
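
A hedged sketch of the wavelet energy feature extraction step, using the four wavelet families named in the abstract; the choice of a single decomposition level and of detail-band energies as the twelve features is an assumption.

```python
# Sketch: 4 wavelets x 3 detail sub-bands = 12 energy features per lesion.
import numpy as np
import pywt

WAVELETS = ["db3", "rbio3.3", "rbio3.5", "rbio3.7"]

def wavelet_energy_features(lesion_gray: np.ndarray) -> np.ndarray:
    feats = []
    for name in WAVELETS:
        cA, (cH, cV, cD) = pywt.dwt2(lesion_gray.astype(float), name)
        # Energy of the three detail sub-bands for this wavelet.
        feats.extend([np.sum(cH**2), np.sum(cV**2), np.sum(cD**2)])
    return np.asarray(feats)
```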

9 citations


Journal Article
TL;DR: The proposed registration algorithm using FCM and SURF is quantitatively faster and more robust against different image transformations than standard SIFT and other recent fuzzy- and neural-based methods.
Abstract: An approach to medical image registration using Fuzzy c-Means (FCM) clustering segmentation and the Speeded-Up Robust Feature (SURF) detector is presented. This approach uses FCM to obtain segmented reference and floating images. The volume control points of these segmented images determine the quality of image registration. Based on these volume control points, features are extracted from the reference and floating images using SURF and then matched to perform image registration. The proposed registration algorithm using FCM and SURF is quantitatively faster and more robust against different image transformations than standard SIFT and other recent fuzzy- and neural-based methods. Simulations of FCM clustering with SURF based on a multi-resolution approach, using images of the same size but at different scales, are also shown.

8 citations


Journal ArticleDOI
TL;DR: A simple thresholding technique based on fuzzy logic and Shannon's entropy function is proposed to segment the tumour from liver CT images; it provides better results than classical segmentation methods.
Abstract: Liver tumour segmentation from abdominal CT images is a challenging task in biomedical image processing. Complex segmentation methods may provide good segmentation results but are difficult to implement in a clinical environment. Hence, in this paper, we propose a simple thresholding technique based on fuzzy logic and Shannon's entropy function to segment the tumour from liver CT images. Tumour dimensions are then measured subsequently. The proposed method provides better results than classical segmentation methods.
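
A simplified sketch of Shannon-entropy-based (Kapur-style) threshold selection on an 8-bit slice; the fuzzy membership weighting described in the paper is not reproduced here.

```python
# Sketch: choose the threshold maximising the sum of background and
# foreground Shannon entropies (assumes an 8-bit intensity range).
import numpy as np

def entropy_threshold(ct_slice: np.ndarray) -> int:
    hist, _ = np.histogram(ct_slice, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
            - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# tumour_mask = liver_roi > entropy_threshold(liver_roi)
```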

8 citations


Journal ArticleDOI
TL;DR: The proposed work is done in order to develop an optimised Brain-Computer Interface (BCI) system (speller) for people with severe motor impairments using SSVEP (Steady-State Visual Evoked Potentials), and the optimisation of speller is divided into three domains.
Abstract: The proposed work aims to develop an optimised Brain-Computer Interface (BCI) system (speller) for people with severe motor impairments using SSVEP (Steady-State Visual Evoked Potentials). To make the system fast yet error-free, the optimisation of the speller is divided into three domains: the design of a smart encoding method for selecting the characters displayed on the interface, the choice of optimal frequencies, and the design of an optimal feature classification algorithm. Three classification methods are evaluated: a threshold method, an Artificial Neural Network (ANN) and a Support Vector Machine (SVM). An optimal user window is also carefully selected after many trials in order to maintain a decent communication rate. The optimised BCI system provides an average accuracy of 96% with a character-per-minute (CPM) rate of 13 ± 2. The speller performs almost identically for new users as well, because inter-subject variability is tackled by the SVM classifier.
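
A minimal sketch of FFT-based SSVEP target detection for one user window; the channel handling, stimulus frequencies and the ±0.25 Hz band around each frequency are assumptions.

```python
# Sketch: pick the stimulus frequency with the strongest spectral response.
import numpy as np

STIM_FREQS = [8.0, 10.0, 12.0, 15.0]     # hypothetical flicker frequencies (Hz)

def detect_target(eeg_window: np.ndarray, fs: float) -> float:
    """Return the stimulus frequency with the strongest response in one window."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    scores = []
    for f in STIM_FREQS:
        # Sum power at the fundamental and second harmonic (+/- 0.25 Hz).
        band = lambda f0: spectrum[(freqs > f0 - 0.25) & (freqs < f0 + 0.25)].sum()
        scores.append(band(f) + band(2 * f))
    return STIM_FREQS[int(np.argmax(scores))]
```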

8 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed image fusion technique performs better than the traditional techniques in preserving salient features in the fused image without producing distortion.
Abstract: Multimodal image fusion plays a pivotal role in the medical field by fusing the complementary information of different modalities, such as CT and MRI, into a single image. Widely used transform-domain and recently proposed guided filter-based spatial-domain image fusion techniques are limited by contrast reduction and halo artefacts. In this paper, an image fusion scheme based on guided filtered multi-scale decomposition is proposed. First, the source images are decomposed into a base layer and a series of detail layers using the guided filter. Then, different fusion rules are employed for fusing the base layer and the detail layers. Simulation results show that the proposed fusion technique performs better than traditional techniques in preserving salient features in the fused image without producing distortion.
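
A hedged sketch of a two-scale guided-filter decomposition and fusion; the radius and epsilon values and the base/detail fusion rules are assumptions, and cv2.ximgproc.guidedFilter requires the opencv-contrib build of OpenCV.

```python
# Sketch: base/detail decomposition with a guided filter, then simple fusion rules.
import cv2
import numpy as np

def fuse(ct: np.ndarray, mri: np.ndarray, radius=8, eps=0.04) -> np.ndarray:
    ct = ct.astype(np.float32) / 255.0
    mri = mri.astype(np.float32) / 255.0
    # Base layer: edge-preserving smoothing, each image guided by itself.
    base_ct = cv2.ximgproc.guidedFilter(ct, ct, radius, eps)
    base_mri = cv2.ximgproc.guidedFilter(mri, mri, radius, eps)
    detail_ct, detail_mri = ct - base_ct, mri - base_mri
    fused_base = 0.5 * (base_ct + base_mri)               # average rule for base layer
    fused_detail = np.where(np.abs(detail_ct) > np.abs(detail_mri),
                            detail_ct, detail_mri)         # max-absolute rule for details
    return np.clip(fused_base + fused_detail, 0.0, 1.0)
```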

7 citations


Journal ArticleDOI
TL;DR: A simple contrast enhancement algorithm in the Sequency-based Mapped Real Transform (SMRT) domain is presented and quantitatively assessed using the second-derivative-like measurement (SDME) and the Image Enhancement Metric (IEM).
Abstract: The task of medical image enhancement is to accentuate image features that are clinically relevant and difficult to visualise under normal viewing conditions. Enhancement can be accomplished by increasing image contrast. This paper presents a simple contrast enhancement algorithm in the Sequency-based Mapped Real Transform (SMRT) domain. The brightness and contrast of the image are modified by varying the DC and AC SMRT coefficients separately. The DC SMRT coefficient is changed to bring the image mean to the middle of the histogram range. Nonlinear mapping functions are used to modify the AC SMRT coefficients so as to improve the contrast. Enhancement is quantitatively assessed using the second-derivative-like measurement (SDME) and the Image Enhancement Metric (IEM).

6 citations


Journal ArticleDOI
TL;DR: It is observed that the ANC filter constructed using these evolutionary algorithms achieves significant improvement in fidelity parameters such as SNR, MSE, ME and correlation factor when compared with other reported techniques in literature.
Abstract: In this paper, the design of an Adaptive Noise Canceller (ANC) filter using evolutionary algorithms such as Particle Swarm Optimisation (PSO), Modified PSO (MPSO) and the Artificial Bee Colony (ABC) algorithm is presented. The performance of the proposed ANC filter is tested on a corrupted ECG signal. Based on the simulation results, it is observed that the ANC filter constructed using these evolutionary algorithms achieves significant improvement in fidelity parameters such as SNR, MSE, ME and correlation factor when compared with other techniques reported in the literature. The ANC filter based on ABC with a scaling factor provides a 78% improvement in output SNR and 76% and 87% reductions in MSE and ME, respectively, compared with the ANC filter based on PSO. Further, the ANC filter designed using the ABC technique enhances the correlation between the output and the clean ECG signal.

6 citations


Journal ArticleDOI
TL;DR: Results with reduced RMSE value and the average correlation coefficient indicate the successful removal of power line interference and baseline wander noise by the proposed ECG denoising system.
Abstract: The electrocardiogram (ECG) signal is an extensively used biomedical signal for the diagnosis of heart disease. However, the quality of the ECG signal is deteriorated by several noises during its acquisition. The two dominant and recurring noises are power line interference and baseline wander noise, and they have to be removed for better clinical evaluation. This paper proposes a new ECG denoising system using a combination of the Empirical Mode Decomposition (EMD) algorithm and FFT-based frequency analysis. The proposed ECG denoising system is first simulated and validated using MATLAB. Then, it is implemented on a TMS320C6713 DSP processor using Code Composer Studio (CCS). The proposed system is tested with standard ECG signals obtained from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. The root mean square error (RMSE) and correlation coefficient are used as the evaluation measures to compare the performance of the proposed method with an existing denoising system. The experimental results, with a reduced RMSE value and an average correlation coefficient of 0.9889 between the original ECG and the denoised one, indicate the successful removal of power line interference and baseline wander noise by the proposed ECG denoising system.
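
An illustrative sketch of combining EMD with FFT-based frequency analysis, assuming the EMD-signal (PyEMD) package; the IMF selection rule below is a simplification, not the paper's exact criterion.

```python
# Sketch: decompose into IMFs, drop IMFs dominated by power-line frequencies
# or baseline wander, and reconstruct the ECG from the remaining IMFs.
import numpy as np
from PyEMD import EMD   # from the EMD-signal (PyEMD) package

def denoise_ecg(ecg: np.ndarray, fs: float = 360.0) -> np.ndarray:
    imfs = EMD().emd(ecg)
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
    keep = []
    for imf in imfs:
        dominant = freqs[np.argmax(np.abs(np.fft.rfft(imf)))]
        # Discard IMFs dominated by 50/60 Hz interference or < 0.5 Hz drift.
        if 45.0 <= dominant <= 65.0 or dominant < 0.5:
            continue
        keep.append(imf)
    return np.sum(keep, axis=0)
```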

Journal ArticleDOI
TL;DR: In this article, a modified eddy interaction model with a generalised near-wall correction factor is presented that more accurately simulates the particle trajectories and subsequent deposition phenomena which are especially affected by nearwall velocity fluctuations.
Abstract: Using the open-source software OpenFOAM as the solver, airflow and microsphere transport have been simulated in a patient-specific lung-airway model. A suitable transitional turbulence model was validated and implemented to accurately simulate airflow fields, as the laryngeal jet occurring in the throat region may induce turbulence immediately downstream. Furthermore, a modified eddy interaction model with a generalised near-wall correction factor is presented that more accurately simulates the particle trajectories and subsequent deposition phenomena which are especially affected by near-wall velocity fluctuations. Particle depositions in the realistic lung-airway configuration are compared with those in an idealised upper airway model. The results indicate that for microsphere deposition in turbulent airflow regions, selection of an appropriate near-wall correction factor can reduce the problem of subject variability for different lung-airway configurations. Open-source solvers for lung-aerosol dynamics simulations, such as OpenFOAM, are predictive tools which are basically cost-free, flexible, largely user-friendly, and portable.

Journal Article
TL;DR: The results of a comparative study using various image quality metrics show that the wavelet filter with first-level decomposition and an eliminated HH band [Wav(HH/1)] outperformed the others, while the homomorphic filter with wavelet filter function Wav(HH/1) outperformed the others in the visual assessment carried out by two experts.
Abstract: Spatial-domain methods for reducing speckle noise are more efficient, but their performance is highly sensitive to the size and shape of the window and the threshold used during filtering. Hence, frequency-domain filtering is more appropriate when a suitable and straightforward kernel or threshold cannot be found in the spatial domain. Unfortunately, such methods result in over-smoothing and loss of edges and sharp details. In this paper, various frequency-domain techniques for the suppression of speckle noise are compiled and investigated. Further, they were evaluated using suitable image quality metrics and visual perception evaluation by two experts. Experiments were conducted on 37 breast ultrasound images. The results of the comparative study using various image quality metrics show that the wavelet filter with first-level decomposition and an eliminated HH band [Wav(HH/1)] outperformed the others. On the other hand, the homomorphic filter with wavelet filter function Wav(HH/1) outperformed the others in the visual assessment carried out by two experts.
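
A short sketch of the Wav(HH/1) idea: a one-level 2-D wavelet decomposition with the diagonal (HH) sub-band zeroed before reconstruction; the wavelet family is an assumption.

```python
# Sketch: zero the HH (diagonal detail) band of a one-level 2-D DWT.
import numpy as np
import pywt

def wav_hh1(ultrasound: np.ndarray, wavelet: str = "db4") -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(ultrasound.astype(float), wavelet)
    cD = np.zeros_like(cD)                          # eliminate the HH band
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)
```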

Journal ArticleDOI
TL;DR: Three EEG analysis methods (complexity, variability and spectral measures) are compared for the classification of schizophrenic and normal participants, and the suggested features show good sensitivity for detecting characteristic features of schizophrenia disorder.
Abstract: Symptoms, signs and disease progression are the mainstay of psychiatric disorder diagnosis, but defining a biomarker would be a more accurate way of diagnosing these disorders in the future. The electroencephalogram (EEG) could be useful in identifying specific biomarkers for diagnosing severe psychiatric disorders. This study aims to compare three EEG analysis methods, based on complexity, variability and spectral measures, for the classification of schizophrenic and normal participants. Fifteen schizophrenic and 18 age-matched normal subjects participated in this study. For each case, 20 channels of EEG were recorded. The extracted features include two spectral measures, spectral entropy (SpEn) and Renyi's entropy (ReEn); two complexity measures, approximate entropy (ApEn) and Lempel-Ziv complexity (LZC); and a variability measure, the central tendency measure (CTM). Finally, k-nearest neighbour (k-NN) is used to classify the two groups. Our results show a classification accuracy of 94% using leave-one-participant-out cross-validation, which improves on previous results and also simplifies the method. The results indicate that the suggested features have good sensitivity for detecting characteristic features of schizophrenia disorder.
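
A hedged sketch of one of the listed features (spectral entropy) together with leave-one-out k-NN evaluation; the sampling rate, the value of k and the layout of one feature row per participant are assumptions.

```python
# Sketch: spectral entropy of one EEG channel and leave-one-out k-NN scoring.
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def spectral_entropy(eeg_channel: np.ndarray, fs: float = 250.0) -> float:
    _, psd = welch(eeg_channel, fs=fs, nperseg=min(len(eeg_channel), 512))
    p = psd / psd.sum()
    return float(-np.sum(p[p > 0] * np.log(p[p > 0])))

# X: one row of channel-wise features per participant; y: 0 = normal, 1 = schizophrenic.
def loo_accuracy(X, y, k=3):
    return cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y,
                           cv=LeaveOneOut()).mean()
```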

Journal ArticleDOI
TL;DR: A novel system capable of measuring and analysing six different vital signs is presented, based on a portable remote device monitored via a mobile application running on the Android platform and connected to the patient via its various sensors.
Abstract: This paper presents a novel system capable of measuring and analysing the signals from six different vital signs. The system is based on a portable remote device, monitored via a mobile application running on the Android platform and connected to the patient via its various sensors. The results are displayed on the patient's phone and sent remotely to the doctor's phone. The doctor can read any of the vital signs at any time by enabling the sensor connected to the patient. The system can also help the doctor compare readings with previous results, set new medications, and update or define a schedule for a visit. Whenever an abnormal reading of any of the vital signs is detected, an automatic message is sent to one of the patient's relatives and to the doctor for fast intervention. Test results showed an almost error-free system with an accuracy above 95%.

Journal ArticleDOI
TL;DR: A comparison is presented between EEG signals acquired during two brain states, writing in Urdu script (the subject's mother tongue) and writing in English script, and the two states are then classified.
Abstract: Brain-Computer Interface (BCI) technology can provide the basis for new non-muscular communication and control options for people suffering from neuromuscular disorders, using their electroencephalographic (EEG) activity. The work proposed in this paper presents a comparison between EEG signals acquired during two brain states, writing in Urdu script (the mother tongue of the subject) and writing in English script, and further classifies them. Energy and interquartile range (IQR) features have been adopted to differentiate between the two script-writing tasks. Features were computed in the frequency range showing the greatest amplitude difference between the two writing tasks in the Fourier transforms. This frequency range was obtained from a decomposition process carried out using wavelet packet decomposition. The results gave a classification accuracy of 75%.
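
A brief sketch of wavelet packet decomposition with energy and IQR features for one EEG channel; the wavelet, decomposition depth and selected node are assumptions.

```python
# Sketch: energy and interquartile range of one wavelet packet node.
import numpy as np
import pywt

def wp_features(eeg_channel: np.ndarray, wavelet="db4", level=4, node="aaad"):
    wp = pywt.WaveletPacket(data=eeg_channel, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    coeffs = wp[node].data      # node assumed to cover the discriminative band
    energy = float(np.sum(coeffs ** 2))
    iqr = float(np.percentile(coeffs, 75) - np.percentile(coeffs, 25))
    return energy, iqr
```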

Journal ArticleDOI
Priya Rani, E R Rajkumar
TL;DR: This work involves the collection of ROP images, extraction of texture, colour and shape features, and finally feeding the features into a classifier to distinguish the different stages.
Abstract: Retinopathy of Prematurity (ROP) is an ocular disease in premature infants that leads to blindness at its threshold stages. It should therefore be diagnosed and treated at the right time to save infants from permanent visual impairment. The aim of this work is to develop an efficient ROP stage detection tool. The work involves the collection of ROP images, extraction of texture, colour and shape features, and finally feeding the features into a classifier to distinguish the different stages. In this work, the classifier used is a Back Propagation Neural Network (BPNN), and classification has been performed into stages 3, 4 and 5, which mark the severity of the disease and call for immediate treatment. The results obtained are promising; hence, this work forms the basis for the development of a semi-automated tool for the diagnosis of ROP.

Journal ArticleDOI
TL;DR: A study was conducted to determine whether cardiovascular measures and task performance differ when listening to pleasant and unpleasant music; it found that listening to unpleasant music increased the cardiovascular measures, resulting in reduced task performance.
Abstract: The objective of the present study was to determine whether cardiovascular measures and task performance differ when listening to pleasant and unpleasant music. Ten healthy adults participated in this study. Cardiovascular measures such as heart rate, respiratory rate, mean arterial pressure and oxygen saturation were measured during silence and while listening to pleasant and unpleasant music, either with or without task performance. Task performance was determined by calculating the errors of commission and omission for Go and No-Go trials. Heart rate, respiratory rate and mean arterial pressure were significantly (p<0.05) higher while listening to unpleasant music than to pleasant music. The error of omission was significantly (p=0.008) higher for unpleasant music with task performance. The error of commission was significantly (p<0.05) higher for unpleasant music with task performance compared with listening to pleasant music and silence. Performance on the task was better following pleasant music than silence. The results show that listening to unpleasant music increased the cardiovascular measures, resulting in a reduction in task performance. Individuals who listened to pleasant music improved in some aspects of task performance compared with performing the task in the control condition.

Journal Article
TL;DR: This study proposes Multi-Layer Perceptron Neural Network (MLPNN) optimisation using a Genetic Algorithm (GA), which optimises the learning rate and momentum, to classify ECG arrhythmia.
Abstract: Cardiac arrhythmia indicates susceptibility to serious heart disease and stroke. Early diagnosis of cardiac arrhythmia helps in administering aid to patients and avoiding cardiac complications. An electrocardiogram (ECG) helps in identifying cardiac arrhythmia. Automated arrhythmia detection has been developed over the past few decades to simplify the monitoring task and improve diagnostic efficiency. ECG arrhythmia detection accuracy improves with the use of machine learning and data mining methods, and several algorithms have been developed for the detection and classification of ECG signals. This study proposes Multi-Layer Perceptron Neural Network (MLPNN) optimisation using a Genetic Algorithm (GA) to classify ECG arrhythmia. The Symlet wavelet is used to extract R-R intervals from ECG data as features, while symmetric uncertainty performs feature reduction. The GA optimises the learning rate and momentum, and Simulated Annealing (SA) is applied to refine the GA population.

Journal ArticleDOI
TL;DR: This paper presents a computer-aided diagnosis system to classify mammograms into three different densities (fatty, glandular and dense) to improve the accuracy of breast cancer detection.
Abstract: Mammography is a widely used technique in breast cancer diagnosis. This paper presents a computer-aided diagnosis system to classify mammograms into three different densities: fatty, glandular and dense. Mammographic density is an important factor in breast cancer risk, and higher breast densities increase the difficulty of detecting cancer in a mammogram. The accuracy of breast cancer detection therefore depends on the breast tissue characteristics. Several texture features, such as the histogram, local binary pattern, gray-level co-occurrence matrix, gray-level difference matrix, gray-level run-length matrix, Gabor transform and discrete wavelet transform, were extracted from the mammograms. In this work, a correlation-based feature selection technique was used. Breast tissue classification based on the texture features was evaluated with an artificial neural network, a linear discriminant, a Support Vector Machine (SVM) and a Naive Bayes classifier. The performance of the proposed method was examined using the Mammogram Image Analysis Society (MIAS) database. Experimental results demonstrate that the best performance was achieved by the SVM, yielding an accuracy of 96.11%.
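
A hedged sketch of two of the listed texture descriptors (GLCM statistics and LBP) feeding an SVM; the remaining features, the correlation-based feature selection and the MIAS-specific handling are omitted.

```python
# Sketch: GLCM and LBP texture features from an 8-bit breast-tissue ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC

def texture_features(roi_u8: np.ndarray) -> np.ndarray:
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(roi_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, lbp_hist])

# clf = SVC(kernel="rbf").fit(np.vstack([texture_features(r) for r in rois]), labels)
```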

Journal ArticleDOI
TL;DR: It has been observed that the performance of ANN classifier in terms of classification accuracy and time required to classify is the best among the three classifiers considered for EMG signal analysis.
Abstract: In the design of a health monitoring system, the Electromyography (EMG) signal is one of the key parameters, so it is very important to utilise the EMG signal carefully. In this paper, different classification methods have been used to classify EMG signals. EMG signals were extracted from five different subjects performing eight different motions of the right hand, using LabVIEW. The classification techniques used include k-NN, naive Bayes and Artificial Neural Network (ANN) classifiers. Five feature vectors, mean absolute value, average band power, standard deviation, peak-to-peak value and root mean square value, are used to train the classifiers. From the results obtained, it is observed that the performance of the ANN classifier, in terms of classification accuracy and the time required to classify, is the best among the three classifiers considered for EMG signal analysis. The ANN achieves 100% classification efficiency for EMG signals obtained from different subjects relative to their hand motion. Based on this better classification efficiency, a better health monitoring system can be developed.
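
A small sketch of the five features named above, computed per EMG window; the sampling rate and windowing are assumptions.

```python
# Sketch: five time-domain/spectral features from one EMG window.
import numpy as np
from scipy.signal import welch

def emg_features(window: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    mav = np.mean(np.abs(window))                        # mean absolute value
    _, psd = welch(window, fs=fs, nperseg=min(256, len(window)))
    band_power = psd.mean()                              # average band power
    std = np.std(window)                                 # standard deviation
    peak_to_peak = np.ptp(window)                        # peak-to-peak value
    rms = np.sqrt(np.mean(window ** 2))                  # root mean square value
    return np.array([mav, band_power, std, peak_to_peak, rms])
```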

Journal ArticleDOI
TL;DR: An algorithm is developed to isolate each of the sounds S1 and S2, providing an assessment of their duration, the interval between their internal components and an estimate of their spectral parameters for the estimation of pulmonary artery systolic pressure.
Abstract: To further support heart sound signal analysis, we developed an algorithm to isolate each of the sounds S1 and S2 and to provide an assessment of their duration, the interval between their internal components and an estimate of their spectral parameters for the estimation of pulmonary artery systolic pressure, enabling possible discrimination of pathological cases by severity across different heart sound signals.

Journal Article
TL;DR: The goal of this research is to develop and test an ultrasound-based gait tachography system to enable the doctors and physiotherapists to evaluate lower limb extremity problems.
Abstract: Gait analysis is an approach to analysing the structure and function of the foot, lower limb and body during walking or running. The aim of gait analysis in rehabilitation centres goes far beyond a simple functional assessment tool, as it can help determine the complex relationships between impairment, functional limitation and disability. The goal of this research is to develop and test an ultrasound-based gait tachography system that enables doctors and physiotherapists to evaluate lower limb problems. Gait tachography employs the Doppler frequency-shift principle to calculate the change in velocity of the body's centre of gravity during gait. The ultrasound-based gait tachograph uses simple instrumentation, composed of a transmitter block and a receiver block. It is cost-efficient, comparatively small and lightweight, places little psychological burden on the subject, and is suitable for use in non-laboratory settings.
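
A minimal sketch of the Doppler relation underlying gait tachography; the carrier frequency, speed of sound and beam angle are assumed values, not taken from the paper.

```python
# Sketch: velocity of the body's centre of gravity from the Doppler shift of
# a reflected ultrasound beam: v = c * delta_f / (2 * f0 * cos(theta)).
import math

def doppler_velocity(delta_f_hz: float, f0_hz: float = 40e3,
                     c_m_s: float = 343.0, beam_angle_deg: float = 30.0) -> float:
    return c_m_s * delta_f_hz / (2.0 * f0_hz * math.cos(math.radians(beam_angle_deg)))

# Example: a 100 Hz shift at a 40 kHz carrier and 30-degree beam angle -> ~0.5 m/s.
# print(doppler_velocity(100.0))
```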

Journal ArticleDOI
TL;DR: A Tree-Based Access Control (TBAC) approach for fine-grained and secure access of the PHR in the cloud environment and Tree-based Group Diffie-Hellman (TBGDH) algorithm is used to generate the key instance for the encryption process.
Abstract: The Personal Health Record (PHR) system is an emerging patient-oriented model for sharing health information through a cloud environment. Previously, a single attribute-authority-based security scheme was used for sharing PHRs in the cloud. However, this scheme is not practically applicable because of security and privacy issues, and existing access control approaches require considerable time to encrypt and decrypt the PHR file. This paper proposes a Tree-Based Access Control (TBAC) approach for fine-grained and secure access to PHRs in the cloud environment. In our approach, the Tree-based Group Diffie-Hellman (TBGDH) algorithm is used to generate the key instance for the encryption process. The Attribute-based Encryption (ABE) approach is used with different hierarchical levels of users to protect the personal health data. The access policies are based on user attributes.

Journal ArticleDOI
P.V. Jayaram, R. Menaka
TL;DR: This work proposes an approach to detect the presence of ischemic lesions in the tissue part of the brain by employing a Skull Elimination Algorithm (SEA), a Central Line Sketching Algorithm (CLSA), Fuzzy C-means (FCM) clustering-based segmentation and the Discrete Orthonormal Stockwell Transform (DOST).
Abstract: This work proposes an approach to detect the presence of ischemic lesions in the tissue part of the brain. Accurate classification and segmentation of stroke-affected regions are essential for quick diagnosis, and image classification is an important step for the high-level processing required in automatic brain stroke classification. The proposed method employs a Skull Elimination Algorithm (SEA), a Central Line Sketching Algorithm (CLSA), Fuzzy C-means (FCM) clustering-based segmentation and the Discrete Orthonormal Stockwell Transform (DOST). The skull elimination and central line sketching algorithms are the main preprocessing stages: skull elimination extracts only the tissue part of the brain, while the CLSA splits the Magnetic Resonance Image (MRI) into two equal sections. FCM-based segmentation is mainly used to extract the lesion. In the next stage, the DOST is applied to the left and right sections of the brain image to extract features such as the mean, median and standard deviation, which are used to classify normal and abnormal MRIs.

Journal ArticleDOI
TL;DR: A mathematical model is developed, providing an understanding of how the strut embedment, diffusivity and reversible binding affect the distribution and binding of drug in the arterial tissue, and results are consistent with those of previous investigations which further validate the applicability of the present model.
Abstract: Of concern is the transport of drug into and through the arterial wall from an embedded drug-eluting stent (DES), having struts of circular cross-section. The presence of the specific binding site action is modelled using a reversible chemical reaction. A mathematical model is developed where the free drug transport is considered as an unsteady convection-diffusion-reaction process, while the bound drug as a reaction-diffusion process. An explicit finite difference scheme has been used to tackle the governing equations of motion together with the realistic boundary conditions. Results include a parametric study, showing the spatio-temporal drug uptake and its retention in the arterial tissue, providing an understanding of how the strut embedment, diffusivity and reversible binding affect the distribution and binding of drug in the arterial tissue. The graphical results are consistent with those of previous investigations which further validate the applicability of the present model.
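
For orientation, a plausible form of the governing equations consistent with the abstract is sketched below; the symbols, signs and the diffusive term for the bound phase are assumptions, not the authors' exact formulation. Here c and b are the free and bound drug concentrations, u is the transmural convective velocity, D_c and D_b are diffusivities, k_on and k_off are the forward and reverse binding rates, and b_max is the binding-site density.

```latex
% Assumed form: free drug as unsteady convection-diffusion-reaction,
% bound drug as reaction-diffusion with reversible binding.
\begin{align}
  \frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c
    &= D_c \nabla^2 c - k_{\mathrm{on}}\, c\,(b_{\max} - b) + k_{\mathrm{off}}\, b, \\
  \frac{\partial b}{\partial t}
    &= D_b \nabla^2 b + k_{\mathrm{on}}\, c\,(b_{\max} - b) - k_{\mathrm{off}}\, b.
\end{align}
```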

Journal Article
TL;DR: The proposed prototype provides instantaneous measurements of the force imposed on the ureter wall during the stone removal procedure and thereby helps reduce the risk of ureteral perforations and avulsions by indicating safe and hazardous extraction force levels.
Abstract: Ureteroscopic stone extraction devices are effective and ubiquitous tools in the management of urolithiasis. Ureteroscopy, however, has the potential to cause injury to the ureter: perforation and, on some occasions, avulsion of the ureter as a result of excessive forces on the extraction device are serious complications of this practice. In this paper, we propose the integration of a force sensor with stone extraction devices. The proposed prototype provides instantaneous measurements of the force imposed on the ureter wall during the stone removal procedure and thereby helps reduce the risk of ureteral perforations and avulsions by indicating safe and hazardous extraction force levels. A prototype was built and a bench-top test was performed. The results obtained from the bench-top tests were consistent with results reported in previous works in the literature.

Journal Article
TL;DR: A Back Propagation Neural Network is used as a non-linear filter for fissure enhancement, and simulation results show improved accuracy for the segmentation of lobes from lungs in CT scan images.
Abstract: Computed tomography is one of the most efficient imaging techniques, revealing the internal parts of the human body by scanning a specific area. The CT image shows detailed information about the lungs, which is used for surgical planning. Segmenting lobes from the lung in a CT image is a very challenging task when abnormalities or anomalies are present in the lung image, and the fissure enhancement process faces various problems such as incomplete or only partially visible fissures. A Back Propagation Neural Network (BPNN) is used for the fissure enhancement process: the network acts as a non-linear filter and is trained using the back propagation algorithm. Using the fissure-enhanced image, the lobes are segmented with the Canny edge detection method. Simulation results show improved accuracy for the segmentation of lobes from lungs.

Journal ArticleDOI
TL;DR: This paper surveys the literature on computer analysis of abnormal pulmonary CT, covering three main steps: pre-processing, segmentation of nodule candidates and nodule classification; the challenges, limitations and future directions are also discussed.
Abstract: In medical imaging, Computer-Aided Detection (CAD) aims to improve diagnostic decisions, detection performance and nodule detection. Computed Tomography (CT) technology allows isotropic acquisition of the complete chest within a single breath hold, but manual interpretation of the resulting data is time consuming; hence, automated analysis of CT images is necessary. Lesions in the lung are potential manifestations of lung cancer, and early detection helps increase the chance of survival. This paper surveys the literature on computer analysis of abnormal pulmonary CT. All of these works deal with three main steps: pre-processing, segmentation of nodule candidates and nodule classification. In addition, the challenges, limitations and future directions are discussed.

Journal ArticleDOI
TL;DR: A novel hybrid algorithm for medical image indexing, based on the lifting-scheme wavelet transform and principal component analysis, shows an efficiency of 95%, which is significantly higher than recent methods in the CBIR domain.
Abstract: Medical imaging technologies produce vast numbers of images that are stored in large databases, and efficient indexing algorithms are required to access these databases. This paper proposes a novel hybrid algorithm for medical image indexing. The combination of the lifting-scheme wavelet transform and principal component analysis has been used in some image processing areas, but it has not been used for image indexing. The wavelet transform is used to decompose the images, and principal component analysis is then applied to extract the pertinent components. The extracted features are used to create an image signature. Finally, images are retrieved by comparing the signatures of the query image and all database images using the Euclidean distance. We have tested our algorithm on retinal, cerebral and melanoma image databases. The results obtained by our algorithm are compared with several published methods cited in the literature and show an efficiency of 95%, which is significantly higher than recent methods in the CBIR domain.
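
A hedged sketch of the indexing pipeline: wavelet decomposition (a standard DWT stands in for the lifting-scheme transform), PCA signature extraction and Euclidean-distance retrieval; the wavelet, decomposition level, component count and the assumption that all images share a common resolution are not taken from the paper.

```python
# Sketch: wavelet + PCA signatures and Euclidean-distance retrieval.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_vector(img: np.ndarray, wavelet="db2", level=2) -> np.ndarray:
    # Approximation coefficients of a 2-level 2-D DWT, flattened to a vector
    # (assumes all images are resized to a common resolution).
    cA = pywt.wavedec2(img.astype(float), wavelet, level=level)[0]
    return cA.ravel()

def build_index(database_images, n_components=32):
    X = np.vstack([wavelet_vector(im) for im in database_images])
    pca = PCA(n_components=n_components).fit(X)
    return pca, pca.transform(X)                     # signatures of the database

def retrieve(query_img, pca, signatures, top_k=5):
    q = pca.transform(wavelet_vector(query_img)[None, :])
    dists = np.linalg.norm(signatures - q, axis=1)   # Euclidean distance
    return np.argsort(dists)[:top_k]                 # indices of closest images
```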