
Showing papers in "International Journal of Biomedical Engineering and Technology in 2017"


Journal ArticleDOI
TL;DR: An efficient scheme for denoising electrocardiogram (ECG) signals is proposed, based on a wavelet-based threshold mechanism within a dual-tree complex wavelet packet scheme, in which an opposition-based self-adaptive learning particle swarm optimisation (OSLPSO) is utilised for threshold optimisation.
Abstract: Among the various biological signals, the electrocardiogram (ECG) is significant for diagnosing cardiac arrhythmia. The accurate analysis of a noisy ECG signal is a very motivating challenge: for automated analysis, the noise present in the ECG signal needs to be removed for a reliable diagnosis. Numerous investigators have reported different techniques for denoising the ECG signal in recent years. In this paper, an efficient scheme for denoising ECG signals is proposed, based on a wavelet-based threshold mechanism. The scheme applies an opposition-based self-adaptive learning particle swarm optimisation (OSLPSO) in a dual-tree complex wavelet packet framework, in which the OSLPSO is utilised for threshold optimisation. Normal and abnormal ECG signals from the MIT/BIH arrhythmia database are used to evaluate the approach, with white Gaussian noise added artificially at 5 dB, 10 dB and 15 dB. Simulation results illustrate that the proposed system performs well at various noise levels and obtains better visual quality than other methods.

192 citations
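As a rough illustration of the thresholding step described above (the dual-tree complex wavelet packet transform and the OSLPSO-optimised threshold are not reproduced; the threshold value below is a hypothetical constant), soft thresholding of detail coefficients can be sketched as:

```python
def soft_threshold(coeffs, thr):
    """Shrink wavelet detail coefficients towards zero by thr (soft rule)."""
    out = []
    for c in coeffs:
        if c > thr:
            out.append(c - thr)
        elif c < -thr:
            out.append(c + thr)
        else:
            out.append(0.0)
    return out

# Hypothetical detail coefficients and threshold; small coefficients (mostly
# noise) are zeroed, large ones (signal structure) are shrunk and kept.
denoised = soft_threshold([0.9, -0.2, 1.4, -0.7, 0.1], 0.5)
```

In the paper's scheme the scalar `thr` would instead be chosen per sub-band by the OSLPSO search rather than fixed by hand.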


Journal ArticleDOI
TL;DR: This paper proposes an application-oriented architecture for big data systems, based on a study of published big data architectures for specific use cases, together with an overview of state-of-the-art machine learning algorithms for processing big data in healthcare and other applications.
Abstract: Big data has gained much attention from researchers in healthcare, bioinformatics, and information sciences; data production is projected to be 44 times greater than in 2009. Hence the volume, velocity, and variety of data rapidly increase, and it is difficult to store, process and visualise such huge data using traditional technologies. Many organisations, such as Twitter, LinkedIn, and Facebook, use big data for different use cases in the social networking domain, and implementations of the architectures for these use cases have been published worldwide. However, conceptual architectures for specific big data applications remain limited. This paper proposes an application-oriented architecture for big data systems, based on a study of published big data architectures for specific use cases. It also provides an overview of the state-of-the-art machine learning algorithms for processing big data in healthcare and other applications.

102 citations


Journal ArticleDOI
TL;DR: The experimental results show the performance of the proposed methodology on the Pima Indian Diabetes Dataset (PIDD) and provide better classification for the diagnosis of diabetes patients on PIDD.
Abstract: Modern society is prone to many life-threatening diseases which, if diagnosed early, can be easily controlled. The implementation of disease diagnostic systems has gained popularity over the years. The main aim of this research is to provide a better diagnosis of diabetes. Several existing methods have already been applied to the diagnosis of diabetes datasets. The proposed approach consists of two stages: in the first stage, a Genetic Algorithm (GA) is used for attribute (feature) selection, reducing the 8 attributes to 4; in the second stage, a Radial Basis Function Neural Network (RBF NN) is used for classification on the selected attributes. The experimental results show the performance of the proposed methodology on the Pima Indian Diabetes Dataset (PIDD) and provide better classification for the diagnosis of diabetes patients on PIDD. The GA removes insignificant features, reducing the cost and computation time and improving the accuracy and ROC of the classification. The proposed method can also be applied to other medical diseases.

30 citations
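The two-stage idea (a GA searches over binary attribute masks, a classifier scores each mask) can be sketched as below. The scoring function is a stand-in for the RBF-NN classification accuracy the paper would use, and all GA settings here are illustrative, not the paper's:

```python
import random

N_FEATURES = 8  # PIDD has 8 attributes

def fitness(mask, score_fn):
    # Penalise empty subsets; otherwise delegate to the (assumed) evaluator.
    if not any(mask):
        return 0.0
    return score_fn(mask)

def evolve(score_fn, pop_size=10, generations=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with one-point crossover + mutation.
        pop.sort(key=lambda m: fitness(m, score_fn), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:          # mutation: flip one random bit
                i = rng.randrange(N_FEATURES)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, score_fn))

# Stand-in evaluator: pretend only the first four attributes are informative.
best = evolve(lambda m: sum(m[:4]) - 0.1 * sum(m[4:]))
```

Because the top half of each generation survives unchanged, the best mask found so far is never lost.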


Journal ArticleDOI
TL;DR: The combinations DWT+SVM and Hjorth+NB show the best performance for each of the emotions, indicating that the approach has the potential to be used as a real-time EOG-based emotion assessment system.
Abstract: In this study, for recognition of positive, neutral and negative emotions using EOG signals, subjects were presented with audio-visual stimuli to elicit emotions. Hjorth parameters and the Discrete Wavelet Transform (DWT) (Haar mother wavelet) were employed as feature extractors. A Support Vector Machine (SVM) and Naive Bayes (NB) were used for classifying the emotions. The multiclass classification results, in terms of classification accuracy, show the best performance with the combinations DWT+SVM and Hjorth+NB for each of the emotions. The average SVM classifier accuracies with DWT for horizontal and vertical eye movements are 81%, 76.33%, 78.61% and 79.85%, 75.63%, 77.67%, respectively. The experimental results show average recognition rates of 78.43%, 74.61% and 76.34% for horizontal and 77.11%, 74.03% and 75.84% for vertical eye movement when Naive Bayes is paired with the Hjorth parameters. These results indicate that the approach has the potential to be used as a real-time EOG-based emotion assessment system.

12 citations
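For reference, the Hjorth parameters used as one of the feature sets above (activity, mobility, complexity) can be computed directly from a signal's variance and the variances of its difference signals. A minimal sketch on plain Python lists, without the study's windowing or pre-processing:

```python
import math

def hjorth(signal):
    """Return Hjorth (activity, mobility, complexity) of a 1-D signal."""
    def variance(x):
        mean = sum(x) / len(x)
        return sum((v - mean) ** 2 for v in x) / len(x)

    d1 = [b - a for a, b in zip(signal, signal[1:])]   # first difference
    d2 = [b - a for a, b in zip(d1, d1[1:])]           # second difference

    activity = variance(signal)
    mobility = math.sqrt(variance(d1) / activity)
    complexity = math.sqrt(variance(d2) / variance(d1)) / mobility
    return activity, mobility, complexity
```

For a slowly varying signal such as a low-frequency sinusoid, mobility is small and complexity is close to 1, which is the intuition behind using these parameters as compact waveform descriptors.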


Journal ArticleDOI
TL;DR: The proposed SLO approach automatically separates the true retinal area from the artefacts of the image based on image processing and machine learning.
Abstract: Early detection and treatment of retinal disease are crucial to avoid preventable vision loss. Scanning laser ophthalmoscopes (SLOs) can be used for early detection of retinal diseases, since an SLO can image a large part of the retina for better diagnosis. However, eyelashes and eyelids are also imaged along with the retina, and these artefacts must be excluded. The proposed SLO approach automatically extracts both the true retinal area and the artefacts of the image based on image processing and machine learning. Superpixels and a Deep Neural Network (DNN) are used to reduce the complexity of the image processing tasks, with the result provided as a primitive image pattern. The framework computes textural and structural feature information, and this approach results in effective analysis of the retinal area and the artefacts.

11 citations


Journal ArticleDOI
TL;DR: The results revealed the effectiveness of the suggested time-frequency-based analysis method for detecting a wide range of emotions using EEG signals.
Abstract: Emotion detection has a crucial role in many domains, especially in the health and e-learning sectors. This study aims to improve the accuracy of detecting emotions from brain activity. It addresses two primary problems of current emotion recognition systems. Firstly, existing systems can classify only a small number of emotion classes. Secondly, analysis of the EEG is complex due to its non-stationary and non-linear characteristics. We conducted experiments recording the EEG of subjects using 14 electrodes attached directly to the scalp according to the International 10-20 system. The raw signals are pre-processed to remove artefacts. Emotional patterns in the EEG are detected in the time-frequency domain using the Hilbert-Huang Transform (HHT). A multiclass Support Vector Machine classifier (MC-SVM) is used to distinguish emotions from the recorded data based on the instantaneous frequency obtained through the HHT. The results revealed the effectiveness of the suggested time-frequency-based analysis method for detecting a wide range of emotions using EEG signals.

9 citations


Journal ArticleDOI
TL;DR: The study analysed, by uniaxial testing, the mechanical properties of porcine and bovine pericardial specimens stabilised with glutaraldehyde or with ethylene glycol diglycidyl ether, and draws conclusions about the desirability of using glutaraldehyde-treated porcine pericardium for transcatheter aortic valve prostheses.
Abstract: The study analysed, by uniaxial testing, the mechanical properties of porcine and bovine pericardial specimens stabilised with glutaraldehyde or with ethylene glycol diglycidyl ether (EGDE). The paper also describes the behaviour of these samples when compressed to 18 Fr, simulated via the finite element method. Analysis of the biomaterial properties demonstrated low strength for all types of porcine pericardium, with the lowest value for the EGDE-treated pericardium (2.75 N). The lowest stress-strain state during the simulation of leaflet crimping was observed in the experimental porcine pericardial patch, with a minimum principal logarithmic strain of 0.32 m/m. The results of this study allow conclusions about the desirability of using glutaraldehyde-treated porcine pericardium for transcatheter aortic valve prostheses.

9 citations


Journal ArticleDOI
TL;DR: This study analyses and processes the photoplethysmographic signal waveform in order to evaluate arterial stiffness, and shows that the AI and the b/a ratio differ between healthy and pathological subjects.
Abstract: This study analyses and processes the photoplethysmographic (PPG) signal waveform in order to evaluate arterial stiffness. The PPG signal carries determinant physiological parameters that are used in evaluating arterial stiffness, and these parameters are determined through the processing and analysis of the PPG signal. The physiological parameters determined are the augmentation index (AI) and the b/a ratio, and the work concerns their determination and analysis. The acquired PPG signal is first filtered using the Undecimated Wavelet Transform (UWT), then its second derivative is calculated, followed by detection of peaks and valleys in order to determine the AI and the b/a ratio. These parameters are then evaluated and analysed for different healthy and pathological subjects. The results obtained show that the AI and the b/a ratio differ between healthy and pathological subjects.

8 citations
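A naive sketch of the second-derivative step described above: the SDPPG is approximated by central finite differences, and the b/a ratio is read from the global maximum (taken here as the a-wave) and the following minimum (taken as the b-wave). Real PPG processing needs the filtering stage and more careful wave labelling; this is only illustrative:

```python
def second_derivative(signal):
    """Discrete second derivative (unit sample spacing assumed)."""
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

def b_over_a(sdppg):
    """Naive b/a: a-wave = global maximum, b-wave = minimum after it."""
    a_idx = max(range(len(sdppg)), key=lambda i: sdppg[i])
    b = min(sdppg[a_idx:])
    return b / sdppg[a_idx]
```

Since the a-wave is positive and the b-wave negative in a typical SDPPG, the ratio is negative; its magnitude is what is compared across subjects.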


Journal ArticleDOI
TL;DR: In this article, a variational framework is proposed for the removal of Rician noise from MRI, where the first term is obtained by minimising the negative log-likelihood of the Rician pdf and the second term is a prior function, a nonlinear complex diffusion-based filter.
Abstract: In this paper, a new method is proposed for the removal of Rician noise from MRI. The method is cast into a variational framework consisting of two terms, wherein the first is a data likelihood term and the second a prior function. The first term is obtained by minimising the negative log-likelihood of the Rician pdf. Owing to the ill-posedness of the likelihood term, a prior function is introduced, which is a nonlinear complex diffusion-based filter. A regularisation parameter is used to balance the trade-off between the data fidelity term and the prior. A performance analysis of the proposed method against other standard methods is presented for the BrainWeb dataset. The values of performance measures such as PSNR, RMSE, SSIM and CP are presented for various noise levels. From the simulation results, it is observed that the proposed method performs better than the other methods under consideration.

8 citations
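As background (not printed in this abstract; this is the standard form of the Rician model, stated here as an assumption about the paper's data term), the Rician pdf and the resulting negative log-likelihood data term are typically written as:

```latex
% Rician likelihood (standard form, assumed):
p(m \mid u,\sigma) = \frac{m}{\sigma^{2}}
  \exp\!\left(-\frac{m^{2}+u^{2}}{2\sigma^{2}}\right)
  I_{0}\!\left(\frac{m\,u}{\sigma^{2}}\right), \qquad m \ge 0

% Negative log-likelihood data term (u-independent terms dropped):
E_{\mathrm{data}}(u) = \int_{\Omega}\left[\frac{u^{2}}{2\sigma^{2}}
  - \log I_{0}\!\left(\frac{m\,u}{\sigma^{2}}\right)\right]dx
```

Here m is the observed magnitude image, u the unknown clean image, and I_0 the modified Bessel function of the first kind; the paper's full functional adds the complex-diffusion prior, weighted by the regularisation parameter.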


Journal ArticleDOI
TL;DR: In this paper, a low-dose X-ray CT image reconstruction method based on a statistical sinogram smoothing approach is proposed. The method is cast into a variational framework, and its solution is based on minimisation of an energy functional.
Abstract: Pre-processing the noisy sinogram before reconstruction is an effective and efficient way to address the low-dose X-ray Computed Tomography (CT) problem. The objective of this paper is to develop a low-dose CT image reconstruction method based on a statistical sinogram smoothing approach. The proposed method is cast into a variational framework, and its solution is based on minimisation of an energy functional consisting of two terms, viz. a data fidelity term and a regularisation term. The data fidelity term is obtained by minimising the negative log-likelihood of the signal-dependent Gaussian probability distribution, which depicts the noise distribution in low-dose X-ray CT. The second term, i.e., the regularisation term, is a non-linear CONvolutional Virtual Electric Field Anisotropic Diffusion (CONVEF-AD) filter, an extension of the Perona-Malik (P-M) anisotropic diffusion filter. The main task of the regularisation function is to address the ill-posedness of the solution for the first term. The proposed method is capable of dealing with both signal-dependent and signal-independent Gaussian noise, i.e., mixed noise. For experimentation, two different sinograms generated from test phantom images are used. The performance of the proposed method is compared with that of existing methods. The results obtained show that the proposed method outperforms many recent approaches and is capable of removing the mixed noise in low-dose X-ray CT.

7 citations


Journal ArticleDOI
TL;DR: An optimised pixel-based classification cooperating with a region-growing strategy is proposed that yields nucleus and cytoplasm segmentations close to those expected in the reference images.
Abstract: Pixel-based classification is an automatic approach for classifying all pixels in an image, but it does not take into account the spatial information of the region of interest. On the other hand, region-growing methods do take neighbouring-pixel information into account; however, they need a group of pixels called 'points of interest' to initialise the growing process. In this paper, we propose an optimised pixel-based classification that cooperates with a region-growing strategy. This original segmentation scheme is performed in two phases for the automatic recognition of white blood cells (WBC): the first is a learning step using the colour characteristics of each pixel in the image; the second applies region growing by classifying neighbouring pixels starting from the pixels of interest extracted by the ultimate erosion technique. This process shows that the cooperation yields nucleus and cytoplasm segmentations close to those expected in the reference images.

Journal ArticleDOI
TL;DR: The developed model is implemented in MATLAB and its output is compared with existing techniques such as FCM and K-means to evaluate the performance of the proposed system.
Abstract: Segmentation is the grouping of a set of pixels mapped from the structures inside the prostate and the background image. The main aim of this research is to provide a better segmentation technique for medical images by addressing the drawbacks that currently exist in the density-map-based discriminability of feature values. In this paper, we propose a density-map-based method for medical image segmentation. The accuracy of the result may not meet expectations when the dimension of the dataset is high, since the chosen dataset cannot be assumed to be free of noise and faults. The kernel change, i.e., the segmentation, is made using a hybrid K-means clustering algorithm. This method thus provides the segmentation processing information as well as noise-free output in an efficient way. The developed model is implemented in MATLAB, and its output is compared with existing techniques such as FCM and K-means to evaluate the performance of the proposed system.
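As a generic illustration of the clustering step (the paper's hybrid variant and its density-map handling are not reproduced here), a plain K-means on 1-D grey-level intensities looks like:

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Plain K-means on scalar intensities; returns sorted cluster centres."""
    rng = random.Random(seed)
    centres = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest centre.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centres[i]))
            clusters[idx].append(v)
        # Move each centre to the mean of its cluster (keep it if empty).
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)
```

On the intensities [0, 0, 1, 1, 10, 10, 11, 11] with k=2, this converges to the centres [0.5, 10.5], separating the two intensity groups.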

Journal ArticleDOI
TL;DR: A scheme for automated segmentation of the FAZ in colour fundus images is proposed, and results show that the proposed scheme successfully detects the FAZ.
Abstract: One of the complications of diabetes, diabetic retinopathy (DR), is characterised by damage to the retinal vessels, especially in the macular region. Located at the centre of the retina and appearing as a cloudy dark spot in a colour fundus image, the macula is a fundamental area for high-acuity colour vision. The foveal avascular zone (FAZ) is located at the centre of the macula and is encircled by interconnected capillary beds. The FAZ has a round or oval shape with an average diameter of 500-600 μm. In DR patients, the FAZ becomes larger due to the loss of perifoveal retinal vessels. In this study, a scheme for automated segmentation of the FAZ in colour fundus images is proposed. The scheme consists of four stages: pre-processing, image enhancement, vessel segmentation and FAZ segmentation. Results show that the average sensitivity, specificity and accuracy obtained are 80.86%, 99.17% and 97.49%, indicating that the proposed scheme successfully detects the FAZ.

Journal ArticleDOI
TL;DR: The proposed method redesigns the socket to improve patient comfort using FEA along with reverse engineering techniques; quantification of the location, intensity and distribution of stress-strain on the socket leads to an improved socket design.
Abstract: The objective of this work is to identify the optimum pressure distribution of the prosthetic socket under a specific load using finite element analysis (FEA). In addition, this study includes topology optimisation of the socket using Altair's OptiStruct software. The socket needs to be flexible, but strong, to permit normal gait movement without twisting or bending under pressure. Plaster of Paris (PoP) sockets for different clinical cases and below-knee (BK) amputees with different stump geometries have been considered in this paper. The CAD model is developed using point cloud data, and a meshing approach is used for creating a volume mesh. The quantification of the location, intensity and distribution of stress-strain on the socket leads to an improved socket design. The proposed method redesigns the socket to improve patient comfort using FEA along with reverse engineering techniques. Further, patients would feel comfortable with the light weight of customised prosthetic sockets. The results of the study are in line with the available literature.

Journal ArticleDOI
TL;DR: A new algorithm based on the Mean Absolute Deviation (MAD), using a lower feature vector dimension and a linear classifier, is proposed for automatic seizure detection using EEG signals; it reduces the number of features per frame with less complexity for all the problems considered.
Abstract: Epileptic seizures occur randomly and are difficult to identify in multi-channel Electroencephalogram (EEG) recordings. Most researchers have used large-dimension features, complex transformation techniques and non-linear classifiers. A new algorithm based on the Mean Absolute Deviation (MAD), using a lower feature vector dimension and a linear classifier, is proposed for automatic seizure detection using EEG signals. The proposed method calculates the MAD of each channel on frames of 256 samples. In order to reduce the feature dimension, the mean and maximum values of the MAD across all channels were selected as discriminating parameters. The proposed algorithm is tested on the publicly available Bonn University EEG database for three cases. The accuracy of the algorithm was 100% in all the problems considered. The proposed work outperforms the other available state-of-the-art methods on the same database in terms of complexity, reducing the number of features per frame for all the problems considered.
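The per-frame feature extraction described above reduces each 256-sample multi-channel frame to just two numbers. A minimal sketch, with channel data assumed as plain Python lists:

```python
def mad(samples):
    """Mean absolute deviation around the mean of one channel frame."""
    mean = sum(samples) / len(samples)
    return sum(abs(v - mean) for v in samples) / len(samples)

def frame_features(frame):
    """frame: list of channels, each a list of samples (256 in the paper).
    Returns the two discriminating parameters: (mean MAD, max MAD)."""
    mads = [mad(ch) for ch in frame]
    return sum(mads) / len(mads), max(mads)
```

The resulting two-dimensional feature vector per frame is what keeps the downstream linear classifier cheap.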

Journal ArticleDOI
TL;DR: This paper uses a predictive analysis algorithm in the Hadoop/MapReduce environment to predict DM complications and the type of treatment to be adopted, providing an efficient way to treat patients with better outcomes.
Abstract: Owing to the increasing developments of this digitised era, it is necessary to move from paper health records to digital ones; handling the large volume of healthcare data for analysis and using it for efficient treatment is a crucial issue. Diabetes Mellitus (DM) is a Non-Communicable Disease (NCD); it is a major health hazard in developing countries, associated with long-term complications and numerous health disorders. The main idea of this project is to integrate massive unstructured diabetic data from various sources, normalised to a proper scale, to obtain an optimised solution for the medical field using Hadoop. Although traditional database management systems can handle data effectively, processing a high volume of unstructured data in a reasonable time is very challenging. This paper uses a predictive analysis algorithm in the Hadoop/MapReduce environment to predict DM complications and the type of treatment to be adopted. Based on the analysis, this system provides an efficient way to treat patients with better outcomes.

Journal ArticleDOI
TL;DR: FG may offer a performance benefit on natural turf compared with AG and TF; however, the increased knee valgus angle and decreased knee flexion angle of FG may increase knee loading and the risk of anterior cruciate ligament (ACL) injury.
Abstract: The purpose of this study was to test for differences in performance and injury risk between three soccer-shoe outsole configurations on natural turf. A total of 14 experienced soccer players participated in the tests. Participants were asked to complete straight-ahead running and 45° left sidestep cutting tasks at a speed of 5.0±0.2 m/s on natural turf, wearing soccer shoes with firm-ground design (FG), artificial-ground design (AG) and turf cleats (TF) in random order. During the 45° cut, FG showed a significantly smaller peak knee flexion angle and a greater abduction angle than TF. FG showed significantly greater peak horizontal ground reaction force (GRF) and average required traction ratio compared with AG and TF. FG may therefore offer a performance benefit on natural turf compared with AG and TF. However, the increased knee valgus angle and decreased knee flexion angle of FG may increase knee loading and the risk of anterior cruciate ligament (ACL) injury. The higher vertical average loading rate and excessive plantar pressure of FG may also result in calluses on the plantar skin, forefoot pain or even metatarsal stress fracture. In summary, FG would enhance athletic performance on natural turf, but may also carry higher risks of non-contact injuries compared with AG and TF.

Journal ArticleDOI
TL;DR: This work mainly emphasises the use of MFCC and MF-PLP features at the front end and HMM and K-means clustering at the back end, and shows that K-means clustering gives better results than HMM.
Abstract: This paper presents the development of a robust speech recognition system for children with hearing impairment. It is a challenging task to recognise the distorted speech of the hearing impaired, since their speech normally varies in accent, pronunciation and speed. Because of their inability to hear, they are not able to speak, even though the nasal and oral cavities aiding speech production are intact, as in normal persons. This work mainly emphasises the use of MFCC and MF-PLP features at the front end and HMM and K-means clustering at the back end. The performance of the system is evaluated and compared for the two modelling techniques: recognition accuracy is 94%, 97% and 84% for MFCC with HMM, and 98.3%, 93.5% and 93.6% for MF-PLP with K-means clustering, for systems recognising isolated digits, connected words and continuous speech of the hearing impaired, respectively. It is noteworthy that, although clustering is an old technique, it proves to give better results than HMM.

Journal ArticleDOI
P.J. Kumar1, P. Ilango1
TL;DR: An efficient multimedia QoS-aware replication algorithm is proposed which allocates replicas considering the QoS requirements of multimedia data, such as delay, jitter, bandwidth, loss rate and error rate; a significant reduction in the number of QoS-violated replicas is obtained.
Abstract: Multimedia data have stringent Quality of Service (QoS) requirements that must be handled effectively. Replication is performed to enhance data availability and fault tolerance, and the selection of nodes for replication is important for placing multimedia content. Several replication algorithms have been proposed in the literature for different computing architectures, such as client-server distributed systems, peer-to-peer systems, wireless sensor networks, mobile ad hoc networks, vehicular ad hoc networks and the cloud. The underlying differences in architecture demand a custom replication approach for each computing/network environment. Since multimedia data require stringent QoS measures, we propose an algorithm to replicate multimedia data with QoS awareness in the cloud environment (MQRC). We study the architecture of the Hadoop Distributed File System (HDFS) and propose an efficient multimedia QoS-aware replication algorithm which allocates replicas considering the QoS requirements of multimedia data, such as delay, jitter, bandwidth, loss rate and error rate. We have performed a simulation using the proposed approach; the results obtained show a significant reduction in the number of QoS-violated replicas compared to the existing approach used by Hadoop, such as random replication.
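A toy sketch of QoS-aware placement: keep only nodes whose measured metrics satisfy the request, then prefer the lowest-delay nodes. The node fields, thresholds and ranking rule here are assumptions for illustration, not the MQRC algorithm itself:

```python
def choose_replicas(nodes, req, k):
    """Pick up to k replica nodes meeting the (assumed) QoS request."""
    eligible = [
        n for n in nodes
        if n["delay"] <= req["max_delay"]
        and n["jitter"] <= req["max_jitter"]
        and n["bandwidth"] >= req["min_bandwidth"]
        and n["loss_rate"] <= req["max_loss_rate"]
    ]
    # Rank the survivors by delay (lower is better for streaming media).
    eligible.sort(key=lambda n: n["delay"])
    return [n["name"] for n in eligible[:k]]
```

Filtering before ranking is what reduces QoS-violated replicas relative to random placement: a node that fails any threshold is never chosen, no matter how attractive its other metrics are.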

Journal ArticleDOI
TL;DR: In this article, the authors propose an algorithm for R-peak detection using the discrete wavelet transform, in which the detail coefficients are selected based on entropy; it is validated with the MIT-BIH database and its performance is compared with similar work.
Abstract: Investigation of a patient's electrocardiogram helps diagnose various heart-related diseases. With correct R-peak detection in the ECG wave, classification of arrhythmia can be carried out accurately. However, accurate R-peak detection is a big challenge, especially in wireless patient monitoring systems, where it is desirable to capture the ECG at a lower sampling rate in order to reduce power consumption. This paper proposes an algorithm for R-peak detection using the discrete wavelet transform, in which the detail coefficients are selected based on entropy. The proposed algorithm is validated with the MIT-BIH database and its performance is compared with similar work: positive predictivity and sensitivity are 99.85% and 99.73%, respectively. Applying the algorithm to wireless ECG, acquired at adjustable sampling rates from different subjects using a prototype Bluetooth ECG module, shows its efficacy in detecting the R peak of the ECG with high accuracy.
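The entropy-based selection of detail coefficients can be sketched with a single Haar DWT level and a Shannon entropy over normalised coefficient energies. The paper's actual wavelet, decomposition depth and entropy definition are not given in this abstract, so the choices below are illustrative:

```python
import math

def haar_level(signal):
    """One level of an orthonormal Haar DWT (even-length input assumed)."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def shannon_entropy(coeffs):
    """Shannon entropy of the normalised coefficient energies."""
    energies = [c * c for c in coeffs]
    total = sum(energies) or 1.0   # guard against an all-zero sub-band
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```

Sub-bands whose entropy indicates energy concentrated in a few coefficients (as around QRS complexes) would then be retained for peak picking.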

Journal ArticleDOI
TL;DR: A joint analysis of EEG interictal events and automatic spike extraction for epileptic source localisation is investigated, showing good conformance in localising the epileptogenic regions.
Abstract: Interictal spike extraction and epileptic source localisation is an important neuroimaging problem. Manual analysis of EEG interictal spikes is time-consuming and imposes an overwhelming workload on the physician. To overcome this problem, we investigate a joint analysis of EEG interictal events and automatic spike extraction for epileptic source localisation. In this work, Multivariate Empirical Mode Decomposition (MEMD) is applied to each event separately and the Intrinsic Mode Functions (IMFs) are extracted. The joint analysis combines IMFs of different interictal events into a common empirical mode domain. A power-thresholding step recovers the interictal spikes in the joint signal by filtering out the background EEG activity and the noise. Experimental analysis of four epileptic patients' data sets shows that the filtered EEG signal gives a significant improvement in the Signal-to-Noise Ratio (SNR) (p < 0.0047, t-test; min: 35.3283 dB, max: 52.2506 dB). The results of this paper show good conformance (concordance rate min: 40%, max: 80%) in localising the epileptogenic regions.

Journal ArticleDOI
TL;DR: The purpose is to develop a method based on the computation of Haralick's textural parameters in order to characterise and analyse blood cells by statistical methods, allowing anomalies to be identified by distinguishing healthy from abnormal cells, which can be considered potential cancerous cells.
Abstract: The extraction of the parameters defined by Haralick in the textural analysis of bio-images often precedes a decision step intended to distinguish normal from defective tissues, healthy from pathological biological cells, and the types of defects. In this paper, we focus on the textural analysis of medical images in order to detect abnormal blood cells using the grey-level co-occurrence matrix (GLCM). The textural analysis is performed by quantifying correlations and relationships between the grey levels of pixels as a function of distance. Our purpose is to develop a method based on the computation of Haralick's textural parameters in order to characterise and analyse blood cells by statistical methods. The main goal is to provide a textural analysis that helps haematologists make a precise diagnosis, identifying anomalies by distinguishing healthy from abnormal cells, which can be considered potential cancerous cells. The results described in Figures 3-7 show a set of significant experimental results useful for differentiating between healthy and abnormal cells, where the strongest discriminating parameter is the energy.
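To make the "energy" feature concrete, here is a minimal GLCM energy computation for horizontal neighbours at distance 1 (one direction only; the full Haralick set and symmetric GLCM variants are omitted):

```python
def glcm_energy(image, levels):
    """Energy (angular second moment) of the horizontal distance-1 GLCM.
    image: list of rows of grey-level indices in range(levels)."""
    counts = [[0] * levels for _ in range(levels)]
    pairs = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            pairs += 1
    # Energy = sum of squared co-occurrence probabilities; it is maximal (1.0)
    # for perfectly uniform texture and drops as grey-level pairs diversify.
    return sum((c / pairs) ** 2 for r in counts for c in r)
```

A constant image yields energy 1.0, while increasingly heterogeneous textures push the value towards 0, which is why energy separates uniform healthy regions from irregular abnormal ones in this kind of analysis.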

Journal ArticleDOI
TL;DR: A quantitative model is developed to predict the lung nodules that have the potential to grow in future; the rate of nodule growth (RNG) was computed on real nodules in terms of 3D volume change.
Abstract: A quantitative model is developed in this work to predict the lung nodules that have the potential to grow in future. An Auto Cluster Seed K-means Morphological segmentation (ACSKMM) algorithm was implemented to segment all the possible lung nodule candidates. An average of around 600 nodule candidates of size >3 mm was segmented from each CT scan series of 34 patients. Finally, 34 real nodules remained after eliminating vessels, non-nodules and calcifications using centroid-shift and 3D shape variance analysis. The rate of nodule growth (RNG) was computed on the real nodules in terms of 3D volume change. Of the 34 real nodules, 3 had an RNG value >1, confirming their malignant nature. The nodule growth predictive measure was modelled through compactness, mass deficit, mass excess and the isotropic factor.

Journal ArticleDOI
TL;DR: It is found that there is an effect on the frontal and parietal lobes during the cognitive tasks, and the features of the topological scalp maps support the results of the MFDFA.
Abstract: This work aims to study the development of stress in different sections of the brain under various mental stress conditions, while the brain performs a variety of cognitive tasks. The cognitive tasks are designed to cover a wide range of brain activities, including puzzle solving, decision making, mathematical calculation, memorisation and recollection, and correlation and matching. The purpose is to see whether different levels of stress can develop under varied conditions. In this article we use Multifractal Detrended Fluctuation Analysis (MFDFA) to observe the changes in the different lobes of the brain. An effect on the frontal and parietal lobes is found during the cognitive tasks. To confirm the results of the MFDFA analysis, we also obtain topological scalp maps, whose features are found to support the MFDFA results.

Journal ArticleDOI
TL;DR: This paper proposes a new technique for detecting atrial fibrillation using an SVM with optimal free parameters and all the proposed electrocardiographic features, achieving performance comparable to existing methods.
Abstract: This paper proposes a new technique for detecting atrial fibrillation (AF). The method employs electrocardiographic features and a support vector machine (SVM). The features include descriptive statistics of the electrocardiographic RR interval, the time between two consecutive R-peaks of the electrocardiogram. AF detection using an SVM with different electrocardiographic features and different SVM free parameters is explored. Employing the SVM with the optimal free parameters and all the proposed electrocardiographic features, we obtain an AF detection technique with performance comparable to existing methods. The best performance obtained is a sensitivity of 98.47% and a specificity of 97.84%.
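The abstract says the features are descriptive statistics of the RR-interval series but does not list them. A plausible set of standard heart-rate-variability descriptors is sketched below; the choice of mean RR, SDNN, RMSSD, pNN50 and the coefficient of variation is an assumption, not the paper's exact feature set. The resulting vector would then be fed to an SVM classifier.

```python
import numpy as np

def rr_features(rr_ms):
    """Descriptive statistics of an RR-interval series (in ms) as an
    SVM feature vector. AF episodes typically show elevated RR
    variability, which these descriptors capture."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return np.array([
        rr.mean(),                       # mean RR interval
        rr.std(ddof=1),                  # SDNN
        np.sqrt(np.mean(diff ** 2)),     # RMSSD
        np.mean(np.abs(diff) > 50.0),    # pNN50
        rr.std(ddof=1) / rr.mean(),      # coefficient of variation
    ])
```

A perfectly regular rhythm yields zero for every variability descriptor, while an irregular AF-like series produces large RMSSD and pNN50 values, which is what lets a linear or RBF SVM separate the two classes.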

Journal ArticleDOI
TL;DR: A multimodal biometric framework based on fingerprint, iris and face is presented; the proposed hybrid PSO-GA is more flexible and robust, and the hybrid strategy avoids premature convergence, providing better exploration of the search space.
Abstract: Biometrics recognises a person through physiological or behavioural attributes such as fingerprint, face, iris, retina or DNA, and provides different techniques for capturing an individual's identity. A multimodal biometric system combines two or more traits, which cannot easily be copied, forgotten or stolen. Feature extraction derives discriminant features from samples, represented in a feature vector. The resulting feature vector is high-dimensional, increasing computational complexity and degrading classifier performance. To offset this, feature selection obtains an optimal feature subset. A multimodal biometric framework based on fingerprint, iris and face is presented in this paper. Features are extracted using the Gabor filter and Local Tetra Pattern, and feature selection is performed by the Genetic Algorithm (GA), Particle Swarm Optimisation (PSO) and the proposed Hybrid PSO (HPSO). The proposed hybrid PSO-GA is more flexible and robust, and the hybrid strategy avoids premature convergence, providing better exploration of the search space. Experimental results demonstrate the effectiveness of the proposed algorithm.
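The abstract describes the hybrid PSO-GA only qualitatively. One common way to realise such a hybrid for feature selection is a binary PSO whose swarm additionally undergoes GA-style one-point crossover and bit-flip mutation each iteration, keeping the swarm diverse so premature convergence is less likely. The sketch below follows that pattern; the update constants, the sigmoid binarisation and the toy fitness in the test are illustrative assumptions, not the paper's HPSO.

```python
import numpy as np

def hybrid_pso_ga(fitness, n_features, n_particles=20, iters=30,
                  w=0.7, c1=1.5, c2=1.5, p_mut=0.02, seed=42):
    """Binary PSO with a GA crossover/mutation step injected each
    iteration (hybrid HPSO sketch for feature selection)."""
    rng = np.random.default_rng(seed)
    X = (rng.random((n_particles, n_features)) < 0.5).astype(int)  # bit masks
    V = np.zeros((n_particles, n_features))
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    g = pbest[np.argmax(pbest_f)].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        # binary PSO: set each bit with probability sigmoid(velocity)
        X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        # GA step: one-point crossover of a random pair, then mutation
        i, j = rng.integers(0, n_particles, 2)
        cut = int(rng.integers(1, n_features))
        X[i, cut:], X[j, cut:] = X[j, cut:].copy(), X[i, cut:].copy()
        X ^= (rng.random(X.shape) < p_mut)
        f = np.array([fitness(x) for x in X])
        better = f > pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[np.argmax(pbest_f)].copy()
    return g, float(pbest_f.max())
```

The mutation step reintroduces bits the whole swarm may have discarded, which is the mechanism behind the claimed escape from premature convergence.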

Journal ArticleDOI
TL;DR: Medical care is combined with general cloud and mobile technologies to build a cloud platform for sharing medical information, providing fast and efficient service to medical institutions, patients, family members and other related personnel.
Abstract: In this paper, a novel intelligent healthcare system and a sensor-network deployment strategy based on multimodal fused information are analysed. The system can be analysed at two levels: the systematic level and the specific level. From the systematic perspective, the focus is the health monitoring system based on the cloud computing platform; the specific level is the embedded technology applied in the medical equipment. This study combines medical care with general cloud and mobile technologies to build a cloud platform for sharing medical information, providing fast and efficient service to medical institutions, patients, family members and other related personnel. The service is reflected in three aspects. (1) The medical end: the doctor can view the patient's medical records and other physiological information from the medical terminal cloud. (2) Pulse-signal detection and processing. (3) The cloud platform management system: the cloud management platform conducts the core comprehensive business management through the remote medical management system. The experimental result proves the feasibility of the system.

Journal ArticleDOI
TL;DR: A new, rapid and efficient region-based segmentation method for liver tumour segmentation, initialised using a spatial FCM clustering technique, is proposed; the obtained results prove its effectiveness in segmenting low-contrast, inhomogeneous tumours.
Abstract: Accurate and fast image segmentation algorithms are of great importance in medical image processing. In this paper, a new, rapid and efficient region-based segmentation method for liver tumour segmentation, initialised using a spatial FCM clustering technique, is proposed. In Legendre level sets, the illumination of the region of interest is represented in a lower-dimensional subspace, using a set of predefined basis functions such as Legendre polynomials. This representation enables robust segmentation of heterogeneous objects even in the presence of noise. The proposed algorithm has been compared with other existing algorithms, and its performance evaluation is carried out on CTA abdomen images of various patients. The obtained results prove its effectiveness in segmenting low-contrast, inhomogeneous tumours.
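The abstract does not detail the spatial FCM initialisation. Standard fuzzy c-means alternates membership and centre updates until the partition stabilises; the spatial variant additionally weights each pixel's membership by those of its neighbours. A minimal plain-FCM sketch on a 1-D intensity array, with the spatial neighbourhood term omitted for brevity (the parameter choices are illustrative, not the paper's):

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on a 1-D intensity array. Returns the
    cluster centres and the (c x N) membership matrix, whose columns
    sum to 1. Spatial FCM would smooth u over a pixel neighbourhood
    before the centre update."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                           # memberships sum to 1
    for _ in range(iters):
        um = u ** m                              # fuzzified memberships
        centers = (um @ x) / um.sum(axis=1)      # weighted centre update
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))              # inverse-distance update
        u /= u.sum(axis=0)
    return centers, u
```

Thresholding the tumour-cluster membership map gives the initial contour from which the Legendre level-set evolution can start.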

Journal ArticleDOI
TL;DR: The developmental trend of microfluidic chip and biosensor technologies and their integration with machine learning models and wearable devices is analysed, along with the systematic architecture that will be the basis of further research.
Abstract: In this paper, we analyse the developmental trend of microfluidic chip and biosensor technologies and their integration with machine learning models and wearable devices. Bioinformatics applies computer technology and information-theory methods to the acquisition, processing, storage, transmission, retrieval, analysis and interpretation of protein and nucleic-acid sequences and many other kinds of biological information. Our research discusses combinations with neural networks, support vector machines, genetic algorithms and decision trees to provide a systematic analysis of these issues. We then analyse the developmental trend of microfluidic chips and wearable devices, which serves as the nucleus of this research. We establish a database maintained by a group of scientists who are specialists in the field of bioinformatics, so that those who draw biological knowledge from these data are supported by domain experts. In the experimental part, we analyse the systematic architecture of the mode, which will be the basis of further research. The experimental result reflects the systematic framework of the proposed general structure of the bioinformatics-related hardware topology.

Journal ArticleDOI
TL;DR: This paper proposes a novel virtual reality and HCI-based intelligent medical equipment system that can sense the surrounding environment with a 3D reconstruction technique and store the resulting data in the proposed database.
Abstract: With the development of science and technology, more and more medical instruments require high-speed data acquisition, analysis and processing. Basic testing equipment for physiological parameters such as the electrocardiogram and electroencephalogram, various types of monitoring instruments, and ultrasonic, X-ray and magnetic resonance imaging equipment are widely used. Starting from the challenge that medical devices are scarce, valuable and not easy to operate, we propose a novel virtual reality and HCI-based intelligent medical equipment system. Our method can sense the surrounding environment with a 3D reconstruction technique and store the resulting data in the proposed database. We adopt VRML to implement the VR system for its advantages: it requires only a standard plug-in, is easy to learn and use, and offers much faster network transmission. In the experimental part, we implement the system and test its overall performance. The results show that, compared with other popular methodologies, our approach obtains better performance both theoretically and at the application layer.