
Showing papers presented at "Bioinformatics and Bioengineering" in 2013


Proceedings ArticleDOI
01 Nov 2013
TL;DR: A human activity dataset, based on inertial-sensor data from smartphones, is introduced that will be helpful in testing new methods and in performing objective comparisons between different algorithms for fall detection and activity recognition.
Abstract: Fall detection receives significant attention in the fields of preventive medicine, wellness provision and assisted living, especially for the elderly. As a result, numerous commercial fall detection systems exist to date, and most of them use accelerometers and/or gyroscopes attached to a person's body as primary signal sources. These systems use either discrete sensors as part of a product designed specifically for this task, or sensors embedded in mobile devices such as smartphones. The latter approach has the advantage of offering well-tested and widely available communication services, e.g. for placing an emergency call when someone has fallen. Automatic fall detection will evidently continue to evolve in the coming years. The aim of this work is to introduce a human activity dataset that will be helpful in testing new methods, as well as in performing objective comparisons between different algorithms for fall detection and activity recognition based on inertial-sensor data from smartphones. The dataset contains signals recorded from the accelerometer and gyroscope sensors of a latest-generation smartphone for four different falls and nine different activities of daily living. Using this dataset, the results of an initial evaluation of three fall detection algorithms are finally presented.

84 citations
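
The entry above evaluates three fall detection algorithms on smartphone inertial data. As a hedged illustration of the kind of baseline such datasets are used to benchmark (not the paper's own algorithms), the sketch below flags a fall when a free-fall dip in acceleration magnitude is followed shortly by an impact spike:

```python
import numpy as np

def detect_falls(acc_g, fs=100.0, free_fall_g=0.6, impact_g=2.5, window_s=0.5):
    """Flag a fall when a free-fall dip in acceleration magnitude is
    followed within `window_s` seconds by an impact spike.

    acc_g : (N, 3) triaxial accelerations in units of g.
    Returns candidate event times in seconds (one fall may yield several).
    """
    mag = np.linalg.norm(acc_g, axis=1)
    window = int(window_s * fs)
    events = []
    for i in np.where(mag < free_fall_g)[0]:
        seg = mag[i:i + window]
        if seg.size and seg.max() > impact_g:
            events.append(i / fs)
    return events

# Synthetic 10 s recording: 1 g gravity, free fall then impact around t = 5 s.
fs = 100.0
acc = np.zeros((1000, 3))
acc[:, 2] = 1.0
acc[500:530, 2] = 0.1   # free-fall phase
acc[530:540, 2] = 3.0   # impact spike
print(detect_falls(acc, fs)[:3])   # first few detections near t = 5.0 s
```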


Proceedings ArticleDOI
01 Nov 2013
TL;DR: The proposed methods implement the first modules of a carbohydrate counting and insulin advisory system for type 1 diabetic patients, and a modified version of the Huang and Dom evaluation index is proposed to address the particular needs of the food segmentation problem.
Abstract: In this paper, we propose novel methodologies for the automatic segmentation and recognition of multi-food images. The proposed methods implement the first modules of a carbohydrate counting and insulin advisory system for type 1 diabetic patients. Initially, the plate is segmented using pyramidal mean-shift filtering and a region growing algorithm. Then each of the resulting segments is described by both color and texture features and classified by a support vector machine into one of six different major food classes. Finally, a modified version of the Huang and Dom evaluation index is proposed, addressing the particular needs of the food segmentation problem. The experimental results demonstrate the effectiveness of the proposed method, achieving a segmentation accuracy of 88.5% and a recognition rate of 87%.

57 citations
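
The segmentation stage described above relies on pyramidal mean-shift filtering, which is available in OpenCV. A minimal sketch of that stage, with Otsu thresholding and connected components standing in for the paper's region-growing step, and a hypothetical input file name:

```python
import cv2

bgr = cv2.imread("meal.jpg")          # hypothetical input image
if bgr is not None:
    # Pyramidal mean-shift filtering (spatial radius 21 px, colour radius 40):
    # flattens colour variation inside food regions while preserving edges.
    smoothed = cv2.pyrMeanShiftFiltering(bgr, 21, 40)

    # Stand-in for the region-growing step: Otsu threshold + connected
    # components on the smoothed image to isolate candidate regions.
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, labels = cv2.connectedComponents(binary)
    print("candidate regions:", n_labels - 1)
```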


Proceedings ArticleDOI
01 Nov 2013
TL;DR: Experimental results showed that analysis of in-air trajectories can assess subtle motor abnormalities connected with PD, and that combining them with conventional on-surface handwriting allows building a predictive model with PD classification accuracy over 80%.
Abstract: Parkinson's disease (PD) is a neurodegenerative disorder with a very high prevalence rate, occurring mainly among the elderly. One of the most typical symptoms of PD is deterioration of handwriting, which is usually the first manifestation of the disease. In this study, a new modality - in-air trajectory during handwriting - is proposed to efficiently diagnose PD. Experimental results showed that analysis of in-air trajectories is capable of assessing subtle motor abnormalities connected with PD. Moreover, the conjunction of in-air trajectories with conventional on-surface handwriting allows us to build a predictive model with PD classification accuracy over 80%. In total, we compute over 600 handwriting features. We then select smaller subsets of these features using two feature selection algorithms, a Mann-Whitney U-test filter and the Relief algorithm, and map these feature subsets to a binary classification response using support vector machines.

43 citations
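
The feature selection and classification pipeline named above (a Mann-Whitney U-test filter feeding a support vector machine) maps directly onto SciPy and scikit-learn. A sketch on synthetic stand-in data:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 600))    # 60 subjects x 600 handwriting features (synthetic)
y = rng.integers(0, 2, size=60)   # 0 = control, 1 = PD (synthetic labels)

# Mann-Whitney U-test filter: rank features by p-value between the groups.
pvals = np.array([mannwhitneyu(X[y == 0, i], X[y == 1, i]).pvalue
                  for i in range(X.shape[1])])
top = np.argsort(pvals)[:20]      # keep the 20 most discriminative features

# For an unbiased estimate the filter should be re-run inside each fold;
# this flat version only illustrates the pipeline's shape.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy: %.2f" % cross_val_score(clf, X[:, top], y, cv=5).mean())
```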


Proceedings ArticleDOI
01 Nov 2013
TL;DR: This paper presents a methodology to estimate the systolic and diastolic BP levels by only using PPG signals captured with smart phones, which adds to the affordability, usability and portability of the system.
Abstract: As part of preventive healthcare, there is a need to regularly monitor the blood pressure (BP) of cardiac patients and elderly people. Mobile healthcare, i.e. measuring human vitals like heart rate, SpO2 and blood pressure with smartphones using the photoplethysmography (PPG) technique, is becoming widely popular. But for estimating BP, multiple smartphone sensors or additional hardware are required, which makes such systems inconvenient for patients to use on their own. In this paper, we present a methodology to estimate the systolic and diastolic BP levels by only using PPG signals captured with smartphones, which adds to the affordability, usability and portability of the system. Initially, a training model (a linear regression model or an SVM model) for various known levels of BP is created using a set of PPG features. This model is later used to estimate the BP levels from the features of newly captured PPG signals. Experiments are performed on a benchmark hospital dataset and on data captured from smartphones in our lab. Results indicate that additionally including height, weight and age plays a vital role in increasing the accuracy of the BP-level estimation.

40 citations
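
A minimal sketch of the regression stage described above, training a linear model on hypothetical PPG features (plus height, weight and age, which the abstract reports to be helpful) against synthetic systolic targets:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Columns: hypothetical PPG features (pulse width, rise time, amplitude, ...)
# plus subject metadata such as height, weight and age.
X = rng.normal(size=(200, 8))
systolic = 120 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 3, 200)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, systolic, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("MAE: %.1f mmHg" % mean_absolute_error(y_te, model.predict(X_te)))
```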


Proceedings ArticleDOI
01 Nov 2013
TL;DR: The proposed system combines simple and computationally affordable methods for 3D reconstruction, remained stable throughout the experiments, operates in near real time, and places minimum constraints on users.
Abstract: There is great demand for easily accessible, user-friendly dietary self-management applications. Yet accurate, fully automatic estimation of nutritional intake using computer vision methods remains an open research problem. One key element of this problem is volume estimation, which can be computed from 3D models obtained using multi-view geometry. This paper presents a computational system for volume estimation based on the processing of two meal images. A 3D model of the served meal is reconstructed using the acquired images and the volume is computed from the shape. The algorithm was tested on food models (dummy foods) with known volume and on real served food. Volume accuracy was in the order of 90%, while the total execution time was below 15 seconds per image pair. The proposed system combines simple and computationally affordable methods for 3D reconstruction, remained stable throughout the experiments, operates in near real time, and places minimum constraints on users.

38 citations
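
Once the 3D shape of the meal is reconstructed, the volume computation itself reduces to integrating reconstructed height over the plate plane. A toy numeric sketch of that final step, assuming a height map and pixel scale are already available from the stereo reconstruction:

```python
import numpy as np

def volume_from_height_map(height_mm, pixel_area_mm2):
    """Approximate volume by summing height columns over the plate plane.

    height_mm : 2D array of reconstructed food height above the plate (mm),
                zero outside the food region.
    pixel_area_mm2 : real-world area covered by one pixel (mm^2).
    """
    return height_mm.sum() * pixel_area_mm2   # mm^3

# Toy example: a 100x100 px food patch, each pixel 0.5 mm x 0.5 mm,
# uniformly 20 mm high -> expected 50,000 mm^3 = 50 ml.
h = np.zeros((200, 200))
h[50:150, 50:150] = 20.0
print(volume_from_height_map(h, 0.25) / 1000.0, "ml")
```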


Proceedings ArticleDOI
01 Nov 2013
TL;DR: Through a series of case studies, the paper walks through the mathematics showing that 3D printing is feasible for any real-life object, to banish any disbelief.
Abstract: 3D printing is the process of being able to print any object layer by layer. But if we question this proposition, can we find any three-dimensional objects that cannot be printed layer by layer? To banish any disbelief, we walk together through the mathematics that proves 3D printing is feasible for any real-life object. 3D printers create three-dimensional objects by building them up layer by layer. The current generation of 3D printers typically requires input from a CAD program in the form of an STL file, which defines a shape by a list of triangle vertices. The vast majority of 3D printers use two techniques: FDM (Fused Deposition Modelling) and PBP (Powder Binder Printing). One advanced form of 3D printing that has been an area of increasing scientific interest in recent years is bioprinting. Cell printers utilizing techniques similar to FDM have been developed for bioprinting. These printers give us the ability to place cells in positions that mimic their respective positions in organs. Finally, through a series of case studies, we show that 3D printers in medicine have made a massive breakthrough lately.

37 citations
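
The STL representation mentioned above makes the layer-by-layer argument concrete: slicing a triangle mesh with a horizontal plane yields the closed polygons a printer extrudes. A minimal sketch of the triangle-plane intersection (degenerate cases such as vertices lying exactly on the plane are ignored):

```python
import numpy as np

def slice_triangle(tri, z):
    """Intersect one triangle (3x3 array of xyz vertices) with the plane
    z = const; returns the intersection segment as a list of 2D points."""
    pts = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        if (p[2] - z) * (q[2] - z) < 0:          # edge crosses the plane
            t = (z - p[2]) / (q[2] - p[2])
            pts.append(p[:2] + t * (q[:2] - p[:2]))
    return pts

# A unit tetrahedron sliced at z = 0.5: the segments from all faces join
# into the closed polygon that an FDM printer would extrude at that layer.
tet = [np.array(t, float) for t in (
    [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
    [[0, 0, 0], [1, 0, 0], [0, 0, 1]],
    [[0, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]])]
for tri in tet:
    print(slice_triangle(tri, 0.5))
```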


Proceedings ArticleDOI
01 Nov 2013
TL;DR: An automated method to detect epileptic seizures using an unsupervised approach based on k-means clustering and Ensemble Empirical Mode Decomposition (EEMD), with evaluation results indicating an overall accuracy of 98%, comparable with other related studies.
Abstract: The detection of epileptic seizures is of primary interest for the diagnosis of patients with epilepsy. An epileptic seizure is a phenomenon of rhythmic discharge in either a focal area or the entire brain, and this individual behavior usually lasts from seconds to minutes. The unpredictable and rare occurrence of epileptic seizures makes their automated detection highly desirable, especially in long-term EEG recordings. The present work proposes an automated method to detect epileptic seizures using an unsupervised method based on k-means clustering and Ensemble Empirical Mode Decomposition (EEMD). EEG segments are obtained from a publicly available dataset and are classified into two categories, "seizure" and "non-seizure". Using EEMD, the Marginal Spectrum (MS) of each EEG segment is calculated. The MS is then divided into equal intervals and the averages of these intervals are used as input features for k-means clustering. The evaluation results are very promising, indicating an overall accuracy of 98%, which is comparable with other related studies. An advantage of this method is that no training data are needed, due to the unsupervised nature of k-means clustering.

34 citations
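
A sketch of the described pipeline (EEMD, Hilbert marginal spectrum averaged into equal intervals, k-means) using the PyEMD package as an assumed dependency; the segments are random stand-ins, and the sampling rate shown is that of the widely used Bonn EEG dataset:

```python
import numpy as np
from PyEMD import EEMD              # pip install EMD-signal (assumed dependency)
from scipy.signal import hilbert
from sklearn.cluster import KMeans

def marginal_spectrum(seg, fs, n_bins=16):
    """Hilbert marginal spectrum of one EEG segment, averaged into
    n_bins equal frequency intervals (the clustering features)."""
    imfs = EEMD(trials=20).eemd(seg)
    analytic = hilbert(imfs, axis=1)
    amp = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic), axis=1)
    inst_freq = np.abs(np.diff(phase, axis=1)) * fs / (2 * np.pi)
    edges = np.linspace(0, fs / 2, n_bins + 1)
    ms = np.zeros(n_bins)
    for a, f in zip(amp[:, 1:].ravel(), inst_freq.ravel()):
        b = np.searchsorted(edges, f) - 1
        if 0 <= b < n_bins:
            ms[b] += a                 # accumulate amplitude per frequency bin
    return ms

fs = 173.61                                                # Bonn EEG dataset rate
segs = np.random.default_rng(0).normal(size=(10, 512))     # stand-in EEG segments
feats = np.array([marginal_spectrum(s, fs) for s in segs])
print(KMeans(n_clusters=2, n_init=10).fit_predict(feats))  # seizure vs non-seizure
```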


Proceedings ArticleDOI
01 Nov 2013
TL;DR: Nuclear segmentation was significantly improved on histological images (H&E stained breast and intestinal tissue images, Feulgen stained images of prostate tissues) and seeded watershed segmentation is reported to be a simple and computationally efficient segmentation technique.
Abstract: Computer aided diagnosis in cancer pathology (computational pathology) using histological images of biopsies is an emerging field. Segmentation of cell nuclei can be an important step in such image processing pipelines. Although seeded watershed segmentation is a simple and computationally efficient segmentation technique, it is prone to errors like over-segmentation when applied to histological images. We report specific enhancements to this technique to improve segmentation of cell nuclei in histological images. Foreground seeds were generated by the fast radial symmetry transform (FRST). Otsu thresholding was used on the enhanced image to estimate a tentative foreground map. Background markers were computed from the tentative foreground map. False detections in the segmented output were removed by a logical AND with the tentative foreground map. Using these enhancements, nuclear segmentation was significantly improved on histological images (H&E stained breast and intestinal tissue images, Feulgen stained images of prostate tissues).

33 citations
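
A hedged sketch of seeded watershed segmentation in scikit-image. Since no FRST implementation ships with the library, distance-transform peaks stand in for the paper's FRST seeds; the Otsu-based tentative foreground map and masking follow the description above:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def segment_nuclei(gray):
    """Seeded watershed for nuclei. The paper seeds with the fast radial
    symmetry transform (FRST); here distance-transform peaks stand in."""
    # Tentative foreground map via Otsu (nuclei assumed darker than background).
    fg = gray < threshold_otsu(gray)
    dist = ndi.distance_transform_edt(fg)
    peaks = peak_local_max(dist, min_distance=5, labels=fg)
    seeds = np.zeros_like(gray, dtype=int)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Masking with the foreground map suppresses false detections outside it.
    return watershed(-dist, seeds, mask=fg)

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (128, 128)).astype(np.uint8)   # stand-in image
print("nuclei found:", segment_nuclei(img).max())
```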


Proceedings ArticleDOI
01 Nov 2013
TL;DR: Local Binary Patterns (LBP) are used as features and the results are compared with those of Principal Component Analysis (PCA); results show that LBP yields a recognition rate of 93% while PCA gives only 85%.
Abstract: The ear, as a biometric, has been given less attention compared to other biometrics such as fingerprint, face and iris. Since it is a relatively new biometric, no commercial applications involving ear recognition are available. Intensive research in this field is thus required to determine the feasibility of this biometric. In the medical field, especially in cases of accidents and death where patients' faces cannot be recognized, the use of the ear can be helpful. In this work, yet another method of recognizing people through their ears is presented. Local Binary Patterns (LBP) are used as features and the results are compared with those of Principal Component Analysis (PCA). LBP has high discriminative power, tolerance against global illumination changes and a low computational load. Experiments were done on the Indian Institute of Technology (IIT) Delhi ear image database, and results show that LBP yields a recognition rate of 93% while PCA gives only 85%.

29 citations
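
The LBP-based matching described above is straightforward with scikit-image. A sketch that builds uniform-LBP histograms and compares them with a chi-square distance (the matching rule is an assumption; the abstract does not specify it):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram of an ear image: a compact, illumination-
    tolerant descriptor."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                    # uniform patterns + one "other" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(0)
gallery = rng.integers(0, 255, (180, 50)).astype(np.uint8)   # enrolled ear (stand-in)
probe = rng.integers(0, 255, (180, 50)).astype(np.uint8)     # query ear (stand-in)
print("distance:", chi_square(lbp_histogram(gallery), lbp_histogram(probe)))
```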


Proceedings ArticleDOI
01 Nov 2013
TL;DR: A novel approach based solely on visual information extracted from WCE videos, based on a feature tracking method for visual odometry, which enables the estimation of both the rotation and the displacement of a capsule endoscope from reference anatomical landmarks is proposed.
Abstract: Computational analysis of wireless capsule endoscopy (WCE) videos has already proved its potential in the discovery or characterization of lesions and in the reduction of the time required by endoscopists to perform the examination. An open problem that has only partially been addressed is the localization of the capsule endoscope in the gastrointestinal (GI) tract. Previous works have been based mainly on external, wearable sensors. In this paper we propose a novel approach based solely on visual information extracted from WCE videos. This approach is based on a feature tracking method for visual odometry, which enables the estimation of both the rotation and the displacement of a capsule endoscope from reference anatomical landmarks. Its implementation is based on a novel, open-access Java Video Analysis (JVA) framework, which enables quick and standardized development of intelligent video analysis applications. The experimental evaluation presented in this paper indicates the feasibility of the proposed methodological approach and the efficiency of its implementation.

27 citations
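
The paper's implementation is a Java framework (JVA); as a language-neutral illustration, the generic feature-tracking step of visual odometry looks roughly like this in Python/OpenCV, with hypothetical frame file names:

```python
import cv2

def track_motion(prev_gray, curr_gray):
    """Track corners between two consecutive WCE frames with pyramidal
    Lucas-Kanade flow and estimate the inter-frame homography, from which
    rotation/displacement cues can be derived."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1]
    good_next = nxt[status.ravel() == 1]
    if len(good_prev) < 4:
        return None
    H, _ = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
    return H

# Hypothetical consecutive frames from a WCE video.
f1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
f2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
if f1 is not None and f2 is not None:
    print(track_motion(f1, f2))
```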


Proceedings ArticleDOI
01 Nov 2013
TL;DR: The aim is to establish a provider-to-consumer cloud setting wherein no sensitive data are exchanged but instead reside at the back-end site; a prototype architecture is proposed that covers the cloud management layer and the operational features that manage data and Internet of Things devices.
Abstract: The emergence of cloud computing and Generic Enablers (GEs) as the building blocks of Future Internet (FI) applications highlights new requirements in the area of cloud services. However, due to the current restrictions of various certification standards related to the privacy and safety of health-related data, the utilization of cloud computing in this area has in many instances been unlawful. Here, we focus on demonstrating a "software to data" provisioning solution and propose a mapping of FI application use case requirements to software specifications (using GEs). The aim is to establish a provider-to-consumer cloud setting wherein no sensitive data are exchanged; instead, the data reside at the back-end site. We propose a prototype architecture that covers the cloud management layer and the operational features that manage data and Internet of Things devices. To show a real-life scenario, we present the use case of diabetes care and an FI application that includes various GEs.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: This paper proposes a further enhancement of the Probabilistic Latent Semantic Analysis method, which aims at weighting the available associations between genes and functional terms before using them as input to the predictive system.
Abstract: Genomic annotations with functional controlled terms, such as those of the Gene Ontology (GO), are paramount in modern biology. Yet they are known to be incomplete, since current biological knowledge is far from definitive. In this scenario, computational methods able to support and speed up the curation of these annotations can be very useful. In a previous work, we discussed the benefits of using the Probabilistic Latent Semantic Analysis algorithm to predict novel GO annotations, compared with some Singular Value Decomposition (SVD) based approaches. In this paper, we propose a further enhancement of that method, which weights the available associations between genes and functional terms before using them as input to the predictive system. The tests that we performed on the annotations of human genes to GO functional terms showed the efficacy of our approach.
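
A toy sketch of the general idea: weight the gene-term association matrix before low-rank factorization, then treat large reconstructed scores at unannotated entries as candidate annotations. The IDF-style weighting here is an assumed illustrative scheme, not the paper's exact one:

```python
import numpy as np

# Toy gene-by-GO-term annotation matrix A (1 = curated annotation).
rng = np.random.default_rng(0)
A = (rng.random((100, 40)) < 0.1).astype(float)

# Weighting step: down-weight very common terms, akin to inverse
# document frequency (an assumed scheme for illustration only).
idf = np.log((A.shape[0] + 1) / (A.sum(axis=0) + 1))
W = A * idf

# Low-rank reconstruction: large scores at zero entries of A are
# candidate novel annotations.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 10
W_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
scores = np.where(A == 0, W_hat, -np.inf)       # only unannotated pairs
gene, term = np.unravel_index(np.argmax(scores), scores.shape)
print(f"top predicted annotation: gene {gene} -> term {term}")
```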

Proceedings ArticleDOI
01 Nov 2013
TL;DR: The obtained results demonstrate the ability of the proposed model to capture the metabolic behavior of a patient with T1DM and to handle intra- and inter-patient variability.
Abstract: The present paper aims at the design, development and evaluation of a personalized glucose-insulin metabolism model for patients with Type 1 Diabetes Mellitus (T1DM). The personalized model is based on the combined use of Compartmental Models (CMs) and a Self-Organizing Map (SOM). The model receives information related to previous glucose levels, subcutaneous insulin infusion rates and the time and amount of carbohydrates ingested. Previous glucose measurements, along with the outputs of the CMs which simulate subcutaneous insulin kinetics and glucose absorption from the gut into the blood, respectively, are fed into the SOM, which simulates glucose kinetics in order to provide the future glucose profile. The personalized model is evaluated using data from the medical records of 12 patients with T1DM who were on insulin pumps and continuous glucose monitoring systems (CGMS). The obtained results demonstrate the ability of the proposed model to capture the metabolic behavior of a patient with T1DM and to handle intra- and inter-patient variability.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A visual stimulator using readily available RGB LEDs with clear and frosted glass is proposed, with the latter being tested for performance and qualitative user comfort using electroencephalogram (EEG) data from four subjects.
Abstract: Among the many paradigms used in brain-computer interfaces (BCI), the steady state visual evoked potential (SSVEP) offers the quickest response; however, it is disadvantageous from the point of view of visual fatigue, which prevents subjects from prolonged use of visual stimuli, especially when LEDs are used. In this paper, we propose a visual stimulator using readily available RGB LEDs with clear and frosted glass, with the latter being tested for performance and qualitative user comfort using electroencephalogram (EEG) data from four subjects. Furthermore, we also compare frosted and clear stimuli for three colours (red, green and blue) at frequencies of 7, 8, 9 and 10 Hz. The results, using band-pass filtering and the Fast Fourier Transform, showed that the 7 Hz green clear LED stimulus gave the highest response in general, although all the subjects indicated that they were more comfortable with the frosted LED stimuli.
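
The analysis pipeline named above (band-pass filtering plus FFT) is compact enough to sketch; the function below reads out the spectral magnitude at a given stimulation frequency from synthetic EEG containing a 7 Hz response:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ssvep_power(eeg, fs, stim_hz, band=(5, 30)):
    """Band-pass the EEG and read the FFT magnitude at the stimulation
    frequency: the basic SSVEP response measure."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - stim_hz))]

fs = 256
t = np.arange(0, 4, 1 / fs)
# Synthetic EEG: a 7 Hz SSVEP component buried in noise.
eeg = 2 * np.sin(2 * np.pi * 7 * t) + np.random.default_rng(0).normal(0, 1, t.size)
for hz in (7, 8, 9, 10):
    print(f"{hz} Hz response: {ssvep_power(eeg, fs, hz):.1f}")
```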

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A new multimodal approach based on 2D-3D feature extraction improves WCE capabilities to identify and classify polyps, demonstrating a sensitivity of 91%, a specificity of 95%, a false detection rate of 4.8%, and a 3D classification rate of approximately 95%.
Abstract: Wireless capsule endoscopy (WCE) is commonly used for noninvasive gastrointestinal tract evaluation, including the identification of polyps. In this paper, a new multimodal embeddable method for polyp detection and classification in wireless capsule endoscopic images was developed and tested. The multimodal wireless capsule used both 2D and 3D data to identify possible polyps and to deliver malignancy information about the polyps based on 3D geometric features. Possible polyps within the image (2D) were extracted using simple geometric shape features and, in a second step, the candidate regions of interest (ROI) were evaluated with a boosting-based method using textural features. Once the 2D identification of polyps has been performed, the two-class ("malignant" or "benign") classification of the polyps is achieved using the 3D parameters computed from the preselected ROI using an active stereo vision system. At this stage, a Support Vector Machine (SVM) classifier is used to perform the final classification and to make a pre-diagnosis possible. The proposed multimodal approach based on 2D-3D feature extraction improves WCE capabilities to identify and classify polyps: the boosting-based polyp classification demonstrated a sensitivity of 91%, a specificity of 95% and a false detection rate of 4.8% on a database composed of 300 positive examples and 1200 negative ones. Considering the 3D performance, a large-scale demonstrator was evaluated and tested in in vitro experiments on an ad hoc polyp database. The 3D approach achieved a correct classification rate (malignant or benign) of approximately 95%.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: The CAD (computer aided diagnosis) system for the detection of normal and abnormal patterns in the breast consists of four major steps: image preprocessing, feature extraction, feature selection and the classification process that classifies a mammogram into normal (without tumor) or abnormal (with tumor) patterns.
Abstract: One of the leading causes of cancer death among women is breast cancer. In our work, we aim at proposing a prototype of a medical expert system (based on data mining techniques) that could significantly aid medical experts in detecting breast cancer. This paper presents a CAD (computer aided diagnosis) system for the detection of normal and abnormal patterns in the breast. The proposed system consists of four major steps: image preprocessing, feature extraction, feature selection and the classification process, which classifies a mammogram into normal (without tumor) or abnormal (with tumor) patterns. After removing noise from the mammogram using the Discrete Wavelet Transform (DWT), the region of interest (ROI) is first selected. By identifying the boundary of the breast, it is possible to remove any artifact present outside the breast area, such as patient markings. Then, a total of 20 GLCM features are extracted from the ROI and used as inputs for the classification algorithms. In order to compare the classification results, we used seven different classifiers. The normal breast images and breast images with masses (322 images in total) used as input in this study are taken from the mini-MIAS database.
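
GLCM texture features such as those used above are available in scikit-image. A sketch extracting a few of them from a stand-in ROI (the paper's full set of 20 features is not reproduced):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    """A few standard GLCM texture features from a mammogram ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

rng = np.random.default_rng(0)
roi = rng.integers(0, 255, (64, 64)).astype(np.uint8)   # stand-in ROI
print(glcm_features(roi))
```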

Proceedings ArticleDOI
01 Nov 2013
TL;DR: Results indicate that the unsupervised approach is comparable to, and sometimes better than, a supervised method (e.g. a support vector machine) for measuring the level of cognitive load on an individual for a given stimulus.
Abstract: Individuals exhibit different levels of cognitive load for a given mental task. Measurement of cognitive load can enable real-time personalized content generation for distance learning, usability testing of applications on mobile devices and other areas related to human interaction. Electroencephalogram (EEG) signals can be used to analyze brain signals and measure cognitive load. We have used a low-cost, commercially available neuro-headset as the EEG device. A universal model for different levels of cognitive load, generated by supervised learning algorithms, cannot work for all individuals due to the issue of normalization. In this paper, we propose an unsupervised approach for measuring the level of cognitive load on an individual for a given stimulus. Results indicate that the unsupervised approach is comparable to, and sometimes better than, a supervised method (e.g. a support vector machine). Further, in the unsupervised domain, Component-based Fuzzy c-Means (CFCM) outperforms traditional Fuzzy c-Means (FCM) in terms of the measurement accuracy of the cognitive load.
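
A sketch of the plain FCM baseline mentioned above, using the scikit-fuzzy package (an assumed dependency) on synthetic two-level load features; CFCM, the paper's better-performing variant, is not reproduced here:

```python
import numpy as np
import skfuzzy as fuzz              # scikit-fuzzy (assumed dependency)

rng = np.random.default_rng(0)
# Stand-in EEG band-power features for two cognitive-load levels.
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
               rng.normal(3.0, 1.0, (50, 4))])

# skfuzzy expects data shaped (n_features, n_samples).
cntr, u, *_ = fuzz.cluster.cmeans(X.T, c=2, m=2.0, error=1e-5, maxiter=300)
hard = np.argmax(u, axis=0)         # crisp assignment from graded memberships
print("cluster sizes:", np.bincount(hard))
# `u` holds graded memberships: a per-sample degree of each load level,
# which is what makes fuzzy clustering attractive for a continuous quantity.
```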

Proceedings ArticleDOI
01 Nov 2013
TL;DR: Use cases and future scenarios realizing the vision of the digital avatar, as well as architectural considerations for the envisaged platform, are presented.
Abstract: The digital avatar is a vision for the digital representation of personal health status in body-centric views. It is designed as an integrated facility that allows the collection of, access to and sharing of life-long and consistent data. A number of Virtual Physiological Human (VPH) communities have started the movement in this direction by creating a digital patient road-map and by supporting data sharing infrastructures. As an innovative concept, the impact of the digital patient and avatar on personalized medicine and treatment is not yet clear. This requires a focused and concerted effort in addressing various questions regarding user perspectives, use cases and scenarios. This paper presents use cases and future scenarios realizing the vision of the digital avatar, as well as architectural considerations for the envisaged platform.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: The obtained results demonstrate that the best performance was achieved when the weighted k-nearest neighbours classifier was applied to the CVD dataset with the best subset of features selected by the GA, which resulted in high levels of accuracy, sensitivity, and specificity.
Abstract: The purpose of this study is to present a hybrid approach, based on the combined use of a genetic algorithm (GA) and a nearest neighbours classifier, for the selection of the critical clinical features that are strongly related to the incidence of fatal and non-fatal Cardiovascular Disease (CVD) in patients with Type 2 Diabetes Mellitus (T2DM). For the development and evaluation of the proposed algorithm, data from the medical records of 560 patients with T2DM are used. The best subsets of features proposed by the implemented algorithm include the most common risk factors, such as age at diagnosis, duration of diagnosed diabetes, glycosylated haemoglobin (HbA1c), cholesterol concentration and smoking habit, but also factors related to the presence of other diabetes complications and the use of antihypertensive and diabetes treatment drugs (i.e. proteinuria, calcium antagonists, beta-blockers, biguanides and insulin). The obtained results demonstrate that the best performance was achieved when the weighted k-nearest neighbours classifier was applied to the CVD dataset with the best subset of features selected by the GA, which resulted in high levels of accuracy (0.96), sensitivity (0.80) and specificity (0.98).
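
A compact sketch of a GA wrapper around a distance-weighted kNN, in the spirit of the hybrid approach described above; the population size, rates and synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(560, 30))                 # stand-in clinical features
y = (X[:, 0] + X[:, 3] + rng.normal(0, 1, 560) > 0).astype(int)

def fitness(mask):
    """CV accuracy of a distance-weighted kNN on the selected features."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5, weights="distance")
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Minimal GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.random((20, X.shape[1])) < 0.5
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]               # elitism
    while len(new_pop) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if scores[i] > scores[j] else pop[j]   # tournament pick 1
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if scores[i] > scores[j] else pop[j]   # tournament pick 2
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(X.shape[1]) < 0.02                # bit-flip mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```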

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A novel visualization approach for WCE is proposed which enables faster examination of the endoscopic video, while providing a broader field of view, by an algorithm that iteratively samples clusters of consecutive frames from the original video.
Abstract: Wireless capsule endoscopy (WCE) is performed by a swallowable pill capsule equipped with a camera that wirelessly transmits color video frames to an external receiver. The resulting video usually consists of several thousand frames, and its visual examination requires hours of an endoscopist's undivided attention. In this paper we propose a novel visualization approach for WCE which enables faster examination of the endoscopic video while providing a broader field of view. This is achieved by an algorithm that iteratively samples clusters of consecutive frames from the original video. The frames of each cluster are geometrically transformed so as to generate a seamless collage, subsequently projected into a new frame without any information loss. The new frames compose a new WCE video with a smaller number of frames. The video frame collage is based on homography matrix estimation from frame correspondences. The experiments show that the length of the WCE video, and therefore the required reading time, can be significantly reduced.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A new algorithm, akin to a discrete optimization method, is described that relies on the computation of Areas Under the Receiver Operating Characteristic (ROC) Curve (AUCs); a concrete application of the algorithm to a bioinformatics problem, i.e. the prediction of biomolecular annotations, is explored.
Abstract: Truncated Singular Value Decomposition (SVD) has always been a key algorithm in modern machine learning. Scientists and researchers use this applied mathematics method in many fields. Despite its long history and prevalence, the issue of how to choose the best truncation level still remains an open challenge. In this paper, we describe a new algorithm, akin to a discrete optimization method, that relies on the computation of Areas Under the Receiver Operating Characteristic (ROC) Curve (AUCs). We explore a concrete application of the algorithm to a bioinformatics problem, i.e. the prediction of biomolecular annotations. We applied the algorithm to nine different datasets, and the obtained results demonstrate the effectiveness of our technique.
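
The core idea above (scan truncation levels and keep the one with the best ROC AUC) can be sketched on a toy annotation matrix with held-out entries:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Toy annotation matrix with some held-out positives to score against.
A_true = (rng.random((200, 60)) < 0.08).astype(float)
mask = rng.random(A_true.shape) < 0.2          # hide 20% of entries
A_train = A_true * ~mask

U, s, Vt = np.linalg.svd(A_train, full_matrices=False)
best_k, best_auc = None, -1.0
for k in range(1, 40):
    A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
    # AUC of reconstructed scores against the held-out ground truth.
    auc = roc_auc_score(A_true[mask], A_hat[mask])
    if auc > best_auc:
        best_k, best_auc = k, auc
print(f"best truncation level k={best_k}, AUC={best_auc:.3f}")
```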

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A comparative analysis of the performance of three standard denoising methods (Empirical Mode Decomposition (EMD), the Discrete Wavelet Transform (DWT) and the Kalman Filter), using several indices such as the Signal-to-Noise Ratio (SNR), shows that the DWT achieved the greatest SNR difference.
Abstract: Electrooculographic (EOG) artefact is one of the most common contaminations of electroencephalographic (EEG) recordings. The corruption of EEG characteristics by Blinking Artefacts (BAs) affects the results of EEG signal processing methods and also impairs the visual analysis of EEGs. In this paper, our scope was a comparative analysis of the performance of three standard denoising methods: Empirical Mode Decomposition (EMD), the Discrete Wavelet Transform (DWT) and the Kalman Filter (KF). In order to evaluate the performance of EMD, DWT and KF in noise reduction and to express the quality of the denoised EEG, we calculate several indices, such as the Signal-to-Noise Ratio (SNR). All the results obtained from noise-simulated EEG data show that the DWT achieved the greatest SNR difference, and that the mode-mixing issue of EMD affected that method's performance.
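
A sketch of the DWT branch of the comparison together with the SNR index used to score it, on noise-simulated data as in the paper; the universal soft threshold shown is a standard choice, not necessarily the authors':

```python
import numpy as np
import pywt                          # PyWavelets

def dwt_denoise(x, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def snr_db(clean, estimate):
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - estimate) ** 2))

fs = 256
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 2 * t)                       # slow stand-in EEG rhythm
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)
print("SNR before: %.1f dB, after: %.1f dB"
      % (snr_db(clean, noisy), snr_db(clean, dwt_denoise(noisy))))
```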

Proceedings ArticleDOI
01 Nov 2013
TL;DR: The aim of this work is to present an automated, unsupervised method for the detection of occlusal caries based on photographic color images; the method was evaluated using a set of 60 images in which 286 areas of interest were manually segmented by an expert.
Abstract: The aim of this work is to present an automated, unsupervised method for the detection of occlusal caries based on photographic color images. The proposed method consists of three steps: (a) detection of decalcification areas, (b) detection of occlusal caries areas, and (c) fusion of the results. The detection process includes pre-processing of the images, segmentation and post-processing, where objects not corresponding to areas of interest are eliminated through rules expressing the medical knowledge. The pre-processing, segmentation and post-processing are differentiated depending on the areas to be detected (decalcification or occlusal caries areas). The method was evaluated using a set of 60 images in which 286 areas of interest were manually segmented by an expert. The obtained sensitivity and precision are 92% and 80%, respectively.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: Independent Component Analysis is applied to MEG surface signals in controls and children with reading difficulties, the resulting components are clustered into representative ones, and coupling measures of mutual information and partial directed coherence are estimated in order to reveal dysfunction of cerebral networks and their related coordination.
Abstract: Understanding the mechanisms of the human brain is a demanding issue for neuroscience research. Physiological studies acknowledge the usefulness of synchronization coupling in the study of dysfunctions associated with reading difficulties. Magnetoencephalography (MEG) is a useful tool in this direction, having been assessed for its superior accuracy over other modalities. In this paper we consider synchronization features for identifying brain operations. Independent Component Analysis (ICA) is applied to MEG surface signals in controls and children with reading difficulties, and the resulting components are clustered into representative ones. Then, coupling measures of mutual information and partial directed coherence are estimated in order to reveal dysfunction of cerebral networks and their related coordination.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: Experiments performed on a benchmark dataset show that good accuracy in the estimation of ECG parameters can be achieved with the proposed methodology, and that the overall performance improves when using the feature selection technique rather than all the PPG features for classification.
Abstract: Regular ECG check-ups are good practice for cardiac patients as well as elderly people. In this paper we propose a low-cost methodology to coarsely estimate the range of some important ECG parameters using photoplethysmography (PPG). PPG is easy to measure (even with a smartphone) and strongly related to the human cardiovascular system. The proposed methodology extracts a set of time-domain features from the PPG signal. A statistical analysis is performed to select the most relevant set of PPG features for the ECG parameters. Training models for the ECG parameters are created based on those selected features. Both artificial neural network and support vector machine based supervised learning approaches are used for performance comparison. Experiments performed on a benchmark dataset show that good accuracy in the estimation of ECG parameters can be achieved with our proposed methodology. Results also show that the overall performance improves when using the feature selection technique rather than all the PPG features for classification.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: An experimental evaluation of communication and security protocols that can be used in in-home sleep monitoring and health care, highlighting the most suitable protocol in terms of security and overhead.
Abstract: Sleep disorders such as insomnia can seriously affect a patient's quality of life. Sleep measurements based on polysomnographic (PSG) signals and patients' questionnaires are necessary for an accurate evaluation of insomnia. Due to recent innovations in technology, it is now possible to continuously monitor a patient's sleep at home and have their sleep data sent to a remote clinical back-end system for collection and assessment. Most of the research on sleep reported in the literature mainly looks into how to automate the analysis of the sleep data and does not address the problem of the efficient and secure transmission of the collected health data. This paper provides an experimental evaluation of communication and security protocols that can be used in in-home sleep monitoring and health care, and highlights the most suitable protocol in terms of security and overhead. Design guidelines are then derived for the deployment of effective in-home patient monitoring systems.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A predictive model of short-term glucose homeostasis relying on machine learning is presented with the aim of preventing hypoglycemic events and prolonged hyperglycemia on a daily basis and data mining approaches are proposed as a tool for explaining and predicting the long- term glucose control and the incidence of diabetic complications.
Abstract: Chronic care of diabetes comes with large amounts of data concerning the self-management and clinical management of the disease. In this paper, we propose to treat that information from two different perspectives. Firstly, a predictive model of short-term glucose homeostasis relying on machine learning is presented, with the aim of preventing hypoglycemic events and prolonged hyperglycemia on a daily basis. Secondly, data mining approaches are proposed as a tool for explaining and predicting long-term glucose control and the incidence of diabetic complications.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A software architecture to create and keep updated a Genomic and Proteomic Data Warehouse (GPDW) that integrates several of the main such dispersed data sources, using a modular and multi-level global data schema based on abstraction and generalization of the integrated data features.
Abstract: Biomedical questions are often complex and address multiple topics simultaneously. Answering them requires the comprehensive evaluation of several different types of data. These data are often available, but in distributed and heterogeneous data sources, which hampers their global evaluation. We developed a software architecture to create and keep updated a Genomic and Proteomic Data Warehouse (GPDW), which integrates several of the main such dispersed data sources. It uses a modular and multi-level global data schema based on abstraction and generalization of the integrated data features. Such a schema eases the integration of data sources that evolve in data content, structure and number, and assures provenance tracking of all the integrated data. Thanks to the developed software architecture and adopted data schema, the GPDW has been easily kept updated and progressively extended with additional data types and sources; it is publicly usable at http://www.bioinformatics.dei.polimi.it/GPKB/.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: This article assesses the stability of gene signatures obtained from a set of well-characterized public cancer microarray datasets, using five popular feature selection algorithms in the field of high-throughput genomics data analysis.
Abstract: A major goal of the application of machine learning techniques to high-throughput genomics data (e.g. DNA microarrays or RNA-Seq) is the identification of "gene signatures". These signatures can be used to discriminate between healthy and disease states (e.g. normal vs cancerous tissue) or among different biological mechanisms, at the gene expression level. Thus, the literature is full of studies in which numerous feature selection techniques are applied in an effort to reduce the noise and dimensionality of such datasets. However, little attention is given to the stability of these signatures in cases where the original dataset is perturbed by adding, removing or simply resampling the original observations. In this article, we assess the stability of the signatures obtained from a set of well-characterized public cancer microarray datasets, using five popular feature selection algorithms in the field of high-throughput genomics data analysis.
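
A sketch of one common way to quantify signature stability under resampling (bootstrap the observations, re-select features, average pairwise Jaccard overlap); the selector and dataset here are illustrative stand-ins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Synthetic stand-in for a microarray dataset: few samples, many genes.
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)
rng = np.random.default_rng(0)

signatures = []
for _ in range(30):                                  # bootstrap perturbations
    idx = rng.integers(0, len(y), len(y))
    sel = SelectKBest(f_classif, k=50).fit(X[idx], y[idx])
    signatures.append(np.flatnonzero(sel.get_support()))

pairs = [jaccard(signatures[i], signatures[j])
         for i in range(len(signatures)) for j in range(i + 1, len(signatures))]
print(f"mean pairwise Jaccard stability: {np.mean(pairs):.2f}")
```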

Proceedings ArticleDOI
01 Nov 2013
TL;DR: This study numerically assesses detuning issues for an ingestible antenna, designed to operate in the Medical Device Radiocommunications Service band (MedRadio, 401-406 MHz), as it travels along the gastrointestinal (GI) tract.
Abstract: In this study, we numerically assess detuning issues for an ingestible antenna, designed to operate in the Medical Device Radiocommunications Service band (MedRadio, 401-406 MHz), as it travels along the gastrointestinal (GI) tract. For this purpose, we evaluate the antenna's resonance performance within four canonical single-tissue models of the human esophagus, stomach, small intestine and large intestine. The antenna is further placed at different locations within the aforementioned tissue models in order to assess detuning issues related to its relative positioning within each of them. Inherent detuning issues are observed and discussed in the four simplified tissue models, considering three specific locations of the antenna in each model, resulting in twelve different scenarios. The resonance, radiation and safety performance of the ingestible antenna is finally evaluated.