Author

Nicos Maglaveras

Bio: Nicos Maglaveras is an academic researcher from Aristotle University of Thessaloniki. The author has contributed to research in topics: Health care & Telemedicine. The author has an h-index of 29 and has co-authored 281 publications receiving 5027 citations. Previous affiliations of Nicos Maglaveras include Northwestern University & New York Academy of Sciences.


Papers
Journal ArticleDOI
TL;DR: A systematic review of the applications of machine learning and data mining techniques and tools in the field of diabetes research with respect to a) Prediction and Diagnosis, b) Diabetic Complications, c) Genetic Background and Environment, and d) Health Care and Management, with the first category appearing to be the most popular.
Abstract: The remarkable advances in biotechnology and health sciences have led to a significant production of data, such as high throughput genetic data and clinical information, generated from large Electronic Health Records (EHRs). To this end, application of machine learning and data mining methods in biosciences is presently, more than ever before, vital and indispensable in efforts to intelligently transform all available information into valuable knowledge. Diabetes mellitus (DM) is defined as a group of metabolic disorders exerting significant pressure on human health worldwide. Extensive research in all aspects of diabetes (diagnosis, etiopathophysiology, therapy, etc.) has led to the generation of huge amounts of data. The aim of the present study is to conduct a systematic review of the applications of machine learning and data mining techniques and tools in the field of diabetes research with respect to a) Prediction and Diagnosis, b) Diabetic Complications, c) Genetic Background and Environment, and d) Health Care and Management, with the first category appearing to be the most popular. A wide range of machine learning algorithms were employed. In general, 85% of those used were characterized by supervised learning approaches and 15% by unsupervised ones, and more specifically, association rules. Support vector machines (SVM) arise as the most successful and widely used algorithm. Concerning the type of data, clinical datasets were mainly used. The applications in the selected articles demonstrate the usefulness of extracting valuable knowledge, leading to new hypotheses that target deeper understanding and further investigation of DM.

811 citations
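The review's finding that support vector machines are the most widely used algorithm can be illustrated with a minimal linear SVM trained by Pegasos-style sub-gradient descent. This is a sketch, not the reviewed systems: the two "clinical" features (think standardized glucose and BMI) and the six records below are invented for illustration.

```python
def train_linear_svm(X, y, lam=0.01, epochs=500):
    """Pegasos-style sub-gradient training of a linear soft-margin SVM.
    X: list of feature vectors; y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # hinge-loss violation: step towards classifying xi correctly
                w = [(1 - eta * lam) * wj + eta * yi * xj
                     for wj, xj in zip(w, xi)]
                b += eta * yi
            else:
                # no violation: regularization shrink only
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical standardized clinical features, e.g. (glucose, BMI);
# +1 = diabetic, -1 = non-diabetic -- purely illustrative data.
X = [[2.0, 1.5], [1.8, 2.0], [2.2, 1.0],
     [-1.5, -1.0], [-2.0, -0.5], [-1.0, -1.8]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, xi) for xi in X])
```

On this well-separated toy data the learned hyperplane recovers all training labels; real clinical datasets would of course need feature scaling, cross-validation, and a proper SVM library.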

Journal ArticleDOI
TL;DR: A hybrid multidimensional image segmentation algorithm is proposed, which combines edge and region-based techniques through the morphological algorithm of watersheds and additionally maintains the so-called nearest neighbor graph, due to which the priority queue size and processing time are drastically reduced.
Abstract: A hybrid multidimensional image segmentation algorithm is proposed, which combines edge and region-based techniques through the morphological algorithm of watersheds. An edge-preserving statistical noise reduction approach is used as a preprocessing stage in order to compute an accurate estimate of the image gradient. Then, an initial partitioning of the image into primitive regions is produced by applying the watershed transform on the image gradient magnitude. This initial segmentation is the input to a computationally efficient hierarchical (bottom-up) region merging process that produces the final segmentation. The latter process uses the region adjacency graph (RAG) representation of the image regions. At each step, the most similar pair of regions is determined (minimum cost RAG edge), the regions are merged and the RAG is updated. Traditionally, the above is implemented by storing all RAG edges in a priority queue. We propose a significantly faster algorithm, which additionally maintains the so-called nearest neighbor graph, due to which the priority queue size and processing time are drastically reduced. The final segmentation provides, due to the RAG, one-pixel wide, closed, and accurately localized contours/surfaces. Experimental results obtained with two-dimensional/three-dimensional (2-D/3-D) magnetic resonance images are presented.

794 citations
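The "traditional" merging scheme the abstract contrasts against — all RAG edges in a priority queue, repeatedly merging the minimum-cost pair — can be sketched in a few lines. This toy uses region mean intensity as the similarity cost and lazy invalidation of stale queue entries; the paper's nearest-neighbor-graph speedup is omitted, and the region data below is invented.

```python
import heapq

def merge_regions(means, edges, n_final):
    """Toy hierarchical region merging on a region adjacency graph (RAG):
    repeatedly merge the most similar adjacent pair (minimum |mean difference|)
    until only n_final regions remain.
    means: {region_id: mean intensity}; edges: iterable of (a, b) adjacencies."""
    adj = {r: set() for r in means}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    sizes = dict.fromkeys(means, 1)
    alive = set(means)
    # queue entries carry the means they were computed from, so stale
    # entries can be detected and skipped after a merge (lazy deletion)
    heap = [(abs(means[a] - means[b]), a, b, means[a], means[b])
            for a, b in edges]
    heapq.heapify(heap)
    while len(alive) > n_final and heap:
        cost, a, b, ma, mb = heapq.heappop(heap)
        if a not in alive or b not in alive or means[a] != ma or means[b] != mb:
            continue  # stale entry: a region died or its mean changed
        # merge b into a, updating the size-weighted mean
        total = sizes[a] + sizes[b]
        means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / total
        sizes[a] = total
        alive.discard(b)
        for c in adj.pop(b):
            adj[c].discard(b)
            if c != a:
                adj[c].add(a)
                adj[a].add(c)
        adj[a].discard(b)
        # re-queue a's surviving edges with fresh costs
        for c in adj[a]:
            heapq.heappush(
                heap, (abs(means[a] - means[c]), a, c, means[a], means[c]))
    return {r: means[r] for r in alive}

# Four primitive regions in a chain; the two similar pairs get merged first.
print(merge_regions({1: 0.0, 2: 0.1, 3: 5.0, 4: 5.1},
                    [(1, 2), (2, 3), (3, 4)], n_final=2))
```

The paper's contribution is precisely that this queue grows large on real images; maintaining a nearest-neighbor graph lets the queue hold only each region's best edge, drastically reducing its size and the processing time.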

Journal ArticleDOI
TL;DR: Clear evidence is presented that eye-blinking statistics are sensitive to the driver's sleepiness and should be considered in the design of an efficient and driver-friendly sleepiness detection countermeasure device.

266 citations

Journal ArticleDOI
01 Oct 1998
TL;DR: A generalised approach to the classification problems in n-dimensional spaces will be presented using among others NN, radial basis function networks (RBFN) and non-linear principal component analysis (NLPCA) techniques.
Abstract: The most widely used signal in clinical practice is the ECG. ECG conveys information regarding the electrical function of the heart, by altering the shape of its constituent waves, namely the P, QRS, and T waves. Thus, the required tasks of ECG processing are the reliable recognition of these waves, and the accurate measurement of clinically important parameters measured from the temporal distribution of the ECG constituent waves. In this paper, we shall review some current trends in ECG pattern recognition. In particular, we shall review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural network (NN) based techniques for ECG pattern recognition and classification. The problems we shall deal with are QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to the classification problems in n-dimensional spaces will be presented using, among others, NN, radial basis function network (RBFN), and non-linear principal component analysis (NLPCA) techniques. The sensitivity and specificity of these algorithms will also be presented, using training and testing data sets from the MIT-BIH and the European ST-T databases.

240 citations
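The sensitivity and specificity figures mentioned at the end of the abstract are computed directly from beat-level labels. A short sketch with made-up reference and predicted labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """Beat-level performance measures commonly reported for ECG classifiers.
    Labels: 1 = abnormal beat (e.g. ischemic), 0 = normal beat."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    se = tp / (tp + fn)  # sensitivity: fraction of abnormal beats detected
    sp = tn / (tn + fp)  # specificity: fraction of normal beats correctly kept
    return se, sp

# Invented labels for eight beats: one abnormal beat missed, one false alarm.
se, sp = sensitivity_specificity([1, 1, 1, 1, 0, 0, 0, 0],
                                 [1, 1, 1, 0, 0, 0, 0, 1])
print(se, sp)  # 0.75 0.75
```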

Journal ArticleDOI
TL;DR: The NLPCA techniques are used to classify each ST segment into one of two classes, normal and abnormal (ST+, ST-, or artifact); test results show that using only two nonlinear components and a training set of 1000 normal samples from each file produces a correct classification rate of approximately 80% for normal beats and higher than 90% for ischemic beats.
Abstract: The detection of ischemic cardiac beats from a patient's electrocardiogram (ECG) signal is based on the characteristics of a specific part of the beat called the ST segment. The correct classification of the beats relies heavily on the efficient and accurate extraction of the ST segment features. An algorithm is developed for this feature extraction based on nonlinear principal component analysis (NLPCA). NLPCA is a method for nonlinear feature extraction that is usually implemented by a multilayer neural network. It has been observed to have better performance, compared with linear principal component analysis (PCA), in complex problems where the relationships between the variables are not linear. In this paper, the NLPCA techniques are used to classify each segment into one of two classes: normal and abnormal (ST+, ST-, or artifact). During the algorithm training phase, only normal patterns are used, and for classification purposes, we use only two nonlinear features for each ST segment. The distribution of these features is modeled using a radial basis function network (RBFN). Test results using the European ST-T database show that using only two nonlinear components and a training set of 1000 normal samples from each file produces a correct classification rate of approximately 80% for the normal beats and higher than 90% for the ischemic beats.

174 citations
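The one-class scheme described here — train on normal patterns only, model the feature distribution with radial basis functions, flag low-density segments as abnormal — can be sketched conceptually. In the paper the two features come from NLPCA of ST segments and the density model is a trained RBF network; below, both are replaced by hypothetical 2-D features and a simple Parzen-style RBF density estimate, purely to show the novelty-detection idea.

```python
import math

def rbf_density(train, x, width=1.0):
    """Average of Gaussian radial basis functions centred on the
    training (normal-only) feature vectors: a crude density estimate."""
    total = 0.0
    for c in train:
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        total += math.exp(-d2 / (2 * width ** 2))
    return total / len(train)

def classify(train, x, threshold=0.1, width=1.0):
    """Novelty detection: low density under the normal model => abnormal."""
    return "normal" if rbf_density(train, x, width) >= threshold else "abnormal"

# Hypothetical 2-D nonlinear features of normal ST segments.
normals = [(0.0, 0.0), (0.1, -0.1), (-0.1, 0.1), (0.05, 0.0)]
print(classify(normals, (0.0, 0.05)))  # near the normal cloud -> normal
print(classify(normals, (3.0, 3.0)))   # far from it -> abnormal
```

The appeal of this design, as in the paper, is that no abnormal examples are needed at training time; anything the normal model assigns low probability is flagged.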


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
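The mail-filtering example in the fourth category lends itself to a concrete sketch: a tiny naive Bayes classifier that learns from messages the user has already labeled and then filters new ones. The word lists and labels below are invented for illustration.

```python
import math
from collections import Counter

def train_nb(messages):
    """messages: list of (word_list, label) pairs, label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for words, label in messages:
        counts[label].update(words)
        totals[label] += 1
    return counts, totals

def classify_nb(model, words):
    counts, totals = model
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, None
    for label in ("spam", "ham"):
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((counts[label][w] + 1) / denom)
        if best_score is None or score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented, user-labeled training messages.
model = train_nb([
    (["win", "money", "now"], "spam"),
    (["free", "money"], "spam"),
    (["project", "meeting", "at", "noon"], "ham"),
    (["lunch", "meeting"], "ham"),
])
print(classify_nb(model, ["win", "money"]))        # -> spam
print(classify_nb(model, ["project", "meeting"]))  # -> ham
```

Because the model is just word counts, it can keep learning as the user labels more mail — exactly the per-user customization the passage argues hand-written rules cannot provide.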

01 Jan 2003

3,093 citations

Journal ArticleDOI
TL;DR: This paper characterizes the role that graphs play within the Pattern Recognition field, presenting two taxonomies: one covering almost all the graph matching algorithms proposed since the late seventies and describing the different classes of algorithms, and one covering common applications.
Abstract: A recent paper posed the question: "Graph Matching: What are we really talking about?". Far from providing a definite answer to that question, in this paper we will try to characterize the role that graphs play within the Pattern Recognition field. To this aim, two taxonomies are presented and discussed. The first includes almost all the graph matching algorithms proposed from the late seventies, and describes the different classes of algorithms. The second taxonomy considers the types of common applications of graph-based techniques in the Pattern Recognition and Machine Vision field.

1,517 citations