
Showing papers by "Ahmad Taher Azar published in 2013"


Journal ArticleDOI
TL;DR: A decision support tool for the detection of breast cancer based on three types of decision tree classifiers; BDT showed the best performance in terms of sensitivity, while SDT was the best only in terms of speed.
Abstract: Decision support systems help physicians and also play an important role in medical decision-making. They are based on different models, and the best of them provide an explanation together with an accurate, reliable and quick response. This paper presents a decision support tool for the detection of breast cancer based on three types of decision tree classifiers: single decision tree (SDT), boosted decision tree (BDT) and decision tree forest (DTF). Decision tree classification provides a rapid and effective method of categorizing data sets. Decision-making is performed in two stages: training the classifiers with features from the Wisconsin breast cancer data set, and then testing. The performance of the proposed structure is evaluated in terms of accuracy, sensitivity, specificity, confusion matrix and receiver operating characteristic (ROC) curves. The results showed that the overall accuracies of SDT and BDT in the training phase reached 97.07 % (429 correct classifications) and 98.83 % (437 correct classifications), respectively. BDT performed better than SDT on all performance indices. The ROC and Matthews correlation coefficient (MCC) values for BDT in the training phase reached 0.99971 and 0.9746, respectively, which was superior to the SDT classifier. During the validation phase, DTF achieved 97.51 %, superior to both SDT (95.75 %) and BDT (97.07 %). The ROC and MCC values for DTF reached 0.99382 and 0.9462, respectively. BDT showed the best performance in terms of sensitivity, while SDT was the best only in terms of speed.
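The evaluation indices above (accuracy, sensitivity, specificity and MCC) all derive from the binary confusion matrix. A minimal sketch, using hypothetical counts rather than the paper's actual matrix:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Derive the evaluation metrics used above from a 2x2 confusion matrix."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate (recall)
    specificity = tn / (tn + fp)          # true-negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Hypothetical counts, not the paper's confusion matrix:
acc, sens, spec, mcc = binary_metrics(tp=150, fp=5, tn=280, fn=7)
```

Sensitivity and specificity expose the two error types separately, while MCC summarizes the whole matrix in a single balanced score.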

163 citations


Journal ArticleDOI
TL;DR: Three classification algorithms, multi-layer perceptron (MLP), radial basis function (RBF) and probabilistic neural networks (PNN), are applied for the detection and classification of breast cancer; PNN was the best classifier, achieving accuracy rates of 100 % and 97.66 % in the training and testing phases, respectively.
Abstract: Among cancers, breast cancer is the second leading cause of death in women. To reduce the high number of unnecessary breast biopsies, several computer-aided diagnosis systems have been proposed in recent years. These systems help physicians decide whether to perform a breast biopsy on a suspicious lesion seen in a mammogram or to perform a short-term follow-up examination instead. In clinical diagnosis, artificial intelligence techniques such as neural networks have shown great potential in this field. In this paper, three classification algorithms, multi-layer perceptron (MLP), radial basis function (RBF) and probabilistic neural networks (PNN), are applied for the detection and classification of breast cancer. Decision making is performed in two stages: training the classifiers with features from the Wisconsin Breast Cancer database, and then testing. The performance of the proposed structure is evaluated in terms of sensitivity, specificity, accuracy and ROC. The results revealed that PNN was the best classifier, achieving accuracy rates of 100 % and 97.66 % in the training and testing phases, respectively. MLP ranked second, achieving 97.80 % and 96.34 % classification accuracy in the training and validation phases, respectively, using the scaled conjugate gradient learning algorithm. RBF performed better than MLP in the training phase but achieved the lowest accuracy in the validation phase.
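A probabilistic neural network is essentially a Parzen-window density estimator with one Gaussian kernel per training sample. A minimal sketch, using toy two-dimensional data in place of the Wisconsin features (the values and the smoothing parameter are illustrative, not the paper's):

```python
import math

def pnn_predict(train, x, sigma=0.5):
    """Minimal probabilistic neural network (Parzen-window) classifier.
    train: dict mapping class label -> list of feature vectors."""
    scores = {}
    for label, samples in train.items():
        s = 0.0
        for v in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(v, x))
            s += math.exp(-d2 / (2 * sigma ** 2))   # Gaussian kernel
        scores[label] = s / len(samples)            # class-conditional density
    return max(scores, key=scores.get)

# Toy 2-D data standing in for the Wisconsin features (illustrative only):
train = {"benign": [(0.1, 0.2), (0.2, 0.1)], "malignant": [(0.9, 0.8), (0.8, 0.9)]}
print(pnn_predict(train, (0.15, 0.15)))   # -> benign
```

Because each training sample becomes a kernel center, PNN "training" is a single pass, which is why it can reach its peak accuracy so quickly.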

104 citations


Journal ArticleDOI
TL;DR: A multilayer perceptron (MLP) neural network with fast learning algorithms is used for accurate prediction of the post-dialysis blood urea concentration and the single-pool dialysis dose spKt/V, without the need for a detailed description or formulation of the underlying process, in contrast to most urea kinetic modeling techniques.
Abstract: Measuring the blood urea nitrogen concentration is crucial to evaluating the dialysis dose (Kt/V) in patients with renal failure. Although frequent measurement is needed to avoid inadequate dialysis efficiency, artificial intelligence can repeatedly perform the forecasting tasks and may be a satisfactory substitute for laboratory tests. Artificial neural networks represent a promising alternative to classical statistical and mathematical methods for solving multidimensional nonlinear problems, and a promising forecasting application in nephrology. In this study, a multilayer perceptron (MLP) neural network with fast learning algorithms is used for accurate prediction of the post-dialysis blood urea concentration. The capabilities of eight different learning algorithms are studied and their performances compared: Levenberg–Marquardt, resilient backpropagation, scaled conjugate gradient, conjugate gradient with Powell–Beale restarts, Polak–Ribiere conjugate gradient, Fletcher–Reeves conjugate gradient, BFGS quasi-Newton, and one-step secant. The results indicated that the BFGS quasi-Newton and Levenberg–Marquardt algorithms produced the best results. The Levenberg–Marquardt algorithm clearly outperformed all the other algorithms in the verification phase and was very robust in terms of mean absolute error (MAE), root mean square error (RMSE), Pearson's correlation coefficient (\( R_{p}^{2} \)) and concordance coefficient (RC). The MAE and RMSE percentages for Levenberg–Marquardt were 0.27 % and 0.32 %, respectively, compared to 0.38 % and 0.41 % for BFGS quasi-Newton and 0.44 % and 0.48 % for resilient backpropagation. MLP-based systems can achieve satisfying results for predicting the post-dialysis blood urea concentration and single-pool dialysis dose spKt/V without the need for a detailed description or formulation of the underlying process, in contrast to most urea kinetic modeling techniques.
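The comparison above rests on MAE, RMSE and the correlation coefficient. A small sketch of these error indices, computed on illustrative urea values rather than the study's data:

```python
import math

def regression_errors(actual, predicted):
    """MAE, RMSE and Pearson correlation, the indices reported above."""
    n = len(actual)
    mae  = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    var = math.sqrt(sum((a - ma) ** 2 for a in actual) *
                    sum((p - mp) ** 2 for p in predicted))
    return mae, rmse, cov / var

# Illustrative post-dialysis urea values (mg/dL), not the study's data:
mae, rmse, r = regression_errors([52, 48, 60, 55], [51, 49, 58, 56])
```

RMSE penalizes large individual errors more heavily than MAE, which is why both are reported when comparing training algorithms.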

87 citations


Journal ArticleDOI
TL;DR: A comparison between hard and fuzzy clustering algorithms on a thyroid disease data set in order to find the optimal number of clusters; recommendations are formulated to improve determination of the actual number of clusters present in the data set.

69 citations


Proceedings Article
07 Nov 2013
TL;DR: A sophisticated hybrid system is proposed in this paper that can segment the liver from abdominal CT and detect hepatic lesions automatically, providing good-quality results in less than 0.15 s/slice.
Abstract: Liver cancer is one of the major causes of death in the world. Transplantation and tumor resection are the two main therapies in common clinical practice. Both tasks need image-assisted planning and quantitative evaluations. An efficient and effective automatic liver segmentation is required for the corresponding quantitative evaluations. Computed Tomography (CT) is highly accurate for liver cancer diagnosis. Manual identification of hepatic lesions by trained physicians is a time-consuming task and can be subjective, depending on the skill, expertise and experience of the physician. Computer-aided segmentation of CT images would thus be a great step forward for medical purposes. A sophisticated hybrid system is proposed in this paper that is capable of segmenting the liver from abdominal CT and detecting hepatic lesions automatically. The proposed system was evaluated on two different datasets, and experimental results show that it robustly, quickly and effectively detects the presence of lesions in the liver, counts the distinctly identifiable lesions, computes the area of the liver affected by tumor lesions, and provides good-quality results, segmenting the liver and extracting lesions from abdominal CT in less than 0.15 s/slice.

47 citations


Journal ArticleDOI
01 Oct 2013
TL;DR: This paper proposes an approach based on the tolerance rough set model, which has the ability to deal with real-valued data whilst simultaneously retaining dataset semantics; the results obtained show an increase in diagnostic accuracy.
Abstract: Breast cancer is the most common malignant tumor found among young and middle-aged women. Feature selection is the process of selecting the most informative features from a data set while preserving the original significance of the features following reduction. The traditional rough set method cannot be directly applied to real-valued data; this is usually addressed by employing a discretization method, which can result in information loss. This paper proposes an approach based on the tolerance rough set model, which has the ability to deal with real-valued data whilst simultaneously retaining dataset semantics. In this paper, a novel supervised feature selection method for mammogram images, using Tolerance Rough Set-PSO based Quick Reduct (STRSPSO-QR) and Tolerance Rough Set-PSO based Relative Reduct (STRSPSO-RR), is proposed. The results obtained using the proposed methods show an increase in diagnostic accuracy.
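The tolerance rough set model replaces exact equality with a similarity threshold, so real-valued features need no discretization. A minimal sketch (the threshold tau and the toy table are assumptions, not the paper's settings):

```python
def tolerance_class(data, i, attrs, tau=0.1):
    """Objects indistinguishable from object i on the given attributes,
    under the tolerance relation |a(x) - a(y)| <= tau (no discretization)."""
    return {j for j in range(len(data))
            if all(abs(data[i][a] - data[j][a]) <= tau for a in attrs)}

def dependency(data, labels, attrs, tau=0.1):
    """Fraction of objects whose tolerance class is label-pure (positive region);
    a QuickReduct-style search would grow `attrs` until this stops improving."""
    pos = sum(1 for i in range(len(data))
              if len({labels[j] for j in tolerance_class(data, i, attrs, tau)}) == 1)
    return pos / len(data)

# Toy real-valued table (illustrative, not mammogram features):
data   = [(0.10, 0.90), (0.12, 0.88), (0.80, 0.20), (0.82, 0.18)]
labels = ["benign", "benign", "malignant", "malignant"]
print(dependency(data, labels, attrs=[0]))   # -> 1.0 (attribute 0 alone suffices)
```

A reduct search (QuickReduct or Relative Reduct, with PSO proposing candidate attribute subsets) keeps only attribute sets whose dependency matches that of the full feature set.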

46 citations


Proceedings Article
07 Nov 2013
TL;DR: This paper proposes an anomaly detector generation approach using a genetic algorithm in conjunction with several feature selection techniques, including principal component analysis, sequential floating search, and correlation-based feature selection, and shows that sequential-floating techniques with the genetic algorithm give the best results.
Abstract: Intrusion detection systems have been around for quite some time to protect systems from inside and outside threats. Researchers and scientists are concerned with how to enhance intrusion detection performance, to be able to deal with real-time attacks and detect them quickly for a fast response. One way to improve performance is to use a minimal number of features to define a model that can accurately discriminate normal from anomalous behaviour. Many feature selection techniques exist to reduce feature sets or extract new features from them. In this paper, we propose an anomaly detector generation approach using a genetic algorithm in conjunction with several feature selection techniques, including principal component analysis, sequential floating search, and correlation-based feature selection. The genetic algorithm was applied with the deterministic crowding niching technique to generate a set of detectors from a single run. The results show that sequential-floating techniques with the genetic algorithm give the best results compared to the others tested, especially sequential floating forward selection, with a detection accuracy of 92.86% on the training set and 85.38% on the test set.
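A genetic algorithm for feature selection evolves bitmasks over the feature set. The following toy sketch uses plain truncation selection and a made-up fitness function; the paper's deterministic-crowding niching and real IDS fitness are omitted for brevity:

```python
import random

def ga_feature_select(n_features, fitness, pop=20, gens=30, seed=0):
    """Toy genetic algorithm over feature bitmasks (selection pressure only;
    the deterministic-crowding niching used in the paper is omitted)."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop // 2]
        children = []
        while len(children) < pop - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                        # bit-flip mutation
                k = rng.randrange(n_features)
                child[k] ^= 1
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Hypothetical fitness: reward masks matching an "informative" feature set.
target = [1, 0, 1, 1, 0, 0]
best = ga_feature_select(6, lambda m: sum(x == t for x, t in zip(m, target)))
```

In a real IDS setting the fitness would instead score a detector's discrimination of normal versus anomalous traffic on the selected features.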

46 citations


Journal ArticleDOI
01 Oct 2013
TL;DR: A hybrid system that integrates Rough Sets (RS) and a Genetic Algorithm (GA) is presented for the efficient classification of medical data sets of different sizes and dimensionalities, and remains among the best-performing systems for three different data sets.
Abstract: Computational intelligence provides significant support to the biomedical domain. The application of machine learning techniques in medicine has evolved from physicians' needs. Screening, medical imaging, pattern classification and prognosis are some examples of health care support systems. Medical data typically has its own characteristics, such as large size and many features, with continuous and real-valued attributes that refer to patients' investigations. Therefore, discretization and feature selection are key issues in improving the knowledge extracted from patients' investigation records. In this paper, a hybrid system that integrates Rough Sets (RS) and a Genetic Algorithm (GA) is presented for the efficient classification of medical data sets of different sizes and dimensionalities. The genetic algorithm is applied to reduce the dimension of the medical datasets, and RS decision rules are used for efficient classification. Furthermore, the proposed system applies Entropy Information Gain (EI) for the discretization process. Four biomedical data sets were tested with the proposed EI-GA-RS system, and it obtained the highest score on three of them. Other hybrid techniques matched the proposed technique's top accuracy in some cases, but the proposed system remains among the best-performing systems for three different data sets. EI as a discretization technique is also common to the best results on the mentioned datasets, while RS as an evaluator achieved the best results on three different data sets.
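Entropy-based discretization scores candidate cut points by the information gain they produce. A minimal sketch on illustrative values:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    counts = [labels.count(v) for v in set(labels)]
    return -sum((c / n) * math.log2(c / n) for c in counts)

def info_gain(values, labels, cut):
    """Information gain of splitting a continuous attribute at `cut`;
    an entropy-based discretizer picks cuts that maximise this."""
    left  = [l for v, l in zip(values, labels) if v <= cut]
    right = [l for v, l in zip(values, labels) if v > cut]
    n = len(labels)
    return entropy(labels) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)

# Illustrative attribute: the cut at 0.5 separates the classes perfectly.
vals = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
labs = ["a", "a", "a", "b", "b", "b"]
print(info_gain(vals, labs, 0.5))   # -> 1.0
```

Cuts are placed recursively wherever the gain is highest, turning each continuous attribute into a small set of nominal intervals before the rough set rules are induced.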

32 citations


Journal ArticleDOI
TL;DR: The results strongly suggest that ANFCLH can aid in the diagnosis of breast cancer and can be very helpful to the physicians for their final decision on their patients.
Abstract: Although the adaptive neuro-fuzzy inference system (ANFIS) has very fast convergence, it is not suitable for classification problems because its outputs are not integers. To overcome this problem, this paper provides four adaptive neuro-fuzzy classifiers: the adaptive neuro-fuzzy classifier with linguistic hedges (ANFCLH), the linguistic hedges neuro-fuzzy classifier with selected features (LHNFCSF), the scaled conjugate gradient neuro-fuzzy classifier (SCGNFC) and the speeded-up scaled conjugate gradient neuro-fuzzy classifier (SSCGNFC). These classifiers are used to achieve very fast, simple and efficient breast cancer diagnosis. Both the SCGNFC and SSCGNFC systems are optimized by scaled conjugate gradient algorithms, with the k-means algorithm used to initialize the fuzzy rules. Also, only the Gaussian membership function is used for fuzzy set descriptions, because of its simple derivative expressions. The other two systems are based on linguistic hedges (LH) tuned by scaled conjugate gradient. The classifiers' performances are analyzed and compared by applying them to breast cancer diagnosis. The results indicated that SCGNFC, SSCGNFC and ANFCLH achieved the same accuracy of 97.6608 % in the training phase, while LHNFCSF performed better, achieving a training accuracy of 100 %. In the testing phase, LHNFCSF achieved an overall accuracy of 97.8038 %, also superior to the other methods. Applying LHNFCSF not only reduces the dimensions of the problem, but also improves classification performance by discarding redundant, noise-corrupted or unimportant features. The k-means clustering algorithm was also used to determine the membership functions of each feature. LHNFCSF achieved a mean RMSE of 0.0439 in the training phase after feature selection and gave its best recognition rates of 98.8304 % and 98.0469 % during the training and testing phases, respectively, using two clusters for each class.
The results strongly suggest that ANFCLH can aid in the diagnosis of breast cancer and can be very helpful to the physicians for their final decision on their patients.
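The building blocks of these classifiers are Gaussian membership functions and linguistic hedges, a hedge being a power applied to a membership value. A small sketch (the centre, spread and exponent are illustrative, not tuned values from the paper):

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership function used for the fuzzy set descriptions."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def hedge(mu, p):
    """Linguistic hedge as a power of the membership value:
    p > 1 concentrates the set ("very"), p < 1 dilates it ("more or less");
    the classifiers above tune such exponents per feature."""
    return mu ** p

mu = gaussian_mf(1.5, c=1.0, sigma=1.0)   # membership of x=1.5 in a set at c=1
very = hedge(mu, 2.0)                     # "very": lowers borderline membership
```

Because a hedge with a tuned exponent can push an irrelevant feature's membership toward a constant, learning the exponents acts as a soft form of feature selection.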

31 citations


Proceedings Article
24 Oct 2013
TL;DR: The important features are identified, reducing the number of features needed to assess the fetal heart rate; the features are selected using Unsupervised Particle Swarm Optimization (PSO)-based Relative Reduct and tested using various measures of diagnostic accuracy.
Abstract: Fetal heart activity is generally monitored using a CardioTocoGraph (CTG), which estimates the fetal tachogram based on the evaluation of ultrasound pulses reflected from the fetal heart. It consists of the simultaneous recording and analysis of the Fetal Heart Rate (FHR) signal, uterine contraction activity and fetal movements. A cardiotocograph generally comprises a large number of features. This paper aims to identify the important features, consequently reducing the number of features needed to assess the fetal heart rate. The features are selected using Unsupervised Particle Swarm Optimization (PSO)-based Relative Reduct and are tested using various measures of diagnostic accuracy.

29 citations


Proceedings ArticleDOI
01 Nov 2013
TL;DR: A study of the performance of two novel ensemble classifiers, Random Forest and Rotation Forest, tested on five medical data sets; ROT achieved the highest classification accuracy in most tested cases.
Abstract: Machine learning offers great support to the biomedical research field. It provides many opportunities for discovering diseases and revealing related drugs. Medical applications of machine learning have evolved from physicians' needs and are motivated by the promising results of empirical studies. Medical support systems can be provided by screening, medical images, pattern classification and microarray gene expression analysis. Typically, medical data is characterized by huge dimensionality and relatively few examples. Feature selection is a crucial step to improve classification performance. Recent machine learning studies of the classification process have produced a strong scheme called the ensemble classifier. In this paper, the performance of two novel ensemble classifiers, Random Forest (RF) and Rotation Forest (ROT), is tested on five medical data sets. Three different feature selection methods were used to extract the most relevant features in each data set. Prediction performance is evaluated using the accuracy measure. It was observed that ROT achieved the highest classification accuracy in most tested cases.

Book ChapterDOI
01 Jan 2013
TL;DR: It is demonstrated that the proposed method based on the Linguistic Hedges Neural-Fuzzy classifier can be used for reducing the dimension of the feature space and can be used to obtain fast automatic diagnostic systems for other diseases.
Abstract: The differential diagnosis of erythemato-squamous diseases is a real challenge in dermatology. In diagnosing these diseases, a biopsy is vital. Unfortunately, however, these diseases share many histopathological features as well. Another difficulty for the differential diagnosis is that one disease may show the features of another disease at the beginning stage and only have its characteristic features at the following stages. In this paper, a new feature selection method based on the Linguistic Hedges Neural-Fuzzy classifier is presented for the diagnosis of erythemato-squamous diseases. The performance of this system is estimated using four training-test partition models: 50–50%, 60–40%, 70–30% and 80–20%. The highest classification accuracy of 95.7746% was achieved for the 80–20% training-test partition using 3 clusters and 18 fuzzy rules, followed by 93.820% for the 50–50% partition using 3 clusters and 18 fuzzy rules, 92.5234% for the 70–30% partition using 5 clusters and 30 fuzzy rules, and 91.6084% for the 60–40% partition using 6 clusters and 36 fuzzy rules. Therefore, the 80–20% training-test partition using 3 clusters and 18 fuzzy rules gives the best classification accuracy, with an RMSE of 6.5139e-13. This research demonstrated that the proposed method can be used for reducing the dimension of the feature space and for obtaining fast automatic diagnostic systems for other diseases.

Book ChapterDOI
01 Jan 2013
TL;DR: This paper presents an approach for segmenting retinal blood vessels using only an ant colony system, which uses eight features; four are based on gray level and four are based on Hu moment invariants.
Abstract: The segmentation of retinal blood vessels in eye fundus images is a crucial stage in diagnosing diabetic retinopathy. Traditionally, the vascular network is mapped by hand in a time-consuming process that requires both training and skill. Automating the process allows consistency and, most importantly, frees up the time that a skilled technician or doctor would normally use for manual screening. Several studies have been carried out on the segmentation of blood vessels in general, but only a small number of them were related to retinal blood vessels. In this paper, an approach for segmenting retinal blood vessels is presented using only an ant colony system. It uses eight features: four based on gray level and four based on Hu moment invariants. The features are computed directly from the values of image pixels, so their computation takes about 90 s. The performance of this system is estimated using classification accuracy. The presented approach achieves an accuracy of 90.28 % and a sensitivity of 74 %.

Book ChapterDOI
03 Sep 2013
TL;DR: An IDS is built using Genetic Algorithms and Principal Component Analysis for feature selection, then some classification techniques are applied to the detected anomalies to define their classes; the results show that J48 mostly gives better results than the other classifiers, but for certain attacks Naive Bayes gives the best results.
Abstract: Malicious users are always trying to intrude into information systems, taking advantage of different system vulnerabilities. As the Internet grows, security limitations become more crucial in facing such threats. Intrusion Detection Systems (IDS) are common protection systems used to detect malicious activity from inside and outside users of a system. It is very important to increase the detection accuracy rate as much as possible and to obtain more information about detected attacks, as one of the drawbacks of an anomaly IDS is the lack of information about the attacks it detects. In this paper, an IDS is built using Genetic Algorithms (GA) and Principal Component Analysis (PCA) for feature selection, then some classification techniques are applied to the detected anomalies to define their classes. The results show that J48 mostly gives better results than the other classifiers, but for certain attacks Naive Bayes gives the best results.

Book ChapterDOI
01 Jan 2013
TL;DR: The impact of applying discretization when building a network IDS is addressed, and the impact on the quality of the classification algorithms when combining discretization with a genetic algorithm (GA) as a feature selection method for network IDS is explored.
Abstract: Intrusion detection systems (IDSs) are an essential key to network defense. Many classification algorithms have been proposed for the design of network IDS. Data preprocessing is a phase common to classification learning algorithms that improves network IDS performance. One of the important data preprocessing steps is discretization, where continuous features are converted into nominal ones. This paper addresses the impact of applying discretization when building a network IDS. Furthermore, it explores the impact on the quality of the classification algorithms when combining discretization with a genetic algorithm (GA) as a feature selection method for network IDS. In order to evaluate the performance of the introduced network IDS, several classification algorithms are used: rule-based classifiers (Ridor, Decision Table), tree classifiers (REPTree, C4.5, Random Forest) and the Naive Bayes classifier. Several groups of experiments are conducted and demonstrated on the NSL-KDD dataset. Experiments show that discretization has a positive influence on the time needed to classify test instances, which is an important factor if a real-time network IDS is desired.
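The simplest discretizer of the kind studied here is equal-width binning, which maps each continuous feature value to one of k nominal intervals. A minimal sketch:

```python
def equal_width_bins(values, k):
    """Equal-width discretization: map continuous values to k nominal bins,
    the kind of preprocessing whose effect on IDS classifiers is studied above."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0          # guard against a constant column
    return [min(int((v - lo) / width), k - 1) for v in values]

print(equal_width_bins([0.0, 0.1, 0.5, 0.9, 1.0], 4))   # -> [0, 0, 2, 3, 3]
```

Nominal bins let tree and rule learners test simple equality conditions instead of thresholds, which is one reason discretization speeds up classifying test instances.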

Proceedings Article
07 Nov 2013
TL;DR: Two improvements to a previous approach that uses an ant colony system for automatic segmentation of retinal blood vessels are proposed: adding a new discriminant feature to the feature pool used in classification, and applying a new heuristic function based on probability theory in the ant colony system.
Abstract: Diabetic retinopathy damages the retinal vessels, which lose their blood supply, causing blindness in a short time; early detection therefore prevents blindness in more than 50% of cases. Early detection can be achieved by automatic segmentation of retinal blood vessels in retinal images, which is a two-class classification problem. This paper proposes two improvements to a previous approach that uses an ant colony system for automatic segmentation of retinal blood vessels. The first improvement adds a new discriminant feature to the feature pool used in classification. The second improvement applies a new heuristic function based on probability theory in the ant colony system, instead of the old one based on Euclidean distance. The results of these improvements are promising when the improved approach is applied to the STARE database of retinal images.

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter introduces the fuzzy control approach for a dialysis session, a heuristic strategy based on expert rules, as fuzzy logic control, that can help to reach the desired performances, reducing undesired collateral effects and increasing the potentiality of the Dialysis session.
Abstract: This chapter introduces the fuzzy control approach for a dialysis session. Due to the complexity of the human system, classical control methods like PID can fail to reach the target, particularly with regard to stabilization of the system, which can induce sudden and undesired hypotensive collapses. To this end, a heuristic strategy based on expert rules, such as fuzzy logic control, can help reach the desired performance, reducing undesired collateral effects and increasing the potential of the dialysis session.

Journal ArticleDOI
TL;DR: This issue marks the first anniversary issue of IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS after it changed its name from IEEE TRANSACTIONS ON NEURAL NETWORKS; the journal had a great year.
Abstract: This issue marks the first anniversary issue of IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS after it changed its name from IEEE TRANSACTIONS ON NEURAL NETWORKS. I am happy to report that we had a great year! The number of new submissions in a year exceeded 1,000 for the first time in the history of TNN/TNNLS. IEEE TNN had a very successful development for 22 years from 1990 to 2011, and we have good reasons to believe that IEEE TNNLS will have many more years of successful growth.

Book ChapterDOI
01 Jan 2013
TL;DR: In this paper, the four retinal abnormalities (microaneurysms, haemorrhages, exudates, and cotton wool spots) are located in 100 color retinal images previously graded by an ophthalmologist, and a new automatic algorithm has been developed and applied to the 100 retinal images.
Abstract: Diabetic retinopathy (DR) is the leading cause of blindness in adults around the world today. Early detection (that is, screening) and timely treatment have been shown to prevent visual loss and blindness in patients with retinal complications of diabetes. The basis of the classification of different stages of diabetic retinopathy is the detection and quantification of blood vessels and hemorrhages present in the retinal image. In this paper, the four retinal abnormalities (microaneurysms, haemorrhages, exudates, and cotton wool spots) are located in 100 color retinal images, previously graded by an ophthalmologist. A new automatic algorithm has been developed and applied to 100 retinal images. Accuracy assessment of the classified output revealed the detection rate of the microaneurysms was 87% using the thresholding method, whereas the detection rate for the haemorrhages was 88%. On the other hand, the correct classification rate for microaneurysms and haemorrhages using the minimum distance classifier was 60% and 94% respectively. The thresholding method resulted in a correct detection rate for exudates and cotton wool spots of 93% and 89% respectively. The minimum distance classifier gave a correct rate for exudates and cotton wool spots of 95% and 86% respectively.
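The minimum distance classifier mentioned above assigns each candidate region to the abnormality class with the nearest mean feature vector. A sketch with hypothetical class means (illustrative, not the paper's trained values):

```python
import math

def min_distance_classify(x, class_means):
    """Minimum-distance classifier: assign the feature vector x to the
    class whose mean vector is nearest in Euclidean distance."""
    return min(class_means,
               key=lambda c: math.dist(x, class_means[c]))

# Hypothetical mean intensity/colour features per abnormality (illustrative):
means = {"microaneurysm": (0.2, 0.3), "haemorrhage": (0.7, 0.6), "exudate": (0.9, 0.9)}
print(min_distance_classify((0.65, 0.55), means))   # -> haemorrhage
```

Thresholding, by contrast, makes a per-pixel decision against a fixed cut-off, which is why the two methods in the abstract trade accuracy differently across the four abnormality types.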

Journal ArticleDOI
TL;DR: In this article, an adaptive network-based fuzzy inference system (ANFIS) was proposed for predicting intradialytic (Cint) and post-dialysis urea concentrations.

Journal ArticleDOI
TL;DR: The results suggest that the neuro-fuzzy technology, based on limited clinical parameters, is an excellent alternative method for accurately predicting arterial and venous urea concentrations in hemodialysis patients.
Abstract: The blood urea concentration has been used as a surrogate marker for toxin elimination in hemodialysed patients, and several indices based on it have been proposed in recent years for monitoring treatment adequacy. Measuring urea nitrogen concentrations at the inlet and outlet of the dialyser is crucial to evaluate the in-vivo blood-side dialyser urea clearance during hemodialysis. Although frequent measurement is needed to avoid inadequate dialysis efficiency, artificial intelligence can repeatedly perform the forecasting tasks and may be a satisfactory substitute for laboratory tests. Neuro-fuzzy technology represents a promising forecasting application in clinical medicine. In this study, two fuzzy models have been proposed to predict dialyser inlet and outlet urea concentrations in order to estimate dialyser clearance without blood sampling. The model is of the multi-input single-output (MISO) type, employing the multi-adaptive neuro-fuzzy inference system (MANFIS) technique. The performance of the model is validated by comparing the predicted results with the practical results obtained from confirmation experiments. The results suggest that neuro-fuzzy technology, based on limited clinical parameters, is an excellent alternative method for accurately predicting arterial and venous urea concentrations in hemodialysis patients. The proposed model can be used in an intelligent online adaptive system.

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter gives an overview of a neuro-fuzzy system design with novel applications in dialysis using an adaptive-network-based fuzzy inference system (ANFIS) for the modeling and predicting important variables in hemodialysis process.
Abstract: Soft computing techniques are known for their efficiency in dealing with complicated problems when conventional analytical methods are infeasible or too expensive and only sets of operational data are available. Its principal constituents are fuzzy logic, Artificial Neural Networks (ANN) and evolutionary computing, such as genetic algorithms. Neuro-fuzzy controllers constitute a class of hybrid soft computing techniques that use fuzzy logic and artificial neural networks. The advantages of combining ANN and a Fuzzy Inference System (FIS) are obvious. There are several approaches to integrating ANN and FIS, and very often the choice depends on the application. This chapter gives an overview of neuro-fuzzy system design with novel applications in dialysis, using an adaptive-network-based fuzzy inference system (ANFIS) for modeling and predicting important variables in the hemodialysis process.

Journal ArticleDOI
TL;DR: A novel method, Adaptive Neuro–Fuzzy Inference System (ANFIS) to predict the post–dialysis blood urea concentration is proposed and a comparative analysis suggests that the proposed modelling approach outperforms other traditional urea kinetic models (UKM).
Abstract: Dialysis dose (Kt/V) is mostly dependent on dialysis kinetic variables such as pre–dialysis and post–dialysis blood urea nitrogen concentration (Cpost), ultrafiltration (UF) volume, duration of the dialysis procedure, and urea distribution volume. Therefore, post–dialysis blood urea concentration is used to assess the dialysis efficiency. It gradually decreases to about 30% of the pre–dialysis value depending on the urea clearance rate during the period of dialysis. If the urea removal is inadequate, then dialysis is inadequate. This paper proposes a novel method, Adaptive Neuro–Fuzzy Inference System (ANFIS) to predict the post–dialysis blood urea concentration. The advantage of this neuro–fuzzy hybrid approach is that it does not require the model structure to be known a priori, in contrast to most of the urea kinetic modelling techniques. The accuracy of the ANFIS was prospectively compared with other traditional methods for predicting single pool dialysis dose (spKt/V). The results are highly promising, and a comparative analysis suggests that the proposed modelling approach outperforms other traditional urea kinetic models (UKM).
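For context, the second-generation Daugirdas formula is a widely used traditional estimate of spKt/V of the kind the ANFIS model is compared against; the session values below are illustrative, not from the study:

```python
import math

def sp_ktv_daugirdas(c_pre, c_post, t_hours, uf_litres, weight_kg):
    """Single-pool Kt/V from the second-generation Daugirdas formula:
        spKt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W,  with R = Cpost/Cpre,
    t in hours, UF in litres, W = post-dialysis weight in kg."""
    r = c_post / c_pre
    return -math.log(r - 0.008 * t_hours) + (4 - 3.5 * r) * uf_litres / weight_kg

# Illustrative session: 70% urea reduction over 4 h, 2 L ultrafiltration, 70 kg.
print(round(sp_ktv_daugirdas(100, 30, 4, 2.0, 70.0), 3))   # -> about 1.40
```

The logarithmic term captures the exponential fall of urea in a single pool, while the UF/W term corrects for the extra clearance due to ultrafiltration.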

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter discusses the double pool urea kinetic models and regional blood flow models in order to understand the concept of urea rebound, an important nutritional measure that is clinically monitored in dialysis patients.
Abstract: Urea kinetic modelling (UKM) has been generally accepted as a method for quantifying hemodialysis (HD) treatment. During hemodialysis, the reduction in urea concentration in the intracellular fluid (ICF) compartment lags behind that in the extracellular fluid (ECF) compartment, and after the end of dialysis a "rebound" in the blood urea level occurs: the concentration continues to rise as urea diffuses from the ICF to the ECF to establish equilibrium. Because of these compartment effects, the dose of dialysis with regard to urea removal is significantly overestimated from immediate post-dialysis urea concentrations, since 30 to 60 minutes are required for the concentration gradients to dissipate and for urea concentrations to equilibrate across the body water spaces. To avoid the delay of waiting for an equilibrated post-dialysis sample, it became necessary to describe and quantify the effects that cause urea compartmentalization during dialysis; two-pool modeling approaches have therefore been developed that more accurately reflect the amount of urea removed. This in turn gives more adequate measures not only of dialysis adequacy but also of the protein catabolic rate, an important nutritional measure that is clinically monitored in dialysis patients. This chapter discusses the double-pool urea kinetic models and regional blood flow models in order to explain the concept of urea rebound.
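The compartment effect and post-dialysis rebound described above can be sketched with a generic two-pool simulation. This is an illustration, not the specific models in the chapter; the volumes, clearances and the neglect of urea generation are all simplifying assumptions.

```python
def simulate_two_pool(c0=100.0, vi=25.0, ve=15.0, kc=0.8, kd=0.25,
                      t_dial=240.0, t_post=60.0, dt=0.1):
    """Forward-Euler simulation of a two-pool urea model.

    vi, ve : intracellular / extracellular water volumes (L)
    kc     : intercompartmental urea clearance (L/min)
    kd     : dialyzer urea clearance (L/min), active only during dialysis
    Urea generation is neglected over this short time scale.
    Returns (ECF urea at end of dialysis, ECF urea after the rebound period).
    """
    ci = ce = c0  # urea concentration (mg/dL) in ICF and ECF

    def step(dialyzing):
        nonlocal ci, ce
        diff = kc * (ci - ce)                    # ICF -> ECF diffusive flux
        removal = kd * ce if dialyzing else 0.0  # dialyzer removal from ECF
        ci -= diff / vi * dt
        ce += (diff - removal) / ve * dt

    for _ in range(int(t_dial / dt)):
        step(True)
    ce_end = ce
    for _ in range(int(t_post / dt)):
        step(False)
    return ce_end, ce

ce_end, ce_rebound = simulate_two_pool()
# ce_rebound > ce_end: blood urea rises after dialysis ends (urea rebound)
```

The simulated rebound is exactly why immediate post-dialysis samples overestimate the delivered dose; equilibrated samples, or two-pool models such as this, correct for it.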

Book ChapterDOI
01 Jan 2013
TL;DR: In a given treatment modality, the performance characteristics of the dialyzer determine the quantity and nature of uremic toxins removed from the patient’s blood, provided that an adequate treatment time and flow conditions are prescribed.
Abstract: In a given treatment modality, the performance characteristics of the dialyzer determine the quantity and nature of uremic toxins removed from the patient’s blood, provided that an adequate treatment time and flow conditions are prescribed. Dialyzer selection may be the most difficult task facing a dialysis facility. Practitioners must understand the functions of a dialyzer, membrane biocompatibility, the implications of poor technique, the financial and quality implications of dialyzer reprocessing, and how to match the patient to the dialyzer’s capabilities. Dialyzer membranes are a vital contributor to the success or failure of hemodialysis therapies and hemodialysis adequacy. Matching a dialyzer to patient requirements is crucial to meet the prescribed clearance goals.

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter will provide an overview of the SD modeling and simulation methodology as well as provide more detailed steps related to building and validating SD models.
Abstract: System dynamics (SD) is a powerful simulation method that is ideal for modeling human processes, as well as many processes that occur within the healthcare system. The SD approach has been used and validated extensively in a wide array of fields and industries. This chapter will provide an overview of the SD modeling and simulation methodology as well as more detailed steps related to building and validating SD models. Finally, this chapter will show a specific set of SD models related to dialysis: Kidney and Transplant Patients, Hypertension Patient Flow, and Organ Donation and Transplantation.

Book ChapterDOI
01 Jan 2013
TL;DR: The aims of this chapter are to give an overview of single pool urea kinetic modeling and to introduce concepts and methods needed to manage the approaches available to estimate the single pool Kt/V.
Abstract: Hemodialysis (HD) is one of the treatments included in what is called renal replacement therapy (RRT). Like every treatment, hemodialysis has its dose. How to quantify this hemodialysis dose was one of the results of the National Cooperative Dialysis Study (NCDS) published in 1983. A formula based on urea kinetic modeling (UKM) was developed: the dimensionless quantity Kt/V, where K is the dialyzer clearance rate of urea (volume of plasma cleared per unit time), t is the duration of the dialysis session and V is the urea distribution volume (the total body water volume). Because of the complexity of urea kinetic modeling, a number of shortcut methods for estimating Kt/V have been proposed. The aims of this chapter are twofold: 1) to give an overview of single-pool urea kinetic modeling, and 2) to introduce the concepts and methods needed to apply the available approaches for estimating the single-pool Kt/V.
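One widely used shortcut of the kind mentioned above is the second-generation Daugirdas formula, spKt/V = -ln(R - 0.008·t) + (4 - 3.5·R)·UF/W, where R = Cpost/Cpre, t is the session length in hours, UF the ultrafiltrate volume in liters and W the post-dialysis weight in kg. A direct transcription, with illustrative inputs:

```python
import math

def sp_ktv_daugirdas(cpre, cpost, t_hours, uf_liters, weight_kg):
    """Second-generation Daugirdas estimate of single-pool Kt/V.

    R = Cpost/Cpre; the 0.008*t term accounts for intradialytic urea
    generation, the UF/W term for convective removal by ultrafiltration.
    """
    r = cpost / cpre
    return -math.log(r - 0.008 * t_hours) + (4.0 - 3.5 * r) * uf_liters / weight_kg

# Illustrative session: Cpre 100 -> Cpost 30 mg/dL, 4 h, 3 L removed, 70 kg
ktv = sp_ktv_daugirdas(100.0, 30.0, 4.0, 3.0, 70.0)
# ktv is about 1.44, above the commonly cited adequacy target of 1.2
```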

Book ChapterDOI
01 Jan 2013
TL;DR: A direct relationship between RRF and survival in dialysis patients has now been demonstrated; longer preservation of RRF provides better small- and middle-molecule removal, improved volemic status and arterial pressure control, and a diminished risk of vascular and valvular calcification due to better phosphate removal.
Abstract: Chronic kidney disease is a worldwide public health problem with increasing incidence and prevalence, poor outcomes, and high cost. Outcomes of chronic kidney disease include not only kidney failure but also complications of decreased kidney function and cardiovascular disease. Current evidence suggests that some of these adverse outcomes can be prevented or delayed by early detection and treatment. Residual renal function (RRF) among patients with end-stage renal disease is clinically important, as it contributes to dialysis adequacy, quality of life, morbidity and mortality. The preservation of RRF is important after initiating dialysis, as well as in the pre-dialysis period. Longer preservation of RRF provides better small- and middle-molecule removal, improved volemic status and arterial pressure control, and a diminished risk of vascular and valvular calcification due to better phosphate removal. Deterioration of RRF results in worsening of anemia, inflammation and malnutrition. A direct relationship between RRF and survival in dialysis patients has now been demonstrated.

Book ChapterDOI
01 Jan 2013
TL;DR: In this chapter the physiological basis of indicator dilution is briefly summarized with regard to application in hemodialysis considering the limitations as well as the possibilities for integration and automation.
Abstract: Low access blood flow has been recognized as the most important cause of access thrombosis and subsequent access failure, so some form of access flow surveillance is recommended in everyday practice. The classic technique for measuring flow in physiology is based on indicator dilution, as most flow rates are inaccessible to direct measurement. However, extracorporeal blood purification techniques have been designed for the controlled removal and/or delivery of solutes, all of which can be used as indicators to measure selected transport characteristics throughout the intra- and extracorporeal system. It is therefore not surprising that extracorporeal techniques are extremely well suited to access flow monitoring methods based on indicator dilution, both because these techniques can be integrated into the extracorporeal system as part of the purification process and because these procedures have the potential to be fully automated. In this chapter the physiological basis of indicator dilution is briefly summarized with regard to its application in hemodialysis, considering the limitations as well as the possibilities for integration and automation.
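The indicator-dilution principle the chapter builds on is the Stewart-Hamilton relation: flow Q equals the injected indicator amount divided by the area under the downstream concentration-time curve, Q = m / ∫c(t)dt. A minimal sketch (the function name and sample data are illustrative):

```python
def flow_from_dilution(indicator_amount, concentrations, dt):
    """Stewart-Hamilton flow estimate: Q = m / integral of c(t) dt.

    indicator_amount : injected indicator mass (e.g. mg)
    concentrations   : downstream concentration samples (mg/mL), spacing dt (s)
    Returns flow in mL/s, using trapezoidal integration of the curve.
    """
    area = sum((concentrations[i] + concentrations[i + 1]) / 2.0 * dt
               for i in range(len(concentrations) - 1))
    return indicator_amount / area

# 1 mg injected; a flat 0.01 mg/mL curve lasting 10 s -> Q = 10 mL/s
q = flow_from_dilution(1.0, [0.01] * 11, 1.0)
```

Practical access-flow monitoring in hemodialysis uses solute or ultrafiltration perturbations of the extracorporeal circuit as the indicator rather than a bolus injection, but it rests on the same mass-balance idea.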

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter covers the purpose of water purification before it is used for dialysis, the components of a water treatment system, how the system is monitored, and the common contaminants found in water.
Abstract: Drinking water contains chemical, microbiological, and other contaminants. A healthy adult drinks about 10-12 liters of water per week; this water crosses the selective barrier of the gastrointestinal tract, and excess chemicals are removed by the healthy kidney. In contrast, with a typical three-times-a-week hemodialysis protocol, a dialysis patient is exposed to more than 300 liters of water weekly; the water passes through the nonselective dialyzer membrane, and there is no kidney to maintain the normal balance of chemicals. Moreover, the highly permeable high-flux membranes used today increase the risk of a greater load of contaminants passing through the membrane and into the blood. Some common contaminants have been shown to be injurious to patients. Thus, the water for dialysis must be purified of these contaminants before it is used by the proportioning system of the dialysis machine to make the final dialysate. This chapter covers the purpose of water purification before its use for dialysis. It describes the components of a water treatment system, how the system is monitored, and the common contaminants found in water.