
Showing papers in "Knowledge Based Systems in 2010"


Journal ArticleDOI
TL;DR: Training in communication skills and problem-solving abilities would foster positive interaction that leads patients to trust medical staff, and satisfaction would increase when the trusted medical staff provide professionally competent health care to patients.
Abstract: Since the National Health Insurance program formally went into effect in March 1995 in Taiwan, residents have enjoyed high-quality yet relatively inexpensive medical care compared with most developed countries. To manage a hospital successfully, the important goals are to attract and then retain as many patients as possible by meeting the potential demands of various kinds of patients. This study first conducted a survey based on the SERVQUAL model to identify seven major criteria from the viewpoints of patients and their families at Show Chwan Memorial Hospital in Changhua City, Taiwan. Once the key criteria were found, a second survey, developed for applying the decision-making trial and evaluation laboratory (DEMATEL) method, was issued to the hospital management to evaluate the importance of the criteria and construct the causal relations among them. The results show that trusted medical staff with professional competence in health care is the most important criterion and mutually affects service personnel with good communication skills, service personnel with immediate problem-solving abilities, detailed description of the patient's condition by the medical doctor, and medical staff with professional abilities. Therefore, training in communication skills and problem-solving abilities would foster positive interaction that leads patients to trust medical staff. When the trusted medical staff provide professionally competent health care to patients, satisfaction would increase.
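For orientation, the core DEMATEL computation the second survey feeds into can be sketched as follows. This is a minimal Python sketch of the standard method (normalize the direct-influence matrix by its largest row sum, then compute the total-relation matrix T = D(I − D)⁻¹); the paper's survey scales and any variant details are not reproduced here.

```python
def invert(a):
    """Gauss-Jordan inversion for a small square matrix (list of lists)."""
    n = len(a)
    m = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def dematel_total_relation(x):
    """Direct-influence matrix x -> total-relation matrix T = D(I - D)^-1,
    where D is x normalized by its largest row sum (a common convention)."""
    n = len(x)
    s = max(sum(row) for row in x)
    d = [[v / s for v in row] for row in x]
    i_minus_d = [[(1.0 if i == j else 0.0) - d[i][j] for j in range(n)] for i in range(n)]
    return matmul(d, invert(i_minus_d))
```

Row sums of T measure the influence a criterion dispatches and column sums the influence it receives; their sum and difference are what rank criteria by importance and split them into cause and effect groups.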

522 citations


Journal ArticleDOI
Guiwu Wei1
TL;DR: An optimization model based on the basic idea of the traditional grey relational analysis (GRA) method is established, by which the attribute weights can be determined, and an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
Abstract: The aim of this paper is to investigate multiple attribute decision-making problems with intuitionistic fuzzy information, in which the information about attribute weights is incompletely known and the attribute values take the form of intuitionistic fuzzy numbers. In order to get the weight vector of the attributes, we establish an optimization model based on the basic idea of the traditional grey relational analysis (GRA) method, by which the attribute weights can be determined. Then, based on the traditional GRA method, calculation steps for solving intuitionistic fuzzy multiple attribute decision-making problems with incompletely known weight information are given. The degrees of grey relation between every alternative and the positive-ideal and negative-ideal solutions are calculated. Then, a relative relational degree is defined to determine the ranking order of all alternatives by calculating the degrees of grey relation to both the positive-ideal solution (PIS) and negative-ideal solution (NIS) simultaneously. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
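The grey relational machinery the paper builds on can be sketched with crisp numbers. The paper itself works with intuitionistic fuzzy values, so this is only the classical skeleton, using the usual distinguishing coefficient ρ = 0.5:

```python
def grey_relational_grades(matrix, ref, rho=0.5):
    """Grey relational grade of each alternative (row of matrix) against a
    reference sequence, via the classic coefficient with factor rho."""
    deltas = [[abs(a - r) for a, r in zip(row, ref)] for row in matrix]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    coeffs = [[(dmin + rho * dmax) / (d + rho * dmax) for d in row] for row in deltas]
    return [sum(row) / len(row) for row in coeffs]

def relative_relational_degree(g_pos, g_neg):
    """Rank alternatives by grade w.r.t. the PIS relative to both ideals."""
    return [gp / (gp + gn) for gp, gn in zip(g_pos, g_neg)]
```

An alternative scoring high here is simultaneously close to the positive-ideal solution and far from the negative-ideal one, which is the ranking principle the abstract describes.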

407 citations


Journal ArticleDOI
TL;DR: Results show that the proposed approach outperforms all previous methods, so it can be considered as a suitable tool for stock price forecasting problems.
Abstract: Stock market prediction is regarded as a challenging task in financial time-series forecasting. The central idea of successful stock market prediction is achieving the best results using the minimum required input data and the least complex stock market model. To achieve these purposes, this article presents an integrated approach based on genetic fuzzy systems (GFS) and artificial neural networks (ANN) for constructing a stock price forecasting expert system. First, we use stepwise regression analysis (SRA) to determine the factors which have the most influence on stock prices. At the next stage, we divide our raw data into k clusters by means of self-organizing map (SOM) neural networks. Finally, all clusters are fed into independent GFS models with the ability of rule base extraction and database tuning. We evaluate the capability of the proposed approach by applying it to stock price data gathered from the IT and Airlines sectors, and compare the outcomes with previous stock price forecasting methods using the mean absolute percentage error (MAPE). Results show that the proposed approach outperforms all previous methods, so it can be considered a suitable tool for stock price forecasting problems.
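The evaluation measure used above is standard; a minimal sketch:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent.
    Actual values must be non-zero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```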

325 citations


Journal ArticleDOI
TL;DR: A consensus model to help experts in all phases of the consensus reaching process in group decision-making problems in an unbalanced fuzzy linguistic context with incomplete information is presented and it allows to achieve consistent solutions with a great level of agreement.
Abstract: To solve group decision-making problems we have to take in account different aspects. On the one hand, depending on the problem, we can deal with different types of information. In this way, most group decision-making problems based on linguistic approaches use symmetrically and uniformly distributed linguistic term sets to express experts' opinions. However, there exist problems whose assessments need to be represented by means of unbalanced linguistic term sets, i.e., using term sets which are not uniformly and symmetrically distributed. On the other hand, there may be cases in which experts do not have an in-depth knowledge of the problem to be solved. In such cases, experts may not put their opinion forward about certain aspects of the problem and, as a result, they may present incomplete information. The aim of this paper is to present a consensus model to help experts in all phases of the consensus reaching process in group decision-making problems in an unbalanced fuzzy linguistic context with incomplete information. As part of this consensus model, we propose an iterative procedure using consistency measures to estimate the incomplete information. In addition, the consistency measures are used together with consensus measures to guided the consensus model. The main novelty of this consensus model is that it supports the management of incomplete unbalanced fuzzy linguistic information and it allows to achieve consistent solutions with a great level of agreement.

289 citations


Journal ArticleDOI
TL;DR: A new metric is presented which combines the numerical information of the votes with independent information from those values, based on the proportions of the common and uncommon votes between each pair of users, which is superior to the traditional levels of accuracy.
Abstract: Recommender systems are typically provided as Web 2.0 services and are part of the range of applications that support large-scale social networks, enabling on-line recommendations to be made based on the use of networked databases. The operating core of recommender systems is the collaborative filtering stage, which, in current user-to-user recommender processes, usually uses the Pearson correlation metric. In this paper, we present a new metric which combines the numerical information of the votes with information independent of those values, based on the proportions of the common and uncommon votes between each pair of users. Likewise, we define the reasoning and experiments on which the design of the metric is based, and the restriction that it applies to recommender systems where the possible range of votes is no greater than 5. In order to demonstrate the superiority of the proposed metric, we provide the comparative results of a set of experiments based on the MovieLens, FilmAffinity and NetFlix databases. In addition to the traditional levels of accuracy, results are also provided on the metrics' coverage, the percentage of hits obtained and the precision/recall.
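The Pearson correlation metric that the proposed metric is compared against can be sketched as follows. This is the standard user-to-user collaborative filtering baseline over co-rated items, not the paper's new metric:

```python
from math import sqrt

def pearson(u, v):
    """Pearson correlation between two users' rating dicts {item: rating},
    computed over their co-rated items only (the usual CF baseline)."""
    common = [i for i in u if i in v]
    n = len(common)
    if n < 2:
        return 0.0
    mu = sum(u[i] for i in common) / n
    mv = sum(v[i] for i in common) / n
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    den = (sqrt(sum((u[i] - mu) ** 2 for i in common))
           * sqrt(sum((v[i] - mv) ** 2 for i in common)))
    return num / den if den else 0.0
```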

277 citations


Journal ArticleDOI
TL;DR: This paper presents a new fuzzy linguistic recommender system that facilitates the acquisition of the user preferences to characterize the user profiles and includes tools to manage incomplete information when the users express their preferences.
Abstract: As on the Web, the growth of information is the main problem of academic digital libraries. Thus, similar tools could be applied in university digital libraries to facilitate information access by students and teachers. In [46] we presented a fuzzy linguistic recommender system to advise on research resources in university digital libraries. The drawback of this system is that the user profiles are provided directly by the users themselves, and the process for acquiring user preferences is quite difficult because it requires too much user effort. In this paper we present a new fuzzy linguistic recommender system that facilitates the acquisition of the user preferences to characterize the user profiles. We allow users to provide their preferences by means of incomplete fuzzy linguistic preference relations. We include tools to manage incomplete information when the users express their preferences, and, in this way, we show that the acquisition of the user profiles is improved.

207 citations


Journal ArticleDOI
TL;DR: A novel correlation based memetic framework (MA-C) which is a combination of genetic algorithm (GA) and local search (LS) using correlation based filter ranking is proposed in this paper and outperforms recent existing methods in the literature in terms of classification accuracy, selected feature size and efficiency.
Abstract: A novel correlation based memetic framework (MA-C), which is a combination of genetic algorithm (GA) and local search (LS) using correlation based filter ranking, is proposed in this paper. The local filter method used here fine-tunes the population of GA solutions by adding or deleting features based on the Symmetrical Uncertainty (SU) measure. The focus here is on filter methods that are able to assess the goodness or ranking of the individual features. An empirical study of MA-C on several commonly used large-scale gene expression datasets indicates that it outperforms recent existing methods in the literature in terms of classification accuracy, selected feature size and efficiency. Further, we also investigate the balance between local and genetic search to maximize the search quality and efficiency of MA-C.
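The Symmetrical Uncertainty (SU) measure used by the local filter is standard; a minimal sketch over discrete feature/class columns:

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)), in [0, 1],
    where IG is the information gain H(X) + H(Y) - H(X, Y)."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    ig = hx + hy - entropy(list(zip(x, y)))
    return 2.0 * ig / (hx + hy)
```

SU is 1 for perfectly correlated features and 0 for independent ones, which is what makes it usable both for ranking features and for deciding which to add or delete.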

187 citations


Journal ArticleDOI
TL;DR: An online ensemble of classifiers that combines an incremental version of Naive Bayes, the 1-NN and the WINNOW algorithms using the voting methodology is proposed and it was found that the proposed algorithm is the most appropriate to be used for the construction of a software support tool.
Abstract: The ability to predict a student's performance could be useful in a great number of ways associated with university-level distance learning. Students' marks on a few written assignments can constitute the training set for a supervised machine learning algorithm. Along with the explosive increase of data and information, incremental learning ability has become more and more important for machine learning approaches. Online algorithms try to forget irrelevant information instead of synthesizing all available information (as opposed to classic batch learning algorithms). Combining classifiers has recently been proposed as a direction for improving classification accuracy; however, most ensemble algorithms operate in batch mode. We therefore propose an online ensemble of classifiers that combines an incremental version of Naive Bayes, the 1-NN and the WINNOW algorithms using the voting methodology. Among other significant conclusions, it was found that the proposed algorithm is the most appropriate for the construction of a software support tool.
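Of the three ensemble members, WINNOW is the least widely known; below is a minimal sketch of its standard multiplicative update. The threshold n and promotion factor 2 are the textbook defaults, not necessarily the paper's settings:

```python
def winnow_train(stream, n, alpha=2.0):
    """Train WINNOW online on (binary feature vector, 0/1 label) pairs.
    Predict 1 iff w . x >= n; on a mistake, multiply (label 1) or divide
    (label 0) the weights of the active features by alpha."""
    w = [1.0] * n
    for x, y in stream:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= n else 0
        if pred != y:
            factor = alpha if y == 1 else 1.0 / alpha
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w

def winnow_predict(w, x, n):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= n else 0
```

Because each example is seen once and then discarded, the learner fits the online setting described above, and its 0/1 prediction can be combined with the other members' outputs by simple majority vote.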

180 citations


Journal ArticleDOI
TL;DR: Based on five trust networks obtained from real online sites, it is verified that the trust network is a small-world network: the nodes are highly clustered, while the distance between two randomly selected nodes is short.
Abstract: The trust network is a social network whose nodes are inter-linked by their trust relations. It has been widely used in various applications; however, little is known about its structure due to its highly dynamic nature. Based on five trust networks obtained from real online sites, we verify that the trust network is a small-world network: the nodes are highly clustered, while the distance between two randomly selected nodes is short. This has considerable implications for using the trust network in trust-aware applications. We choose the trust-aware recommender system as an example of such applications and demonstrate its advantages by making use of the verified small-world nature of the trust network.
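The two small-world ingredients are standard graph statistics; a minimal sketch of the local clustering coefficient (average shortest path length, the other ingredient, is computed analogously via breadth-first search):

```python
def local_clustering(adj, v):
    """Fraction of pairs of v's neighbours that are themselves linked.
    adj maps each node to a set of neighbours (undirected graph)."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))
```

A network is called small-world when this coefficient (averaged over nodes) is much higher than in a random graph of the same size while the average node-to-node distance stays comparably short.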

168 citations


Journal ArticleDOI
TL;DR: This paper proposes a new technique called maximum dependency attributes (MDA) for selecting a clustering attribute for categorical data, based on rough set theory and taking into account the dependency of attributes in the database.
Abstract: A few clustering techniques exist to group objects with similar characteristics in categorical data. Some are able to handle uncertainty in the clustering process, while others have stability issues. However, the performance of these techniques is an issue due to low accuracy and high computational complexity. This paper proposes a new technique called maximum dependency attributes (MDA) for selecting a clustering attribute. The proposed approach is based on rough set theory and takes into account the dependency of attributes in the database. We analyze and compare the performance of the MDA technique with the bi-clustering, total roughness (TR) and min-min roughness (MMR) techniques on four test cases. The results establish the better performance of the proposed approach.
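The rough set dependency degree underlying MDA is standard; a minimal sketch (the exact MDA selection rule in the paper may differ):

```python
from collections import defaultdict

def dependency_degree(table, cond, dec):
    """gamma_P(D): size of the positive region of decision attribute dec
    w.r.t. condition attributes cond, divided by |U|.
    table is a list of row dicts; objects are grouped into equivalence
    classes by their values on cond, and a class belongs to the positive
    region iff all its objects agree on dec."""
    classes = defaultdict(list)
    for row in table:
        classes[tuple(row[a] for a in cond)].append(row)
    pos = sum(len(c) for c in classes.values()
              if len({r[dec] for r in c}) == 1)
    return pos / len(table)
```

An attribute whose dependency degree with respect to the others is maximal determines the partition most consistently, which is the intuition behind choosing it as the clustering attribute.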

163 citations


Journal ArticleDOI
TL;DR: Real applications indicate that the presented FMP model and the Decider software are able to effectively handle fuzziness in both subjective and objective information and support group decision-making under multi-level criteria with a higher level of satisfaction by decision makers.
Abstract: Multi-criteria group decision making (MCGDM) aims to support preference-based decision over the available alternatives that are characterized by multiple criteria in a group. To increase the level of overall satisfaction for the final decision across the group and deal with uncertainty in decision process, a fuzzy MCGDM process (FMP) model is established in this study. This FMP model can also aggregate both subjective and objective information under multi-level hierarchies of criteria and evaluators. Based on the FMP model, a fuzzy MCGDM decision support system (called Decider) is developed, which can handle information expressed in linguistic terms, boolean values, as well as numeric values to assess and rank a set of alternatives within a group of decision makers. Real applications indicate that the presented FMP model and the Decider software are able to effectively handle fuzziness in both subjective and objective information and support group decision-making under multi-level criteria with a higher level of satisfaction by decision makers.

Journal ArticleDOI
TL;DR: The proposed approach and a new method for interval comparison based on DST allow us to solve the multiple criteria decision-making problem without intermediate defuzzification when not only the criteria but also their weights are intuitionistic fuzzy values.
Abstract: This paper presents a new interpretation of intuitionistic fuzzy sets in the framework of the Dempster-Shafer theory of evidence (DST). This interpretation makes it possible to represent all mathematical operations on intuitionistic fuzzy values as operations on belief intervals. Such an approach allows us to directly use Dempster's rule of combination to aggregate local criteria presented by intuitionistic fuzzy values in the decision-making problem. The usefulness of the developed method is illustrated with a known example of a multiple criteria decision-making problem. The proposed approach and a new method for interval comparison based on DST allow us to solve the multiple criteria decision-making problem without intermediate defuzzification when not only the criteria but also their weights are intuitionistic fuzzy values.
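The mapping from an intuitionistic fuzzy value to a belief interval is the crux of the interpretation; a minimal sketch of the commonly used correspondence (membership μ as belief, 1 minus non-membership ν as plausibility):

```python
def belief_interval(mu, nu):
    """Map an intuitionistic fuzzy value (mu, nu), with mu + nu <= 1,
    to the belief interval [Bel, Pl] = [mu, 1 - nu]; the hesitation
    degree 1 - mu - nu becomes the interval's width."""
    if mu + nu > 1:
        raise ValueError("not a valid intuitionistic fuzzy value")
    return (mu, 1.0 - nu)
```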

Journal ArticleDOI
TL;DR: For two universal sets U and V, the concept of a solitary set is defined for any binary relation from U to V, and further properties that are interesting and valuable in the theory of rough sets are studied.
Abstract: For two universal sets U and V, we define the concept of a solitary set for any binary relation from U to V. Through the solitary sets, we study further properties that are interesting and valuable in the theory of rough sets. As an application of crisp rough set models on two universal sets, we find solutions of simultaneous Boolean equations by means of rough set methods. We also study the connection between rough set theory and the Dempster-Shafer theory of evidence. In particular, we extend some results to arbitrary binary relations on two universal sets, not just serial binary relations. We consider similar problems in the fuzzy environment and give an example of the application of fuzzy rough sets to multiple criteria decision making in the case of clothes.

Journal ArticleDOI
TL;DR: This paper proposes a heuristic algorithm to transform size constrained clustering problems into integer linear programming problems and demonstrates that this approach can utilize cluster size constraints and lead to the improvement of clustering accuracy.
Abstract: Data clustering is an important and frequently used unsupervised learning method. Recent research has demonstrated that incorporating instance-level background information to traditional clustering algorithms can increase the clustering performance. In this paper, we extend traditional clustering by introducing additional prior knowledge such as the size of each cluster. We propose a heuristic algorithm to transform size constrained clustering problems into integer linear programming problems. Experiments on both synthetic and UCI datasets demonstrate that our proposed approach can utilize cluster size constraints and lead to the improvement of clustering accuracy.
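The paper solves the problem exactly via integer linear programming; as a stand-in that needs no ILP solver, the brute-force sketch below illustrates the same objective and size constraints on tiny one-dimensional instances (it is not the paper's algorithm):

```python
from itertools import product

def size_constrained_assignment(points, centers, sizes):
    """Exhaustively find the assignment of points to centres that minimizes
    total squared distance, subject to exact cluster sizes.
    Brute force over all assignments: tiny inputs only."""
    best, best_cost = None, float("inf")
    k = len(centers)
    for assign in product(range(k), repeat=len(points)):
        if [assign.count(c) for c in range(k)] != sizes:
            continue
        cost = sum((points[i] - centers[a]) ** 2 for i, a in enumerate(assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

In the ILP formulation, the 0/1 variable x[i][k] ("point i in cluster k") carries the same cost coefficients, with one constraint per point (assigned exactly once) and one per cluster (its size equals the prescribed count).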

Journal ArticleDOI
Yuhua Qian1, Jiye Liang1, Deyu Li1, Feng Wang1, Nannan Ma1 
TL;DR: The main objective of this study is to extend two kinds of attribute reducts, called a lower approximation reduct and an upper approximation reduct, which preserve the lower/upper approximation distribution of a target decision.
Abstract: This article deals with approaches to attribute reduction in inconsistent incomplete decision tables. The main objective of this study is to extend two kinds of attribute reducts, called a lower approximation reduct and an upper approximation reduct, which preserve the lower/upper approximation distribution of a target decision. Several judgement theorems for a lower/upper approximation consistent set in an inconsistent incomplete decision table are deduced. The discernibility matrices associated with the two approximation reducts are then examined as well, from which we can obtain approaches to attribute reduction of an incomplete decision table in rough set theory.

Journal ArticleDOI
TL;DR: The computational results show that the proposed EM for scheduling the flow shop problem, which minimizes the makespan and total weighted tardiness and considers transportation times between machines and stage skipping, outperforms SA and the other heuristics applied in this paper.
Abstract: This paper presents an efficient meta-heuristic algorithm based on the electromagnetism-like mechanism (EM), which has been successfully implemented in a few combinatorial problems. We propose the EM for scheduling the flow shop problem that minimizes the makespan and total weighted tardiness and considers transportation times between machines and stage skipping (i.e., some jobs may not need to be processed on all the machines). To show the efficiency of the proposed algorithm, we also apply simulated annealing (SA) and some other well-recognized constructive heuristics, such as SPT, NEH, the (g/2, g/2) Johnson's rule, EWDD, SLACK, and NEH_EWDD, to the given problems. To evaluate the performance and robustness of our proposed EM, we experiment with a number of test problems. Our computational results show that our proposed EM outperforms SA and the other foregoing heuristics in almost all cases.
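The makespan objective for a permutation flow shop can be sketched as follows; transportation times and stage skipping, which the paper adds, are omitted here:

```python
def makespan(proc, order):
    """Completion time of the last job on the last machine for a permutation
    flow shop. proc[j][m] is job j's processing time on machine m; a job
    starts on machine m when both the machine and the job are free."""
    m = len(proc[0])
    c = [0.0] * m  # completion time of the latest scheduled job on each machine
    for j in order:
        c[0] += proc[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + proc[j][k]
    return c[-1]
```

Any metaheuristic for this problem, EM and SA included, spends most of its time evaluating candidate job permutations through exactly this kind of recurrence.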

Journal ArticleDOI
TL;DR: A new method called Maximum Capturing is proposed for document clustering based on frequency sensitive competitive learning; it performs better than the CFWS, CMS, FTC and FIHC methods in clustering.
Abstract: Frequent itemsets originate from association rule mining. Recently, they have been applied in text mining tasks such as document categorization and clustering. In this paper, we conduct a study on text clustering using frequent itemsets. The main contribution of this paper is threefold. First, we present a review of existing methods of document clustering using frequent patterns. Second, a new method called Maximum Capturing is proposed for document clustering. Maximum Capturing includes two procedures: constructing document clusters and assigning cluster topics. We develop three versions of Maximum Capturing based on three similarity measures. We propose a normalization process based on frequency sensitive competitive learning for Maximum Capturing to merge cluster candidates into a predefined number of clusters. Third, experiments are carried out to evaluate the proposed method in comparison with the CFWS, CMS, FTC and FIHC methods. Experimental results show that, in clustering, Maximum Capturing performs better than the other methods mentioned above. In particular, Maximum Capturing with representation using individual words and an asymmetrical binary similarity measure achieves the best performance. Moreover, the topics produced by Maximum Capturing distinguish clusters from each other and can be used as labels of document clusters.

Journal ArticleDOI
TL;DR: A decision support system (DSS) based on the fuzzy information axiom (FIA) is developed to make this decision procedure easy and to help decision makers solve their decision problems by modifying the database of the program.
Abstract: The information axiom, one of the two axioms of the axiomatic design methodology proposed to improve a design, is used to select the best design among proposed designs. In the literature, there are many studies related to the use of the information axiom for the solution of decision-making problems, and its applications have been increasing day by day. However, the calculation procedure of the information axiom is not only cumbersome but also difficult for decision makers. In this paper, a decision support system (DSS) based on the fuzzy information axiom (FIA) is developed in order to make this decision procedure easy. The developed system consists of a knowledge base module including facts and rules, an inference engine module including the FIA and an aggregation method, and a user interface module including data-entry windows. The main aim of this study is to present a DSS tool that helps decision makers solve their decision problems by modifying the database of the program. An application, the optimal selection of a location for an emergency service, is presented to illustrate the implementation of the proposed model.

Journal ArticleDOI
TL;DR: The results show that training an SVM using CPSO is feasible, that the proposed fault classification model outperforms a neural network trained by chaos particle swarm optimization and a least squares support vector machine, and that the precision and reliability of the fault classification results can meet the requirements of practical application.
Abstract: A novel method of training a support vector machine (SVM) by using chaos particle swarm optimization (CPSO) is proposed. A multi-fault classification model based on the SVM trained by CPSO is established and applied to the fault diagnosis of rotating machines. The results show that training the SVM using CPSO is feasible, that the proposed fault classification model outperforms a neural network trained by chaos particle swarm optimization and a least squares support vector machine, and that the precision and reliability of the fault classification results can meet the requirements of practical application.
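A plain PSO loop is sketched below for orientation; the paper's chaotic variant differs in how the swarm is seeded and perturbed, and the parameters here (w = 0.7, c1 = c2 = 1.5) are generic defaults, not the paper's. In the paper's setting the objective f would be the SVM's cross-validation error as a function of its hyperparameters rather than this 1-D toy:

```python
import random

def pso_minimize(f, lo, hi, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lo, hi] with a basic 1-D particle swarm:
    each particle is pulled toward its personal best and the global best."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(particles)]
    vs = [0.0] * particles
    pbest = xs[:]
    gbest = min(xs, key=f)
    for _ in range(iters):
        for i in range(particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # clamp to the search box
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest
```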

Journal ArticleDOI
TL;DR: This paper mainly discusses the relation between concept lattice reduction and rough set reduction based on classical formal context, which will be meaningful for the relation research between these two theories, and for their knowledge discovery.
Abstract: One of the key problems of knowledge discovery is knowledge reduction. Rough set theory and the theory of concept lattices are two efficient tools for knowledge discovery, and attribute reduction based on each of them has been researched. Since an information system (the data description of rough set theory) and a formal context (the data description of concept lattice theory) can each be regarded as the other, attribute reduction on the same data can be studied from both perspectives, and researching their relation is significant. This paper mainly discusses the relation between concept lattice reduction and rough set reduction based on a classical formal context, which will be meaningful for research on the relation between these two theories and for their use in knowledge discovery.

Journal ArticleDOI
TL;DR: The graph representation offers the advantage that it allows for a much more expressive document encoding than the more standard bag of words/phrases approach, and consequently gives an improved classification accuracy.
Abstract: A graph-based approach to document classification is described in this paper. The graph representation offers the advantage that it allows for a much more expressive document encoding than the more standard bag-of-words/phrases approach, and consequently gives improved classification accuracy. Document sets are represented as graph sets to which a weighted graph mining algorithm is applied to extract frequent subgraphs, which are then further processed to produce feature vectors (one per document) for classification. Weighted subgraph mining is used to ensure classification effectiveness and computational efficiency; only the most significant subgraphs are extracted. The approach is validated and evaluated using several popular classification algorithms together with a real-world textual data set. The results demonstrate that the approach can outperform existing text classification algorithms on some datasets. As the size of the dataset increases, further processing of the extracted frequent features becomes essential.

Journal ArticleDOI
TL;DR: A Boolean approach to calculating all reducts of a context is formulated via the use of a discernibility function, and all attributes are classified into three types by their significance in constructing the concept lattice.
Abstract: This paper investigates approaches to attribute reduction in concept lattices induced by axialities. Based on an axiality, a type of covariant Galois connection between power sets, or equivalently a binary relation between the ground sets, the lattice of all concepts associated with a formal context is studied. Some judgment theorems for attribute reduction in such a lattice are proposed and proved. Extending the idea of knowledge reduction in rough set theory, a Boolean approach to calculating all reducts of a context is formulated via the use of a discernibility function. Finally, all attributes are classified into three types by their significance in constructing the concept lattice. The characteristics of these types of attributes are also analyzed.

Journal ArticleDOI
TL;DR: This paper describes a supervised classification approach that is designed to identify and recommend the most helpful product reviews and shows that this approach achieves a statistically significant improvement over alternative review ranking schemes.
Abstract: Many online stores encourage their users to submit product or service reviews in order to guide future purchasing decisions. These reviews are often listed alongside product recommendations but, to date, limited attention has been paid as to how best to present these reviews to the end-user. In this paper, we describe a supervised classification approach that is designed to identify and recommend the most helpful product reviews. Using the TripAdvisor service as a case study, we compare the performance of several classification techniques using a range of features derived from hotel reviews. We then describe how these classifiers can be used as the basis for a practical recommender that automatically suggests the most-helpful contrasting reviews to end-users. We present an empirical evaluation which shows that our approach achieves a statistically significant improvement over alternative review ranking schemes.

Journal ArticleDOI
TL;DR: The GMDH outperformed all the techniques with or without feature selection in terms of average accuracy, average sensitivity and average specificity, and the results are much better than those reported in previous studies on the same datasets.
Abstract: This paper presents three hitherto unused neural network architectures for bankruptcy prediction in banks: the Group Method of Data Handling (GMDH), the Counter Propagation Neural Network (CPNN) and the fuzzy Adaptive Resonance Theory Map (fuzzy ARTMAP). The efficacy of each of these techniques is tested using four datasets pertaining to Spanish, Turkish, UK and US banks. Further, the t-statistic, F-statistic and GMDH are used for feature selection, and the features so selected are fed as input to GMDH, CPNN and fuzzy ARTMAP for classification. The top five features are selected in the case of the Spanish dataset and the top seven in the case of the Turkish and UK datasets. It is observed that the features selected by the t-statistic and F-statistic are identical in all datasets. Further, there is a good overlap between the features selected by the t-statistic and GMDH. The performance of these hybrids is compared with that of GMDH, CPNN and fuzzy ARTMAP in stand-alone mode without feature selection. Ten-fold cross validation is performed throughout the study. Results indicate that the GMDH outperformed all the techniques with or without feature selection. Furthermore, the results are much better than those reported in previous studies on the same datasets in terms of average accuracy, average sensitivity and average specificity.
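The t-statistic used for feature ranking is standard; a minimal sketch in the Welch (unequal-variance) form, since the abstract does not specify the paper's exact variant. A feature's ranking score is the absolute value of this statistic computed between the two classes (bankrupt vs. healthy):

```python
from math import sqrt

def t_statistic(xs, ys):
    """Welch two-sample t-statistic for one feature's values in the two
    classes; larger |t| means better class separation."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)  # sample variances
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / sqrt(vx / nx + vy / ny)
```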

Journal ArticleDOI
TL;DR: Attribute reduction in fuzzy concept lattices based on the kind of transitive regular implication operator (T implication in short) is introduced and investigated.
Abstract: Attribute reduction in fuzzy concept lattices based on a kind of transitive regular implication operator (T implication, in short) is introduced and investigated. We first propose a kind of fuzzy concept lattice obtained by using T implication in a fuzzy formal context at a level δ ∈ I_T ⊆ [0, 1] with respect to T implication. Then we introduce the notion of δ-reducts in a fuzzy formal context and give some equivalent characterizations of δ-consistent sets to determine δ-reducts. Based on δ-reducts, we divide attributes into three types (at a level δ) and establish some characterization theorems to determine the type of an attribute.

Journal ArticleDOI
TL;DR: This study proposes a GA-based algorithm used to build an associative classifier that can discover trading rules from numerical technical indicators and shows that the proposed approach is an effective classification technique with high prediction accuracy and is highly competitive when compared with the data distribution method.
Abstract: Associative classifiers are classification systems based on associative classification rules. Although associative classification is more accurate than traditional classification approaches, it cannot directly handle numerical data or the relationships among numerical attributes. Therefore, an ongoing research problem is how to build associative classifiers from numerical data. In this work, we focus on stock trading data with many numerical technical indicators, and the classification problem is finding sell and buy signals from these indicators. This study proposes a GA-based algorithm to build an associative classifier that can discover trading rules from the numerical indicators. The experimental results show that the proposed approach is an effective classification technique with high prediction accuracy and is highly competitive when compared with the data distribution method.
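A heavily simplified sketch of GA-based rule discovery over one numerical indicator (here a hypothetical interval rule on RSI; the paper's classifier evolves richer multi-indicator association rules):

```python
import random

random.seed(0)  # deterministic for the demo

def fitness(chrom, data):
    """Accuracy of the rule 'buy iff lo <= rsi <= hi' on labeled data."""
    lo, hi = sorted(chrom)
    return sum((lo <= rsi <= hi) == (label == "buy")
               for rsi, label in data) / len(data)

def evolve(data, pop_size=20, generations=40):
    pop = [(random.uniform(0, 100), random.uniform(0, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, data), reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [a[0], b[1]]                   # one-point crossover
            if random.random() < 0.3:              # Gaussian mutation
                child[random.randrange(2)] += random.gauss(0, 5)
            children.append(tuple(child))
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, data))

# toy labels: "buy" when RSI is oversold (below 30)
data = [(rsi, "buy" if rsi < 30 else "hold") for rsi in range(0, 100, 5)]
best = evolve(data)
```

Because the top half of each generation survives unchanged, the best rule's fitness never decreases across generations.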

Journal ArticleDOI
TL;DR: The rough set model over dual-universes, denoted RSMDU, is built by inspecting the relation between the two universes, and it is demonstrated that existing rough set models are special cases of RSMDU and that the set of conditional attributes and the set of decision attributes can be regarded as dual universes in a decision-making system, where the model can be utilized to handle decision processing.
Abstract: To address the limitations of rough sets on a single universe, we discuss the rough set model over dual universes, focusing on the connection between the single-universe and dual-universes models. The rough set model over dual universes, denoted RSMDU in this paper, is built by inspecting the relation between the two universes. Firstly, we propose RSMDU and study its properties using the characteristic function and the relation matrix. An algorithm for obtaining the lower and upper approximations is then presented. Secondly, we show that the Pawlak rough set model can be induced from RSMDU, and present a theorem establishing the connection between the two. Finally, applications of RSMDU are studied. Using the proposed model, we demonstrate that existing rough set models are special cases of RSMDU and that the set of conditional attributes and the set of decision attributes can be regarded as dual universes in a decision-making system, where the model can be utilized to handle decision processing.
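A minimal sketch of lower and upper approximations across two universes, on a hypothetical patients/symptoms example (the paper additionally develops the relation-matrix formulation and the reduction to the Pawlak model):

```python
def approximations(R, U, Y):
    """Lower and upper approximations of Y (a subset of V)
    under a binary relation R from U to V."""
    lower, upper = set(), set()
    for u in U:
        image = {v for (a, v) in R if a == u}  # R(u): elements of V related to u
        if image and image <= Y:               # everything u relates to lies in Y
            lower.add(u)
        if image & Y:                          # u relates to something in Y
            upper.add(u)
    return lower, upper

# hypothetical dual universes: patients U and symptoms V, linked by R
U = {"u1", "u2", "u3"}
R = {("u1", "s1"), ("u1", "s2"), ("u2", "s2"), ("u3", "s3")}
Y = {"s2", "s3"}                               # target subset of V
low, up = approximations(R, U, Y)
print(sorted(low), sorted(up))                 # lower is contained in upper
```

Here u1 relates to a symptom outside Y, so it falls only in the upper approximation, while u2 and u3 relate exclusively to symptoms in Y.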

Journal ArticleDOI
TL;DR: The main advantages of the proposed method are: (1) it reduces error reinforcement by using the relative neighborhood graph for classification in the initial stages of semi-supervised learning; (2) it introduces a label modification mechanism for better classification performance.
Abstract: In this paper, we propose a novel semi-supervised learning approach based on the nearest neighbor rule and cut edges. In the first step of our approach, a relative neighborhood graph over all training samples is constructed for each unlabeled sample, and the unlabeled samples whose edges all connect to training samples from the same class are labeled. These newly labeled samples are then added to the training samples. In the second step, a standard self-training algorithm using the nearest neighbor rule is applied for classification until a predetermined stopping criterion is met. In the third step, a statistical test is applied for label modification, and in the last step, the remaining unlabeled samples are classified using the standard nearest neighbor rule. The main advantages of the proposed method are: (1) it reduces error reinforcement by using the relative neighborhood graph for classification in the initial stages of semi-supervised learning; (2) it introduces a label modification mechanism for better classification performance. Experimental results show the effectiveness of the proposed approach.
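The first step (labeling only those unlabeled samples whose relative neighborhood graph edges all lead to training samples of one class) can be sketched roughly as follows, on hypothetical 2-D points:

```python
import math

def rng_neighbors(p, points):
    """Relative neighborhood graph: p ~ q iff no third point r is closer
    to both p and q than they are to each other."""
    nbrs = []
    for q in points:
        if q == p:
            continue
        d_pq = math.dist(p, q)
        blocked = any(r not in (p, q) and
                      max(math.dist(p, r), math.dist(q, r)) < d_pq
                      for r in points)
        if not blocked:
            nbrs.append(q)
    return nbrs

def label_safe_samples(labeled, unlabeled):
    """Label u only if every RNG edge of u goes to labeled samples
    of a single class (step one of the approach described above)."""
    points = list(labeled) + list(unlabeled)
    newly = {}
    for u in unlabeled:
        nbrs = rng_neighbors(u, points)
        if any(q not in labeled for q in nbrs):
            continue  # some edge reaches another unlabeled sample
        classes = {labeled[q] for q in nbrs}
        if len(classes) == 1:
            newly[u] = classes.pop()
    return newly

# two well-separated clusters with one unlabeled point inside each
labeled = {(0, 0): "A", (1, 0): "A", (0, 1): "A", (1, 1): "A",
           (5, 0): "B", (6, 0): "B", (5, 1): "B", (6, 1): "B"}
unlabeled = [(0.5, 0.5), (5.5, 0.5)]
print(label_safe_samples(labeled, unlabeled))
```

Points whose RNG edges cross class boundaries (cut edges) are deliberately left unlabeled at this stage, which is what limits error reinforcement in later self-training.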

Journal ArticleDOI
TL;DR: This research proposes a decision model that supports researchers in the forecasting and scenario analysis fields; the model is implemented in a case study for Turkey.
Abstract: This paper provides a general overview of creating scenarios for energy policies using Bayesian Network (BN) models. BN is a useful tool for analyzing complex structures, allowing observation of the current structure and the basic consequences of any strategic change. This research proposes a decision model that supports researchers in the forecasting and scenario analysis fields. The proposed model is implemented in a case study for Turkey. The case was chosen because of the complexities facing a country rich in renewable energy resources: Turkey is a heavy energy importer currently considering new investments. Domestic resources could be evaluated under different scenarios aimed at sustainability. The achievements of this study will open a new vision for decision makers in the energy sector.
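At its simplest, BN scenario analysis amounts to changing a prior and re-propagating probabilities through the network. A toy two-node illustration with made-up probabilities (the energy model in the paper is of course far larger):

```python
def scenario_probability(p_high_price, cpt_invest):
    """Marginal P(invest) after setting a scenario prior on the price node.
    cpt_invest gives P(invest | price state)."""
    return (p_high_price * cpt_invest["high"]
            + (1 - p_high_price) * cpt_invest["low"])

# hypothetical conditional probability table: investment given energy price
cpt = {"high": 0.2, "low": 0.7}
pessimistic = scenario_probability(0.8, cpt)  # scenario: prices likely high
optimistic = scenario_probability(0.1, cpt)   # scenario: prices likely low
print(pessimistic, optimistic)
```

Each scenario is just a different prior on the root node; comparing the resulting marginals is what lets decision makers contrast policy alternatives.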

Journal ArticleDOI
TL;DR: This work proposes dynamic Adaboost learning with feature selection based on a parallel genetic algorithm for image annotation in the MPEG-7 standard and investigates two methods of GA feature selection: a binary-coded chromosome GA method used to perform optimal feature subset selection, and a bi-coded chromosome GA method used to simultaneously perform optimal feature subset selection and corresponding optimal weight subset selection.
Abstract: Image annotation can be formulated as a classification problem. Recently, Adaboost learning with feature selection has been used for creating an accurate ensemble classifier. We propose dynamic Adaboost learning with feature selection based on a parallel genetic algorithm for image annotation in the MPEG-7 standard. In each iteration of Adaboost learning, a genetic algorithm (GA) is used to dynamically generate and optimize a set of feature subsets on which the weak classifiers are constructed, so that an ensemble member is selected. We investigate two methods of GA feature selection: a binary-coded chromosome GA that performs optimal feature subset selection, and a bi-coded chromosome GA that performs optimal-weighted feature subset selection, i.e. simultaneously performs optimal feature subset selection and corresponding optimal weight subset selection. To improve the computational efficiency of our approach, a master-slave GA, a parallel implementation of the GA, is used. A k-nearest neighbor classifier is used as the base classifier. The experiments are performed on 2000 classified Corel images to validate the performance of the approaches.
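The core loop — scoring a candidate feature subset by the weighted error of a k-NN weak learner — can be illustrated as below. For brevity this sketch enumerates all subsets of three features instead of evolving binary-coded chromosomes with a GA, and uses 1-NN on hypothetical toy data:

```python
from itertools import chain, combinations

def knn_predict(train, x, features):
    """1-NN prediction using only the given feature indices."""
    def d(a, b):
        return sum((a[i] - b[i]) ** 2 for i in features)
    return min(train, key=lambda t: d(t[0], x))[1]

def weighted_error(train, weights, features):
    """Adaboost-weighted leave-one-out error of the 1-NN weak learner."""
    err = 0.0
    for i, (x, y) in enumerate(train):
        rest = train[:i] + train[i + 1:]
        if knn_predict(rest, x, features) != y:
            err += weights[i]
    return err

def best_subset(train, weights, n_features):
    """Stand-in for the GA: exhaustively score every non-empty subset
    (feasible only for tiny n_features; the paper evolves chromosomes)."""
    subsets = chain.from_iterable(
        combinations(range(n_features), k) for k in range(1, n_features + 1))
    return min(subsets, key=lambda f: weighted_error(train, weights, f))

# toy data: feature 0 is informative, features 1-2 are noise
train = [((0.0, 3.1, 9.2), 0), ((0.2, 7.5, 1.1), 0), ((0.1, 5.0, 5.0), 0),
         ((1.0, 3.0, 9.0), 1), ((0.9, 7.7, 1.2), 1), ((1.1, 5.1, 4.9), 1)]
w = [1 / len(train)] * len(train)
print(best_subset(train, w, 3))
```

In the full method, the weights w are the Adaboost sample weights of the current boosting round, so the GA is re-run each iteration to find the subset on which the next weak classifier performs best.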