
Showing papers in "Journal of intelligent systems in 2008"


Journal IssueDOI
TL;DR: This procedure estimates the missing entries of an expert's incomplete preference relation using only the preference values provided by that expert, with the additive consistency property guiding the estimation.
Abstract: In this paper, we present a procedure to estimate missing preference values when dealing with pairwise comparison and heterogeneous information. This procedure attempts to estimate the missing information in an expert's incomplete preference relation using only the preference values provided by that particular expert. Our procedure to estimate missing values can be applied to incomplete fuzzy, multiplicative, interval-valued, and linguistic preference relations. Clearly, it would be desirable to maintain experts' consistency levels. We make use of the additive consistency property to measure the level of consistency and to guide the procedure in the estimation of the missing values. Finally, conditions that guarantee the success of our procedure in the estimation of all the missing values of an incomplete preference relation are given. © 2008 Wiley Periodicals, Inc.
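The additive-consistency estimate described above can be sketched for the fuzzy case: for a preference relation with values in [0, 1], additive transitivity gives p_ik = p_ij + p_jk − 0.5, so a missing entry can be estimated by averaging that expression over all intermediate alternatives j with known values. A minimal sketch (the paper's full procedure also covers multiplicative, interval-valued, and linguistic relations):

```python
def estimate_missing(p, i, k):
    """Estimate the missing preference value p[i][k] via additive
    transitivity (p_ik = p_ij + p_jk - 0.5), averaged over every
    intermediate alternative j whose values are known."""
    n = len(p)
    candidates = [p[i][j] + p[j][k] - 0.5
                  for j in range(n)
                  if j not in (i, k)
                  and p[i][j] is not None and p[j][k] is not None]
    return sum(candidates) / len(candidates) if candidates else None

# A relation consistent with utilities (0.8, 0.6, 0.4), with p[0][2] missing:
P = [[0.5, 0.6, None],
     [0.4, 0.5, 0.6],
     [None, 0.4, 0.5]]
est = estimate_missing(P, 0, 2)
```

For a fully consistent relation the estimate recovers the true value exactly; here `est` is 0.7.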

218 citations


Journal IssueDOI
TL;DR: In this article, three forms of bipolarity are laid bare: symmetric univariate, dual bivariate, and asymmetric (or heterogeneous) bipolarity, which can be instrumental in the logical handling of incompleteness and inconsistency.
Abstract: Bipolarity seems to pervade human understanding of information and preference, and bipolar representations look very useful in the development of intelligent technologies. Bipolarity refers to an explicit handling of positive and negative sides of information. Basic notions and background on bipolar representations are provided. Three forms of bipolarity are laid bare: symmetric univariate, dual bivariate, and asymmetric (or heterogeneous) bipolarity. They can be instrumental in the logical handling of incompleteness and inconsistency, rule representation and extraction, argumentation, learning, and decision analysis. © 2008 Wiley Periodicals, Inc.

198 citations


Journal IssueDOI
TL;DR: Several popular similarity measures between fuzzy sets are reviewed and extended to intuitionistic fuzzy sets, and two new similarity measures between intuitionistic fuzzy sets are proposed; the measures are shown to satisfy several similarity measure axioms.
Abstract: In this paper, we first review several popular similarity measures between fuzzy sets and then extend those similarity measures to intuitionistic fuzzy sets. We also propose two new similarity measures between intuitionistic fuzzy sets. These similarity measures have been found to satisfy some similarity measure axioms. Several numerical experiments are performed to assess the performance of these measures. Numerical results clearly indicate these new measures to be superior in performance to the others. Finally, we apply the new measures to evaluate students' answerscripts. The experimental results show the superiority of the proposed measures for students' evaluation. © 2008 Wiley Periodicals, Inc.
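A representative distance-based measure of the kind reviewed in the paper can be sketched as follows: it compares the membership and non-membership degrees of two intuitionistic fuzzy sets elementwise (a generic textbook form, not necessarily one of the paper's two new measures):

```python
def ifs_similarity(a, b):
    """Similarity between two intuitionistic fuzzy sets given as lists of
    (membership, non-membership) pairs over the same universe:
    S(A, B) = 1 - (1 / 2n) * sum(|mu_A - mu_B| + |nu_A - nu_B|)."""
    n = len(a)
    total = sum(abs(ma - mb) + abs(na - nb)
                for (ma, na), (mb, nb) in zip(a, b))
    return 1 - total / (2 * n)

A = [(0.7, 0.2), (0.4, 0.5)]
B = [(0.6, 0.3), (0.4, 0.5)]
s_ab = ifs_similarity(A, B)
```

The measure is 1 for identical sets and decreases as the degrees diverge; here `s_ab` is 0.95.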

93 citations


Journal IssueDOI
TL;DR: This paper studies in-depth certain properties of interval-valued fuzzy sets and Atanassov's intuitionistic fuzzy sets (A-IFSs) and analyzes the following properties: idempotency, absorption, and distributiveness.
Abstract: In this paper, we study in-depth certain properties of interval-valued fuzzy sets and Atanassov's intuitionistic fuzzy sets (A-IFSs). In particular, we study the manner in which to construct different interval-valued fuzzy connectives (or Atanassov's intuitionistic fuzzy connectives) starting from an operator. We further study the law of contradiction and the law of excluded middle for these sets. Furthermore, we analyze the following properties: idempotency, absorption, and distributiveness. We conclude relating idempotency with the capacity that some of the connectives studied have for maintaining, in certain conditions, the amplitude (or Atanassov's intuitionistic index) of the intervals on which they act. © 2008 Wiley Periodicals, Inc.

88 citations


Journal IssueDOI
Ronald R. Yager1
TL;DR: The objective is to take a step toward intelligent social network analysis using granular computing: concepts associated with social networks are expressed in a human-focused manner, formalized using fuzzy sets, and then evaluated with respect to social networks represented using set-based relational network theory.
Abstract: An introduction to some basic ideas of graph (relational network) theory is first provided. We then discuss some concepts from granular computing in particular the fuzzy set paradigm of computing with words. The natural connection between graph theory and granular computing, particularly fuzzy set theory, is pointed out. This connection is grounded in the fact that these are both set-based technologies. Our objective here is to take a step toward the development of intelligent social network analysis using granular computing. In particular one can start by expressing in a human-focused manner concepts associated with social networks then formalize these concepts using fuzzy sets and then evaluate these concepts with respect to social networks that have been represented using set-based relational network theory. We capture this approach in what we call the paradigm for intelligent social network analysis, PISNA. Using this paradigm, we provide definitions of a number of concepts related to social networks. © 2008 Wiley Periodicals, Inc.

54 citations


Journal IssueDOI
TL;DR: This paper formally describes the context-based reasoning (CxBR) paradigm, which can be used to represent tactical human behavior in simulations or in the real world.
Abstract: This paper formally describes the context-based reasoning (CxBR) paradigm. CxBR can be used to represent tactical human behavior in simulations or in the real world. In problem solving, the context can be said to inherently contain much knowledge about the situation in which the problem is to be solved and/or the environment in which it must be solved. This paper discusses some of the issues involved in a context-driven representation of human behavior and introduces a formal description of CxBR. © 2008 Wiley Periodicals, Inc.

49 citations


Journal IssueDOI
TL;DR: A modified sampling scheme for use with eNERF is proposed that combines simple random sampling with (parts of) the sampling procedures used by eNERF and a related algorithm, sVAT (scalable visual assessment of clustering tendency).
Abstract: A key challenge in pattern recognition is how to scale the computational efficiency of clustering algorithms on large data sets. The extension of non-Euclidean relational fuzzy c-means (NERF) clustering to very large (VL = unloadable) relational data is called the extended NERF (eNERF) clustering algorithm, which comprises four phases: (i) finding distinguished features that monitor progressive sampling; (ii) progressively sampling from an N × N relational matrix RN to obtain an n × n sample matrix Rn; (iii) clustering Rn with literal NERF; and (iv) extending the clusters in Rn to the remainder of the relational data. Previously published examples on several fairly small data sets suggest that eNERF is feasible for truly large data sets. However, it seems that phases (i) and (ii), i.e., finding Rn, are not very practical because the sample size n often turns out to be roughly 50% of N, and this over-sampling defeats the whole purpose of eNERF. In this paper, we examine the performance of the sampling scheme of eNERF with respect to different parameters. We propose a modified sampling scheme for use with eNERF that combines simple random sampling with (parts of) the sampling procedures used by eNERF and a related algorithm, sVAT (scalable visual assessment of clustering tendency). We demonstrate that our modified sampling scheme can eliminate the over-sampling of the original progressive sampling scheme, thus enabling the processing of truly VL data. Numerical experiments on a distance matrix of a set of 3,000,000 vectors drawn from a mixture of 5 bivariate normal distributions demonstrate the feasibility and effectiveness of the proposed sampling method. We also find that actually running eNERF on a data set of this size is very costly in terms of computation time. Thus, our results demonstrate that further modification of eNERF, especially the extension stage, will be needed before it is truly practical for VL data. © 2008 Wiley Periodicals, Inc.

28 citations


Journal IssueDOI
TL;DR: An ontology system is proposed to represent the knowledge structure enabling fuzzy information to be stored in fuzzy databases and this ontology then acts as an interface that formalizes the representation of such structures and allows access to them.
Abstract: In this paper, an ontology system is proposed to represent the knowledge structure enabling fuzzy information to be stored in fuzzy databases. This proposal allows users or applications to simplify the metadata definition process that is necessary for representing and managing imprecise and classic information in these databases. This ontology then acts as an interface that formalizes the representation of such structures and allows access to them. The instances obtained from this ontology represent the schemas that describe domain information in a database. The description of fuzzy and classic database schemas allows access to online public databases for which no other semantic description is associated. This paper also presents another ontology to represent these schemas as instances. Not only does this ontology allow fuzzy data values to be stored (because of the definition of fuzzy data types as classes of the ontology) but it also enables schema tables and attributes to be defined. © 2008 Wiley Periodicals, Inc.

26 citations


Journal IssueDOI
TL;DR: The fuzzy analytic hierarchy process, which takes monetary (fuzzy real option value) and nonmonetary (capability, success probability, trends, etc.) criteria into account, is used to make this selection among alternative R&D projects.
Abstract: The R&D project selection decision is important in two ways. First, in many organizations the R&D budget represents a huge investment, so project selection decisions should be aligned with the strategic objectives and plans of the firm. Second, the organizational returns of R&D projects are multidimensional in nature and risky in terms of projected outcome. The real options approach helps to value this risky side of the selection process. This paper considers the multidimensional side of the R&D project selection process, as well as the vagueness in the evaluation process. The fuzzy analytic hierarchy process, which takes monetary (fuzzy real option value) and nonmonetary (capability, success probability, trends, etc.) criteria into account, is used to make the selection among alternative R&D projects. A real case study illustrates the application of the proposed approach. © 2008 Wiley Periodicals, Inc.

24 citations


Journal IssueDOI
TL;DR: This work proposes an algorithm that combines both approaches to exploit the benefits of both mechanisms, allowing higher performance; experiments show that executing both learning phases improves the results obtained by executing only the first one.
Abstract: When applying reinforcement learning in domains with very large or continuous state spaces, the experience obtained by the learning agent in its interaction with the environment must be generalized. Generalization methods are usually based on approximating the value functions used to compute the action policy, and this is tackled in two different ways: on the one hand, by approximating the value functions with a supervised learning method; on the other hand, by discretizing the environment to use a tabular representation of the value functions. In this work, we propose an algorithm that uses both approaches to exploit the benefits of both mechanisms, allowing higher performance. The approach is based on two learning phases. In the first, a learner is used as a supervised function approximator, but using a machine learning technique that also outputs a state space discretization of the environment, as nearest prototype classifiers or decision trees do. In the second learning phase, the space discretization computed in the first phase is used to obtain a tabular representation of the value function computed in the previous phase, allowing a tuning of that value function approximation. Experiments in different domains show that executing both learning phases improves the results obtained by executing only the first one. The results take into account the resources used and the performance of the learned behavior. © 2008 Wiley Periodicals, Inc.

24 citations


Journal IssueDOI
TL;DR: The article shows how it is possible to generalize the concordance-discordance principle in preference aggregation and apply it to the problem of aggregating preferences expressed under intervals.
Abstract: The article discusses the use of positive and negative reasons when preferences about alternative options have to be considered. Besides explaining the intuitive and formal situations where such a bipolar reasoning is used, the article shows how it is possible to generalize the concordance-discordance principle in preference aggregation and apply it to the problem of aggregating preferences expressed under intervals. © 2008 Wiley Periodicals, Inc.

Journal IssueDOI
TL;DR: An overview of fuzzy integrals, including historical remarks, and an application of the Choquet integral to additive impreciseness measuring of fuzzy quantities with interesting consequences for fuzzy measures are presented.
Abstract: We give an overview of fuzzy integrals, including historical remarks. These integrals can be viewed as an average membership value of fuzzy sets, and they are related to fuzzy measures. The Choquet integral can be traced back to 1925. The Sugeno integral has a predecessor in the Shilkret integral from 1971. Some other fuzzy integrals and the corresponding discrete integrals are introduced too. A closer look at the geometric interpretation of fuzzy integrals is also given, yielding, among others, the weakest and the strongest regular fuzzy integral. An application of the Choquet integral to additive impreciseness measuring of fuzzy quantities, with interesting consequences for fuzzy measures, is presented. Finally, recent developments and trends in fuzzy integral theory are discussed. © 2008 Wiley Periodicals, Inc. This paper is an extended version of an MDAI'2004 contribution.
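The discrete Choquet integral mentioned above has a compact formulation: sort the arguments in ascending order and weight each increment by the capacity of the set of criteria still "above" it. A minimal sketch (the capacity `mu` below is a toy example, not taken from the paper):

```python
def choquet(x, mu):
    """Discrete Choquet integral of x (criterion -> value) with respect
    to a capacity mu (frozenset of criteria -> weight), where
    mu(empty set) = 0 and mu(all criteria) = 1."""
    order = sorted(x, key=lambda c: x[c])      # criteria by ascending value
    total, prev = 0.0, 0.0
    for idx, c in enumerate(order):
        above = frozenset(order[idx:])         # criteria with value >= x[c]
        total += (x[c] - prev) * mu[above]
        prev = x[c]
    return total

# Additive (probability-like) capacity on two criteria:
mu = {frozenset({'a', 'b'}): 1.0,
      frozenset({'a'}): 0.5,
      frozenset({'b'}): 0.5}
val = choquet({'a': 0.2, 'b': 0.8}, mu)
```

With an additive capacity the integral reduces to a weighted mean (`val` is 0.5 here); non-additive capacities model interaction between criteria.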

Journal IssueDOI
TL;DR: The standard proximity measure of the cosine correlation is generalized in the multiset model, and two nonlinear clustering techniques are applied to the existing clustering methods, including a variable for controlling cluster volume sizes and a kernel trick used in support vector machines.
Abstract: Fuzzy multiset is applicable as a model of information retrieval because it has the mathematical structure that expresses the number and the degree of attribution of an element simultaneously. Therefore, fuzzy multisets can be used also as a suitable model for document clustering. This paper aims at developing clustering algorithms based on a fuzzy multiset model for document clustering. The standard proximity measure of the cosine correlation is generalized in the multiset model, and two nonlinear clustering techniques are applied to the existing clustering methods. One introduces a variable for controlling cluster volume sizes; the other one is a kernel trick used in support vector machines. Moreover, clustering by competitive learning is also studied. When the kernel trick has been used the classification configuration of data in a high-dimensional feature space is visualized by self-organizing maps. Two numerical examples, which use an artificial data and real document data, are shown and effects of the proposed methods are discussed. © 2008 Wiley Periodicals, Inc.

Journal ArticleDOI
JingTao Yao1
TL;DR: The fundamental issues of Web-based Support Systems (WSS), a framework for WSS, and research on WSS are presented, together with preliminary studies on two examples of WSS.
Abstract: We view Web-based Support Systems (WSS) as a multidisciplinary research area that focuses on supporting human activities in specific domains or fields based on computer science, information technology, and Web technology. Research on WSS is motivated by the challenges and opportunities of the Internet and the Web. The recent advancements of computer and Web technologies make the implementation of WSS feasible. This paper presents the fundamental issues of WSS, a framework of WSS, and research on WSS. We also present preliminary studies on two examples of WSS, Web-based research support systems (WRSS) and Web-based information retrieval support systems (WIRSS).

Journal IssueDOI
TL;DR: In the framework of aggregation by the discrete Choquet integral, the unsupervised method for the identification of the underlying capacity initially put forward in Kojadinovic, Eur J Oper Res 2004; 155:741–751 is presented and improvements are proposed.
Abstract: In the framework of aggregation by the discrete Choquet integral, the unsupervised method for the identification of the underlying capacity initially put forward in Kojadinovic, Eur J Oper Res 2004; 155:741–751 is presented and improvements are proposed. The suggested approach consists in replacing the subjective notion of importance of a subset of attributes by that of information content of a subset of attributes, which can be estimated from the set of profiles by means of an entropy measure. An example of the application of the proposed methodology is given: in the absence of initial preferences, the approach is applied to the evaluation of students. © 2008 Wiley Periodicals, Inc. This paper is a revised and extended version with proofs of the conference paper.
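The "information content of a subset of attributes" can be estimated, for instance, as the Shannon entropy of the joint values that subset takes across the set of profiles. A sketch of the idea (the paper's estimator may differ in detail):

```python
from collections import Counter
from math import log2

def subset_entropy(profiles, attrs):
    """Empirical Shannon entropy (in bits) of the joint value
    distribution of a subset of attributes over a list of profiles,
    each profile given as a dict attribute -> value."""
    counts = Counter(tuple(p[a] for a in attrs) for p in profiles)
    n = len(profiles)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical student profiles with two binary marks:
profiles = [{'math': 0, 'physics': 0},
            {'math': 0, 'physics': 1},
            {'math': 1, 'physics': 0},
            {'math': 1, 'physics': 1}]
h_one = subset_entropy(profiles, ['math'])
h_both = subset_entropy(profiles, ['math', 'physics'])
```

Here the single attribute carries 1 bit and the independent pair carries 2 bits, matching the intuition that a more informative subset should weigh more in the capacity.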

Journal ArticleDOI
TL;DR: The key new approach in this algorithm is to use a diversity-emphasizing probabilistic approach in determining whether an offspring individual is considered in the replacement selection phase, along with the use of a non-domination ranking scheme.
Abstract: A new evolutionary algorithm is proposed for solving multi-objective optimization problems, focusing on the issue of developing a diverse population of non-dominated solutions. The key new approach in this algorithm is to use a diversity-emphasizing probabilistic approach in determining whether an offspring individual is considered in the replacement selection phase, along with the use of a non-domination ranking scheme. This evolutionary multi-objective crowding algorithm (EMOCA) is evaluated using nine benchmark multi-objective optimization problems and shown to produce non-dominated solutions with significant diversity, outperforming three state-of-the-art multi-objective evolutionary algorithms on most of the test problems.

Journal IssueDOI
TL;DR: The results demonstrate that the hybrid methods presented in this article outperform, in most cases, existing approaches in terms of classification accuracy, and in addition, achieve a significant reduction in the classification time.
Abstract: Most Web content categorization methods are based on the vector space model of information retrieval. One of the most important advantages of this representation model is that it can be used by both instance-based and model-based classifiers. However, this popular method of document representation does not capture important structural information, such as the order and proximity of word occurrence or the location of a word within the document. It also makes no use of the markup information that can easily be extracted from the Web document HTML tags. A recently developed graph-based Web document representation model can preserve Web document structural information. It was shown to outperform the traditional vector representation using the k-Nearest Neighbor (k-NN) classification algorithm. The problem, however, is that the eager (model-based) classifiers cannot work with this representation directly. In this article, three new hybrid approaches to Web document classification are presented, built upon both graph and vector space representations, thus preserving the benefits and overcoming the limitations of each. The hybrid methods presented here are compared to vector-based models using the C4.5 decision tree and the probabilistic Naive Bayes classifiers on several benchmark Web document collections. The results demonstrate that the hybrid methods presented in this article outperform, in most cases, existing approaches in terms of classification accuracy, and in addition, achieve a significant reduction in the classification time. © 2008 Wiley Periodicals, Inc.
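The graph-based document representation referred to above can be illustrated with its simplest variant: nodes are distinct terms and a directed edge connects each term to the one immediately following it (a sketch only; practical models also encode term frequency and the HTML section in which a term occurs):

```python
def doc_graph(text):
    """Build a simple word-order graph: nodes are the distinct lowercase
    terms, and a directed edge links each term to its immediate
    successor in the text."""
    words = text.lower().split()
    nodes = set(words)
    edges = {(a, b) for a, b in zip(words, words[1:]) if a != b}
    return nodes, edges

nodes, edges = doc_graph("web mining with web graphs")
```

Repeated terms ("web") map to a single node, so word order is preserved as graph structure rather than lost, as it is in a bag-of-words vector.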

Journal IssueDOI
Thomas A. Runkler1
TL;DR: A wasp swarm optimization (WSO) algorithm to optimize the c-means clustering model is introduced and in experiments with four benchmark data sets, the new WSO clustering algorithm is compared with AO and ACO.
Abstract: This paper deals with clustering by optimizing the c-means clustering model. For some data sets this clustering model possesses many local optima, so conventional alternating optimization (AO) will produce bad results. For obtaining good clustering results, the minimization procedure has to be kept from being trapped in these local optima, for example, by stochastic optimization approaches. Recently, we showed that ant colony optimization (ACO) can be effectively applied to the c-means clustering model. In this paper, we introduce a wasp swarm optimization (WSO) algorithm to optimize the c-means clustering model. In experiments with four benchmark data sets, the new WSO clustering algorithm is compared with AO and ACO. For data sets leading to c-means models without local optima, both WSO and AO perform better and faster than ACO. For data sets leading to multiple local optima, WSO clearly outperforms both AO and ACO. © 2008 Wiley Periodicals, Inc.
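The alternating optimization (AO) baseline that WSO is compared against can be sketched with the hard c-means loop on one-dimensional data: alternate between assigning each point to its nearest center and recomputing each center as its cluster mean (a simplification of the fuzzy c-means model the paper actually optimizes):

```python
import random

def c_means(points, c, iters=20, seed=0):
    """Minimal alternating optimization for hard c-means on 1-D data:
    repeatedly assign points to the nearest center, then re-estimate
    centers as cluster means. A sketch of the AO baseline, prone to the
    local optima that motivate stochastic approaches such as WSO."""
    rng = random.Random(seed)
    centers = rng.sample(points, c)
    for _ in range(iters):
        clusters = [[] for _ in range(c)]
        for p in points:
            clusters[min(range(c), key=lambda k: abs(p - centers[k]))].append(p)
        centers = [sum(cl) / len(cl) if cl else centers[k]
                   for k, cl in enumerate(clusters)]
    return sorted(centers)

centers = c_means([0.0, 0.1, 0.2, 10.0, 10.1, 10.2], 2)
```

On this well-separated toy set AO converges to the two cluster means (0.1 and 10.1); on data with many local optima the result depends on the initialization, which is the failure mode the stochastic methods address.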

Journal IssueDOI
TL;DR: In this article, the authors investigated the psychological plausibility of the bipolarity concept, i.e., that positive and negative kinds of information are treated differently, and provided new data supporting the idea that even when considering how affective changes occur, a certain level of independence exists between the positive or negative sides of affect.
Abstract: This paper investigates the psychological plausibility of the bipolarity concept, i.e., that positive and negative kinds of information are treated differently. Sections 2 and 3 review relevant investigations of the representational and affective systems in the experimental psychology literature. Section 4 provides new data supporting the idea that even when considering how affective changes occur, a certain level of independence exists between the positive and negative sides of affect. Together the studies reported here strongly support the psychological plausibility of bipolarity: Positive and negative kinds of information are not processed in the same way whichever domain is considered, preferences (affect) or beliefs (mental categories). © 2008 Wiley Periodicals Inc.

Journal IssueDOI
TL;DR: A set of operators that, starting from a given FRBS, adapt the FRBS to the specific context by adjusting the universes of the input and output variables, and modifying the core, the support and the shape of the fuzzy sets which compose the partitions of these universes are proposed.
Abstract: Context adaptation is certainly a promising approach in the development of fuzzy rule based systems (FRBSs) First, an initial rule base is extracted from heuristic knowledge of the application domain Meanings of linguistic terms are defined so as to guarantee high interpretability of the FRBSs Then, meanings are adapted to a specific context through the use of operators that, using a set of known input–output patterns, appropriately modify the corresponding fuzzy sets The choice of the specific operators and their parameters is context based and optimized so as to obtain a good interpretability–accuracy trade-off In this paper, we propose a set of operators that, starting from a given FRBS, adapt the FRBS to the specific context by adjusting the universes of the input and output variables, and modifying the core, the support and the shape of the fuzzy sets which compose the partitions of these universes The operators are defined so as to preserve ordering of the linguistic terms, universality of rules, and interpretability of partitions The choice of the parameters used in the operators is performed by a genetic optimization process aimed at maximizing the accuracy and preserving the interpretability of the FRBS We finally describe the application of our context adaptation approach to two Mamdani fuzzy systems developed, respectively, for two different domains, namely, regression and data modeling © 2008 Wiley Periodicals, Inc

Journal IssueDOI
TL;DR: The usefulness of the Choquet integral for modeling decision under risk and uncertainty is demonstrated; in particular, some paradoxes of expected utility theory are resolved using the Choquet integral.
Abstract: The usefulness of the Choquet integral for modeling decision under risk and uncertainty is shown. It is shown that some paradoxes of expected utility theory are solved using the Choquet integral. Necessary and sufficient conditions for the Choquet expected utility model for decision under uncertainty (or rank dependent utility model for decision under risk) being the same as its simplified versions are presented. © 2008 Wiley Periodicals, Inc.

Journal IssueDOI
TL;DR: Prox is a stochastic method to map the local and global structures of real-world complex networks, which are called small worlds, and transforms a graph into a Markov chain; the states of which are the nodes of the graph in question.
Abstract: Prox is a stochastic method to map the local and global structures of real-world complex networks, which are called small worlds. Prox transforms a graph into a Markov chain; the states of which are the nodes of the graph in question. Particles wander from one node to another within the graph by following the graph's edges. It is the dynamics of the particles' trajectories that map the structural properties of the graphs that are studied. Concrete examples are presented in a graph of synonyms to illustrate this approach. © 2008 Wiley Periodicals, Inc.
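The graph-to-Markov-chain step described above amounts to row-normalizing the adjacency structure and propagating a particle's position distribution. A minimal sketch on an unweighted graph (the toy triangle graph is illustrative, not from the paper):

```python
def transition_matrix(adj):
    """Row-normalize an adjacency dict (node -> list of neighbors) into
    random-walk transition probabilities P[u][v] = 1 / deg(u)."""
    return {u: {v: 1 / len(nbrs) for v in nbrs} for u, nbrs in adj.items()}

def walk_distribution(adj, start, steps):
    """Distribution of a particle's position after `steps` random-walk
    steps along the graph's edges, starting from `start`."""
    P = transition_matrix(adj)
    dist = {u: 0.0 for u in adj}
    dist[start] = 1.0
    for _ in range(steps):
        nxt = {u: 0.0 for u in adj}
        for u, pu in dist.items():
            for v, p in P[u].items():
                nxt[v] += pu * p
        dist = nxt
    return dist

# Complete graph on three nodes:
adj = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']}
dist = walk_distribution(adj, 'a', 2)
```

After two steps the particle is back at its start half the time; it is exactly these step-by-step return and visit probabilities that Prox uses to map local and global structure.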

Journal IssueDOI
TL;DR: This article applies aggregation operators to extract relevant information in the nonstandard case where the two numerical databases are not described by the same attributes.
Abstract: Given two databases, record linkage algorithms try to establish which records of these files contain information on the same individual. Standard record linkage algorithms assume that both files are described using the same attributes. In this article, we study the nonstandard case in which the attributes are not the same, applying aggregation operators to extract the relevant information for this purpose. We restrict ourselves to the case of numerical databases. © 2008 Wiley Periodicals, Inc.

Journal IssueDOI
TL;DR: Instead of maximizing the entropy in the formulation for determining the MEOWA weights, a new method tries to obtain the OWA weights that are evenly spread out around equal weights as much as possible while strictly satisfying the orness value provided in the program.
Abstract: The ordered weighted averaging (OWA) operator by Yager (IEEE Trans Syst Man Cybern 1988; 18:183–190) has received much attention since its appearance. One key point in the OWA operator is determining its associated weights. Among the numerous methods that have appeared in the literature, we notice the maximum entropy OWA (MEOWA) weights, which are determined by taking into account two appealing measures characterizing the OWA weights. Instead of maximizing the entropy in the formulation for determining the MEOWA weights, a new method in this paper tries to obtain OWA weights that are spread out as evenly as possible around the equal weights while strictly satisfying the specified orness value. This consideration leads to the least-squared OWA (LSOWA) weighting method, in which the program obtains the weights that minimize the sum of squared deviations from the equal weights, since entropy is maximized when all the weights are equal. In particular, the LSOWA method allocates positive and negative portions, identical in size but opposite in sign, to the equal weights on either side of the middle position in the number of criteria. Furthermore, interval LSOWA weights can be constructed when a decision maker specifies his or her orness value in uncertain numerical bounds, and we present a method, with those uncertain interval LSOWA weights, for prioritizing alternatives that are evaluated by multiple criteria. © 2008 Wiley Periodicals, Inc.
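The LSOWA idea admits a closed form: minimizing the squared deviation from the equal weights 1/n subject to a fixed orness level alpha gives weights that deviate from 1/n linearly and symmetrically in the rank i. A sketch of that analytic form (assuming alpha is moderate enough that no weight goes negative):

```python
def lsowa_weights(n, alpha):
    """Least-squared OWA weights: the weight vector closest in squared
    error to the equal weights 1/n among those with orness alpha.
    Deviations from 1/n are linear in rank and antisymmetric about the
    middle position."""
    return [1 / n + 6 * (alpha - 0.5) * (n + 1 - 2 * i) / (n * (n + 1))
            for i in range(1, n + 1)]

def orness(w):
    """Yager's orness measure of an OWA weight vector."""
    n = len(w)
    return sum((n - i) * wi for i, wi in enumerate(w, start=1)) / (n - 1)

w = lsowa_weights(5, 0.7)
```

For n = 5 and alpha = 0.7 this yields (0.36, 0.28, 0.2, 0.12, 0.04): the weights sum to one, hit the requested orness exactly, and the positive and negative deviations from 0.2 mirror each other around the middle weight.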

Journal IssueDOI
TL;DR: In this paper, the authors explore several facets of bipolarity in human reasoning and affective decision making, including how positive and negative pieces of information help to discriminate between classical forms of reasoning (deduction, induction, and abduction).
Abstract: This article explores several facets of bipolarity in human reasoning and affective decision making. First, it examines how positive and negative pieces of information help to discriminate between classical forms of reasoning (deduction, induction, and abduction). It is shown that (1) both positive and negative information can independently account for these distinctions and (2) these same distinctions can be accounted for by a possibilistic analysis of the plausibility of the states of the world ruled out by the premises and the ones compatible with these premises. Second, it is shown that an analysis of the plausibility (“impossible,” “guaranteed possible,” “nonimpossible”) of the states of the world ruled out or allowed by positive or negative pieces of information in human hypothesis testing allows us to explain some puzzling psychological results. Next, bipolarity is explored in the domain of affective decision making. It is proposed notably that the combination of the bivariate bipolarity of emotions (negative, neutral, positive) and the multivariate bipolarity of emotions of comparison provide the tools for an emotional reasoning and decision making which might be the way by which we actually evaluate possible situations and take our decisions, instead of maximizing our expected utility. © 2008 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: An investigation into the comparative performance of intelligent system identification and control algorithms, based on evolutionary genetic algorithms and the adaptive neuro-fuzzy inference system, within the framework of an active vibration control (AVC) system.
Abstract: This paper presents an investigation into the comparative performance of intelligent system identification and control algorithms within the framework of an active vibration control (AVC) system. Evolutionary genetic algorithms (GAs) and adaptive neuro-fuzzy inference system (ANFIS) algorithms are used to develop the mechanisms of an AVC system, where the controller is designed for optimal vibration suppression using the plant model. A simulation platform of a flexible beam system in transverse vibration, built using the finite difference (FD) method, is considered to demonstrate the capabilities of the AVC system using GAs and ANFIS. The MATLAB GA toolbox for the GAs and the Fuzzy Logic toolbox for the ANFIS functions are used to design the AVC system. The system is then implemented and tested, and its performance is assessed for the GA- and ANFIS-based algorithms. Finally, the comparative performance of the algorithms in implementing system identification and the corresponding AVC system is presented and discussed through a set of experiments.

Journal IssueDOI
TL;DR: This article provides a review of the main logics for preference representation and focuses on the possibilistic logic setting and then discusses two other logics: qualitative choice logic and penalty logic.
Abstract: Bipolar preferences distinguish between negative preferences, which induce what is acceptable by complementation, and positive preferences, which represent what is really satisfactory. This article provides a review of the main logics for preference representation. Representing preferences in a bipolar logical way has the advantage of enabling us to reason about them, while increasing their expressive power in a cognitively meaningful way. In the article, we first focus on the possibilistic logic setting and then discuss two other logics: qualitative choice logic and penalty logic. Finally, an application of bipolar preferences to querying systems is outlined. © 2008 Wiley Periodicals, Inc.
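As a minimal sketch of the penalty-logic semantics mentioned above, where each formula carries a penalty and the cost of an interpretation is the sum of the penalties of the formulas it violates (the dinner example, variables, and penalty values are invented for illustration):

```python
from itertools import product

# Penalty base: (description, violation test on a model, penalty for violating it)
# Hypothetical example: f = fish is served, w = white wine is served.
base = [
    ("serve fish",          lambda m: not m["f"],            2),
    ("fish -> white wine",  lambda m: m["f"] and not m["w"], 5),
    ("some wine is served", lambda m: not m["w"],            1),
]

def cost(model):
    """Penalty of a model = sum of penalties of the formulas it violates."""
    return sum(p for _, violated, p in base if violated(model))

# Enumerate all interpretations and pick the least-penalized one
models = [dict(zip("fw", bits)) for bits in product([False, True], repeat=2)]
best = min(models, key=cost)
```

Here the preferred interpretation serves fish with white wine (cost 0), while serving fish with no wine is the worst choice (cost 6).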

Journal ArticleDOI
TL;DR: A new noise filtering method, called soft decision tree noise filter (SDTNF), to identify and remove mislabeled data items in a data set, which is capable of identifying a substantial amount of noise and significantly improving performance of nearest neighbor classifiers at a wide range of noise levels.
Abstract: In this paper we present a new noise filtering method, called the soft decision tree noise filter (SDTNF), to identify and remove mislabeled items in a data set. In this method, a sequence of decision trees is built from the data set, in which each item is assigned a soft class label (in the form of a class probability vector). A modified decision tree algorithm adjusts the soft class labeling during tree building. After each decision tree is built, the soft class label of each item is adjusted using the tree’s predictions as the learning targets; in the next iteration, a new decision tree is built from the data set with the updated soft class labels. This process repeats until the labeling converges, providing a mechanism to gradually modify and correct mislabeled items. SDTNF uses it as a filter: items whose classes end up relabeled by the decision trees are flagged as mislabeled. The performance of SDTNF is evaluated on 16 data sets drawn from the UCI repository. The results show that it identifies a substantial amount of noise for most of the tested data sets and significantly improves the performance of nearest neighbor classifiers over a wide range of noise levels. We also compare SDTNF to the consensus and majority voting methods proposed by Brodley and Friedl [1996, 1999] for noise filtering. The results show that SDTNF filters more efficiently and with better balance than these two methods in terms of removing mislabeled data while keeping non-mislabeled data, and that its filtering can significantly improve the performance of nearest neighbor classifiers, especially at high noise levels.
At a noise level of 40%, the improvement in the accuracy of nearest neighbor classifiers is 13.1% with the consensus voting method and 18.7% with the majority voting method, while SDTNF achieves an improvement of 31.3%.
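The iterative soft-relabeling loop described above can be sketched as follows. This is not the paper's algorithm: a leave-one-out k-NN probability estimator stands in for the modified decision tree learner, and the toy one-dimensional data set is invented, with item 3 deliberately mislabeled:

```python
# Toy SDTNF-style filter: a k-NN soft-probability learner stands in for the
# modified decision tree (an assumption -- the paper builds actual trees).
X = [0.0, 0.1, 0.2, 0.3, 1.0, 1.1, 1.2, 1.3]
y = [0,   0,   0,   1,   1,   1,   1,   1]   # item 3 is mislabeled

# Soft labels: current probability of class 1 for each item
soft = [float(label) for label in y]

def predict(i, soft, k=3):
    """Leave-one-out k-NN estimate of P(class 1) from the current soft labels."""
    neighbours = sorted((abs(X[i] - X[j]), j) for j in range(len(X)) if j != i)
    return sum(soft[j] for _, j in neighbours[:k]) / k

# Iterate: retrain the learner on the current soft labels and use its
# predictions as the next labeling, until the labeling converges
for _ in range(20):
    soft = [predict(i, soft) for i in range(len(X))]

# Flag items whose hard label disagrees with the converged soft label
flagged = [i for i in range(len(X)) if (soft[i] > 0.5) != bool(y[i])]
```

After convergence, the soft label of item 3 is pulled toward class 0 by its cluster, so it is the only item flagged as mislabeled.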

Journal Article
TL;DR: In this article, a motion analysis system for the upper extremities of lawn bowlers is developed, in which accelerometers are placed on parts of the human body: the chest to represent shoulder movements, the back to capture trunk motion, and the back of the hand, the wrist, and above the elbow to capture arm movements.
Abstract: This paper explores the opportunity of using tri-axial wireless accelerometers for supervised monitoring of sports movements. In particular, a motion analysis system for the upper extremities of lawn bowlers is developed. Accelerometers are placed on parts of the human body: the chest to represent shoulder movements, the back to capture trunk motion, and the back of the hand, the wrist, and above the elbow to capture arm movements. The sensor placements are carefully chosen to avoid restricting the bowler's movements. Data is acquired from the sensors in soft real time using virtual instrumentation; the acquired data is then conditioned and converted into the parameters required for motion regeneration. A user interface was also created to facilitate data acquisition and the broadcasting of commands to the wireless accelerometers. All motion regeneration in this paper deals with motion of the body segments in the X and Y directions, corresponding to the anterior/posterior and lateral directions, respectively.
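As a sketch of one conditioning step such a system might perform, converting raw tri-axial accelerometer counts to g units and deriving static tilt in the anterior/posterior (X) and lateral (Y) planes (the ADC resolution, zero-g offset, and scale factor are invented, not taken from the paper):

```python
import math

# Hypothetical sensor scale: 10-bit ADC, +/-3 g full range, zero-g at mid-scale
ZERO_G, COUNTS_PER_G = 512, 1024 / 6.0

def counts_to_g(raw):
    """Convert raw (x, y, z) ADC counts to acceleration in g."""
    return tuple((c - ZERO_G) / COUNTS_PER_G for c in raw)

def tilt_angles(ax, ay, az):
    """Static tilt from gravity: pitch ~ anterior/posterior (X) inclination,
    roll ~ lateral (Y) inclination, both in degrees."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

# A sensor lying flat reads roughly (0, 0, 1 g)
ax, ay, az = counts_to_g((512, 512, 512 + int(COUNTS_PER_G)))
pitch, roll = tilt_angles(ax, ay, az)
```

Gravity-based tilt like this is only valid for quasi-static segments; during a fast bowling delivery the dynamic acceleration dominates, which is why such systems condition and fuse the data before motion regeneration.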