
Showing papers in "Knowledge Based Systems in 2003"


Journal ArticleDOI
TL;DR: It is argued that the proposed R5 model is a new approach to using similarity-based reasoning to unify case base building, case retrieval, and case adaptation, and therefore facilitates the development of CBR with applications.
Abstract: This paper reviews some existing models of case-based reasoning (CBR), such as the R4 model of CBR, and proposes an R5 model, in which repartition, retrieve, reuse, revise and retain are the main tasks for the CBR process. The original idea behind this model is that case base building is an important part of CBR and the case base can be built based on partitioning of the possible world of problems and solutions. The paper argues that the proposed R5 model is a new approach to using similarity-based reasoning to unify case base building, case retrieval, and case adaptation, and therefore facilitates the development of CBR applications.
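
As a rough illustration of how partitioning and similarity-based retrieval can work together (this sketch is not taken from the paper; the nearest-prototype partitioning rule, feature encoding and example data are assumptions), a case base can be split around prototypes and then queried within the matching partition:

    import math

    # Minimal sketch: partition a case base around prototypes ("repartition"),
    # then retrieve the most similar case within the query's partition.
    # Assumed encoding: each case = (numeric problem features, solution label).
    def similarity(a, b):
        # Inverse-distance similarity over numeric feature vectors.
        return 1.0 / (1.0 + math.dist(a, b))

    def partition(cases, prototypes):
        parts = {i: [] for i in range(len(prototypes))}
        for features, solution in cases:
            best = max(range(len(prototypes)),
                       key=lambda i: similarity(features, prototypes[i]))
            parts[best].append((features, solution))
        return parts

    def retrieve(query, parts, prototypes):
        idx = max(range(len(prototypes)),
                  key=lambda i: similarity(query, prototypes[i]))
        return max(parts[idx], key=lambda c: similarity(query, c[0]), default=None)

    cases = [((1.0, 2.0), "A"), ((1.2, 1.8), "B"), ((8.0, 9.0), "C")]
    prototypes = [(1.0, 2.0), (8.0, 9.0)]
    parts = partition(cases, prototypes)
    print(retrieve((1.1, 2.1), parts, prototypes))   # -> ((1.0, 2.0), 'A')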

119 citations


Journal ArticleDOI
TL;DR: The project investigated the use of ontologies, agents and knowledge based planning techniques to provide support for adaptive workflow or flexible workflow management, especially in the area of new product development within the chemical industries.
Abstract: In recent years, many organisations have found enterprise modelling, especially business process modelling, to be an effective tool for managing organisational change. The application of business process modelling has brought benefits to many organisations, but the models developed tend to be used for reference during business operations and re-engineering activities; they rarely play an active role in supporting the day-to-day execution of the processes. While workflow management systems are widely used for the streamlined management of 'administrative' business processes, current systems are unable to cope with the more dynamic situations encountered in ad hoc and collaborative processes [1]. A system that supports complex and dynamically changing processes is required. There is increasing interest in making workflow systems more adaptive and in using knowledge-based techniques to provide more flexible process management support than is possible with current workflow systems. This paper describes the results of a collaborative project between Loughborough University and the University of Edinburgh. ICI and Unilever were industrial partners on the project, providing real business requirements in the application domain. The project investigated the use of ontologies, agents and knowledge-based planning techniques to provide support for adaptive or flexible workflow management, especially in the area of new product development within the chemical industries.

94 citations


Journal ArticleDOI
TL;DR: A new algorithm named fuzzy grids based rules mining algorithm (FGBRMA) is proposed to generate fuzzy association rules from a relational database to increase the flexibility for supporting users in making decisions or designing the fuzzy systems.
Abstract: Fuzzy association rules expressed in natural language are well suited to human reasoning and help to increase flexibility in supporting users who are making decisions or designing fuzzy systems. In this paper, a new algorithm named the fuzzy grids based rules mining algorithm (FGBRMA) is proposed to generate fuzzy association rules from a relational database. The proposed algorithm consists of two phases: one to generate the large fuzzy grids, and the other to generate the fuzzy association rules. A numerical example is presented to illustrate the detailed process of finding fuzzy association rules in a specified database, demonstrating the effectiveness of the proposed algorithm.
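
The abstract names only the two phases, so the following is a minimal Python sketch under assumed details: the memberships are already computed, fuzzy support is taken as the mean of the per-record minimum memberships, and the thresholds are hypothetical.

    from itertools import combinations

    # Assumed fuzzified table: each record maps (attribute, term) -> membership in [0, 1].
    records = [
        {("age", "young"): 0.8, ("income", "low"): 0.7},
        {("age", "young"): 0.6, ("income", "low"): 0.9},
        {("age", "old"): 0.9, ("income", "high"): 0.8},
    ]

    def fuzzy_support(grid):
        # Fuzzy support of a grid (a set of attribute-term pairs): the average,
        # over records, of the minimum membership given to all pairs in the grid.
        return sum(min(r.get(p, 0.0) for p in grid) for r in records) / len(records)

    def mine(min_support=0.3, min_confidence=0.7):
        items = {p for r in records for p in r}
        # Phase 1: keep "large" fuzzy grids whose support clears the threshold.
        large = [g for k in (1, 2) for g in combinations(sorted(items), k)
                 if fuzzy_support(g) >= min_support]
        # Phase 2: turn large two-element grids into fuzzy association rules.
        rules = []
        for grid in (g for g in large if len(g) == 2):
            for lhs in grid:
                rhs = grid[0] if lhs == grid[1] else grid[1]
                conf = fuzzy_support(grid) / fuzzy_support((lhs,))
                if conf >= min_confidence:
                    rules.append((lhs, rhs, round(conf, 2)))
        return rules

    for lhs, rhs, conf in mine():
        print(lhs, "=>", rhs, "confidence", conf)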

76 citations


Journal ArticleDOI
TL;DR: A hybrid neuro-symbolic problem solving model is presented in which the aim is to forecast parameters of a complex and dynamic environment in an unsupervised way to predict the red tides that appear in the coastal waters of the north west of the Iberian Peninsula.
Abstract: A hybrid neuro-symbolic problem solving model is presented in which the aim is to forecast parameters of a complex and dynamic environment in an unsupervised way. In situations in which the rules that determine a system are unknown, the prediction of the parameter values that determine the characteristic behaviour of the system can be a problematic task. The system employs a case-based reasoning model to wrap a growing cell structures network, a radial basis function network and a set of Sugeno fuzzy models to provide an accurate prediction. Each of these techniques is used in a different stage of the reasoning cycle of the case-based reasoning system to retrieve, adapt and review the proposed solution to the present problem. This system has been used to predict the red tides that appear in the coastal waters of the north west of the Iberian Peninsula. The results obtained from experiments are presented.

74 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel QoS-based multicast routing algorithm based on the genetic algorithms (GA), and the connectivity matrix of edges is used for genotype representation.
Abstract: Computing the bandwidth-delay-constrained least-cost multicast routing tree is an NP-complete problem. In this paper, we propose a novel QoS-based multicast routing algorithm based on genetic algorithms (GA). In the proposed algorithm, the connectivity matrix of edges is used for genotype representation. Some novel heuristics are also proposed for mutation, crossover, and creation of random individuals. We evaluate the performance and efficiency of the proposed GA-based algorithm in comparison with other existing heuristic and GA-based algorithms through simulation. In the simulations, the proposed algorithm outperforms the previous algorithms reported in the literature.
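
The abstract does not give the operators in detail, so the sketch below only shows how one candidate individual (an edge-selection genotype, in the spirit of the connectivity-matrix representation) might be scored; the toy graph, delay bound and penalty weights are assumptions, and the bandwidth constraint is omitted for brevity.

    import heapq

    # Assumed toy graph: undirected edges with cost and delay; the genotype is a
    # dict over the same edge keys, playing the role of the connectivity matrix.
    cost = {(0, 1): 2, (1, 2): 2, (1, 3): 4, (0, 4): 3, (3, 4): 1}
    delay = {(0, 1): 1, (1, 2): 1, (1, 3): 3, (0, 4): 2, (3, 4): 1}
    SOURCE, DESTINATIONS, MAX_DELAY = 0, (2, 3), 5

    def delays_from_source(genotype):
        # Shortest source-to-node delays restricted to the selected edges.
        dist, heap = {SOURCE: 0}, [(0, SOURCE)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for (a, b), w in delay.items():
                if genotype.get((a, b), 0) and u in (a, b):
                    v = b if u == a else a
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
        return dist

    def fitness(genotype):
        # Lower is better: total cost of the selected edges, plus heavy penalties
        # for destinations that are unreachable or violate the delay bound.
        total = sum(c for e, c in cost.items() if genotype.get(e, 0))
        reach = delays_from_source(genotype)
        for t in DESTINATIONS:
            if t not in reach:
                total += 1000
            elif reach[t] > MAX_DELAY:
                total += 100
        return total

    candidate = {(0, 1): 1, (1, 2): 1, (1, 3): 1}    # one individual (a tree)
    print(fitness(candidate))                        # 2 + 2 + 4 = 8, constraints met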

64 citations


Journal ArticleDOI
TL;DR: This paper attempts to propose an integrated knowledge system to support the extrapolation of projected outcomes of events based on knowledge generated by the relational database model and CBR knowledge model, both of which supplement and complement each other by virtue of their distinct structural features.
Abstract: Case-Based Reasoning (CBR), which is characterized by its capability to capture past experience and knowledge for case matching in various applications, is an emerging and well-accepted approach in the implementation of Knowledge Management (KM) systems. The data format of CBR belongs to the 'free' type and is therefore dissimilar to the traditional relational data model, which emphasizes specified data fields, field lengths and data types. However, there is a lack of research regarding the seamless integration of these heterogeneous data models for achieving effective data communication, which is essential to enhance the business workflow of enterprises. This paper proposes an integrated knowledge system to support the extrapolation of projected outcomes of events based on knowledge generated by the relational database model and the CBR knowledge model, both of which supplement and complement each other by virtue of their distinct structural features.

58 citations


Journal ArticleDOI
TL;DR: This paper has applied this new hybrid strategy to the well-known combinatorial optimization problem, the quadratic assignment problem (QAP), and shows that the proposed algorithm belongs to the best heuristics for the QAP.
Abstract: Genetic algorithms (GAs) are among the most widely used techniques in various areas of computer science, including optimization problems. In this paper, we propose a GA hybridized with the so-called ruin and recreate (R and R) procedure. We have applied this new hybrid strategy to the well-known combinatorial optimization problem, the quadratic assignment problem (QAP). The results obtained from the experiments on different QAP instances show that the proposed algorithm belongs to the best heuristics for the QAP. The power of this algorithm is also demonstrated by the fact that new best known solutions were found for several QAP instances.
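
As a hedged sketch of the two ingredients named in the abstract, the snippet below shows the QAP objective and one ruin-and-recreate move on an illustrative 3x3 instance with an assumed greedy reinsertion rule; the actual hybridisation with the GA is not reproduced here.

    import random

    # Assumed 3x3 instance: flow between facilities and distance between locations.
    flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
    dist = [[0, 2, 4], [2, 0, 1], [4, 1, 0]]
    n = 3

    def qap_cost(perm):
        # perm[i] = location assigned to facility i.
        return sum(flow[i][j] * dist[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))

    def placement_cost(perm, facility, location):
        # Cost contribution of placing `facility` at `location`, counted only
        # against facilities that are already placed.
        return sum(flow[facility][j] * dist[location][perm[j]]
                   for j in range(n) if perm[j] is not None and j != facility)

    def ruin_and_recreate(perm, k=2):
        # Ruin: free k random facilities; recreate: greedily reinsert each one.
        perm = list(perm)
        removed = random.sample(range(n), k)
        free_locations = [perm[i] for i in removed]
        for i in removed:
            perm[i] = None
        for i in removed:
            best = min(free_locations, key=lambda loc: placement_cost(perm, i, loc))
            perm[i] = best
            free_locations.remove(best)
        return perm

    perm = [0, 1, 2]
    print(qap_cost(perm), qap_cost(ruin_and_recreate(perm)))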

47 citations


Journal ArticleDOI
TL;DR: CASPER, an online recruitment search engine, is described, which attempts to address this issue by extending traditional search techniques with a personalization technique that is capable of taking account of user preferences as a means of classifying retrieved results as relevant or irrelevant.
Abstract: Traditional search engine techniques are inadequate when it comes to helping the average user locate relevant information online. The key problem is their inability to recognize and respond to the implicit preferences of a user that are typically unstated in a search query. In this paper we describe CASPER, an online recruitment search engine, which attempts to address this issue by extending traditional search techniques with a personalization technique capable of taking account of user preferences as a means of classifying retrieved results as relevant or irrelevant. We evaluate a number of different classification strategies with respect to their accuracy and noise tolerance. Furthermore, we argue that because CASPER transfers its personalization process to the client side, it offers significant efficiency and privacy advantages over more traditional server-side approaches.
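
CASPER's actual personalization method is not spelled out in the abstract; the sketch below assumes a simple client-side nearest-neighbour vote over the user's previously rated results, with a hypothetical Jaccard feature encoding.

    def similarity(job_a, job_b):
        # Jaccard overlap between two jobs' feature sets (skills, location, ...).
        a, b = set(job_a), set(job_b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def classify(candidate, rated_jobs, k=3):
        # rated_jobs: (feature set, True/False relevance) pairs from this user,
        # held on the client side; a k-nearest-neighbour vote decides relevance.
        neighbours = sorted(rated_jobs,
                            key=lambda item: similarity(candidate, item[0]),
                            reverse=True)[:k]
        votes = sum(1 if relevant else -1 for _, relevant in neighbours)
        return votes > 0

    history = [({"java", "dublin", "junior"}, True),
               ({"java", "cork", "senior"}, True),
               ({"cobol", "remote"}, False)]
    print(classify({"java", "dublin", "senior"}, history))   # -> True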

42 citations


Journal ArticleDOI
TL;DR: It is argued that there are certain concepts within the general domain of Knowledge Management that have not been fully explored and the discipline will benefit from a more detailed look at some of these concepts.
Abstract: This paper argues that there are certain concepts within the general domain of Knowledge Management that have not been fully explored, and that the discipline will benefit from a more detailed look at some of them. The concepts of risk, gap and strength are the particular concepts explored in more detail within this paper, and a reason for describing these elements as concepts rather than terms is discussed. More precise definitions of these concepts can provide management with decision-making support concerning the knowledge resource. Several function definitions for risk, gap and strength are offered. Finally, the paper considers how these concepts can influence organisational knowledge management schemes.

42 citations


Journal ArticleDOI
TL;DR: BORM, its tools and methods, and its differences from other similar development methodologies are outlined; the methodology was developed to capture knowledge of typical business systems.
Abstract: Business object relationship modelling (BORM) is a development methodology created to capture knowledge of typical business systems. It has been in development since 1993 and has proved an increasingly effective method which is popular with both users and developers. The effectiveness gained is largely the result of a unified and simple method for presenting all aspects of the relevant model. This paper outlines BORM, its tools and methods, and its differences from other similar development methodologies.

41 citations


Journal ArticleDOI
TL;DR: This paper investigates the applicability of a Rough Set model and method to discover maximal associations from a collection of text documents, compares it with the maximal association method, and presents an alternative strategy to the taxonomies required by these methods.
Abstract: In this paper we investigate the applicability of a Rough Set model and method to discover maximal associations from a collection of text documents, and compare its applicability with that of the maximal association method. Both methods are based on computing co-occurrences of various sets of keywords, but it has been shown that by using the Rough Set method the rules discovered are similar to maximal association rules, while the method is much simpler than the maximal association method. In addition, we present an alternative strategy to the taxonomies required by the above methods: instead of building taxonomies from labelled document collections themselves, we effectively utilise ontologies, which will increasingly be deployed on the Internet.

Journal ArticleDOI
TL;DR: The approach provides a generic view of key RE processes clustered into three groups of activities: requirement elicitation, analysis and negotiation and is supported by a set of knowledge functions aimed at facilitating the requirement engineers in matching customer requirements to product characteristics.
Abstract: The success of requirement specification in new design projects largely depends on an accurate match between customer requirements and company product and process knowledge. Despite the recent developments in the domain there is still a lack of transparency and consistent definition and integration of the activities in requirement engineering (RE). There is also a lack of structured methods for capturing relevant enterprise knowledge and deploying it in support of decision making for requirement specification. This paper reports on the knowledge acquisition and sharing for requirement engineering (KARE) approach for requirement specification of one-of-a-kind complex systems. The approach provides a generic view of key RE processes clustered into three groups of activities: requirement elicitation, analysis and negotiation. The process is supported by a set of knowledge functions aimed at facilitating the requirement engineers in matching customer requirements to product characteristics. The reported research has been developed as part of the ESPRIT collaborative project KARE funded by the European Commission.

Journal ArticleDOI
TL;DR: Fuzzy logic has been incorporated as the reasoning mechanism behind the system and a simplified fuzzy set-handling object has been developed in order to save resources.
Abstract: The development and implementation of an online knowledge-based system for machinability data selection is presented. Fuzzy logic has been incorporated as the reasoning mechanism behind the system. The system has been developed using object-oriented programming, dynamic link libraries and ActiveX controls. It has been incorporated and tested in Internet and intranet environments. Possible implementations of the system in real-world environments are suggested. A simplified fuzzy set-handling object has been developed in order to save resources. A weighted centroid rather than a union centroid has been used in output defuzzification. A comparison between these two methods is made and discussed.
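
To make the defuzzification comparison concrete, here is a minimal sketch, assuming triangular output sets and illustrative rule strengths (not the system's actual membership functions): the weighted centroid combines consequent centroids weighted by rule strength, while the union centroid integrates over the clipped-and-unioned output sets.

    def tri(a, b, c):
        # Triangular membership function on the output universe.
        return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

    # Assumed fired rules: (firing strength, centroid of consequent, membership fn).
    rules = [
        (0.8, 20.0, tri(10, 20, 30)),   # e.g. "cutting speed LOW"
        (0.3, 50.0, tri(40, 50, 60)),   # e.g. "cutting speed MEDIUM"
    ]

    def weighted_centroid(rules):
        # Centroids of the consequents, weighted by rule strength.
        return sum(w * c for w, c, _ in rules) / sum(w for w, _, _ in rules)

    def union_centroid(rules, lo=0.0, hi=70.0, steps=700):
        # Discretise the universe, clip each set at its rule strength, take the
        # pointwise maximum (union) and compute the centroid of that shape.
        num = den = 0.0
        for i in range(steps + 1):
            x = lo + (hi - lo) * i / steps
            mu = max(min(w, f(x)) for w, _, f in rules)
            num, den = num + x * mu, den + mu
        return num / den if den else 0.0

    print(weighted_centroid(rules), union_centroid(rules))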

Journal ArticleDOI
TL;DR: A semiotic method, namely NAM has been chosen as a complement to DEMO for organisational modelling and is used to capture norms (e.g. rules, regulations and conditions) in controlling optional and conditional actions.
Abstract: An organisation is in essence an information system, in which information is used for communication and coordination of activities. This information system is built upon the organisational infrastructure and is supposed to support the business processes. To study organisational behaviour in the form of business processes, one needs an effective modelling method to capture the dynamics of business processes. In this paper we introduce the DEMO methodology for organisational modelling. An extension of the methodology has been made by incorporating a semiotic method. DEMO is a cross-disciplinary theory for describing and explaining the action of organisations. It contains several model types, each expressed in a specific diagram. They collectively provide the necessary knowledge for information systems development and business process redesign. The process model of DEMO is discussed in detail in this paper. A need has been identified for a facility in DEMO to formalise rules and conditions for optional and conditional actions. Towards this end, a semiotic method, namely NAM, has been chosen as a complement to DEMO. After producing a process model in terms of DEMO, we use NAM to capture norms (e.g. rules, regulations and conditions). The norms determine the conditions and constraints in controlling optional and conditional actions. They govern the behaviour of actors (agents), normally by deciding when certain actions are performed. Norms clearly define the roles, functions, responsibilities and authorities of the actors. The extended DEMO has been applied to a real-life problem for demonstration purposes.

Journal ArticleDOI
TL;DR: The KBMS provides a full life-cycle environment for the development and verification of business rule and expert systems and describes building a small expert system in the KBMS, with emphasis on the verification testing at each stage.
Abstract: As automation of business processes becomes more complex and encompasses less-structured domains, it becomes even more essential that the knowledge used by these processes is verified and accurate. Most application development is facilitated with software tools, but most business rule and expert systems are developed in environments that provide inadequate verification testing. This paper describes an emerging class of applications we refer to as Knowledge Base Management Systems (KBMS). The KBMS provides a full life-cycle environment for the development and verification of business rule and expert systems. We present an overview of knowledge base verification, the KBMS life-cycle, and the architecture for a KBMS. We then describe building a small expert system in the KBMS, with emphasis on the verification testing at each stage. We conclude with a summary of the benefits of a KBMS.

Journal ArticleDOI
TL;DR: A multi-agent multi-tier architecture of autonomous and goal driven agents that cooperatively assist different users to locate and retrieve information from dynamic and distributed information resources is proposed.
Abstract: With the fast growth of the information space in the Internet and large-scale Intranet computing environments, a new design paradigm is required for information systems. In such environments, the volume, dynamism, heterogeneity and distributed nature of the information make it difficult for a user to locate and retrieve the desired information. Moreover, these computing environments are open environments, where information resources may join or leave at any time. To this end, this paper proposes a multi-agent, multi-tier architecture. These agents are autonomous and goal-driven agents that cooperatively assist different users to locate and retrieve information from distributed resources. The system architecture comprises three tiers. At the front end, User Agents interact with the users to fulfill their interests and preferences. At the back end, Resource Agents access and capture the content and changes of the information resources. At the middle tier, Broker Agents facilitate cooperation among the agents. A prototype of this system is implemented to demonstrate how the agents can transparently cooperate to locate and retrieve information from dynamic and distributed information resources.
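
A minimal sketch of the three-tier idea follows; all class names, method names and example documents are invented for illustration and are not the system's actual interfaces.

    # Front tier: UserAgent; middle tier: BrokerAgent; back tier: ResourceAgent.
    class ResourceAgent:
        def __init__(self, name, documents):
            self.name, self.documents = name, documents

        def query(self, keywords):
            # Return documents from this resource that mention every keyword.
            return [d for d in self.documents
                    if all(k.lower() in d.lower() for k in keywords)]

    class BrokerAgent:
        def __init__(self):
            self.resources = []          # knows which back-end agents exist

        def register(self, resource_agent):
            self.resources.append(resource_agent)

        def locate(self, keywords):
            # Fan the query out to every registered resource and merge answers.
            return {agent.name: agent.query(keywords) for agent in self.resources}

    class UserAgent:
        def __init__(self, broker, preferences):
            self.broker, self.preferences = broker, preferences

        def search(self, keywords):
            # Add the user's stored preferences before handing off to the broker.
            return self.broker.locate(list(keywords) + self.preferences)

    broker = BrokerAgent()
    broker.register(ResourceAgent("intranet", ["Agent architectures survey 2003"]))
    broker.register(ResourceAgent("web", ["Survey of multicast routing"]))
    print(UserAgent(broker, ["survey"]).search(["agent"]))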

Journal ArticleDOI
TL;DR: This paper shows how a Genetic Algorithm approach was used to resolve spatial conflict between objects after scaling, achieving near optimal solutions within practical time constraints.
Abstract: Rendering map data at scales smaller than their source can give rise to map displays exhibiting graphic conflict, such that objects are either too small to be seen or too close to each other to be distinguishable. Furthermore, scale reduction will often require important features to be exaggerated in size, sometimes leading to overlapping features. Cartographic map generalisation is the process by which any graphic conflict that arises during scaling is resolved. In this paper, we show how a Genetic Algorithm approach was used to resolve spatial conflict between objects after scaling, achieving near-optimal solutions within practical time constraints.
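
The paper's exact encoding is not reproduced here; the sketch below assumes each individual is a list of per-object displacement vectors and that fitness counts the remaining symbol overlaps (plus a small penalty for moving far from the true positions), with a random population standing in for the full GA loop.

    import random

    symbols = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]   # object centres after scaling
    RADIUS = 0.8                                      # exaggerated symbol radius

    def overlaps(displacements):
        # Count pairs of displaced symbols that still conflict (circles overlap).
        moved = [(x + dx, y + dy) for (x, y), (dx, dy) in zip(symbols, displacements)]
        count = 0
        for i in range(len(moved)):
            for j in range(i + 1, len(moved)):
                dx, dy = moved[i][0] - moved[j][0], moved[i][1] - moved[j][1]
                if dx * dx + dy * dy < (2 * RADIUS) ** 2:
                    count += 1
        return count

    def fitness(displacements):
        # Prefer conflict-free layouts, then small displacements (stay near truth).
        shift = sum(dx * dx + dy * dy for dx, dy in displacements)
        return overlaps(displacements) * 100 + shift

    population = [[(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in symbols]
                  for _ in range(50)]
    best = min(population, key=fitness)
    print(overlaps([(0, 0)] * len(symbols)), overlaps(best))   # conflicts before vs after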

Journal ArticleDOI
TL;DR: The prototype presented goes beyond common approaches to the automation of image indexing and retrieval, by applying a novel method that captures deep semantic relations expressed in the captions that accompany crime-scene photographs.
Abstract: This paper presents work on text-based photograph indexing and retrieval for crime investigation, an application domain where efficient querying of large crime-scene photograph databases is of crucial importance. Automating this task will change current police practices considerably, by bringing 'intelligence' to crime support information systems. The prototype presented goes beyond common approaches to the automation of image indexing and retrieval, by applying a novel method that captures deep semantic relations expressed in the captions that accompany crime-scene photographs. These semantic relations are extracted as triples using advanced knowledge-based Natural Language Processing technologies and resources.

Journal ArticleDOI
TL;DR: A heuristic path planning algorithm of the mobile robot is replaced with a seed casebase and the upper and lower bounds for the cardinality of the casebase are proved and it is proved that the robot would theoretically find all paths from start to goal.
Abstract: This paper presents a theoretical analysis of a casebase used for mobile robot path planning in dynamic environments. Unlike other case-based path planning approaches, we use a grid map to represent the environment, which permits the robot to operate in unstructured environments. The objective of the mobile robot is to learn to choose paths that are less risky to follow. Our experiments with real robots have shown the efficiency of our concept. In this paper, we replace a heuristic path planning algorithm of the mobile robot with a seed casebase and prove upper and lower bounds for the cardinality of the casebase. The proofs indicate that it is realistic to seed the casebase with some solutions to a path-finding problem so that no possible solution differs too much from some path in the casebase. This guarantees that the robot would theoretically find all paths from start to goal. The proof of the upper bound of the casebase cardinality shows that the casebase would in the long run grow too large and all possible solutions cannot be stored. In order to keep only the most efficient solutions, the casebase has to be revised at run-time, or some other measure of path difference has to be considered.

Journal ArticleDOI
TL;DR: A frame metadata model is introduced to facilitate continuous association rule generation in data mining; a new set of association rules can be derived on each update of the source databases by the data operation function in the frame metadata model.
Abstract: Most organizations have large databases that contain a wealth of potentially accessible information. The unlimited growth of data will inevitably lead to a situation in which it is increasingly difficult to access the desired information. There is a need to extract knowledge from data by knowledge discovery in databases (KDD). Data mining is the discovery stage of KDD, and an association rule is one possible product. An association rule states a statistical correlation between the occurrence of certain attributes in a database table. Such a correlation changes continuously, subject to new updates in the source database. Association rule mining is often done by recomputing the rules over the whole source database. In this paper, we introduce a frame metadata model to facilitate continuous association rule generation in data mining. A new set of association rules can be derived on each update of the source databases by the data operation function in the frame metadata model. The frame metadata model consists of two types of classes: static classes and active classes. The active classes are event driven, obtaining data from the database when invoked by a certain event. The static classes describe the data of the association rule table. Whenever an update occurs in the existing base relations, a corresponding update will be invoked by an event attribute in the method class, which will compute the association rules continuously. The result is an active data mining approach capable of deriving association rules of a source database continuously or incrementally using the frame metadata model.
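
As a simplified sketch of the 'active' idea, assuming itemset counts maintained next to the source table and an insert event that triggers an incremental update (the frame metadata classes, thresholds and two-item limit are all simplifying assumptions):

    from itertools import combinations
    from collections import Counter

    class ActiveRuleMiner:
        # Counts of 1- and 2-itemsets are kept alongside the source table and
        # updated on every insert, so rules are re-derived incrementally rather
        # than by re-mining the whole database.
        def __init__(self, min_support=2, min_confidence=0.6):
            self.counts = Counter()
            self.min_support = min_support
            self.min_confidence = min_confidence

        def on_insert(self, transaction):
            # In the frame metadata model an event attribute would trigger this.
            items = sorted(set(transaction))
            for size in (1, 2):
                for itemset in combinations(items, size):
                    self.counts[itemset] += 1

        def rules(self):
            out = []
            for itemset, support in self.counts.items():
                if len(itemset) != 2 or support < self.min_support:
                    continue
                a, b = itemset
                for lhs, rhs in ((a, b), (b, a)):
                    confidence = support / self.counts[(lhs,)]
                    if confidence >= self.min_confidence:
                        out.append((lhs, rhs, round(confidence, 2)))
            return out

    miner = ActiveRuleMiner()
    for row in (["bread", "milk"], ["bread", "butter"], ["bread", "milk", "butter"]):
        miner.on_insert(row)
    print(miner.rules())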

Journal ArticleDOI
TL;DR: A novel distributed computing environment designed as a simulation tool for the analysis of large and disaggregated data flows and a middleware whose dynamic properties replicate the behaviour of large data flows, i.e. computing objects migrating between the different computing nodes of a local area network.
Abstract: The research described in this paper introduces a novel distributed computing environment designed as a simulation tool for the analysis of large and disaggregated data flows. Our research is based on a mapping between a real-world system, in which the interest of study is the modelling of disaggregated data flows, and a distributed computing environment, both being modelled as a graph. This distributed computing environment is a middleware whose dynamic properties replicate the behaviour of large data flows, i.e. computing objects migrating between the different computing nodes of a local area network. It also supports distribution and processing of objects at the appropriate level of granularity, that is, the nodes of the computing graph. This approach gives a high level of flexibility and scalability to the system. We believe that such a distributed computing environment provides a novel solution for the exploration and understanding of patterns produced by systems based on disaggregated data. The potential of our middleware is illustrated by a case study that simulates large people flows for different hall configurations of an airport terminal.

Journal ArticleDOI
TL;DR: It is demonstrated that it is possible to distinguish between both species and country of origin with a high degree of accuracy and that the results are also likely to be suitable for use in court.
Abstract: Conservation is an area in which a great deal of data has been collected over many years. Intelligent Data Analysis offers the possibility of analysing this data in an automatic fashion to map characteristics, identify trends and offer guidance for conservation action. This paper is concerned with the use of techniques of Intelligent Data Analysis for an important task in animal conservation: the identification of the species and origin of illegally traded or confiscated African rhino horn. It builds on an earlier analysis by the African Rhino Specialist Group. It is demonstrated that it is possible to distinguish between both species and country of origin with a high degree of accuracy and that the results are also likely to be suitable for use in court.

Journal ArticleDOI
TL;DR: An efficient algorithm is given based on A O taxonomy which not only derives generalized association rules, but also accesses the database only once, and defines the interestingness of association rules based on the level of the concepts in the taxonomy.
Abstract: We introduce a knowledge-based approach to mining generalized association rules which is sound and interactive. The proposed mining is sound because our scheme uses knowledge to mine only those concepts that are of interest to the user. It is interactive because we provide a user-controllable parameter with which the user can mine interactively. For this, we use a taxonomy based on functionality, and a restricted way of generalizing the items. We call such a taxonomy an A O taxonomy and the corresponding generalization A O generalization. We claim that this type of generalization is more meaningful since it is based on a semantic grouping of concepts. We use this knowledge to naturally exploit the mining of interesting negative association rules. We define the interestingness of association rules based on the level of the concepts in the taxonomy. We give an efficient algorithm based on the A O taxonomy which not only derives generalized association rules, but also accesses the database only once.
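
The notion of generalising items through a taxonomy and weighting rule interestingness by concept level can be sketched as follows; the taxonomy, transactions and depth-based interest measure are illustrative assumptions, not the paper's definitions.

    taxonomy = {   # child -> parent ("functional" grouping; the root has depth 0)
        "espresso": "coffee", "latte": "coffee",
        "coffee": "hot drink", "tea": "hot drink",
    }

    def depth(item):
        d = 0
        while item in taxonomy:
            item, d = taxonomy[item], d + 1
        return d

    def generalise(item, level):
        # Walk up the taxonomy until the item sits at (or above) the chosen depth.
        while depth(item) > level and item in taxonomy:
            item = taxonomy[item]
        return item

    def interestingness(antecedent, consequent):
        # Deeper (more specific) concepts are treated as more interesting.
        return depth(antecedent) + depth(consequent)

    transactions = [["espresso", "tea"], ["latte", "tea"]]
    level = 1                                    # user-controllable parameter
    print([[generalise(i, level) for i in t] for t in transactions])
    print(interestingness("coffee", "tea"), interestingness("espresso", "tea"))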

Journal ArticleDOI
TL;DR: The problem of uncertainty is analysed in systems based on cases and a model that shows ways for its determination and handling using probabilistic techniques combined with concepts in the theory of the rough sets is proposed.
Abstract: In knowledge-based systems, understood here as particular decision-making systems, techniques for handling uncertainty are of special interest. In this article, the problem of uncertainty in case-based systems is analysed, and a model is proposed for its determination and handling using probabilistic techniques combined with concepts from rough set theory. The proposed model is based on a case-based structure which allows better retrieval of the cases. The ideas are explained through an example.
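
How the probabilistic part is combined with rough sets is not detailed in the abstract, so the sketch below only shows the rough-set side: lower and upper approximations of a target set of cases and the accuracy ratio as a simple uncertainty measure, using invented example cases.

    from collections import defaultdict

    # Invented cases: case id -> (condition attributes, outcome).
    cases = {
        1: (("high", "yes"), "risk"),
        2: (("high", "yes"), "safe"),
        3: (("low", "no"), "safe"),
        4: (("low", "yes"), "risk"),
    }
    target = {cid for cid, (_, outcome) in cases.items() if outcome == "risk"}

    # Equivalence classes: cases indiscernible by their condition attributes.
    classes = defaultdict(set)
    for cid, (attrs, _) in cases.items():
        classes[attrs].add(cid)

    lower = {cid for block in classes.values() if block <= target for cid in block}
    upper = {cid for block in classes.values() if block & target for cid in block}

    # Accuracy |lower| / |upper| is one simple indicator of uncertainty: cases 1
    # and 2 share identical conditions but different outcomes, so they fall in
    # the boundary region.
    print(sorted(lower), sorted(upper), len(lower) / len(upper))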

Journal ArticleDOI
TL;DR: A novel method of knowledge search over the Internet is achieved based on the function-driven concept, connected artificial neural networks and Active Server Pages techniques, and the means of knowledge storage, acquisition and representation are portrayed in detail.
Abstract: To overcome the limitations of traditional computer-aided design platforms, a framework for an Internet-based intensive product design platform (IPDP) is proposed. The structure of IPDP and the key issues regarding its implementation are introduced. Further, the means of knowledge storage, acquisition and representation are portrayed in detail. Based on the function-driven concept, connected artificial neural networks and Active Server Pages techniques, a novel method of knowledge search over the Internet is achieved. Finally, a prototype is developed to demonstrate the feasibility of the structure and the construction method of IPDP.

Journal ArticleDOI
TL;DR: The XRULER system is described, and the results show that it is possible to combine symbolic classifiers into a final symbolic classifier with an increase in accuracy and a decrease in the number of final rules.
Abstract: Classification algorithms for large databases have many practical applications in data mining. Whenever a dataset is too large for a particular learning algorithm to be applied, sampling can be used to scale up classifiers to massive datasets. One general approach associated with sampling is the construction of ensembles. Although benefits in accuracy can be obtained from the use of ensembles, one problem is their interpretability. This has motivated our work on trying to use the benefits of combining symbolic classifiers, while still keeping the symbolic component in the learning system. This idea has been implemented in the XRULER system. We describe the XRULER system, as well as experiments performed to evaluate it on 10 datasets. The results show that it is possible to combine symbolic classifiers into a final symbolic classifier with an increase in accuracy and a decrease in the number of final rules.

Journal ArticleDOI
TL;DR: An effective algorithm for managing network configuration faults is presented; faults that cannot be resolved by a single system alone are diagnosed and recovered, with regard to network conditions, through collaboration among a number of agents distributed across the management domain.
Abstract: This study presents a management model for network configuration faults, together with diagnosis and recovery algorithms based on collaboration among agents. The management model comprises three stages of detection, diagnosis and recovery, and each stage uses a set of rules in a rule-based reasoning database to diagnose and recover network configuration faults. The study also presents an effective network configuration management algorithm that handles faults which cannot be resolved by a single system alone, diagnosing and recovering them with regard to network conditions through collaboration among a number of agents distributed across the management domain.

Journal ArticleDOI
TL;DR: A suite of Knowledge-based CBA systems for IT Skills, developed at the University of East Anglia, are described, which have been deployed by a leading UK examination body to replace human markers for several of its flagship IT awards.
Abstract: As the use of Information Technology (IT) increases, so does the need for accreditation of IT skills. Of the many Computer-Based Assessment (CBA) systems which claim to assess such skills, most are based on approaches such as multiple choice questions or atomic functions tests within controlled environments. In contrast, most professional qualifications in the UK, assessed by human examiners, focus on the output of authentic skills, that is, complete documents produced during realistic interactions with industry standard software. In order to automate the assessment of such examinations the expertise and knowledge of human examiners must be represented and the authentic nature of the assessment tasks retained. This paper describes a suite of Knowledge-based CBA systems for IT Skills, developed at the University of East Anglia, which have been deployed by a leading UK examination body to replace human markers for several of its flagship IT awards.

Journal ArticleDOI
TL;DR: This work presents an extension to DL-based taxonomic reasoning by means of inference fusion, i.e. the dynamic combination of inferences from distributed heterogeneous reasoners, and proposes a language and reasoning system which uses knowledge bases written in Dℒ(D)/S and supports hybrid reasoning.
Abstract: We present an extension to DL-based taxonomic reasoning by means of inference fusion, i.e. the dynamic combination of inferences from distributed heterogeneous reasoners. Our approach integrates results from a DL-based system with results from a constraint solver under the direction of a global reasoning coordinator. Inference fusion is performed by (i) processing heterogeneous input knowledge, producing suitable homogeneous input knowledge for each specialised reasoner; (ii) activating each reasoner when necessary, collecting its results and passing them to the other reasoner if appropriate; (iii) combining the results of the two reasoners. We discuss the benefits of our approach and demonstrate our ideas by proposing a language (Dℒ(D)/S) and a reasoning system (Concor) which uses knowledge bases written in Dℒ(D)/S and supports hybrid reasoning. We illustrate our ideas with an example.

Journal ArticleDOI
TL;DR: A heuristic algorithm called DPOLYSA is proposed that solves Extended DLRs, consisting of disjunctions of linear inequalities, linear disequations and non-linear disequations, as a non-binary disjunctive CSP solver.
Abstract: Nowadays, many real problems can be modelled as Constraint Satisfaction Problems (CSPs). Some CSPs are considered non-binary disjunctive CSPs. Many researchers study the problems of deciding consistency for Disjunctive Linear Relations (DLRs). In this paper, we propose a new class of constraints called Extended DLRs consisting of disjunctions of linear inequalities, linear disequations and non-linear disequations. This new class of constraints extends the class of DLRs. We propose a heuristic algorithm called DPOLYSA that solves Extended DLRs, as a non-binary disjunctive CSP solver. This proposal works on a polyhedron whose vertices are also polyhedra that represent the nondisjunctive problems. We also present a statistical preprocessing step which translates the disjunctive problem into a non-disjunctive and ordered one in each step.