
Showing papers in "Applied Artificial Intelligence in 2010"


Journal ArticleDOI
TL;DR: In this paper, the authors present δ-ontologies, a framework for reasoning with inconsistent DL ontologies, which is based on defeasible logic programs (DeLP).
Abstract: Standard approaches to reasoning with description logics (DL) ontologies require them to be consistent. However, as ontologies are complex entities and sometimes built upon other imported ontologies, inconsistencies can arise. In this article, we present δ-ontologies, a framework for reasoning with inconsistent DL ontologies. Our proposal involves expressing DL ontologies as defeasible logic programs (DeLP). Given a query posed with respect to an inconsistent ontology, a dialectical analysis is performed on a DeLP program obtained from that ontology, in which all arguments for and against the final answer to the query are taken into account. We also present an application to ontology integration based on the global-as-view approach.

59 citations


Journal ArticleDOI
TL;DR: The positive user evaluation of the bulletins produced by MARQUIS shows that the use of automatic text generation techniques in such a complex and sensitive application is feasible.
Abstract: Air pollution has a major influence on health. It is thus not surprising that air quality (AQ) is increasingly becoming a central issue in environmental information policy worldwide. The most common way to deliver AQ information is in terms of graphics, tables, pictograms, or color scales that display either the concentrations of the pollutant substances or the corresponding AQ indices. However, all of these presentation modes lack an explanatory dimension; nor can they be easily tailored to the needs of individual users. MARQUIS is an AQ information generation service that produces user-tailored multilingual bulletins on the major measured and forecasted air pollution substances and their relevance to human health in five European regions. It incorporates modules for assessing pollutant time-series episodes with respect to their relevance to a given addressee, for planning the discourse structure of the bulletins and selecting the adequate presentation mode, and for generation proper. The positive user evaluation of the bulletins produced by MARQUIS shows that the use of automatic text generation techniques in such a complex and sensitive application is feasible.

44 citations


Journal ArticleDOI
TL;DR: An agent-based management and control system is being developed to enable large-scale deployment of distributed energy resources and will allow consumers who are connected at low levels in the distribution network to manage their energy requirements and participate in coordination responses to network stimuli.
Abstract: This article describes our research in technologies for the management and control of distributed energy resources. An agent-based management and control system is being developed to enable large-scale deployment of distributed energy resources. Local intelligent agents will allow consumers who are connected at low levels in the distribution network to manage their energy requirements and participate in coordinated responses to network stimuli. Such responses can be used to reduce the volatility of wholesale electricity prices and assist constrained networks during summer and winter demand peaks. In our system, the coordination of energy resources is decentralized: energy resources coordinate with each other to realize efficient autonomous matching of supply and demand in large power distribution networks. The information exchange is through indirect (or stigmergic) communication between agents. The coordination mechanism is asynchronous and adapts to change in an unsupervised manner, making it intrinsically scalable and robust.

40 citations


Journal ArticleDOI
TL;DR: A weighted goal programming model is formulated, a simulated annealing algorithm accompanied by a heuristic filling procedure is proposed to solve the model and the computational results have validated significance and usefulness of the proposed approach.
Abstract: In this article, we explore a new approach to the solution of multi-objective container-loading problems, mostly encountered in the transportation and wholesaling industries. Our goal is to load the items (boxes) that would provide the highest total weight into the container in the best possible way. These two objectives (weight maximization and volume utilization) are conflicting because the volume of a box is usually not proportional to its weight. A weighted goal programming model is formulated and presented. A simulated annealing (SA) algorithm accompanied by a heuristic filling procedure is then proposed to solve the model. The proposed algorithm was first tested on a set of benchmark problems available in the literature and then used on real-world data provided by a distribution company. The computational results validate the significance and usefulness of the proposed approach.
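The core loop the abstract describes, a simulated annealing search over loadings scored by a weighted blend of the two conflicting goals, can be sketched in miniature. Everything below (one-dimensional "boxes", the objective weights, the cooling schedule) is an invented toy stand-in, not the paper's goal programming model or its filling heuristic:

```python
import math
import random

def anneal_loading(boxes, capacity, w_weight=0.7, w_vol=0.3,
                   t0=10.0, cooling=0.95, iters=2000, seed=42):
    """Toy SA over subsets of boxes = [(weight, volume), ...].

    The score blends total weight and volume utilization, echoing the
    weighted-goal idea; infeasible loadings score -inf and are rejected.
    """
    rng = random.Random(seed)

    def score(sel):
        vol = sum(boxes[i][1] for i in sel)
        if vol > capacity:                      # over capacity: infeasible
            return float("-inf")
        wt = sum(boxes[i][0] for i in sel)
        return w_weight * wt + w_vol * (vol / capacity)

    current = set()
    best, best_s = set(current), score(current)
    t = t0
    for _ in range(iters):
        i = rng.randrange(len(boxes))
        cand = set(current)
        cand.symmetric_difference_update({i})   # flip one box in/out
        delta = score(cand) - score(current)
        # Accept improvements always; worse moves with temperature-scaled odds.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = cand
            if score(current) > best_s:
                best, best_s = set(current), score(current)
        t = max(t * cooling, 1e-6)
    return best, best_s

best, best_s = anneal_loading([(10.0, 5.0), (8.0, 4.0), (6.0, 3.0)],
                              capacity=7.0)
```

With these three boxes and capacity 7, the optimum is to load boxes 1 and 2 (weight 14, full volume), which the loop finds despite the greedy temptation of the heaviest box.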

36 citations


Journal ArticleDOI
TL;DR: This article presents experiments with an implementation, in an agentified peer-to-peer network, of L.I.A.R., a social control approach better adapted to ODMAS than other approaches such as cryptographic security or centralized institutions.
Abstract: Open and decentralized multiagent systems (ODMAS) are particularly vulnerable to the introduction of faulty or malevolent agents. Indeed, such systems rely on collective tasks that are performed collaboratively by several agents that interact to coordinate themselves. It is therefore very important that agents respect the system rules, especially concerning interaction, to successfully achieve these collective tasks. In this article we propose the L.I.A.R. model to control the agents' interactions. This model follows the social control approach, which consists of developing an adaptive and self-organized control set up by the agents themselves. Being intrinsically decentralized and nonintrusive to the agents' internal functioning, it is better adapted to ODMAS than other approaches, such as cryptographic security or centralized institutions. To implement such a social control, agents should be able to characterize the interactions they observe and to sanction them. L.I.A.R. includes different formalisms: (i) a social commitment model that enables agents to represent observed interactions, (ii) a social norm model to represent the system rules, (iii) social policies to evaluate the acceptability of agents' interactions, and (iv) a reputation model to enable agents to apply sanctions to their peers. This article presents experiments with an implementation of L.I.A.R. in an agentified peer-to-peer network. These experiments show that L.I.A.R. is able to compute reputation levels quickly, precisely, and efficiently. Moreover, these reputation levels are adaptive and enable agents to identify and isolate harmful agents. They also enable agents to identify good peers with which to pursue their interactions.

35 citations


Journal ArticleDOI
TL;DR: This research investigates classification of documents according to the ethnic group of their authors and/or to the historical period when the documents were written for Jewish Law articles written in Hebrew-Aramaic, languages that are rich in their morphological forms.
Abstract: This research investigates classification of documents according to the ethnic group of their authors and/or the historical period when the documents were written. The classification is done using various combinations of six sets of stylistic features: quantitative, orthographic, topographic, lexical, function, and vocabulary richness. The application domain is Jewish Law articles written in Hebrew-Aramaic, languages that are rich in their morphological forms. Four popular machine learning methods have been applied. The logistic regression method led to the best accuracy results: about 99.6% when classifying by the ethnic group of the authors or by the historical period when the articles were written, and about 98.3% when classifying by both simultaneously. The quantitative feature set was found to be very successful, superior to all the other sets. The lexical and function feature sets were also found to be useful. The quantitative and function features are domain and language independent. These two feature sets might be generalized to similar classification tasks for other languages and can therefore be useful for the text classification community at large.
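As a rough illustration of the kinds of quantitative, function-word, and vocabulary-richness features such a study combines, here is a toy extractor. The English function-word list and feature names are invented placeholders; the paper works with far richer feature sets for Hebrew-Aramaic:

```python
from collections import Counter

# Illustrative English function words (the actual sets would be
# language-specific and much larger).
FUNCTION_WORDS = {"the", "of", "and", "to", "in", "that", "is"}

def stylometric_features(text):
    """Toy quantitative + function-word + richness profile of a document."""
    tokens = text.lower().split()
    n = len(tokens)
    counts = Counter(tokens)
    return {
        "avg_word_len": sum(len(t) for t in tokens) / n,   # quantitative
        "vocab_richness": len(counts) / n,                 # type/token ratio
        **{f"fw_{w}": counts[w] / n for w in sorted(FUNCTION_WORDS)},
    }

feats = stylometric_features("the law of the land is the law")
```

Vectors like `feats` would then feed a standard classifier such as logistic regression; the point is that none of these features depends on topic words, which is what makes them portable across domains.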

29 citations


Journal ArticleDOI
TL;DR: A fuzzy compositional modeler to represent, reason, and propagate inexact information to support automated generation of crime scenarios and a link-based approach to identifying potential duplicated referents within the generated scenarios are offered.
Abstract: Given a set of collected evidence and a predefined knowledge base, some existing knowledge-based approaches have the capability of synthesizing plausible crime scenarios under restrictive conditions. However, significant challenges arise for problems where the degree of precision of the available intelligence data can vary greatly, often involving vague and uncertain information. Also, the issue of identity disambiguation gives rise to another crucial barrier in crime investigation: the generated crime scenarios may often refer to unknown referents (such as a person or certain objects), whereas these seemingly unrelated referents may in fact be related. Inspired by this observation, this article presents a fuzzy compositional modeler to represent, reason with, and propagate inexact information to support automated generation of crime scenarios. Further, the article offers a link-based approach to identifying potentially duplicated referents within the generated scenarios. The applicability of this work is illustrated by means of an example of discovering unforeseen crime scenarios.

27 citations


Journal ArticleDOI
TL;DR: It is shown that for estimation of BTM and BTMA release profiles, MLP outperforms GRNN and RBF networks in terms of reliability and efficiency.
Abstract: Estimation of the release profiles of drugs normally requires time-consuming trial-and-error experiments. Feed-forward neural networks including the multilayer perceptron (MLP), radial basis function network (RBFN), and generalized regression neural network (GRNN) are used to predict the release profiles of betamethasone (BTM) and betamethasone acetate (BTMA), where the in situ forming systems consist of poly(lactide-co-glycolide), N-methyl-2-pyrrolidone, and ethyl heptanoate as the polymer, solvent, and additive, respectively. The input vectors of the artificial neural networks (ANNs) include drug concentration, gamma irradiation, additive substance, and type of drug. As the outputs of the ANNs, three features are extracted using the nonlinear principal component analysis technique. A leave-one-out cross-validation approach is used to train each ANN. We show that for estimation of BTM and BTMA release profiles, the MLP outperforms the GRNN and RBF networks in terms of reliability and efficiency.
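The leave-one-out protocol used to train and score each network can be sketched generically: train on all samples but one, predict the held-out sample, and aggregate the errors. The 1-nearest-neighbour predictor below is a deliberately simple stand-in for the paper's MLP/RBFN/GRNN models, and the data are invented:

```python
def loo_rmse(X, y, predict):
    """Leave-one-out CV: for each sample, fit on the rest, predict the
    held-out one, and report RMSE over all folds."""
    errs = []
    for i in range(len(X)):
        train = [(x, t) for j, (x, t) in enumerate(zip(X, y)) if j != i]
        errs.append((predict(train, X[i]) - y[i]) ** 2)
    return (sum(errs) / len(errs)) ** 0.5

def nn_predict(train, x):
    """Stand-in 1-nearest-neighbour model (the paper trains ANNs instead)."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

X = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
rmse = loo_rmse(X, y, nn_predict)
```

Leave-one-out is attractive exactly in settings like this one, where each release-profile experiment is expensive and the dataset is small.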

26 citations


Journal ArticleDOI
TL;DR: This article presents a new lightweight inductive algorithm for concept drift detection based on an ensemble model of random decision trees (named CDRDT), designed to distinguish various types of concept drifts in noisy data streams.
Abstract: Detecting concept drifts and reducing the impact of noise in real applications of data streams are challenging but valuable for inductive learning, especially when the time and space overheads must be kept light. Although a great number of inductive learning algorithms based on ensemble classification models have been proposed for handling concept drifting data streams, little attention has been paid to detecting the diversity of concept drifts and the influence of noise in data streams simultaneously. Motivated by this, we present in this article a new lightweight inductive algorithm for concept drift detection based on an ensemble model of random decision trees (named CDRDT), which distinguishes various types of concept drifts in noisy data streams. We use variably small data chunks to generate random decision trees incrementally. Meanwhile, we introduce the Hoeffding bound inequality and the principle of statistical quality control to detect the different types of concept drifts and noise. Extensive studies on synthetic and real streaming data demonstrate that CDRDT can effectively and efficiently detect concept drifts in noisy streaming data. Our algorithm therefore provides a feasible reference framework for classification of concept drifting data streams with noise.
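The Hoeffding bound the algorithm relies on is easy to state concretely: with probability 1 − δ, the true mean of n observations ranging over an interval of width R lies within ε = sqrt(R² ln(1/δ) / (2n)) of the sample mean. A minimal drift check built on it might look as follows; the chunk-comparison logic is a simplification for illustration, not CDRDT itself:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Epsilon such that, with probability 1 - delta, the true mean of n
    observations spanning value_range lies within epsilon of the sample mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2 * n))

def drift_suspected(err_old, err_new, value_range, delta, n):
    """Flag drift when the error gap between an old and a new data chunk
    exceeds what sampling noise alone could explain."""
    return abs(err_new - err_old) > hoeffding_bound(value_range, delta, n)

eps = hoeffding_bound(1.0, 0.05, 100)
```

For error rates in [0, 1], δ = 0.05, and chunks of 100 examples, ε ≈ 0.122: a jump in chunk error larger than that is treated as evidence of drift rather than noise.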

19 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed approach leads to significant performance improvements over existing dispatching rules, and it is confirmed that SVMs perform better than other traditional machine learning algorithms as the inductive learner when applied to the FMS scheduling problem, owing to their better generalization capability.
Abstract: Dispatching rules are usually applied to dynamically schedule jobs in flexible manufacturing systems (FMSs). Despite their frequent use, a significant drawback is that the performance level of a rule is dictated by the current state of the manufacturing system. Because no rule is better than any other for every system state, it would be highly desirable to know which rule is the most appropriate for each given condition. To achieve this goal we propose a scheduling approach using support vector machines (SVMs). By using this technique and analyzing the earlier performance of the system, "scheduling knowledge" is obtained whereby the right dispatching rule at each particular moment can be determined. Simulation results show that the proposed approach leads to significant performance improvements over existing dispatching rules. It is likewise confirmed that SVMs perform better than other traditional machine learning algorithms as the inductive learner when applied to the FMS scheduling problem, owing to their better generalization capability.
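The idea, learn from past performance which dispatching rule suits which system state, can be sketched with an invented nearest-centroid classifier standing in for the paper's SVM. The state features (queue length, utilisation) and the rules (SPT, EDD) are illustrative only:

```python
def train_centroids(history):
    """history: list of (state_vector, best_rule) pairs gathered from
    earlier simulation runs. Returns one mean state per rule."""
    sums, counts = {}, {}
    for state, rule in history:
        acc = sums.setdefault(rule, [0.0] * len(state))
        for k, v in enumerate(state):
            acc[k] += v
        counts[rule] = counts.get(rule, 0) + 1
    return {r: [s / counts[r] for s in acc] for r, acc in sums.items()}

def pick_rule(centroids, state):
    """Choose the rule whose historical 'home' state is closest."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, state))
    return min(centroids, key=lambda r: dist(centroids[r]))

# (queue length, utilisation) -> rule that performed best there (invented).
history = [([2.0, 0.3], "SPT"), ([3.0, 0.4], "SPT"),
           ([9.0, 0.9], "EDD"), ([8.0, 0.8], "EDD")]
centroids = train_centroids(history)
rule = pick_rule(centroids, [7.5, 0.85])
```

A congested, highly utilised state lands near the EDD centroid, so EDD is dispatched; the paper's contribution is that an SVM learns a far better decision boundary for this mapping than such simple learners.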

17 citations


Journal ArticleDOI
TL;DR: A Gentzen-type sequent calculus for SLTL is introduced, and the completeness and cut-elimination theorems for it are proved, and it is shown to be PSPACE-complete and embeddable into LTL.
Abstract: We propose a proof system for reasoning on certain specifications of secure authentication systems. For this purpose, a new logic, sequence-indexed linear-time temporal logic (SLTL), is obtained semantically from standard linear-time temporal logic (LTL) by adding a sequence modal operator that represents a sequence of symbols. By this sequence modal operator, we can appropriately express message flows between clients and servers and states of servers in temporal reasoning. A Gentzen-type sequent calculus for SLTL is introduced, and the completeness and cut-elimination theorems for it are proved. SLTL is also shown to be PSPACE-complete and embeddable into LTL.

Journal ArticleDOI
TL;DR: This article proposes a study of several different alternative extensions of the DDL semantics, and selects the one that, according to the analysis, constitutes a good compromise, and provides a sound and complete tableaux decision procedure.
Abstract: Distributed description logics (DDL) enable reasoning with multiple ontologies interconnected by directional semantic mapping, called bridge rules. Bridge rules map concepts of a source ontology into concepts of a target ontology. Concept subsumptions of the source ontology can be propagated according to a propagation pattern expressed by means of bridge rules into concept subsumptions of the target ontology. In the basic formulation of DDL, such a propagation is mostly limited to cases when pairs of ontologies are directly linked by means of bridge rules. However, when more than two ontologies are involved, one would expect that subsumption propagates along chains of ontologies linked by bridge rules, but the semantics of DDL is too weak to support this behavior. In a recent study, an adjusted semantics for DDL that supports subsumption propagation through chains of bridge rules has been introduced. This study makes use of a so-called compositional consistency requirement that has been employed before in package-based description logics. While the results concerning subsumption propagation under the adjusted semantics are encouraging, there are important drawbacks. In this article we take a wider perspective, and propose a study of several different alternative extensions of the DDL semantics. For each of them we study the formal properties, and we select the one that, according to our analysis, constitutes a good compromise, and for this case we provide a sound and complete tableaux decision procedure.

Journal ArticleDOI
TL;DR: This work argues that a formal ontology—in this case a description logic knowledge base—should be created in a linguistically motivated way so that domain experts can understand and query the ontology and suggests the use of a controlled natural language for creating and querying formal ontologies that follows the syntactic structure of a well-defined subset of English.
Abstract: Formal ontologies are difficult to read and understand for domain experts who do not have a background in formal logic. This severely restricts the ability of this user group to determine whether an ontology conforms to the requirements of the application domain or not. We argue that a formal ontology—in our case a description logic knowledge base—should be created in a linguistically motivated way so that domain experts can understand and query the ontology. We first show that this can be partially achieved with the help of a naming convention which is based on those linguistic expressions that occur in the application domain. We then go a step further and suggest the use of a controlled natural language for creating and querying formal ontologies that follows the syntactic structure of a well-defined subset of English.

Journal ArticleDOI
TL;DR: A fast and effective algorithm based on constraint programming is proposed for the solution of fresh goods distribution of a retail chain store in Turkey and results indicate considerable improvement in the performance of the firm.
Abstract: This article considers fresh goods distribution for a retail chain store in Turkey. The problem is formulated as a vehicle routing problem with a heterogeneous fleet, for which no exact algorithm has ever been designed. A fast and effective algorithm based on constraint programming is proposed for its solution. The procedure is tested on some of the benchmark problems in the literature. The real-life case is first solved assuming that delivery to a customer cannot be split between vehicles; it is then resolved allowing split deliveries. Solutions of both strategies are compared with the current performance of the firm to determine a distribution strategy. Results indicate considerable improvement in the performance of the firm.

Journal ArticleDOI
TL;DR: A set of reusable ontologies has been developed as separate components within a knowledge representation (KR) system to generically model a reef system; the design aims to leverage the scalable characteristics of semantic technologies to allow for flexibility when posing domain- and locality-specific hypotheses.
Abstract: A set of reusable ontologies has been developed as separate components within a knowledge representation (KR) system to generically model a reef system. The ontology design, ranging from lightweight to heavyweight, aims to leverage the scalable characteristics of semantic technologies to allow for flexibility when posing domain- and locality-specific hypotheses, such as predicting coral bleaching. The Semantic Reef Project is an eco-informatics application designed to test ecological hypotheses to derive information about environmental systems. The intention is to develop an automated data processing, problem-solving, and knowledge discovery system that will assist in developing our understanding and management of coral reef ecosystems. Remote environmental monitoring (including sensor networks) is being widely developed and used for collecting real-time data across widely distributed locations. As the volume of raw data increases, it is envisaged that bottlenecks will develop in the data analysis phases because current data processing procedures still involve manual manipulation and will soon become unfeasible to manage. Ontologies provide a new approach and methodology for managing this data overflow while also improving our ability to extract knowledge from the data collected.

Journal ArticleDOI
TL;DR: An architecture called MAgArRO is presented to optimize the rendering process in a distributed, noncentralized way through a multiagent solution, making use of expert knowledge from previous jobs to reduce the final rendering time.
Abstract: Rendering is the process of generating a 2D image from the abstract description of a 3D scene. In spite of the development of new techniques and algorithms, the computational requirements of photorealistic rendering are so huge that such images cannot be rendered in real time. In addition, the adequate configuration of rendering quality parameters is very difficult for inexpert users, and the parameters are usually set higher than actually needed. This article presents an architecture called MAgArRO that optimizes the rendering process in a distributed, noncentralized way through a multiagent solution, making use of expert knowledge from previous jobs to reduce the final rendering time. Experimental results show that this novel approach offers a promising research line for optimizing the rendering of photorealistic images.

Journal ArticleDOI
TL;DR: In this method, a region covariance matrix is constructed from bi-orthogonal and Gabor wavelet features together with illumination intensity and pixel location, and is used as an efficient and robust descriptor for ear recognition.
Abstract: Here we propose bi-orthogonal and Gabor wavelet-based region covariance matrices as a novel method for ear recognition that is robust to changes in illumination and pose variations. In this method we construct a region covariance matrix from bi-orthogonal and Gabor wavelet features together with illumination intensity and pixel location, and use it as an efficient and robust ear descriptor. We performed experimental studies comparing our proposed method with the PCA + RBFN, ICA + RBFN, Hmax + SVM, LSBP, conventional RCM-based, and GRCM methods. The superiority of our new method has been successfully tested on ear recognition using 488 images corresponding to 137 subjects from databases 1 and 2 of the USTB database. Our proposed method achieves average accuracies of 96.6% and 93.5% on databases 1 and 2, respectively.
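The region covariance descriptor itself is straightforward once per-pixel feature vectors are available: stack one d-dimensional vector per pixel and take their d × d covariance. The sketch below uses a tiny invented "region" with generic features rather than the paper's bi-orthogonal/Gabor wavelet responses:

```python
def region_covariance(features):
    """Region covariance descriptor: `features` is a list of per-pixel
    feature vectors (e.g., wavelet responses, intensity, x, y). The d x d
    sample covariance of those vectors summarises the region."""
    n, d = len(features), len(features[0])
    mu = [sum(f[k] for f in features) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        c = [f[k] - mu[k] for k in range(d)]   # centre the pixel's vector
        for i in range(d):
            for j in range(d):
                cov[i][j] += c[i] * c[j] / (n - 1)
    return cov

# Tiny "region" of 4 pixels with d = 3 features each (invented values).
C = region_covariance([[1.0, 2.0, 0.0],
                       [2.0, 4.0, 0.0],
                       [3.0, 6.0, 0.0],
                       [4.0, 8.0, 0.0]])
```

Because the descriptor averages over the region, it is naturally tolerant of per-pixel noise and of illumination shifts that move the mean, which is part of the robustness the abstract claims.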

Journal ArticleDOI
TL;DR: The objective is to find the applicability of soft computing techniques, swarm-intelligence based neural network, and adaptive fuzzy models in the prediction of boiling HTC in terms of root mean square of prediction error.
Abstract: In this article the pool boiling heat transfer coefficient (HTC) of liquids (isopropanol, methanol, and distilled water) on copper-coated heating tubes over a wide range of pressure conditions is determined experimentally. The objective is to assess the applicability of soft computing techniques, namely a swarm-intelligence-based neural network and adaptive fuzzy models, to the prediction of boiling HTC. The predictions are compared with the experimentally determined values, and the performance of the models is analyzed in terms of the root mean square of the prediction error. The minimum/maximum value obtained by the zero-order fuzzy model with six membership functions is 0.0023/3.4383 over all the liquids considered. The model is found to predict the HTC with a maximum error of ±0.5% for boiling of liquids over all the coated tubes, with pressure varying from atmospheric to subatmospheric levels. The study shows an excellent agreement between the experimental and predicted data.
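The figure of merit used throughout, the root mean square of prediction error, is worth writing down explicitly; the sample values below are invented:

```python
def rmse(predicted, observed):
    """Root mean square of prediction error: sqrt(mean((p - o)^2)),
    the metric used to compare the HTC models."""
    n = len(predicted)
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5

err = rmse([1.0, 2.0, 4.0], [1.0, 2.0, 1.0])
```

Squaring before averaging means one bad prediction dominates the score, so a low RMSE is a stronger guarantee than a low mean absolute error.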

Journal ArticleDOI
TL;DR: A model for image indexing is presented that bridges the gap between the visual content (or low-level descriptors) and the semantic content (or concepts), thereby making searches in these large databases for particular information or patterns more efficient and more reliable.
Abstract: Image databases are becoming large and of potential use in many areas, including medical diagnosis, astronomy, and the Web. These images, if analyzed, can reveal useful information. Image indexing is the process of extracting and modeling the content of the image, the image data relationships, or other patterns not explicitly stored. Done this way, images can be indexed with the extracted knowledge, and searching these large databases for particular information or patterns thereby becomes more efficient and more reliable. For example, a medical doctor can search a medical database for already-diagnosed patient images having the same symptoms as the case at hand. Here we present a model for image indexing that bridges the gap between the visual content (or low-level descriptors) and the semantic content (or concepts). In our proposal, an image is modeled as a set of objects, and each object is modeled with visual and semantic content. Determination of the distinct objects of the image (segmentation) is achieved through possibilistic fuzzy clustering. Thereafter, the visual content (namely, color, texture, and shape) of each object is computed using image processing techniques. Subsequently, each object is presented to the Concept Object Knowledge Base (COKB) for extraction of the semantic content using shape-based recognition. This knowledge base is constructed via neural learning with Adaptive Resonance Theory networks. Experimentation on standard large image databases reveals a good performance of our model.

Journal ArticleDOI
TL;DR: Comparison between artificial neural networks and statistical methods to predict the degree of acidity (pH) in the coastal waters along the Gaza beach finds that the predictions of neural networks are better than those of the conventional methods.
Abstract: Coastal water issues are gaining worldwide attention because of their impact on health and other environmental problems. This article is concerned with the comparison between artificial neural networks and statistical methods for predicting the degree of acidity (pH) in the coastal waters along the Gaza beach. Multilayer perceptron (MLP) and radial basis function (RBF) neural networks are trained and developed with reference to three parameters (water temperature, wind velocity, and turbidity) to predict the level of pH in the seawater. Both networks were developed using data collected from nine sites over a period of 4 years, comprising 294 samples for training and 90 samples for testing the performance of the models. The results show that the MLP and RBF models have good ability to predict the pH level. Each network's performance was tested with different sets of data, and the results show satisfactory performance. Results of the developed networks were compared with the statistical regression method, and the predictions of the neural networks were found to be better than those of the conventional methods. The prediction results show that the artificial neural network approach has good ability to model the pH level in the coastal waters along the Gaza beach. It is hoped that neural networks will prove to be a promising alternative to the traditional methods used and can contribute to improving seawater quality.
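The statistical regression baseline the networks are compared against can be sketched as ordinary least squares. The single-predictor version below, with invented temperature/pH pairs, stands in for the paper's three-input regression:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, the kind of statistical
    regression baseline the neural models are compared against."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    b = num / den
    return my - b * mx, b          # intercept a, slope b

# Illustrative water-temperature (°C) vs pH pairs (values invented).
a, b = fit_linear([20.0, 22.0, 24.0, 26.0], [7.9, 8.0, 8.1, 8.2])
pred = a + b * 25.0
```

The comparison in the paper then comes down to whether the MLP/RBF networks, which can bend this line, achieve lower test error than the fitted plane over the three predictors.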

Journal ArticleDOI
TL;DR: This work developed a customizable framework for modeling and simulation, James II, whose component-based architecture supports the reuse of models, simulators, and experimental settings, which plays a crucial role in the design of multi-agent systems.
Abstract: The more simulation becomes an established tool in the design of multi-agent systems, the more urgent becomes the question of how valid the induced properties and behavior patterns are. Answering this question depends on the validity of the models and the correctness of the simulators and the simulations. In all of these aspects, reuse and a declarative representation play a crucial role. With James II, we developed a customizable framework for modeling and simulation. Its component-based architecture supports the reuse of models, simulators, and experimental settings. The benefits of this architecture for agent-based modeling and simulation are illuminated by an excerpt from a simulation study of trading strategies for mobile ad hoc networks.

Journal ArticleDOI
TL;DR: A novel modular adaptive artful intelligent assistance system is presented for cognitively and/or memory impaired people, providing logistic support in achieving their activities of daily living (ADLs).
Abstract: This article presents a novel modular adaptive artful intelligent assistance system for cognitively and/or memory impaired people engaged in the realisation of their activities of daily living (ADLs). The goal of this assistance system is to help disabled persons moving within a controlled environment by providing logistic support in achieving their ADLs. Empirical results of practical tests are presented and interpreted. Some conclusions about the key features that represent the originality of the assistance system are drawn, and future work is outlined.

Journal ArticleDOI
TL;DR: The approach is shown to allow the straightforward use of an ontology in the context of data sourced from multiple experiments to learn classifiers predicting gene function as part of a cellular response to environmental stress.
Abstract: A key role for ontologies in bioinformatics is their use as a standardized, structured terminology, particularly to annotate the genes in a genome with functional and other properties. Since many genome-scale experiments output gene sets, it is natural to ask if the genes in a set share a common function. A standard approach is to apply a statistical test for overrepresentation of functional annotation, often within the gene ontology. In this article we propose an alternative to the standard approach that avoids problems in overrepresentation analysis due to statistical dependencies between ontology categories. We apply methods of feature construction and selection to preprocess gene ontology terms used for the annotation of gene sets and incorporate these features as input to a standard supervised machine-learning algorithm. Our approach is shown to allow the straightforward use of an ontology in the context of data sourced from multiple experiments to learn classifiers predicting gene function as part of a cellular response to environmental stress.

Journal ArticleDOI
TL;DR: This work combines the optimization problem of two-dimensional infinite impulse response (IIR) recursive filter design with the hybrid multiagent particle swarm optimization (HMAPSO) methodology and applies the resulting optimized IIR filter in image processing, demonstrating the robustness of HMAPSO over other algorithms and its role in optimizing real-time situations.
Abstract: We combine the optimization problem of two-dimensional infinite impulse response (IIR) recursive filter design with the optimization methodology of hybrid multiagent particle swarm optimization (HMAPSO), and then apply the resulting optimized IIR filter in image processing to demonstrate the robustness of HMAPSO over other algorithms and its role in optimizing real-time situations. The design of the 2-D IIR filter is reduced to a constrained minimization problem whose robust solution is achieved by the novel HMAPSO algorithm. This algorithm integrates deterministic search by a multiagent system, the particle swarm optimization (PSO) algorithm, and the bee decision-making process. All agents search in parallel in an equally distributed lattice-like structure to save energy and computational time, as bees do in their hive-selection process. Thus, making use of deterministic search, multiagent PSO, and bee decision making, HMAPSO realizes the purpose of optimization. Experimental results and the application of the designed filters to focusing defocused images show that the HMAPSO approach provides better results than previous design methods.
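The PSO core that HMAPSO builds on follows the standard velocity/position update. The sketch below is plain global-best PSO minimising a sphere function that stands in for the filter-design error surface; it omits the paper's multiagent lattice and bee-style decision layer, and all parameter values are conventional defaults rather than the authors':

```python
import random

def pso_minimise(f, dim, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain global-best PSO: velocity blends inertia, pull toward each
    particle's personal best, and pull toward the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Sphere function as a stand-in for the constrained filter-design objective.
best, best_f = pso_minimise(lambda x: sum(v * v for v in x), 2, (-5.0, 5.0))
```

HMAPSO's additions (lattice-distributed agents, bee-style selection among candidate solutions) sit on top of exactly this loop, aiming to keep the swarm from premature convergence on harder, constrained surfaces.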

Journal ArticleDOI
TL;DR: A constrained object recognition task that has been robustly solved largely with simple machine learning methods, using a small corpus of about 100 images taken under a variety of lighting conditions, and is complementary to Seewald (2003), which focuses on solving the same task using different sensors.
Abstract: Here we present a constrained object recognition task that has been robustly solved largely with simple machine learning methods, using a small corpus of about 100 images taken under a variety of lighting conditions. The task was to analyze images from a hand-held mobile phone camera showing an endgame position for the Japanese board game Go. The presented system would already be sufficient to reconstruct the full Go game record from a video record of the game and thus is complementary to Seewald (2003), which focuses on solving the same task using different sensors. The presented system is robust to a variety of lighting conditions, works with cheap low-quality cameras, and is resistant to changes in board or camera position without the need for any manual calibration.

Journal ArticleDOI
TL;DR: A new strategy is presented that can be used by neurophysicians, neurosurgeons, and orthopedic surgeons to predict patients' health after an operative procedure on the vertebral column just by analyzing the preoperative patient data.
Abstract: In this article a new strategy is presented that can be used by neurophysicians, neurosurgeons, and orthopedic surgeons to predict patients' health after an operative procedure on the vertebral column by analyzing only the preoperative patient data. Usually, this is done on the basis of linguistic or heuristic variables related to the patient's data, such as marital status, occupation, and so on. Some numeric variables are also involved in the analysis, such as body mass index, age, and duration of symptoms. A standard fuzzy inference system has been developed by mapping the physicians' heuristics, from which the membership degrees and rules have been derived. The results have shown 88% correct prediction on a patient population of 501. Emphasis has been placed on overestimating the risk in patients, which is normal practice in clinical standards for benign diseases. The system is expected to assist medical professionals in making better decisions on postoperative posture management, lifestyle, and pain management to prevent the back surgery from failing.
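The abstract does not give the actual rule base, but a standard fuzzy inference system of this kind can be sketched with triangular memberships and min/max rule combination. All variables, breakpoints, and rules below are toy illustrations, not the clinical values from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def surgery_risk(bmi, age):
    """Toy two-rule Mamdani-style inference over BMI and age,
    defuzzified by a weighted average of singleton rule outputs."""
    # Antecedent memberships (illustrative breakpoints, not clinical data)
    bmi_high = tri(bmi, 25, 35, 45)
    age_high = tri(age, 50, 70, 90)
    bmi_low  = tri(bmi, 10, 20, 30)
    age_low  = tri(age, 10, 30, 55)
    # Rule strengths: fuzzy OR = max, fuzzy AND = min
    high = max(bmi_high, age_high)   # IF bmi high OR age high THEN risk high
    low  = min(bmi_low, age_low)     # IF bmi low AND age low THEN risk low
    if high + low == 0:
        return 0.5                   # no rule fires: neutral estimate
    # Singleton outputs 0.9 (high risk) and 0.2 (low risk)
    return (0.9 * high + 0.2 * low) / (high + low)
```

Biasing the output singletons upward, as done with the 0.9 here, is one simple way to realize the abstract's deliberate overestimation of risk.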

Journal ArticleDOI
TL;DR: The main goal of this work is the development of a multiagent system that allows the creation of control agents for the intelligent distributed control system based on agents (SCDIA), including the creation of an agent's source code, its compilation, and its incorporation into the SCDIA.
Abstract: The main goal of this work is the development of a multiagent system that allows the creation of control agents for the intelligent distributed control system based on agents (SCDIA), including the creation of an agent's source code, its compilation, and its incorporation into the SCDIA. The SCDIA has a community of five control agents that resemble the elements of a closed control loop: a coordinator agent, controller agent, measurement agent, acting agent, and specialized agent. The agent development platform JADE was used to develop this system. The system has three main agents: a central agent, a code generator agent, and a behavior agent. These agents communicate with each other to generate the control agents of the SCDIA through the use of a code-generation ontology.

Journal ArticleDOI
TL;DR: An artificial immune system (AIS) has been used to realize robust control of a robotic manipulator and can recognize and respond to previously recorded receptors within a single reference step.
Abstract: An artificial immune system (AIS) has been used to realize robust control of a robotic manipulator. The AIS recognizes “self” and “non-self” operation of a closed-loop system, where self is defined as a condition where controller gains are appropriate for a given manipulator configuration. As configuration changes occur, the changing performance of the system indicates a transition to non-self. When non-self operation is first detected, the corresponding dynamic response is defined as a receptor and a genetic algorithm (GA) is called to optimize the controller for the new configuration. A library of receptors is built as additional configuration changes are experienced. For subsequent self to non-self transitions, new and recorded receptors are compared. In the event of a high correlation between the receptors, previously determined controller gains are implemented without calling the GA. The system is agile and robust and can respond to previously recorded receptors within a single reference step.
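The receptor-matching step, comparing a new dynamic response against the recorded library by correlation and reusing gains on a strong match, can be sketched as follows. The response signatures, gain dictionaries, and 0.95 threshold are illustrative assumptions, not values from the paper:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length response signatures."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def match_receptor(response, library, threshold=0.95):
    """Return the stored gains of the best-matching recorded receptor,
    or None, meaning the GA must be called to tune new gains."""
    best_gains, best_corr = None, threshold
    for receptor, gains in library:
        r = pearson(response, receptor)
        if r >= best_corr:
            best_gains, best_corr = gains, r
    return best_gains

# Library maps recorded non-self responses to the gains that fixed them
# (hypothetical step-response samples and PD gains)
library = [([0.0, 0.8, 1.2, 1.05, 1.0], {"kp": 4.0, "kd": 0.5})]
reused = match_receptor([0.0, 0.79, 1.21, 1.04, 1.0], library)
```

A near-duplicate response reuses the stored gains immediately, which is what lets the system react within a single reference step instead of re-running the GA.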

Journal ArticleDOI
TL;DR: A double approach method for versatile detection of similar objects from the real environment by searching both the shape and textural data from the video image is introduced to solve a generalized location problem without a loss in overall reliability in near-real time.
Abstract: A double approach method for versatile detection of similar objects from the real environment is introduced. By searching both the shape and textural data from the video image, it is possible to solve a generalized location problem without a loss in overall reliability in near-real time. A forest forwarder is used as a proof-of-concept application for the technique.

Journal ArticleDOI
TL;DR: This special issue of the Applied Artificial Intelligence Journal addresses research issues on ontologies, an area that is receiving increased attention from researchers in diverse fields and provides an opportunity for the broader artificial intelligence community to be kept up to date on the current trends in ontology research.
Abstract: This special issue of the Applied Artificial Intelligence Journal addresses research issues on ontologies, an area that is receiving increased attention from researchers in diverse fields. Ontologi...