
Showing papers in "Cybernetics and Systems in 2009"


Journal ArticleDOI
TL;DR: A cellular automaton, the floor field model, is presented, which forms the basis for various multi-agent simulations and the role of conflicts and friction effects and their influence on evacuation times is discussed.
Abstract: In recent years, several approaches for modelling pedestrian dynamics have been proposed. However, so far not much attention has been paid to their quantitative validation. Instead the focus has been on the reproduction of empirically observed collective phenomena, like the dynamical formation of lanes. Although this gives an indication of the realism of the model, for practical applications such as safety analysis, reliable quantitative predictions are required. We discuss the experimental situation, focusing on the fundamental diagram that is essential for calibration. Furthermore, we present a cellular automaton, the floor field model, which forms the basis for various multi-agent simulations. Apart from the properties of its fundamental diagram, we discuss the role of conflicts and friction effects and their influence on evacuation times.
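
To make the conflict/friction mechanism concrete, here is a minimal sketch of the commonly described friction rule, in which a parameter mu gives the probability that a conflict over a target cell leaves all contending agents in place. The grid setup, parameter value, and data layout are illustrative assumptions, not the authors' implementation.

```python
import random

# Minimal sketch of the friction rule in a floor-field-style CA
# (illustrative only; MU and the move representation are assumptions).

MU = 0.3  # friction: probability that a conflict blocks *all* contenders

def resolve_conflicts(desired_moves):
    """desired_moves: dict agent_id -> target_cell. Returns executed moves."""
    by_cell = {}
    for agent, cell in desired_moves.items():
        by_cell.setdefault(cell, []).append(agent)

    executed = {}
    for cell, contenders in by_cell.items():
        if len(contenders) == 1:
            executed[contenders[0]] = cell
        elif random.random() >= MU:             # friction did not strike:
            winner = random.choice(contenders)  # one agent wins the cell
            executed[winner] = cell
        # else: the conflict "freezes" the cell, nobody moves this step
    return executed

moves = {1: (2, 3), 2: (2, 3), 3: (5, 1)}  # agents 1 and 2 are in conflict
print(resolve_conflicts(moves))
```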

81 citations


Journal ArticleDOI
TL;DR: A set of experience knowledge structure (SOEKS) is a combination of organized information obtained from a formal decision event that stores and administers experience from the day-to-day decision processes to improve decision-making quality and efficiency.
Abstract: When managers make decisions, they use previous, similar, or equal experiences to help themselves in a new decision-making situation. Thus, keeping a record of previous decision events appears to be of the utmost importance as part of the decision-making process. For us, every formal decision event has to be collected and stored as experienced knowledge, and any technology able to do this will allow us to improve the decision-making process by reducing decision time, as well as by avoiding duplication in the process. However, one of the most complicated issues about knowledge is its representation. Developing a knowledge structure that stores and administers experience from the day-to-day decision processes would improve decision-making quality and efficiency. We are proposing such a knowledge structure and have named it set of experience knowledge structure. A set of experience knowledge structure (SOEKS) is a combination of organized information obtained from a formal decision event. Fully applied, the set of experience knowledge structure would advance the notion of administering knowledge in the current decision-making environment.
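
As a concrete illustration, the sketch below renders a set-of-experience record as a small data structure. The four component groups (variables, functions, constraints, and rules) follow published descriptions of SOEKS; the field types and the example decision event are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from typing import Any

# Hedged sketch of a set-of-experience record; the four component
# lists follow SOEKS descriptions in the literature, while the field
# types and example values are invented for illustration.

@dataclass
class SetOfExperience:
    variables: dict[str, Any] = field(default_factory=dict)     # observed values
    functions: list[str] = field(default_factory=list)          # objective relations
    constraints: list[str] = field(default_factory=list)        # feasibility limits
    rules: list[tuple[str, str]] = field(default_factory=list)  # if-then pairs

event = SetOfExperience(
    variables={"demand": 120, "price": 9.5},
    functions=["maximize profit"],
    constraints=["stock <= 200"],
    rules=[("demand > 100", "reorder")],
)
print(event)
```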

79 citations


Journal ArticleDOI
TL;DR: This article presents a consensus-based approach for determining collective knowledge using its inconsistency in a collective whose members have different knowledge states about the same real world.
Abstract: If knowledge about some matter is gathered from different sources, inconsistency may appear. If the members of a collective have different knowledge states about the same real world and the collective is to realize a common task, then the common state of knowledge of the collective as a whole needs to be determined. This article presents a consensus-based approach for determining collective knowledge using its inconsistency.
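
One way to make the consensus idea concrete: if each member's knowledge state is a set of statements and distance is measured by the size of the symmetric difference, then the statement-wise majority set minimizes the total distance to all members. The sketch below works under those simplifying assumptions; the article's postulate-based treatment is more general.

```python
from itertools import chain

# Consensus over set-valued knowledge states: include a statement iff
# a strict majority of members hold it. Element by element, this
# minimizes the summed symmetric-difference distance to the members.

def consensus(states):
    universe = set(chain.from_iterable(states))
    return {s for s in universe
            if sum(s in st for st in states) * 2 > len(states)}

members = [{"a", "b"}, {"a", "c"}, {"a", "b", "d"}]
print(consensus(members))  # majority statements: 'a' and 'b'
```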

57 citations


Journal ArticleDOI
TL;DR: An efficient hybrid algorithm is proposed based on the combination of discrete particle swarm optimization, ant colony optimization, and fuzzy multi-objective approach called DPSO-ACO-F to reduce real power losses, deviation of nodes voltage, and the number of switching operations on the feeders.
Abstract: This article proposes an efficient hybrid algorithm for multi-objective distribution feeder reconfiguration. The hybrid algorithm is based on the combination of discrete particle swarm optimization (DPSO), ant colony optimization (ACO), and a fuzzy multi-objective approach, called DPSO-ACO-F. The objective functions are to reduce real power losses, the deviation of node voltages, and the number of switching operations, and to balance the loads on the feeders. Since the objectives are not the same, it is not easy to solve the problem by traditional approaches that optimize a single objective. In the proposed algorithm, the objective functions are first modeled with fuzzy sets to capture their imprecise nature, and then the hybrid evolutionary algorithm is applied to determine the optimal solution. The feasibility of the proposed optimization algorithm is demonstrated and compared with the solutions obtained by other approaches over different distribution test systems.
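
The fuzzy treatment of incommensurable objectives can be illustrated briefly: each raw objective value is mapped to a membership degree in [0, 1], and candidate configurations are compared on the worst (minimum) membership. The bounds and the min operator below are illustrative assumptions, not the article's exact membership functions.

```python
# Sketch of fuzzy multi-objective aggregation: decreasing linear
# membership per objective, min-operator aggregation. Bounds are
# invented placeholders for a feeder-reconfiguration setting.

def membership(value, best, worst):
    """1.0 at `best`, 0.0 at `worst`, linear in between."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

def fuzzy_fitness(losses_kw, voltage_dev, n_switchings):
    mu = [
        membership(losses_kw, best=50.0, worst=300.0),
        membership(voltage_dev, best=0.01, worst=0.10),
        membership(n_switchings, best=0, worst=20),
    ]
    return min(mu)  # the worst-satisfied objective drives the search

print(fuzzy_fitness(120.0, 0.03, 6))
```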

42 citations


Journal ArticleDOI
TL;DR: A hybrid method to integrate multiple ontologies in several levels, such as the element level, internal structure, and relational structure, is described.
Abstract: While there has been a variety of research focusing on ontology integration based on simple techniques (e.g., element- or structure-level techniques), hybrid approaches combining the simple techniques have not been explored. In this paper we describe a hybrid method to integrate multiple ontologies at several levels, such as the element level, internal structure, and relational structure. A semantic supporting environment (SSE), combining special domain resources (e.g., WordNet) and a text corpus, is defined in the proposed approach. An enriched ontology model (EOM) is proposed to reduce the initial complexity of the process of ontology integration. Subsequently, a semantic network called OnConceptSNet is provided. The relations between the concepts in OnConceptSNet are derived from the SSE. An enhanced algorithm (EA) is proposed to enhance OnConceptSNet.

32 citations


Journal ArticleDOI
TL;DR: A model of complex services composed of atomic services available in different versions and offered in a heterogeneous environment is presented to provide a framework to analyze access limitations, services costs, security, and resource constraints in distributed, networked systems.
Abstract: Service-based approaches have significantly extended client-server applications and make it possible to assemble a variety of components, services, and systems into multi-tier applications, flexibly supporting increasing business needs and requirements. Deployment of applications composed of application components and services available in a distributed, heterogeneous environment has greatly increased the complexity of systems offering complete, collaborative, continuous, and constraint-free solutions. The aim of this article is to present a model of complex services composed of atomic services available in different versions and offered in a heterogeneous environment. The purpose of the introduced model is to provide a framework to analyze access limitations, service costs, security, and resource constraints, i.e., tasks that must be proactively monitored and resolved in distributed, networked systems. The introduced model is applied to formulate complex service selection and optimization tasks taking into account...
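
As a toy illustration of the selection task such a model supports, the sketch below picks one version per atomic service to minimize total cost under a response-time budget by brute force. The services, QoS figures, and single constraint are invented; the article's formulation covers richer constraints (access limitations, security, resources).

```python
from itertools import product

# Brute-force selection of one version per atomic service, minimizing
# total cost subject to a response-time budget. All data are invented.

versions = {  # service -> [(cost, response_time_ms), ...]
    "auth":    [(1.0, 50), (2.5, 20)],
    "billing": [(3.0, 90), (5.0, 40)],
    "search":  [(2.0, 70), (4.0, 30)],
}

BUDGET_MS = 160
best = None
for combo in product(*versions.values()):
    cost = sum(c for c, _ in combo)
    time = sum(t for _, t in combo)
    if time <= BUDGET_MS and (best is None or cost < best[0]):
        best = (cost, time, combo)

print(best)  # cheapest feasible combination of versions
```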

31 citations


Journal ArticleDOI
TL;DR: A Hue-Saturation-Intensity (HSI) color model is adopted to automatically select a statistical threshold value for detecting candidate regions when vehicle bodies and license plates (LP) have similar color.
Abstract: Detecting license plates is a crucial and unavoidable step in a vehicle license plate recognition system. In this article, a Hue-Saturation-Intensity (HSI) color model is adopted to automatically select a statistical threshold value for detecting candidate regions. The focus of this article is on the implementation of a new method to detect candidate regions when vehicle bodies and license plates (LP) have similar color. The proposed method is able to deal with candidate regions independent of the orientation and scale of the plate. To decompose the candidate regions, predetermined LP alphanumeric characters are used with a position histogram to verify and detect vehicle LP regions. Various LP images under a variety of conditions were used to test the proposed method, and the results proved its effectiveness.
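
The HSI step can be sketched directly: convert RGB to HSI with the standard formulas and keep pixels passing a threshold as candidate-region pixels. The threshold values below are invented placeholders; the article selects them statistically from the image.

```python
import numpy as np

# Candidate-region masking in HSI space. The RGB->HSI conversion is
# standard; the thresholds are invented placeholders.

def rgb_to_hsi(img):
    """img: float array in [0,1], shape (H, W, 3). Returns H, S, I."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-9)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2 * np.pi - h, h)
    return h, s, i

img = np.random.rand(4, 6, 3)              # stand-in for a vehicle image
h, s, i = rgb_to_hsi(img)
mask = (s > 0.2) & (i > 0.25) & (i < 0.9)  # placeholder thresholds
print(mask.sum(), "candidate pixels")
```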

28 citations


Journal ArticleDOI
TL;DR: This paper proposes a semiautomatic approach to build meaningful social networks by repeating interactions with human experts and applies the proposed system to discover the social networks among mobile subscribers.
Abstract: It is difficult to be aware of the personal context for providing a mobile recommendation, because each person's activities and preferences are ambiguous and depend upon numerous unknown factors. In order to solve this problem, we have focused on reality mining to discover social relationships (e.g., family, friends, etc.) between people in the real world. We have assumed that the personal context for any given person is interrelated with those of other people, and we have investigated how to take into account a person's neighbors' contexts, which possibly have an important influence on his or her personal context. This requires that, given a dataset, we have to discover the hidden social networks which express the contextual dependencies between people. In this paper, we propose a semiautomatic approach to build meaningful social networks by repeating interactions with human experts. In this research project, we have applied the proposed system to discover the social networks among mobile subscribers. We have collected and analyzed a dataset of approximately two million people.
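
A minimal sketch of the automatic half of such a pipeline: count reciprocated interactions per pair and propose pairs above a threshold as candidate ties for the human expert to confirm. The log format and threshold are assumptions for illustration.

```python
from collections import Counter

# Infer candidate social ties from call logs by counting interactions
# per unordered pair; pairs above a threshold are proposed to a human
# expert (the semiautomatic step). Data and threshold are invented.

calls = [("ann", "bob"), ("bob", "ann"), ("ann", "bob"),
         ("ann", "eve"), ("cat", "dan"), ("dan", "cat")]

weights = Counter(frozenset(pair) for pair in calls)
THRESHOLD = 2
candidate_edges = [tuple(sorted(p)) for p, w in weights.items()
                   if w >= THRESHOLD]

print(candidate_edges)  # pairs proposed to the human expert
```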

25 citations


Journal ArticleDOI
TL;DR: The authors suggest the introduction of the “Universal Dialectical Systems Theory” (UDST) as a common denominator of the values and methods of the requisitely holistic observation, perception, thinking, emotional and spiritual life, decision-making, and action by interdisciplinary creative co-operation and information, as a means to link natural and social sciences.
Abstract: The authors see interesting parallels between the development phases of physics (as a basic natural science), management/organization (as a science on orchestrating organizational parts for goal-focused leading and working in line with the law of requisite holism and resulting success), and systems theory (as a science on holism as a world view and methods to attain it) from determinism and division to realistic indeterminism by integration using information. Similar phases can be seen in Adam Smith's theory of economics. The authors suggest the introduction of the “Universal Dialectical Systems Theory” (UDST) as a common denominator of the values and methods of the requisitely holistic observation, perception, thinking, emotional and spiritual life, decision-making, and action by interdisciplinary creative co-operation and information, as a means to link natural and social sciences.

22 citations


Journal ArticleDOI
TL;DR: A detailed comparative study of some of the solution methods for the traveling salesman problem using genetic algorithms is presented, although it cannot be concluded how the behavior of those algorithms depends on network topology.
Abstract: Despite the existence of a number of variations of genetic algorithms for the traveling salesman problem in the literature, no efforts have been made, to the best of our knowledge, to characterize their performance. This paper presents a detailed comparative study on some of the solution methods for the traveling salesman problem using genetic algorithms. All the operators of genetic algorithms have been given equal emphasis in the analysis. The complete simulation has been done using a number of C programs written by us for this purpose. A permanent network has been particularly considered for this task. The results shed insight on the best solution method for the problem. However, we cannot conclude how the behavior of those algorithms depends on network topology.
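
For readers unfamiliar with the operator choices such a study compares, here is a compact GA for TSP using order crossover (OX) and swap mutation. The city coordinates and parameters are invented; this is a generic baseline, not the paper's C implementation.

```python
import random

# Minimal GA for TSP with order crossover (OX) and swap mutation.
# All parameters and data are invented for illustration.

CITIES = 8
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(CITIES)]
dist = lambda a, b: ((pts[a][0]-pts[b][0])**2 + (pts[a][1]-pts[b][1])**2) ** 0.5
tour_len = lambda t: sum(dist(t[i], t[(i+1) % CITIES]) for i in range(CITIES))

def ox(p1, p2):
    """Order crossover: copy a slice of p1, fill the rest from p2."""
    a, b = sorted(random.sample(range(CITIES), 2))
    child = [None] * CITIES
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child]
    for i in range(CITIES):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

pop = [random.sample(range(CITIES), CITIES) for _ in range(30)]
for _ in range(200):
    pop.sort(key=tour_len)
    elite = pop[:10]                       # elitist selection
    pop = elite + [ox(random.choice(elite), random.choice(elite))
                   for _ in range(20)]
    for t in pop[10:]:                     # swap mutation on offspring
        if random.random() < 0.2:
            i, j = random.sample(range(CITIES), 2)
            t[i], t[j] = t[j], t[i]

print(round(tour_len(min(pop, key=tour_len)), 3))
```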

21 citations


Journal ArticleDOI
Tao Gao, Zheng-Guang Liu, Shi-hong Yue, Jian-qiang Mei, Jun Zhang
TL;DR: An automatic particle filtering algorithm is used to track the vehicle after detection and obtaining the center of the object, and an actual road test shows that the algorithm can effectively remove the influence of pedestrians and cyclists in the complex environment, and can track the moving vehicle exactly.
Abstract: Moving vehicle detection and tracking is the key technology in intelligent traffic monitoring systems. To address the shortcomings and deficiencies of the frame-subtraction method, a novel Marr wavelet kernel-based background modeling method and a background subtraction method based on binary discrete wavelet transforms (BDWT) are introduced. The background model keeps a sample of intensity values for each pixel in the image and uses this sample to estimate the probability density function of the pixel intensity. The density function is estimated using a new Marr wavelet kernel density estimation technique. The background and current frame are transformed by BDWT, and moving vehicles are detected in the binary discrete wavelet transform domain. To address the shortcomings of RGB (Red, Green, Blue) or HSV (Hue, Saturation, Value) color space-based vehicle shadow segmentation algorithms, a shadow segmentation algorithm based on YCbCr color space and edge detection is proposed. The original data of the shadow according to...
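
The sample-based background test can be sketched compactly. For brevity the sketch below uses a Gaussian kernel for the per-pixel density estimate; the article's method substitutes a Marr wavelet kernel and adds detection in the BDWT domain, neither of which is reproduced here.

```python
import numpy as np

# Per-pixel kernel density background test on a sample of past
# intensities. Gaussian kernel, bandwidth, and threshold are
# illustrative stand-ins for the article's Marr wavelet kernel.

def foreground_mask(samples, frame, h=8.0, thresh=0.02):
    """samples: (N, H, W) past intensities; frame: (H, W) current."""
    diff = (frame[None, :, :] - samples) / h
    dens = np.exp(-0.5 * diff ** 2).mean(axis=0) / (h * np.sqrt(2 * np.pi))
    return dens < thresh  # pixel is unlikely under the background model

rng = np.random.default_rng(0)
samples = rng.normal(100, 3, size=(20, 4, 4))  # stable background sample
frame = samples.mean(axis=0)
frame[0, 0] = 180.0                            # a "vehicle" pixel
print(foreground_mask(samples, frame))
```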

Journal ArticleDOI
TL;DR: Overall results show that genetic algorithms can generally find better solutions than the PSO algorithm, although in terms of the average number of generations their performance is not as good.
Abstract: This article deals with a performance evaluation of particle swarm optimization (PSO) and genetic algorithms (GA) for the traveling salesman problem (TSP). This problem is known to be NP-hard, and its solution space contains N! permutations. The objective of the study is to compare the ability of both algorithms to solve large-scale and other benchmark problems. All simulation has been performed using a software program developed in the Delphi environment. Overall, the results show that genetic algorithms can generally find better solutions than the PSO algorithm, although in terms of the average number of generations their performance is not as good.

Journal ArticleDOI
TL;DR: A novel forecasting model is proposed that overcomes the major hurdle of determining the k-order in high-order models and is enhanced to allow the handling of multi-factor forecasting problems by removing the overhead of deriving all fuzzy logic relationships beforehand.
Abstract: The study of fuzzy time series has attracted great interest and is expected to expand rapidly. Various forecasting models, including high-order models, have been proposed to improve forecasting accuracy or reduce computational cost. However, there exist two important issues, namely rule redundancy and high-order redundancy, that have not yet been investigated. This article proposes a novel forecasting model to tackle such issues. It overcomes the major hurdle of determining the k-order in high-order models and is enhanced to allow the handling of multi-factor forecasting problems by removing the overhead of deriving all fuzzy logic relationships beforehand. Two novel performance evaluation metrics are also formally derived for comparing performances of related forecasting models. Experimental results demonstrate that the proposed forecasting model outperforms the existing models in efficiency.
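
For context, the sketch below shows the classic first-order fuzzy time series machinery (fuzzification into intervals, fuzzy logical relationships, defuzzified forecast) that such models build on. The data and partitioning are invented; the proposed model specifically avoids fixing the order k and precomputing all relationships.

```python
import numpy as np

# Baseline first-order fuzzy time series forecast (Chen-style):
# partition the universe, fuzzify observations, collect first-order
# fuzzy logical relationships, forecast via interval midpoints.

data = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861]
lo, hi = min(data) - 100, max(data) + 100
edges = np.linspace(lo, hi, 8)                     # 7 intervals A1..A7
mid = (edges[:-1] + edges[1:]) / 2
fz = [int(np.searchsorted(edges, x) - 1) for x in data]

rules = {}                                         # Ai -> set of successors
for cur, nxt in zip(fz, fz[1:]):
    rules.setdefault(cur, set()).add(nxt)

last = fz[-1]
forecast = mid[sorted(rules.get(last, {last}))].mean()
print(round(float(forecast), 1))
```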

Journal ArticleDOI
TL;DR: A practical fuzzy linguistic approach is incorporated into the previous digraph model, and the extended fuzzy digraph model is explained in detail through an example in the present article.
Abstract: There are many approaches in the literature to model and quantify manufacturing flexibility. Most of these models were developed to quantify several aspects of manufacturing flexibility like machine flexibility, routing flexibility, mix flexibility, volume flexibility, etc. This is mainly due to the fact that developing a generic model, which can be used to measure different types of flexibilities, is not straightforward. Recently, a generic flexibility measure, based on digraphs and the permanent index, was proposed by the author. The main difficulty with that model, as with all other flexibility models, is the inability to collect precise data for computing the flexibility. In order to overcome this difficulty, a practical fuzzy linguistic approach is incorporated into the previous digraph model in this article. The extended fuzzy digraph model is explained in detail through an example in the present article.
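
Since the measure rests on the permanent of the system's digraph matrix, a small exact computation may help. The sketch below uses Ryser-style inclusion-exclusion; the 3x3 matrix is an invented example, not data from the article.

```python
from itertools import combinations

# Exact matrix permanent via Ryser's inclusion-exclusion formula:
# perm(A) = sum over nonempty column subsets S of
#           (-1)^(n-|S|) * prod_i sum_{j in S} a_ij.
# Exponential time, fine for the small matrices of digraph models.

def permanent(m):
    n = len(m)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in m:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

m = [[3.0, 0.5, 0.2],   # invented attribute/interaction matrix
     [0.4, 2.0, 0.6],
     [0.1, 0.7, 4.0]]
print(permanent(m))
```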

Journal ArticleDOI
TL;DR: An attribute clustering method based on genetic algorithms is proposed for feature selection and feature replacement that combines both the average accuracy of attribute substitution in clusters and the cluster balance as the fitness function.
Abstract: Feature selection is an important preprocessing step in mining and learning. A good set of features can not only improve the accuracy of classification, but can also reduce the time needed to derive rules. It is executed especially when the number of attributes in a given training data set is very large. In this article, an attribute clustering method based on genetic algorithms is proposed for feature selection and feature replacement. It combines both the average accuracy of attribute substitution in clusters and the cluster balance as the fitness function. Experimental comparison with the k-means clustering approach and with all combinations of attributes also shows the proposed approach can achieve a good trade-off between accuracy and time complexity. Besides, after feature selection, the rules derived from only the selected features may be hard to use if some values of the selected features cannot be obtained in current environments. This problem can be easily solved in our proposed approach. The attributes with...

Journal ArticleDOI
TL;DR: A structure that includes unsupervised and supervised learning methods for extracting knowledge from blogs, namely, a blog mining (BM) model is presented and a real case regarding VoIP (Voice over Internet Protocol) phone products is provided to demonstrate the effectiveness of the proposed method.
Abstract: Blogs have been considered the 4th Internet application that can cause radical changes in the world, after e-mail, instant messaging, and the Bulletin Board System (BBS). Many Internet users rely heavily on them to express their emotions and personal comments on whatever topics interest them. Nowadays, blogs have become popular media and could be viewed as new marketing channels. According to the blog search engine Technorati, about 94 million blogs were being tracked in August 2007. It also reported that a whole new blog is created every 7.4 seconds and 275,000 blogs are updated daily. These figures illustrate why more and more companies attempt to discover useful knowledge from this vast number of blogs for business purposes. Therefore, blog mining could be a new trend of web mining. The major objective of this study is to present a structure that includes unsupervised (self-organizing map) and supervised learning methods (back-propagation neural networks, decision tree, and support v...

Journal ArticleDOI
TL;DR: This article presents a system that provides the functional biological equivalent which consists of a pair of cameras that provide for saccade and vergence eye movements and is able to selectively reconstruct the 3D relative positions of objects in the scene and segment the image of the object under vergence.
Abstract: Humans naturally possess a robust and effective visual system that utilizes saccade and vergence eye movements to explore the visual environment. This article presents a system that provides a functional biological equivalent, consisting of a pair of cameras that provide for saccade and vergence eye movements. Included in this article is a detailed description of a simplified equivalent of the saccade generation module (typically attributed to the superior colliculus (SC)), based on a FLANN image segmentation method, and a visual cortex (VC) equivalent model based on a hierarchical disparity estimation model for vergence control. These two models cooperate to provide the systematic means for the autonomous exploration of the scene. Combining saccade and vergence movements, we are able to selectively reconstruct the 3D relative positions of objects in the scene and segment the image of the object under vergence.

Journal ArticleDOI
TL;DR: It is shown that the data mining algorithms return quite accurate prediction results, and the best results are achieved using IBM's transform regression algorithm.
Abstract: This paper presents the application of data mining algorithms to the prediction of Web performance. Our domain-driven data mining uses historic HTTP transaction data reflecting Web performance as perceived by end-users located in the Internet domain of Wroclaw University of Technology, Wroclaw, Poland. The predictive modeling features of two general data mining systems, Microsoft SQL Server and IBM Intelligent Miner, are compared. The neural networks, decision tree, time series, and transform regression models are evaluated. It is shown that the data mining algorithms return quite accurate prediction results. The best results are achieved using IBM's transform regression algorithm.

Journal ArticleDOI
TL;DR: A structured way of documenting agent-based simulation models is presented in the form of a documentation framework that consists of six categories of model information: metadata, informal model characterization, model contents, expected simulation behavior, experimental frame, and passed tests.
Abstract: Sufficient and appropriate documentation of a simulation model forms an essential prerequisite for quality assessment as well as for activities of maintenance, reuse, or reproduction of the model and its results. This is true for every simulation paradigm. However, in particular for agent-based simulations with their high degree of freedom in design, their usually complex behavior and interactions, their high level of detail and heterogeneity, etc., documentation becomes indispensable, and also problematic. This article contributes to general advancement of the methodological basis of agent-based simulations by presenting a structured way of documenting agent-based simulation models. We propose a documentation framework that consists of six different categories of model information: metadata, informal model characterization, model contents, expected simulation behavior, experimental frame, and passed tests.

Journal ArticleDOI
TL;DR: This is the first hybrid application of genetic algorithm (GA) and variable neighborhood search (VNS) for the open shop scheduling problem and 12 new hard, large-scale open shop benchmark instances are proposed that simulate realistic industrial cases.
Abstract: In this article, a hybrid metaheuristic method for solving the open shop scheduling problem (OSSP) is proposed. The optimization criterion is the minimization of makespan, and the solution method consists of four components: a randomized initial population generation, a heuristic solution included in the initial population acquired by a Nawaz-Enscore-Ham (NEH)-based heuristic for the flow shop scheduling problem, and two interconnected metaheuristic algorithms: a variable neighborhood search and a genetic algorithm. To our knowledge, this is the first hybrid application of a genetic algorithm (GA) and variable neighborhood search (VNS) for the open shop scheduling problem. Computational experiments on benchmark data sets demonstrate that the proposed hybrid metaheuristic reaches a high-quality solution in short computational times. Moreover, 12 new hard, large-scale open shop benchmark instances are proposed that simulate realistic industrial cases.

Journal ArticleDOI
TL;DR: MFS-based features are introduced that are faster to compute than Fourier descriptors and yield fewer errors when utilizing the dots and holes features of Arabic characters.
Abstract: Fast Fourier transform (FFT) is used successfully in computing the Fourier descriptors that are used in object and character recognition. In this article, an Arabic character recognition algorithm using the modified Fourier spectrum (MFS) is presented. Ten descriptors are estimated from the Fourier spectrum of the character contour by subtracting the imaginary part from the real part (and not from the amplitude of the Fourier spectrum, as is usually the case). Ten MFS descriptors are extracted and used for the recognition of Arabic characters. Experimental results using 10 MFS descriptors resulted in an average recognition rate of 95.9%. The analysis of the sparse matrix indicates that the major part of the errors is due to a few similar characters. The new technique, based on MFS descriptors, was compared with the Fourier descriptors calculated from the amplitude of the FFT spectrum. Experimental results have shown that the MFS-based technique is faster to compute than the FFT-based technique. However, the Fourier descriptors initially have a better recognition rate than MFS descriptors (96.9% vs. 95.9%). Using the holes' and dots' features to resolve the problematic characters reduces the error rate of the MFS technique more than that of the Fourier descriptor technique. This article introduces MFS-based features that are faster to compute than Fourier descriptors and have fewer errors when utilizing the dots and holes features of Arabic characters. Both techniques may be used in combination or in a multi-classifier system to enhance the Arabic recognition rate.
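
The core descriptor change is easy to show: take the FFT of the complex contour signature and build descriptors from the real part minus the imaginary part instead of the amplitude. The toy contour and the normalization by the first harmonic are assumptions made for this sketch.

```python
import numpy as np

# Modified-Fourier-spectrum idea: descriptors from (real - imag) of
# low-frequency FFT coefficients of the contour, vs. the classic
# amplitude-based Fourier descriptors. Contour is a toy shape.

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = (np.cos(t) + 0.3 * np.cos(3 * t)) + 1j * np.sin(t)

spec = np.fft.fft(contour)
mfs = spec.real - spec.imag          # modified spectrum, per the article
amp = np.abs(spec)                   # classic Fourier descriptor basis

# ten low-order descriptors, scale-normalized by the first harmonic
desc_mfs = mfs[1:11] / (abs(mfs[1]) + 1e-12)
desc_fft = amp[1:11] / (amp[1] + 1e-12)
print(np.round(desc_mfs, 3))
print(np.round(desc_fft, 3))
```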

Journal ArticleDOI
TL;DR: This article proposes a simple fuzzy learning algorithm, comprising partition determination, automatic membership function and rule generation, and system approximation, to obtain good results in relational database estimation on a real-world database system.
Abstract: Many methods attempt to perform relational database estimation with a high accuracy rate by automatically constructing a fuzzy learning algorithm. However, there exists a conflict between the degree of interpretability and the accuracy of the approximation in a general fuzzy system. Thus, how to make the best compromise between the accuracy of the approximation and the degree of interpretability is a significant subject of study. In order to achieve the best compromise, this article proposes a simple fuzzy learning algorithm, including partition determination, automatic membership function and rule generation, and system approximation, to obtain good results in relational database estimation on a real-world database system.

Journal ArticleDOI
TL;DR: In this paper, a nonlinear immune feedback controller is designed based on the T-B cells feedback principle in biological immune responses, and an analytical numerical solution to the Popov theorem for the SHST immune system is developed avoiding the inconvenience of conventional graphical solutions.
Abstract: The problem of designing an immune feedback controller (IFC) is addressed here. Based on the T-B cell feedback principle in biological immune responses, a nonlinear immune controller is designed. Concerning superheated-steam temperature (SHST) control in power plants, we extend numerical solutions to bounded-input-bounded-output (BIBO) stability based on the small-gain theorem and the Popov theorem, respectively. Also, an analytical numerical solution to the Popov theorem for the SHST immune system is developed, avoiding the inconvenience of conventional graphical solutions. Furthermore, simulations of the system performance and a comparison of the stabilization regions obtained by the small-gain theorem and the Popov criterion are presented. The simulation results are satisfying and support the conclusion that the Popov criterion performs better than the small-gain theorem for this system.

Journal ArticleDOI
TL;DR: This paper presents a new method for estimating the null values in relational database systems having negative dependency relationships between attributes, and applies the automatic clustering algorithm presented in Chang and Chen (2006) for clustering the tuples in the relational database.
Abstract: In recent years, some methods have been proposed to estimate the null values in relational database systems (Chang and Chen 2006; Chen and Chang 2008; Chen and Hsiao 2005; Chen and Huang 2003, 2008; Chen and Lee 2005; Cheng and Wang 2006). In this paper, we present a new method for estimating the null values in relational database systems having negative dependency relationships between attributes. First, we apply the automatic clustering algorithm we presented in Chang and Chen (2006) for clustering the tuples in the relational database. Based on the clustering results and multiple regression techniques (Kvanli et al. 1986), we present a new method for estimating null values in relational database systems having negative dependency relationships between attributes, where the “Benz secondhand car database” (Chen and Huang 2008) is used for the experiment. The experimental results show that the proposed method gets a higher average estimated accuracy rate than Chen and Huang's method (2003, 2008) for estimating null values in relational database systems having negative dependency relationships between attributes.
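
The estimation step can be sketched with a toy negative dependency: within a cluster of similar tuples, regress the target attribute on a negatively correlated attribute and use the fitted line to fill the null. The car age/price figures below are invented stand-ins, not the Benz database.

```python
import numpy as np

# Fill a null value via regression inside one cluster of tuples.
# Age vs. price shows the negative dependency; data are invented.

age   = np.array([1.0, 2.0, 3.0, 5.0, 7.0])       # known tuples
price = np.array([42.0, 37.0, 33.0, 25.0, 18.0])  # negative dependency

slope, intercept = np.polyfit(age, price, 1)       # least-squares line
null_age = 4.0                                     # tuple with missing price
estimate = slope * null_age + intercept
print(round(float(estimate), 2))
```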

Journal ArticleDOI
TL;DR: This research presents the novel use of the Win or Learn Fast (WoLF) policy hill-climbing method with policy-sharing, and demonstrates that agents can learn to accomplish a task together efficiently through repetitive trials.
Abstract: Reinforcement learning is one of the more prominent machine-learning technologies due to its unsupervised learning structure and ability to continually learn, even in a dynamic operating environment. Applying this learning to cooperative multi-agent systems not only allows each individual agent to learn from its own experience, but also offers the opportunity for the individual agents to learn from the other agents in the system, so the speed of learning can be accelerated. In the proposed learning algorithm, an agent adapts to comply with its peers by learning carefully when it obtains a positive reinforcement feedback signal, but learns more aggressively if a negative reward follows the action just taken. These two properties are applied to develop the proposed cooperative learning method. This research presents the novel use of the Win or Learn Fast (WoLF) policy hill-climbing method with policy-sharing. Results from the multi-agent cooperative domain illustrate that the proposed algorithms perform better than Q-learning alone in a piano-mover environment. It also demonstrates that agents can learn to accomplish a task together efficiently through repetitive trials.
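
The heart of the WoLF update is a variable learning rate: move the policy toward the greedy action slowly when winning (the current policy outperforms the average policy) and quickly when losing. The two-action bandit-style sketch below uses invented rates and payoffs and omits the policy-sharing extension.

```python
import random

# WoLF policy hill-climbing core: delta_win < delta_lose, with the
# win/lose test comparing the current policy's expected value against
# the running-average policy's. Environment and rates are invented.

D_WIN, D_LOSE, ALPHA = 0.01, 0.04, 0.1
Q = [0.0, 0.0]
pi = [0.5, 0.5]          # current policy
pi_avg = [0.5, 0.5]      # running-average policy
payoff = [0.3, 0.7]      # hidden expected rewards

for step in range(1, 2001):
    a = 0 if random.random() < pi[0] else 1
    r = payoff[a] + random.gauss(0, 0.05)
    Q[a] += ALPHA * (r - Q[a])

    for i in (0, 1):                     # update the average policy
        pi_avg[i] += (pi[i] - pi_avg[i]) / step

    winning = sum(p * q for p, q in zip(pi, Q)) > \
              sum(p * q for p, q in zip(pi_avg, Q))
    delta = D_WIN if winning else D_LOSE
    best = max((0, 1), key=lambda i: Q[i])
    pi[best] = min(1.0, pi[best] + delta)  # hill-climb toward greedy action
    pi[1 - best] = 1.0 - pi[best]

print([round(p, 2) for p in pi])  # converges toward the better action
```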

Journal ArticleDOI
TL;DR: The main role that space windowing plays in preliminary knowledge extraction from multifactor and multivariate databases coming from complex system empirical studies is explained and several graphic techniques can be exploited to investigate membership values.
Abstract: This article explains the main role that space windowing plays in preliminary knowledge extraction from multifactor and multivariate databases coming from complex system empirical studies. The explanation is based on the general case of a database with a hyperparallelepipedic structure in which the directions correspond to the factors and where the measurement variables may be quantitative or qualitative, temporal or nontemporal, and objective or subjective. First, the data in each cell of the hyperparallelepiped is transformed into membership values that can be averaged over factors, such as time or individual. Then, several graphic techniques can be exploited to investigate membership values. This article mainly focuses on the use of multiple correspondence analysis (MCA). A didactic example with several factors and several kinds of variables—nontemporal vs. temporal where each one may be either quantitative or qualitative—is used to illustrate the widespread use of the pair “space windowing/MCA.” The d...

Journal ArticleDOI
TL;DR: The Celada-Seiden model, a simple and elegant agent-based model of the entire immune response that nevertheless lacks biophysically sound simulation methodology, is analyzed and extended to a stochastic-deterministic hybrid.
Abstract: The immune system is of central interest for the life sciences, but its high complexity makes it a challenging system to study. Computational models of the immune system can help to improve our understanding of its fundamental principles. In this article, we analyze and extend the Celada-Seiden model, a simple and elegant agent-based model of the entire immune response, which, however, lacks biophysically sound simulation methodology. We extend the stochastic model to a stochastic-deterministic hybrid, and link the deterministic version to continuous physical and chemical laws. This gives precise meaning to all simulation processes, and helps to increase performance. To demonstrate an application for the model, we implement and study two different hypotheses about T cell-mediated immune memory.

Journal ArticleDOI
TL;DR: This article presents a framework that was used for the implementation of a decisional trust system using elements such as the decisional DNA, reflexive ontologies, and security models.
Abstract: In this article, we introduce the necessary elements that must be integrated in order to achieve a decisional technology that is trustworthy. Thus, we refer to such technology as decisional trust. For us, decisional trust can be achieved through the use of elements such as the decisional DNA, reflexive ontologies, and security models; and therefore, we present in this article a framework that was used for the implementation of a decisional trust system.

Journal ArticleDOI
TL;DR: The present collection is the result of a thorough, two-step review process applied to a selection of the contributions published in the Proceedings of the 19th European Meeting on Cybernetics and Systems Research (Trappl 2008) and discussed at this conference in the symposium on ‘‘Agent-Based Modeling and Simulation’’ (ABModSim-2).
Abstract: The present collection is the result of a thorough, two-step review process applied to a selection of the contributions published in the Proceedings of the 19th European Meeting on Cybernetics and Systems Research (Trappl 2008) and discussed at this conference in the symposium on “Agent-Based Modeling and Simulation” (ABModSim-2). The variety of topics and domains addressed is symptomatic of the pervasive adoption of the term “agent” across an ever-growing number of disciplines, research, and application areas, which range from the social sciences to urban planning, biology, logistics, production planning, and many more. These differing disciplines and application domains almost inevitably entail differences in the concepts underlying the notion of an agent, just as in the goals of the modeling and simulation activities. Such heterogeneity has naturally led to the definition and development of different approaches,

Journal ArticleDOI
TL;DR: The role of knowledge management tools, technologies, and approaches in the process of dynamic development of Polish nongovernmental organization (NGO) sector is examined and a systemic model solution is proposed with an example of implementation supporting the enhancement of NGO operations and performance.
Abstract: Knowledge-based systems, knowledge technologies, and knowledge management are the areas that have been gaining significant attention and importance in the recent process of the knowledge-based economy creation. The above relates not only to the national economy as a whole (macroeconomic level) but also to particular economy sectors and individual organizations. This article focuses on the role of knowledge management tools, technologies, and approaches in the process of dynamic development of Polish nongovernmental organization (NGO) sector. First, the current state of this role is examined and then a systemic model solution is proposed with an example of implementation supporting the enhancement of NGO operations and performance. The article concludes with the outline of further research in this area.