
Showing papers in "Complex Adaptive Systems Modeling in 2014"


Journal ArticleDOI
TL;DR: The results suggest that Thompson sampling might not merely be a useful heuristic, but a principled method to address problems of adaptive sequential decision-making and causal inference.
Abstract: Sampling an action according to the probability that the action is believed to be the optimal one is sometimes called Thompson sampling. Although mostly applied to bandit problems, Thompson sampling can also be used to solve sequential adaptive control problems, when the optimal policy is known for each possible environment. The predictive distribution over actions can then be constructed by a Bayesian superposition of the policies weighted by their posterior probability of being optimal. Here we discuss two important features of this approach. First, we show to what extent such generalized Thompson sampling can be regarded as an optimal strategy under limited information processing capabilities that constrain the sampling complexity of the decision-making process. Second, we show how such Thompson sampling can be extended to solve causal inference problems when interacting with an environment in a sequential fashion. In summary, our results suggest that Thompson sampling might not merely be a useful heuristic, but a principled method to address problems of adaptive sequential decision-making and causal inference.
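
To make the basic mechanism concrete, here is a minimal Python sketch of standard Thompson sampling for a Bernoulli multi-armed bandit (the simple bandit case, not the paper's generalized policy-superposition variant); the Beta(1, 1) priors, arm probabilities, and horizon are illustrative assumptions.

```python
import random

def thompson_sampling(arms, pulls=1000, seed=0):
    """Standard Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors.

    `arms` holds the true (unknown) success probabilities. Each round we draw
    one plausible mean per arm from its Beta posterior and play the arm whose
    draw is largest, i.e. each arm is chosen with the probability that it is
    currently believed to be the optimal one.
    """
    rng = random.Random(seed)
    alphas = [1] * len(arms)   # posterior successes + 1
    betas = [1] * len(arms)    # posterior failures + 1
    total_reward = 0
    for _ in range(pulls):
        samples = [rng.betavariate(alphas[i], betas[i]) for i in range(len(arms))]
        arm = max(range(len(arms)), key=lambda i: samples[i])
        reward = 1 if rng.random() < arms[arm] else 0
        alphas[arm] += reward
        betas[arm] += 1 - reward
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    print(thompson_sampling([0.2, 0.5, 0.7]))
```

Over many pulls the posterior for the best arm concentrates, so exploration tapers off automatically without a separate exploration schedule.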

52 citations


Journal ArticleDOI
TL;DR: This in-depth analysis of various Cloud based IDMSs reveals that most of the systems do not support all of the essential features of a Cloud IDMS, and those that do have weaknesses of their own.
Abstract: Cloud computing systems are among the most complex computing systems currently in existence. Current applications of the Cloud involve extensive use of distributed systems with varying degrees of connectivity and usage. With the recent large-scale proliferation of Cloud computing, identity management in Cloud based systems is a critical issue for the sustainability of any Cloud-based service. This area has received considerable attention from the research community as well as the IT industry. Numerous Cloud Identity Management Systems (IDMSs) have been proposed so far; however, most of those systems are neither widely accepted nor considered highly reliable due to their constraints in terms of scope, applicability and security. To achieve reliability and effectiveness in IDMSs for the Cloud, further extensive research needs to be carried out to critically examine Cloud based IDMSs and their level of security. In this work, we have holistically analyzed Cloud IDMSs to better understand the general as well as the security aspects of this domain. From the security perspective, we present a comprehensive list of attacks that occur frequently in Cloud based IDMSs. To mitigate those attacks, we present a well-organized taxonomy tree covering the features most essential for any Cloud-based IDMS. Additionally, we have specified various mechanisms of realization (such as access control policies, encryption, and self-service) against each of the features of Cloud IDMSs. We have further used the proposed taxonomy as an assessment criterion for the evaluation of Cloud based IDMSs. Our in-depth analysis of various Cloud based IDMSs reveals that most of the systems do not support all of the essential features of a Cloud IDMS, and those that do have weaknesses of their own. None of the discussed techniques covers all of the security features; moreover, they lack compliance with international standards, which understandably undermines their credibility. The presented work will help Cloud subscribers and providers understand the available solutions as well as the risks involved, allowing them to make better-informed decisions when selecting the Cloud IDMSs that best suit their functional and security requirements.
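
As a rough illustration of how a feature taxonomy can serve as an assessment criterion, the Python sketch below scores a system against a weighted feature checklist; the feature names, weights, and example system are hypothetical placeholders, not the paper's actual taxonomy or evaluation results.

```python
# Hypothetical feature checklist standing in for a taxonomy of desired
# Cloud IDMS features; weights express relative importance (illustrative).
TAXONOMY = {
    "access_control_policies": 1.0,
    "encryption": 1.0,
    "self_service": 0.5,
    "standards_compliance": 1.0,
}

def assess(idms_features, taxonomy=TAXONOMY):
    """Return the weighted fraction of taxonomy features the system supports."""
    supported = sum(w for feature, w in taxonomy.items() if feature in idms_features)
    return supported / sum(taxonomy.values())

# Example: a system that supports only two of the listed features.
example_idms = {"access_control_policies", "encryption"}
print(f"coverage score: {assess(example_idms):.2f}")
```

A scheme like this makes gaps explicit: any feature missing from the checklist scores zero, which mirrors the paper's finding that no surveyed system covers every essential feature.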

48 citations


Journal ArticleDOI
TL;DR: It is proven that the asymptotic time and space performance of modular imperative agent-based modeling studies is computationally optimal for a common class of problems, and it follows that this kind of modeling is the best modeling method for such problems.
Abstract: Following Holland, complex adaptive systems (CASs) are collections of interacting, autonomous, learning decision makers embedded in an interactive environment. Modeling CASs is challenging for a variety of reasons including the presence of heterogeneity, spatial relationships, nonlinearity, and, of course, adaptation. The challenges of modeling CASs can largely be overcome by using the individual-level focus of agent-based modeling. Agent-based modeling has been used successfully to model CASs in many disciplines. Many of these models were implemented using agent-based modeling software such as Swarm, Repast 3, Repast Simphony, Repast for High-Performance Computing, MASON, NetLogo, or StarLogo. All of these options use modular imperative architectures with factored agents, spaces, a scheduler, logs, and an interface. Many custom agent-based models also use this kind of architecture. This paper’s contribution is to introduce and apply a theoretical formalism for analyzing modular imperative agent-based models of CASs. This paper includes an analysis of three example models to show how the formalism is useful for predicting the execution time and space requirements for representations of common CASs. The paper details the formalism and then uses it to prove several new findings about modular imperative agent-based models. It is proven that the asymptotic time and space performance of modular imperative agent-based modeling studies is computationally optimal for a common class of problems. Here ‘optimal’ means that no other technique can solve the same problem computationally using less asymptotic time or space. Modular imperative agent-based models are shown to be universal models, subject to the correctness of the Church-Turing thesis. Several other results are also proven about the time and space performance of modular imperative agent-based models. The formalism is then used to predict the performance of three models, and the predictions are found to compare closely to the measured performance. In summary, this paper introduces, analyzes, and applies a theoretical formalism for proving findings about agent-based models with modular agent scheduler architectures. Given that this kind of modeling is both computationally optimal and a natural structural match for many modeling problems, it follows that it is the best modeling method for such problems.
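
The modular imperative architecture described above (factored agents, a space, a scheduler, and logs) can be sketched in a few lines of Python; this is an illustrative skeleton with assumed toy dynamics, not the paper's formalism or any of the cited toolkits.

```python
import random

class Agent:
    """A minimal adaptive agent that takes a random step each tick."""
    def __init__(self, uid, x, y):
        self.uid, self.x, self.y = uid, x, y

    def step(self, space, rng):
        self.x = (self.x + rng.choice([-1, 0, 1])) % space.width
        self.y = (self.y + rng.choice([-1, 0, 1])) % space.height

class Space:
    """A toroidal grid in which agents are embedded."""
    def __init__(self, width, height):
        self.width, self.height = width, height

class Scheduler:
    """Activates every agent once per tick, in shuffled order, and logs state."""
    def __init__(self, agents, space, seed=0):
        self.agents, self.space = agents, space
        self.rng = random.Random(seed)
        self.log = []   # stands in for file or interface output

    def run(self, ticks):
        for t in range(ticks):
            order = self.agents[:]
            self.rng.shuffle(order)
            for agent in order:
                agent.step(self.space, self.rng)
            self.log.append((t, [(a.uid, a.x, a.y) for a in self.agents]))

space = Space(10, 10)
agents = [Agent(i, i % 10, i // 10) for i in range(20)]
Scheduler(agents, space).run(ticks=5)
```

Each tick activates every agent exactly once, so a tick costs time linear in the number of agents and space linear in the number of agents plus the grid size, which is the kind of asymptotic behavior the paper's formalism is used to predict and bound.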

30 citations


Journal ArticleDOI
TL;DR: A traceability model, inspired by wave-pattern recognition models, is proposed to detect “zero-patient” areas from the spread of an outbreak, and can be extremely useful for optimizing surveillance networks to drastically reduce the burden of food-borne and other infectious diseases.
Abstract: Purpose: Infectious diseases are the second leading cause of death worldwide, accounting for 15 million deaths each year, more than 25% of all deaths. Food plays a crucial role, contributing to 1.5 million deaths, most of them children, through food-borne diarrheal disease alone. Thus, the ability to detect outbreak pathways in a timely manner via a high-efficiency surveillance system is essential to the physical and social well-being of populations. For this purpose, we propose a traceability model, inspired by wave-pattern recognition models, to detect “zero-patient” areas based on the spread of an outbreak.
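
As an illustration of the kind of back-tracing such a model performs, the Python sketch below scores candidate source nodes of a small hypothetical distribution network by how consistently hop distances explain observed first-detection times; the network, detection times, and constant-speed assumption are illustrative and not taken from the paper.

```python
import itertools

EDGES = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F")}  # hypothetical network
DETECTIONS = {"D": 3.0, "F": 2.0, "A": 1.0}  # monitored node -> first-detection time

def hop_distance(src, dst, edges):
    """Breadth-first search distance over an undirected edge set."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    frontier, seen, d = {src}, {src}, 0
    while frontier:
        if dst in frontier:
            return d
        frontier = {w for n in frontier for w in adj[n]} - seen
        seen |= frontier
        d += 1
    return float("inf")

def score(candidate, detections, edges):
    """Lower is better: variance of (detection time - hop distance) across sensors.

    If spread proceeds at roughly constant speed from the true source, the
    offsets should be nearly equal, so the true source minimizes this score.
    """
    offsets = [t - hop_distance(candidate, n, edges) for n, t in detections.items()]
    mean = sum(offsets) / len(offsets)
    return sum((o - mean) ** 2 for o in offsets)

nodes = set(itertools.chain.from_iterable(EDGES))
best = min(nodes, key=lambda n: score(n, DETECTIONS, EDGES))
print("most consistent source area:", best)
```

In practice the candidate that best explains the observed arrival pattern marks the likely “zero-patient” area, and the same scoring logic can be used to decide where additional sensors would most improve source identifiability.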

9 citations


Journal ArticleDOI
TL;DR: In the definition of the weighted sum n_i of input activations a_j from nodes j to node i across links with weights w_ij, the direction of the inequality was reversed.
Abstract: Correction: After publication of (Abrams 2013), I discovered errors in two formulas in the settle-nets paragraphs of the POPCO main loop section in Methods (page 13 in the PDF file). I correct these errors here. In the definition of the weighted sum n_i of input activations a_j from nodes j to node i across links with weights w_ij, the direction of the inequality was reversed. The definition of n_i should read: n_i = ∑ …
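
For orientation, the Python sketch below shows a Holyoak/Thagard-style settling step of the general kind the correction refers to, in which a weighted sum n_i of input activations drives an activation update; the positive-activation restriction, parameter values, and update rule here are assumptions for illustration, not POPCO's exact corrected formulas.

```python
def settle_step(activations, weights, decay=0.1, max_a=1.0, min_a=-1.0):
    """One settling step for a constraint-satisfaction network.

    Assumed rule (in the spirit of Holyoak/Thagard-style settling, not
    necessarily the corrected POPCO formula): n_i sums w_ij * a_j over
    nodes j with positive activation; a_i then moves toward max_a when
    n_i is positive and toward min_a otherwise, with decay toward zero.
    """
    new = {}
    for i, a_i in activations.items():
        n_i = sum(w * activations[j]
                  for (j, k), w in weights.items()
                  if k == i and activations[j] > 0)
        if n_i > 0:
            delta = n_i * (max_a - a_i)
        else:
            delta = n_i * (a_i - min_a)
        new[i] = max(min_a, min(max_a, a_i * (1 - decay) + delta))
    return new

# Tiny example: node "q" excited by "p" and inhibited by "r".
acts = {"p": 0.5, "r": 0.3, "q": 0.0}
w = {("p", "q"): 0.4, ("r", "q"): -0.2}
for _ in range(10):
    acts = settle_step(acts, w)
print(acts)
```

Repeating the step until activations stop changing yields the "settled" state of the network, which is how POPCO-style models resolve competing constraints among beliefs.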

4 citations