Showing papers in "Journal of Computational Science in 2014"
TL;DR: Chaos is introduced into the bat algorithm (BA) to increase its global search mobility for robust global optimization; results show that some chaotic BA variants can clearly outperform the standard BA on these benchmarks.
Abstract: Bat algorithm (BA) is a recent metaheuristic optimization algorithm proposed by Yang. In the present study, we have introduced chaos into BA so as to increase its global search mobility for robust global optimization. Detailed studies have been carried out on benchmark problems with different chaotic maps. Here, four different variants of chaotic BA are introduced and thirteen different chaotic maps are utilized for validating each of these four variants. The results show that some variants of chaotic BAs can clearly outperform the standard BA for these benchmarks.
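The mechanism is easy to sketch: a chaotic sequence replaces one of the uniform random draws in the standard BA update. Below is a minimal, hypothetical Python illustration using the logistic map (one of many maps the paper tests) to drive the frequency parameter; the function names and constants are ours, not the paper's.

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map, a common chaotic sequence generator on (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_bat_step(positions, best, freq_min=0.0, freq_max=2.0, chaos=0.7):
    """One bat-style iteration (1-D, illustrative): the frequency is drawn from a
    chaotic sequence instead of a uniform random number, then each bat moves
    toward the current best position."""
    new_positions = []
    for x in positions:
        chaos = logistic_map(chaos)                      # advance the chaotic sequence
        freq = freq_min + (freq_max - freq_min) * chaos  # chaotic frequency draw
        velocity = (x - best) * freq                     # pull toward the best bat
        new_positions.append(x - velocity)
    return new_positions, chaos
```

Because the logistic map is deterministic but non-repeating, the search gains the "mobility" the abstract refers to without a random number generator.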
TL;DR: In this article, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is applied to optimize weights which are used in a linear combination of sixteen neighborhood and node similarity indices.
Abstract: Many real world, complex phenomena have underlying structures of evolving networks where nodes and links are added and removed over time. A central scientific challenge is the description and explanation of network dynamics, with a key test being the prediction of short and long term changes. For the problem of short-term link prediction, existing methods attempt to determine neighborhood metrics that correlate with the appearance of a link in the next observation period. Recent work has suggested that the incorporation of topological features and node attributes can improve link prediction. We provide an approach to predicting future links by applying the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to optimize weights which are used in a linear combination of sixteen neighborhood and node similarity indices. We examine a large dynamic social network with over 10^6 nodes (Twitter reciprocal reply networks), both as a test of our general method and as a problem of scientific interest in itself. Our method exhibits fast convergence and high levels of precision for the top twenty predicted links. Based on our findings, we suggest possible factors which may be driving the evolution of Twitter reciprocal reply networks.
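The prediction step itself is a weighted sum of similarity indices for each candidate node pair. The sketch below uses just two classic indices (common neighbors and preferential attachment) rather than the paper's sixteen; in the paper, the weight vector is the quantity tuned by CMA-ES, whereas here it is fixed by hand for illustration.

```python
def common_neighbors(adj, u, v):
    """Number of neighbors shared by u and v."""
    return len(adj[u] & adj[v])

def preferential_attachment(adj, u, v):
    """Product of the degrees of u and v."""
    return len(adj[u]) * len(adj[v])

def link_score(adj, u, v, weights):
    """Linear combination of similarity indices for the pair (u, v).
    In the paper, CMA-ES searches for the `weights` that maximize
    precision of the top predicted links; here they are hand-picked."""
    indices = [common_neighbors(adj, u, v), preferential_attachment(adj, u, v)]
    return sum(w * s for w, s in zip(weights, indices))
```

Ranking all non-adjacent pairs by `link_score` and taking the top few yields the predicted links for the next observation period.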
TL;DR: OPUS-RBF is compared with a standard PSO, CMA-ES, two other surrogate-assisted PSO algorithms, and an RBF-assisted evolution strategy and numerical results suggest that OPUS-RBF is promising for expensive black-box optimization.
Abstract: This paper develops the OPUS (Optimization by Particle swarm Using Surrogates) framework for expensive black-box optimization. In each iteration, OPUS considers multiple trial positions for each particle in the swarm and uses a surrogate model to identify the most promising trial position. Moreover, the current overall best position is refined by finding the global minimum of the surrogate in the neighborhood of that position. OPUS is implemented using an RBF surrogate and the resulting OPUS-RBF algorithm is applied to a 36-D groundwater bioremediation problem, a 14-D watershed calibration problem, and ten mostly 30-D test problems. OPUS-RBF is compared with a standard PSO, CMA-ES, two other surrogate-assisted PSO algorithms, and an RBF-assisted evolution strategy. The numerical results suggest that OPUS-RBF is promising for expensive black-box optimization.
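The central idea, screening cheap trial positions with a surrogate before spending a true (expensive) function evaluation, can be sketched as follows. This assumes a Gaussian RBF interpolant built with NumPy; it is a simplified stand-in for the paper's RBF surrogate, not the OPUS implementation.

```python
import numpy as np

def fit_rbf(X, y, gamma=1.0):
    """Fit a Gaussian RBF interpolant through already-evaluated points (sketch).
    A tiny ridge term keeps the linear system well conditioned."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    return np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

def rbf_predict(X, coeffs, x_new, gamma=1.0):
    """Cheap surrogate prediction at a new point."""
    d2 = ((X - x_new) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ coeffs

def pick_trial(X, coeffs, trials, gamma=1.0):
    """Return the trial position the surrogate predicts to be best, so that
    only one expensive true evaluation is spent per particle."""
    preds = [rbf_predict(X, coeffs, t, gamma) for t in trials]
    return trials[int(np.argmin(preds))]
```

In OPUS the chosen trial is then evaluated with the true objective and added to the surrogate's data set, so the model improves as the swarm progresses.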
TL;DR: The transmedia learning paradigm is introduced as offering more effective use of serious games for training and education to Soldiers across multiple media, consistent with the goals of international organizations implementing approaches similar to those described by the Army Learning Model.
Abstract: Serious games present a relatively new approach to training and education for international organizations such as NATO (North Atlantic Treaty Organization), non-governmental organizations (NGOs), the U.S. Department of Defense (DoD) and the U.S. Department of Homeland Security (DHS). Although serious games are often deployed as stand-alone solutions, they can also serve as entry points into a comprehensive training pipeline in which content is delivered via different media to rapidly scale immersive training and education for mass audiences. The present paper introduces a new paradigm for more effective and scalable training and education called transmedia learning. Transmedia learning leverages several new media trends including the peer communications of social media, the scalability of massively open online courses (MOOCs), and the design of transmedia storytelling used by the entertainment, advertising, and commercial game industries to sustain audience engagement. Transmedia learning is defined as the scalable system of messages representing a narrative or core experience that unfolds from the use of multiple media, emotionally engaging learners by involving them personally in the story. In the present paper, we introduce the transmedia learning paradigm as offering more effective use of serious games for training and education. This approach is consistent with the goals of international organizations implementing approaches similar to those described by the Army Learning Model (ALM) to deliver training and education to Soldiers across multiple media. We discuss why the human brain is wired for transmedia learning and demonstrate how the Simulation Experience Design Method can be used to create transmedia learning story worlds for serious games.
We describe how social media interactions and MOOCs may be used in transmedia learning, and how data mining social media and experience tracking can inform the development of computational learner models for transmedia learning campaigns. Examples of how the U.S. Army has utilized transmedia campaigns for strategic communication and game-based training are provided. Finally, we provide strategies the reader can use today to incorporate transmedia storytelling elements such as Internet, serious games, video, social media, graphic novels, machinima, blogs, and alternate reality gaming into a new paradigm for training and education: transmedia learning.
TL;DR: This paper describes two e-science infrastructures: Science and Engineering Applications Grid (SEAGrid) and molecular modeling and parametrization (ParamChem), which share a similar three-tier computational infrastructure that consists of a front-end client, a middleware web services layer, and a remote HPC computational layer.
Abstract: E-science infrastructures are becoming the essential tools for computational scientific research. In this paper, we describe two e-science infrastructures: Science and Engineering Applications Grid (SEAGrid) and molecular modeling and parametrization (ParamChem). The SEAGrid is a virtual organization with a diverse set of hardware and software resources and provides services to access such resources in a routine and transparent manner. These essential services include allocations of computational resources, client-side application interfaces, computational job and data management tools, and consulting activities. ParamChem is another e-science project, dedicated to molecular force-field parametrization based on both ab-initio and molecular mechanics calculations on high performance computers (HPCs) driven by scientific workflow middleware services. Both projects share a similar three-tier computational infrastructure that consists of a front-end client, a middleware web services layer, and a remote HPC computational layer. The client is a Java Swing desktop application with components for pre- and post-data processing, communication with the middleware server and local data management. The middleware service is based on the Axis2 web service and a MySQL relational database, and provides functionality for user authentication and session control, HPC resource information collection, discovery and matching, and job information logging and notification. It can also be integrated with scientific workflows to manage computations on HPC resources. The grid credentials for accessing HPCs are delegated through the MyProxy infrastructure. Currently SEAGrid has integrated several popular application software suites such as Gaussian for quantum chemistry, NAMD for molecular dynamics and engineering software such as Abacus for mechanical engineering. ParamChem has integrated CGenFF (CHARMM General Force-Field) for molecular force-field parametrization of drug-like molecules.
Long-term storage of user data is handled by tertiary data archival mechanisms. At present, the SEAGrid science gateway serves more than 500 users, while more than 1000 users use ParamChem services such as atom typing and initial force-field parameter guessing.
TL;DR: Experimental results reveal that the proposed parameter adaptive harmony search algorithm outperforms the existing approaches when applied to 15 benchmark functions and is also employed for data clustering.
Abstract: This paper presents a parameter adaptive harmony search algorithm (PAHS) for solving optimization problems. In the existing literature, the two important parameters of the harmony search algorithm, the Harmony Memory Consideration Rate (HMCR) and the Pitch Adjusting Rate (PAR), were either kept constant or only the PAR value was changed dynamically while HMCR was kept fixed; in the proposed PAHS, both are allowed to change dynamically. This change in the parameters is made in order to reach the global optimal solution. Four different cases of linear and exponential changes have been explored. The change is applied during the process of improvisation. The proposed algorithm is evaluated on 15 standard benchmark functions of various characteristics. Its performance is investigated and compared with three existing harmony search algorithms. Experimental results reveal that the proposed algorithm outperforms the existing approaches when applied to the 15 benchmark functions. The effects of scalability, noise, and harmony memory size have also been investigated on the four approaches of HS. The proposed algorithm is also employed for data clustering. Five real life datasets selected from the UCI machine learning repository are used. The results show that, for data clustering, the proposed algorithm achieves better results than the other algorithms.
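The dynamic parameter cases reduce to simple schedules over the improvisation counter. The sketch below shows one linear and one exponential schedule of the kind the abstract describes; the endpoint values are illustrative assumptions, not the paper's settings.

```python
def linear_hmcr(t, t_max, start=0.7, end=0.99):
    """HMCR increased linearly with improvisation t (one PAHS-style case)."""
    return start + (end - start) * t / t_max

def exponential_par(t, t_max, start=0.9, end=0.3):
    """PAR decayed exponentially from start to end (another PAHS-style case)."""
    return start * (end / start) ** (t / t_max)
```

Early improvisations then favor exploration (lower HMCR, higher PAR) while later ones favor exploitation, which is the stated purpose of letting both parameters vary.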
TL;DR: Both continuous and discrete versions of the informative differential evolution algorithm are used for optimization of relay settings, and results are compared with a hybrid of genetic algorithm and nonlinear programming and with sequential quadratic programming.
Abstract: Growing interconnection in distribution systems creates new problems for protection engineers. In particular, the design of overcurrent relay coordination in such systems is an independent area of research. With the availability of new artificial intelligence based optimization algorithms, relay coordination research has gained new momentum. Well-established algorithms such as genetic algorithms and particle swarm optimization have been successfully applied to such applications. This paper discusses the application of an informative differential evolution algorithm with a self-adaptive re-clustering technique for the selection of TDS and PSM for optimal coordination of directional overcurrent relays. Both continuous and discrete versions of the informative differential evolution algorithm are used for optimization of the relay settings. Proper combinations of backup relays for each primary relay are identified using the LINKNET graph theory approach. Coordination of directional overcurrent relays is developed for 9-bus and IEEE 30-bus distribution systems. The aim is to minimize the total operating time of the primary relays and eliminate miscoordination among the primary and backup relay pairs. Discrete settings for electromechanical relays are also discussed in this paper. Moreover, the relay coordination problem is modified to provide an optimal coordination time interval between 0.2 and 0.8 s among all primary and backup relay pairs. The results are compared with a hybrid of genetic algorithm and nonlinear programming and with sequential quadratic programming. DIgSILENT PowerFactory software is used for verification of the results.
TL;DR: It is demonstrated that not only relative electron-attracting powers need to be considered, but also relative charge capacities (or polarizabilities), and that other factors can also have significant roles.
Abstract: A σ-hole is a region of diminished electronic density on the extension of a covalent bond to an atom. This region often exhibits a positive electrostatic potential, which allows attractive noncovalent interactions with negative sites. In this study, we have systematically examined the dependence of σ-hole potentials upon (a) the atom having the σ-hole, and (b) the remainder of the molecule. We demonstrate that not only relative electron-attracting powers need to be considered, but also relative charge capacities (or polarizabilities), and that other factors can also have significant roles.
TL;DR: This paper explains the approach to providing a flexible yet scalable simulation environment and elaborates its design principles and implementation details, including a comparison to the widely used Lattice Boltzmann solver Palabos.
Abstract: We present the open source Lattice Boltzmann solver Musubi. It is part of the parallel simulation framework APES, which utilizes octrees to represent sparse meshes and provides tools from automatic mesh generation to post-processing. The octree mesh representation enables the handling of arbitrarily complex simulation domains, even on massively parallel systems. Local grid refinement is implemented by several interpolation schemes in Musubi. Various kernels provide different physical models based on stream-collide algorithms. These models can be computed concurrently and can be coupled with each other. This paper explains our approach to provide a flexible yet scalable simulation environment and elaborates its design principles and implementation details. The efficiency of our approach is demonstrated with a performance evaluation on two supercomputers and a comparison to the widely used Lattice Boltzmann solver Palabos.
TL;DR: A new genetic algorithm is developed to find the near global optimal solution of multimodal nonlinear optimization problems, which makes use of a real encoded crossover and mutation operator.
Abstract: In this paper a new genetic algorithm is developed to find the near global optimal solution of multimodal nonlinear optimization problems. The algorithm makes use of real encoded crossover and mutation operators. The performance of the GA is tested on a set of twenty-seven nonlinear global optimization test problems of varying difficulty. Results are compared with some well established popular GAs from the literature. It is observed that the proposed algorithm performs significantly better than the existing ones.
TL;DR: A novel methodology based on Kriging and expected improvement is proposed for applying robust optimization on unconstrained problems affected by implementation error and performs significantly better than current techniques for robust optimization using response surface modeling.
Abstract: A novel methodology, based on Kriging and expected improvement, is proposed for applying robust optimization on unconstrained problems affected by implementation error. A modified expected improvement measure which reflects the need for robust instead of nominal optimization is used to provide new sampling point locations. A new sample is added at each iteration by finding the location at which the modified expected improvement measure is maximum. By means of this process, the algorithm iteratively progresses towards the robust optimum. It is demonstrated that the algorithm performs significantly better than current techniques for robust optimization using response surface modeling.
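For reference, the standard expected-improvement measure for minimization, which the paper then modifies to target the robust rather than the nominal optimum, is EI(x) = (f_best − μ)Φ(z) + σφ(z) with z = (f_best − μ)/σ, where μ and σ are the surrogate's prediction and uncertainty. A plain-Python version of this baseline follows; the paper's robust modification itself is not reproduced here.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Standard expected improvement for minimization, given the surrogate's
    predicted mean `mu` and standard deviation `sigma` at a candidate point,
    and the best objective value observed so far, `f_best`."""
    if sigma <= 0.0:
        # No model uncertainty: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_best - mu) * cdf + sigma * pdf
```

Maximizing this quantity over candidate locations yields the next sampling point, exactly the iteration structure the abstract describes for its modified measure.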
TL;DR: The Multiscale Coupling Library and Environment: MUSCLE 2 has a simple to use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes, and its local and distributed computing capabilities are demonstrated.
Abstract: We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple to use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We demonstrate its local and distributed computing capabilities and compare its performance to MUSCLE 1, file copy, MPI, MPWide, and GridFTP. The local throughput of MPI is about two times higher, so very tightly coupled code should use MPI as a single submodel of MUSCLE 2; the distributed performance of GridFTP is lower, especially for small messages. We test the performance of a canal system model with MUSCLE 2, where it introduces an overhead as small as 5% compared to MPI.
TL;DR: A novel hybrid ABC algorithm based on an integrated technique is proposed for tackling the university course timetabling problem, using a hill climbing optimizer embedded within the employed bee operator to enhance the local exploitation ability of the original ABC algorithm.
Abstract: University course timetabling is concerned with assigning a set of courses to a set of rooms and timeslots according to a set of constraints. This problem has been tackled using metaheuristic techniques. The artificial bee colony (ABC) algorithm has been successfully used for tackling uncapacitated examination and course timetabling problems. In this paper, a novel hybrid ABC algorithm based on an integrated technique is proposed for tackling the university course timetabling problem. First, initial feasible solutions are generated using a combination of saturation degree (SD) and a backtracking algorithm (BA). Secondly, a hill climbing optimizer is embedded within the employed bee operator to enhance the local exploitation ability of the original ABC algorithm while tackling the problem. Hill climbing iteratively navigates the search space of each population member in order to reach a local optimum. The proposed hybrid ABC technique is evaluated using the dataset established by Socha, including five small, five medium and one large problem instance. Empirical results on these problem instances validate the effectiveness and efficiency of the proposed algorithm. Our work also shows that a well-designed hybrid technique is a competitive alternative for addressing the university course timetabling problem.
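The embedded local search is a plain hill climber applied to each population member. A generic sketch is below; the real neighborhood moves are timetable-specific (swapping timeslots or rooms subject to constraints), so the `neighbor` function here is a hypothetical placeholder supplied by the caller.

```python
def hill_climb(solution, cost, neighbor, steps=50):
    """Greedy improvement-only hill climber of the kind embedded in the
    employed-bee phase: propose a neighbor, keep it only if it is better."""
    best = solution
    best_cost = cost(solution)
    for _ in range(steps):
        candidate = neighbor(best)
        candidate_cost = cost(candidate)
        if candidate_cost < best_cost:   # accept improvements only
            best, best_cost = candidate, candidate_cost
    return best, best_cost
```

In the hybrid algorithm, each employed bee would run such a climb on its food source before the usual ABC fitness comparison, sharpening local exploitation.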
TL;DR: Empirical results show the capability of the proposed Swarm Intelligence approach, namely artificial bee colony (ABC), in producing higher prediction accuracy for the prices in the time series data of interest.
Abstract: The importance of optimizing machine learning control parameters has motivated researchers to investigate proficient optimization techniques. In this study, a Swarm Intelligence approach, namely artificial bee colony (ABC), is utilized to optimize the parameters of least squares support vector machines. Considering critical issues such as enriching the searching strategy and preventing overfitting, two modifications to the original ABC are introduced. Using commodity price time series as empirical data, the proposed technique is compared against two techniques: a Back Propagation Neural Network and optimization by a Genetic Algorithm. Empirical results show the capability of the proposed technique in producing higher prediction accuracy for the prices in the time series data of interest.
TL;DR: An efficient Ordered Distance Vector (ODV) based population seeding technique has been proposed for permutation-coded GA using an elitist service transfer approach and the experimental results advocate that the proposed technique outperforms the existing popular initialization methods in terms of convergence rate, error rate and convergence time.
Abstract: The Genetic Algorithm (GA) is a popular heuristic method for dealing with complex problems with very large search spaces. Among the various phases of a GA, the initial phase of population seeding plays an important role in determining how quickly the GA achieves the best fit with respect to time. In other words, the quality of the individual solutions generated in the initial population phase plays a critical role in determining the quality of the final optimal solution. The traditional GA with random population seeding is quite simple and of course efficient to some extent; however, the population may contain poor quality individuals which take a long time to converge to the optimal solution. On the other hand, hybrid population seeding techniques, which have the benefit of good quality individuals and fast convergence, lack randomness, individual diversity and the ability to converge to the global optimal solution. This motivates the design of a population seeding technique with the multifaceted features of randomness, individual diversity and good quality. In this paper, an efficient Ordered Distance Vector (ODV) based population seeding technique is proposed for permutation-coded GA using an elitist service transfer approach. The Traveling Salesman Problem (TSP), a well-known hard combinatorial problem, is chosen as the testbed, and experiments are performed on benchmark TSP instances of different sizes obtained from the standard TSPLIB. The experimental results show that the proposed technique outperforms the existing popular initialization methods in terms of convergence rate, error rate and convergence time.
TL;DR: The Optimal Steps Model and the Gradient Navigation Model are presented, which produce trajectories similar to each other and are grid-free and free of oscillations, leading to the conclusion that the two major differences are also the two major weaknesses of the older models.
Abstract: Cellular automata (CA) and ordinary differential equation (ODE) based models compete for dominance in microscopic pedestrian dynamics. Both are inspired by the idea that pedestrians are subject to forces. However, there are two major differences: In a CA, movement is restricted to a coarse grid and navigation is achieved directly by pointing the movement in the direction of the forces. Force based ODE models operate in continuous space and navigation is computed indirectly through the acceleration vector. We present two models emanating from the CA and ODE approaches that remove these two differences: the Optimal Steps Model and the Gradient Navigation Model. Both models are very robust and produce trajectories similar to each other, bridging the gap between the older models. Both approaches are grid-free and free of oscillations, giving cause to the hypothesis that the two major differences are also the two major weaknesses of the older models.
TL;DR: In this paper, the authors proposed two-stage unsupervised feature selection methods to determine a subset of relevant features to improve the accuracy of the underlying text clustering algorithm, which is a hybrid approach of feature selection and feature extraction.
Abstract: Feature selection is widely used in text clustering to reduce the dimensions of the feature space. In this paper, we study and propose two-stage unsupervised feature selection methods to determine a subset of relevant features that improves the accuracy of the underlying algorithm. We experiment with hybrid feature selection—feature selection (FS–FS) and feature selection—feature extraction (FS–FE) methods. Initially, each feature in the document is scored on the basis of its importance for the clustering using two different feature selection methods individually: Mean-Median (MM) and Mean Absolute Difference (MAD). In the second stage, in two different experiments, we hybridize them with a feature selection method, absolute cosine (AC), and a feature extraction method, principal component analysis (PCA), to further reduce the dimensions of the feature space. We perform comprehensive experiments to compare FS, FS–FS and FS–FE using k-means clustering on the Reuters-21578 dataset. The experimental results show that the two-stage feature selection methods are more effective in obtaining good quality results from the underlying clustering algorithm. Additionally, we observe that the FS–FE approach is superior to the FS–FS approach.
TL;DR: CSHPSO is a promising new co-swarm PSO which can be used to solve any real constrained optimization problem and is able to give the minimal cost for the ED problem in comparison with the other algorithms considered.
Abstract: This paper proposes a new co-swarm PSO (CSHPSO) for constrained optimization problems, obtained by hybridizing the recently proposed shrinking hypersphere PSO (SHPSO) with the differential evolution (DE) approach. The total swarm is subdivided into two sub-swarms in such a way that the first sub-swarm uses SHPSO and the second uses DE. Experiments are performed on the state-of-the-art problems proposed in IEEE CEC 2006. The results of CSHPSO are compared with SHPSO and DE in a variety of ways. A statistical approach is applied to establish the significance of the numerical experiments. In order to further test the efficacy of the proposed CSHPSO, an economic dispatch (ED) problem with valve point effects for 40 generating units is solved. The results of the problem using CSHPSO are compared with SHPSO, DE and the existing solutions in the literature. It is concluded that CSHPSO gives the minimal cost for the ED problem in comparison with the other algorithms considered. Hence, CSHPSO is a promising new co-swarm PSO which can be used to solve real constrained optimization problems.
TL;DR: A fuzzy bi-criteria optimization model is formulated for component selection under a build-or-buy scheme that simultaneously maximizes intra-modular coupling density (ICD) and functionality within the limitations of budget, reliability and delivery time.
Abstract: The component based software system approach is concerned with system development by integrating components. Component based software construction primarily focuses on the view that software systems can be built up in a modular fashion. The modular design is a logical collection of several independently developed components that are assembled within a well-defined software architecture. These components can be developed in-house or obtained commercially from the outside market, making the build versus buy decision an important consideration in the development process. Cohesion and coupling (C&C) play a major role in determining system quality in terms of reliability, maintainability and availability. Cohesion is defined as the internal interaction of components within a module. On the other hand, coupling is the external interaction of a module with other modules, i.e. the interaction of components amongst the modules of the software system. High cohesion and low coupling are among the important criteria for good software design. Intra-modular coupling density (ICD) is a measure that describes the relationship between cohesion and coupling of modules in a modular software system, and its value lies between zero and one. This paper deals with the selection of the right mix of components for a modular software system using a build-or-buy strategy. In this paper, a fuzzy bi-criteria optimization model is formulated for component selection under a build-or-buy scheme. The model simultaneously maximizes intra-modular coupling density (ICD) and functionality within the limitations of budget, reliability and delivery time. The model is further extended by incorporating the issue of compatibility amongst the components of the modules. A case study is devised to explain the formulated model.
TL;DR: The study examines the advantages of the mesoscopic approach for simulation and finds that modeling efforts are balanced with the necessary level of detail, facilitating quick and simple model creation and simulation.
Abstract: This paper reviews and compares existing approaches for supply chain modeling and simulation and applies the mesoscopic modeling and simulation approach using the simulation software MesoSim, our own development. A simplified real-world supply chain example is modeled with discrete event, mesoscopic and system dynamics simulation. The objective of the study is to compare the process of model creation and its validity using each approach. The study examines the advantages of the mesoscopic approach for simulation. Its major benefits are that modeling efforts are balanced with the necessary level of detail, facilitating quick and simple model creation and simulation.
TL;DR: A new color space (IHLS) is introduced in this paper, which performs well in facial image segmentation; the proposed method is shown to be efficient, with low computational complexity and error.
Abstract: Image segmentation is a very important and fundamental operation for meaningful analysis and interpretation of images. One of the most important applications of segmentation is facial surgical planning. Thresholding is a common method in image segmentation because it is simple, robust to noise, and accurate. In this paper, we recognize and segment the area of the lips using optimal thresholding based on bacterial foraging optimization. A new color space (IHLS) is introduced in this paper, which performs well in facial image segmentation. In order to evaluate the performance of the proposed algorithm, we use three methods to measure accuracy. The proposed algorithm is efficient and has low computational complexity and error.
TL;DR: A mathematical model and numerical simulations corresponding to severe slugging in air-water pipeline-riser systems are presented, showing an improvement in a model previously published by the author, including inertial effects.
Abstract: A mathematical model and numerical simulations corresponding to severe slugging in air-water pipeline-riser systems are presented. The mathematical model considers continuity equations for liquid and gas phases, with a simplified momentum equation for the mixture. A drift-flux model, evaluated for the local conditions in the riser, is used as a closure law. In many models appearing in the literature, propagation of pressure waves is neglected both in the pipeline and in the riser. Besides, variations of void fraction in the stratified flow in the pipeline are also neglected and the void fraction obtained from the stationary state is used in the simulations. This paper shows an improvement in a model previously published by the author, including inertial effects. In the riser, inertial terms are taken into account by using the rigid water-hammer approximation. In the pipeline, the local accelerations of the water and gas phases are included in the momentum equations for stratified flow, allowing the instantaneous values of pressure drop and void fraction to be calculated. The developed model predicts the location of the liquid accumulation front in the pipeline and the liquid level in the riser, so it is possible to determine which type of severe slugging occurs in the system. A comparison is made with experimental results published in the literature, including a choke valve and gas injection at the bottom of the riser, showing very good results for slugging cycle and stability maps. Simulations were also made assessing the effect of different strategies to mitigate severe slugging, such as choking, gas injection and an increase in separation pressure, showing correct trends.
TL;DR: The proposed strategy, named Self Balanced Differential Evolution (SBDE), balances the exploration and exploitation capabilities of DE; it is tested on 30 benchmark optimization problems and the results are compared with basic DE and with the advanced variants SFLSDE, OBDE and jDE.
Abstract: Differential Evolution (DE) is a well known and simple population based probabilistic approach for global optimization. It has reportedly outperformed a few Evolutionary Algorithms (EAs) and other search heuristics, such as Particle Swarm Optimization (PSO), when tested on both benchmark and real world problems. But DE, like other probabilistic optimization algorithms, sometimes exhibits premature convergence. Therefore, in order to avoid stagnation while keeping a good convergence speed for DE, two modifications are proposed: one is the introduction of a new control parameter, the Cognitive Learning Factor (CLF), and the other is a dynamic setting of the scale factor. Both modifications are made in the mutation process of DE. Cognitive learning is a powerful mechanism that adjusts the current position of individuals by means of some specified knowledge. The proposed strategy, named Self Balanced Differential Evolution (SBDE), balances the exploration and exploitation capabilities of DE. To establish the efficiency and efficacy of SBDE, it is tested on 30 benchmark optimization problems and the results are compared with basic DE and the advanced DE variants SFLSDE, OBDE and jDE. Further, a real-world optimization problem, namely Spread Spectrum Radar Polyphase Code Design, is solved to show the wide applicability of SBDE.
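Both modifications live in the mutation step. The sketch below shows a DE-style mutant vector with a cognitive pull toward the population best, weighted by a CLF-like parameter, plus a linearly decaying scale factor; the exact SBDE formulas are in the paper, and the constants and decay schedule here are our assumptions for illustration only.

```python
def sbde_style_mutant(x, best, a, b, clf=0.5, t=0, t_max=100):
    """Illustrative DE mutant: a cognitive term pulls the target vector `x`
    toward the population-best `best` (weight `clf`, mimicking the Cognitive
    Learning Factor), while the difference of two random members `a`, `b`
    is scaled by a factor F that decays over iterations (explore early,
    exploit late). Not the paper's exact update."""
    F = 0.9 - 0.5 * t / t_max  # dynamic scale factor, assumed linear decay
    return [xi + clf * (bi - xi) + F * (ai - bj)
            for xi, bi, ai, bj in zip(x, best, a, b)]
```

The mutant would then pass through the usual DE crossover and greedy selection steps, which are unchanged by the two proposed modifications.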
TL;DR: The authors show that the proposed ensemble paradigm can efficiently optimize the operating parameters of a large-scale power system which includes different mechanical components, and the results reveal that E-MSBA inherits some positive features of the MSBA algorithm.
Abstract: The aim of the current study is to probe the potential of ensemble bio-inspired approaches to handle the deficiencies associated with designing large-scale power systems. Ensemble computing has proven to be a very promising paradigm. The fundamental motivation behind designing such bio-inspired optimization models lies in the fact that interactions among different sole optimizers can yield much better results than an individual optimizer. To do so, the authors propose an optimization technique called the ensemble mutable smart bee algorithm (E-MSBA), which is based on the aggregation of several independent low-level optimizers. Here, each low-level unit of the proposed ensemble framework uses the mutable smart bee algorithm (MSBA) for the optimization procedure. The main motivations behind selecting MSBAs of different properties as components of the ensemble are twofold. On the one hand, MSBA has proven its capability for handling multimodal constrained problems. On the other hand, different experiments have demonstrated that MSBA can find the optimum solution with a relatively low computational cost. In this study, the authors show that the proposed ensemble paradigm can efficiently optimize the operating parameters of a large-scale power system which includes different mechanical components. To this end, E-MSBA and some rival methods are applied to the optimization procedure. The obtained results reveal that E-MSBA inherits some positive features of the MSBA algorithm. Additionally, it is observed that the ensembling approach enables the proposed method to effectively tackle the flaws associated with the optimization of large-scale problems.
TL;DR: This paper shows the effectiveness of RAMSAS through a real case study concerning the reliability analysis of an Attitude Determination and Control System (ADCS) of a satellite.
Abstract: Reliability analysis of modern large-scale systems is a challenging task which could benefit from the joint exploitation of recent model-based approaches and simulation techniques to flexibly evaluate system reliability performance and compare different design choices. In this context, RAMSAS, a model-based method which supports the reliability analysis of systems through simulation by combining the benefits of popular OMG modeling languages with widely adopted simulation and analysis environments, has recently been proposed. This paper shows the effectiveness of RAMSAS through a real case study concerning the reliability analysis of an Attitude Determination and Control System (ADCS) of a satellite.
TL;DR: A system architecture for a mobile health-monitoring platform based on a wireless body area network (WBAN) is proposed and the use of this platform in a wide area is shown to detect and to track disease movement in the case of epidemic situation.
Abstract: This paper proposes a system architecture for a mobile health-monitoring platform based on a wireless body area network (WBAN). We detail the WBAN features from both hardware and software points of view. The system architecture of this platform is a three-tier system, and each tier is detailed. We have designed a flowchart of the use of WBANs to illustrate the functioning of such platforms. We show the use of this platform in a wide area to detect and track disease movement in the case of an epidemic situation. Indeed, tracking epidemic disease is a very challenging issue, and the success of such a process could help medical administrations stop diseases quicker than usual. In this study, WBANs are deployed on volunteers who agree to carry a light wireless sensor network. Sensors on the body monitor some health parameters (temperature, pressure, etc.) and run some light classification algorithms to help disease diagnosis. Finally, the WBAN sends aggregated data about the disease to base stations which collect the results. Our platform runs an on-line disease-tracking program and detects information about how the disease propagates.
TL;DR: An approach based on the formalism of Petri nets is described; several considerations related to this problem are presented, together with a solving methodology based on the previous work of the authors and a case study to illustrate the main concepts.
Abstract: The management of certain systems, such as manufacturing facilities, supply chains, or communication networks, implies assessing the consequences of decisions aimed at the most efficient operation. This kind of system usually shows complex behaviors where subsystems present parallel evolutions and synchronizations. Furthermore, the existence of global objectives for the operation of the systems, and the changes that the systems or their environment experience during their evolution, imply a more or less strong dependence between decisions made at different points in the life cycle. This paper addresses a complex problem that is scarcely present in the scientific literature: sequences of decisions aimed at achieving several objectives simultaneously and with strong influence from one decision on the rest. In this case, the formal statement of the decision problem should take into account the whole decision sequence, making the “divide and conquer” solving paradigm impractical. Only an integrated methodology may afford a realistic solution to such a type of decision problem. In this paper, an approach based on the formalism of Petri nets is described, and several considerations related to this problem are presented, together with a solving methodology based on the previous work of the authors and a case study to illustrate the main concepts.
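The parallel evolutions and synchronizations the abstract describes are exactly what the Petri-net formalism captures: places hold tokens, and a transition fires when every input place has enough tokens, moving tokens from inputs to outputs. A minimal sketch of these mechanics follows; the two-transition net is an illustration, not the paper's case study.

```python
# Toy Petri net: p1 --t1--> p2 --t2--> p3 (all arc weights 1).
pre  = {"t1": {"p1": 1}, "t2": {"p2": 1}}   # input arcs: place -> weight
post = {"t1": {"p2": 1}, "t2": {"p3": 1}}   # output arcs: place -> weight

def enabled(marking, t):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre[t].items())

def fire(marking, t):
    """Fire t: consume tokens from input places, produce in output places."""
    assert enabled(marking, t)
    m = dict(marking)
    for p, w in pre[t].items():
        m[p] = m.get(p, 0) - w
    for p, w in post[t].items():
        m[p] = m.get(p, 0) + w
    return m

m = {"p1": 1}
m = fire(m, "t1")   # token moves p1 -> p2
m = fire(m, "t2")   # token moves p2 -> p3
```

A sequence of decisions then corresponds to choosing which enabled transition to fire at each marking, which is why the whole decision sequence must be analyzed over the net's reachable markings rather than split into independent subproblems.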
TL;DR: An investigation of how influential observations affect the metrics and predictivity of multiple linear regressions on a set of phenolic compounds with toxicity on Tetrahymena pyriformis, using standardized residuals and Cook's distance approaches; the ri-model proved more accurate and robust in terms of sensitivity, while the Di-model proved robust in terms of specificity.
Abstract: An investigation of how influential observations affect the metrics and predictivity of multiple linear regressions on a set of phenolic compounds with toxicity on Tetrahymena pyriformis is presented. The investigation of influential observations was conducted using the standardized residuals (ri-model) and Cook's distance (Di-model) approaches. The applied approaches led to an improvement of the models' metrics, robustness and accuracy on the investigated sample. Overall, the ri-model showed higher accuracy and robustness in terms of sensitivity, while the Di-model proved robust in terms of specificity. Characterization of the withdrawn compounds is essential for advancing the development of models for the toxicity of phenols.
TL;DR: The novelty of this investigation is the presentation of an approach which allows direct computation of the infinitesimal generator describing the customers' behavior and the channel allocation in a small cell, neither generating nor storing the reachability set.
Abstract: This paper presents an approach to study the performance and reliability of Small Cell Networks, taking into account the retrial phenomenon, the finite number of customers (mobiles) served in a cell and the random breakdowns of the base station channels. We consider the classical disciplines, namely active and dependent breakdowns, and moreover we propose new breakdown disciplines in which customers interrupted by a channel failure are given a higher priority compared to other customers. To this end, we use the Generalized Stochastic Petri Nets (GSPN) model as a support. However, one of the major drawbacks of this high-level formalism in the performance evaluation of large networks is the state space explosion problem, which worsens when considering repeated calls and multiple unreliable channels. Hence, the novelty of this investigation is the presentation, for the different breakdown disciplines with and without priority, of an approach which allows direct computation of the infinitesimal generator describing the customers' behavior and the channel allocation in a small cell, neither generating nor storing the reachability set. In addition, we develop formulas for the main stationary performance and reliability indices as a function of the network parameters and the stationary probabilities, independently of the reachability set markings. Through numerical examples, we discuss the effect of retrials, breakdown disciplines and priority on performance.
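Once the infinitesimal generator is available, the stationary performance indices follow from the standard balance equations of the underlying continuous-time Markov chain. A minimal sketch, assuming a toy 3-state generator (idle, busy, broken) rather than the paper's small-cell model:

```python
import numpy as np

# Toy infinitesimal generator Q (rows sum to zero). States: idle, busy,
# broken; the rates are illustrative assumptions, not the paper's model.
Q = np.array([
    [-1.0,  1.0,  0.0],   # idle  -> busy   (call arrival)
    [ 2.0, -2.5,  0.5],   # busy  -> idle (service) or broken (failure)
    [ 1.5,  0.0, -1.5],   # broken -> idle  (repair)
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one redundant balance
# equation with the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)   # stationary distribution over the 3 states
```

The paper's contribution is precisely that `Q` can be written down directly from the network parameters, so this linear solve never requires enumerating or storing the reachability set.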
TL;DR: This paper shows how the post-transcriptional regulation mechanism mediated by miRNAs has been included in an enhanced BN-based model, and resorts to miR-7 in two Drosophila cell fate determination networks to verify the effectiveness of miRNA modeling in BNs.
Abstract: Gene regulatory networks (GRNs) model some of the mechanisms that regulate gene expression. Among the computational approaches available to model and study GRNs, Boolean networks (BNs) have emerged as very successful in improving the understanding of both the structural and dynamical properties of GRNs. Nevertheless, the most widely used BN-based models do not include any post-transcriptional regulation mechanism. Since miRNAs have been proven to play an important regulatory role, in this paper we show how the post-transcriptional regulation mechanism mediated by miRNAs has been included in an enhanced BN-based model. We resort to miR-7 in two Drosophila cell fate determination networks to verify the effectiveness of miRNA modeling in BNs, by implementing it in our tool for the analysis of Boolean networks.
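The basic idea of adding a miRNA to a Boolean network can be sketched as a synchronous BN in which the miRNA node represses its target post-transcriptionally: the target is ON only if its gene is transcribed AND the miRNA is OFF. The node names and rules below are illustrative, not the Drosophila miR-7 circuits studied in the paper.

```python
# Each node's next state is a Boolean function of the current state.
rules = {
    "signal": lambda s: s["signal"],                  # external input, held fixed
    "gene":   lambda s: s["signal"],                  # transcription of target gene
    "mir7":   lambda s: s["signal"],                  # miRNA induced by the signal
    "target": lambda s: s["gene"] and not s["mir7"],  # post-transcriptional repression
}

def step(state):
    """One synchronous update: every node applies its rule simultaneously."""
    return {node: bool(rule(state)) for node, rule in rules.items()}

state = {"signal": True, "gene": False, "mir7": False, "target": False}
for _ in range(3):
    state = step(state)
# With the signal ON, mir7 switches ON and keeps the target OFF even though
# the target gene is transcribed.
```

Removing the `not s["mir7"]` clause recovers a classic BN with no post-transcriptional layer, which is the gap the enhanced model is meant to fill.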