
Showing papers in "Journal of Mathematical Modelling and Algorithms in 2005"


Journal ArticleDOI
TL;DR: An accurate and computable definition of network vulnerability is introduced which is directly connected with the network's topology; its basic properties are analyzed and its relationship with other parameters of the network is discussed.
Abstract: The study of the security and stability of complex networks plays a central role in reducing the risk and consequences of attacks or dysfunctions of any type. The concept of vulnerability helps to measure the response of complex networks subjected to attacks on vertices and edges, and it allows one to spot the critical components of a network in order to improve its security. We introduce an accurate and computable definition of network vulnerability which is directly connected with its topology, and we analyze its basic properties. We discuss the relationship of the vulnerability with other parameters of the network and illustrate this with some examples.
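The paper's precise vulnerability measure is not quoted in the abstract and is not reproduced here. As a purely illustrative sketch of the kind of topology-based quantity involved, the snippet below scores each vertex by the relative drop in global efficiency caused by its removal (an assumption of ours, not the authors' definition), using the networkx library.

```python
# Illustrative sketch only -- not the authors' definition of vulnerability.
# Each vertex is scored by the relative drop in global efficiency that its
# removal causes; the highest-scoring vertex is the most "critical" one.
import networkx as nx

def vertex_criticality(G):
    base = nx.global_efficiency(G)
    scores = {}
    for v in list(G.nodes()):
        H = G.copy()
        H.remove_node(v)
        scores[v] = (base - nx.global_efficiency(H)) / base if base > 0 else 0.0
    return scores

if __name__ == "__main__":
    G = nx.barbell_graph(5, 1)        # two cliques joined through one bridge vertex
    scores = vertex_criticality(G)
    worst = max(scores, key=scores.get)
    print(f"most critical vertex: {worst}  (relative efficiency loss {scores[worst]:.3f})")
```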

70 citations


Journal ArticleDOI
TL;DR: This paper presents a primal-dual interior-point algorithm for SDO problems based on a simple kernel function which was first presented at the Proceedings of Industrial Symposium and Optimization Day, Australia, November 2002; the function is not self-regular.
Abstract: Interior-point methods (IPMs) for semidefinite optimization (SDO) have been studied intensively, due to their polynomial complexity and practical efficiency. Recently, J. Peng et al. introduced so-called self-regular kernel (and barrier) functions and designed primal-dual interior-point algorithms based on self-regular proximities for linear optimization (LO) problems. They also extended the approach for LO to SDO. In this paper we present a primal-dual interior-point algorithm for SDO problems based on a simple kernel function which was first presented in the Proceedings of Industrial Symposium and Optimization Day, Australia, November 2002; the function is not self-regular. We derive the complexity analysis for algorithms based on this kernel function, both with large- and small-updates. The complexity bounds are \(\mathrm{O}(qn\log\frac{n}{\epsilon})\) and \(\mathrm{O}(q^{2}\sqrt{n}\log\frac{n}{\epsilon})\), respectively, which are as good as those in the linear case.
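The paper's simple (non-self-regular) kernel function is not quoted in the abstract, so it is not reproduced here. For context only, kernel-based primal-dual IPMs measure proximity to the central path through a barrier built from a univariate kernel; with the classical logarithmic kernel this reads

$$
\psi(t)=\frac{t^{2}-1}{2}-\ln t,\qquad
\Psi(V)=\sum_{i=1}^{n}\psi\bigl(\lambda_{i}(V)\bigr),
$$

where the eigenvalues \(\lambda_{i}(V)\) of the scaled matrix \(V\) all equal 1 on the central path, so that \(\Psi(V)=0\) exactly there; the kernel studied in the paper replaces \(\psi\) in this construction.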

54 citations


Journal ArticleDOI
TL;DR: This paper constructs a second-order mimetic discretization using Castillo and Grone’s approach and compares it to other second- order discretizations by applying them to an elliptic boundary value problem in one dimension.
Abstract: Recent investigations by Castillo and Grone have led to a new method for constructing mimetic discretizations of divergence and gradient operators. Their technique, which employs a matrix formulation to incorporate mimetic constraints, is capable of producing approximations whose order at a grid boundary is equal to that in the grid’s interior. In this paper, we construct a second-order mimetic discretization using Castillo and Grone’s approach and compare it to other second-order discretizations by applying them to an elliptic boundary value problem in one dimension. A detailed perturbation analysis is provided to offer some insight into the two discretizations yielding the best numerical results in the study.
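As a point of reference for the comparison described above (and not the Castillo–Grone mimetic scheme itself), the following sketch solves the model problem \(-u''=f\) on \((0,1)\) with homogeneous Dirichlet data using standard second-order central differences, the kind of baseline discretization such mimetic methods are compared against.

```python
# Baseline sketch (not the Castillo-Grone mimetic scheme): standard second-order
# central differences for -u''(x) = f(x) on (0,1) with u(0) = u(1) = 0.
import numpy as np

def solve_bvp(f, n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                       # interior nodes
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return x, np.linalg.solve(A, f(x))

if __name__ == "__main__":
    f = lambda x: np.pi**2 * np.sin(np.pi * x)           # exact solution sin(pi x)
    for n in (16, 32, 64):
        x, u = solve_bvp(f, n)
        err = np.max(np.abs(u - np.sin(np.pi * x)))
        print(f"n = {n:3d}   max error = {err:.3e}")     # errors decay like h^2
```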

40 citations


Journal ArticleDOI
TL;DR: This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics of the biological immune system and is applied to the task of film recommendation by Collaborative Filtering.
Abstract: The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by Collaborative Filtering (CF). Natural evolution and in particular the immune system have not been designed for classical optimisation. However, for this problem, we are not interested in finding a single optimum. Rather we intend to identify a sub-set of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: Antigen-antibody interaction for matching and idiotypic antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.
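The authors' AIS is not specified in the abstract, so the sketch below is only a toy illustration of the two ingredients mentioned: antigen-antibody affinity (here, Pearson correlation between a target user and candidate users) and idiotypic antibody-antibody suppression that keeps the selected pool diverse. All names and parameter choices are ours.

```python
# Toy illustration only -- not the authors' AIS. Affinity = Pearson correlation
# on co-rated items; an "idiotypic" penalty down-weights candidates that are
# too similar to antibodies already kept, encouraging a diverse pool.
import numpy as np

def affinity(a, b):
    mask = (a > 0) & (b > 0)                      # items rated by both users
    if mask.sum() < 2 or a[mask].std() == 0 or b[mask].std() == 0:
        return 0.0
    return float(np.corrcoef(a[mask], b[mask])[0, 1])

def select_pool(target, users, pool_size=3, suppression=0.5):
    selected, candidates = [], list(range(len(users)))
    while candidates and len(selected) < pool_size:
        def score(i):
            redundancy = max((affinity(users[i], users[j]) for j in selected), default=0.0)
            return affinity(target, users[i]) - suppression * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ratings = rng.integers(0, 6, size=(20, 30))   # 0 = unrated, 1..5 = rating
    print("selected antibodies:", select_pool(ratings[0], ratings[1:]))
```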

35 citations


Journal ArticleDOI
TL;DR: Enhancements to two exact algorithms from the literature for the vertex P-center problem are proposed: modifications that reduce the number of ILP iterations needed to find the optimal solution, and tighter initial lower and upper bounds combined with a more appropriate binary search to reduce the number of subproblems to be solved; the ideas are tested on the OR-Lib and TSP-Lib problem sets.
Abstract: Enhancements to two exact algorithms from the literature to solve the vertex P-center problem are proposed. In the first approach modifications of some steps are introduced to reduce the number of ILP iterations needed to find the optimal solution. In the second approach a simple enhancement which uses tighter initial lower and upper bounds, and a more appropriate binary search method are proposed to reduce the number of subproblems to be solved. These ideas are tested on two well known sets of problems from the literature (i.e., OR-Lib and TSP-Lib problems) with encouraging results.
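The exact methods discussed in the paper solve ILP (set-covering) subproblems at each step; the sketch below only shows the binary-search-over-radii skeleton that the second approach refines, with a greedy cover standing in for the exact feasibility subproblem. The greedy stand-in makes the result an upper bound rather than the optimum.

```python
# Skeleton sketch of the binary-search idea for the vertex p-center problem.
# The paper's methods solve each covering subproblem exactly (ILP); here a
# greedy cover is only a placeholder feasibility check, so the returned radius
# is an upper bound on the optimal one.
import numpy as np

def covers_within(dist, radius, p):
    """Greedy heuristic: can p centres cover all vertices within 'radius'?"""
    n = len(dist)
    uncovered = set(range(n))
    for _ in range(p):
        if not uncovered:
            break
        best = max(range(n), key=lambda c: sum(dist[c][v] <= radius for v in uncovered))
        uncovered -= {v for v in uncovered if dist[best][v] <= radius}
    return not uncovered

def p_center_radius(dist, p):
    radii = sorted(set(dist.flatten()))           # candidate radii = distance values
    lo, hi = 0, len(radii) - 1
    while lo < hi:                                # binary search on the sorted radii
        mid = (lo + hi) // 2
        if covers_within(dist, radii[mid], p):
            hi = mid
        else:
            lo = mid + 1
    return radii[lo]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.random((15, 2))
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print("upper bound on optimal radius:", round(float(p_center_radius(dist, p=3)), 4))
```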

33 citations


Journal ArticleDOI
TL;DR: It is shown that for a nonlinear dynamical system, namely, the Belousov–Zhabotinskii chemical reaction, a sequential measurement disturbs the equilibrium of the system and causes it to enter an undesired state, whereas if several measurements are performed in parallel the system remains in a stable state.
Abstract: In certain physical systems measuring one variable of the system modifies the values of any number of other variables unpredictably. We show in this paper that under these conditions a parallel approach succeeds in carrying out the required measurement while a sequential approach fails. Specifically, we show that for a nonlinear dynamical system, namely, the Belousov–Zhabotinskii chemical reaction, measurement disturbs the equilibrium of the system and causes it to enter into an undesired state. If, however, several measurements are performed in parallel, the effect of perturbations seems to cancel out and the system remains in a stable state.

17 citations


Journal ArticleDOI
TL;DR: The ℛ-Sausage heuristic described in this paper employs a decomposition technique to explore the point set, and the computational complexity of the heuristic is shown to be O(N²).
Abstract: Given a set V of N ≥ 4 vertices in a metric space, how can one interconnect them, with the possible use of a set S of M vertices not in V but in the same metric space, so that the cumulative cost of the interconnections between all the vertices is a minimum? When one uses the Euclidean metric to compute these interconnections, this is referred to as the Euclidean Steiner Minimal Tree Problem, which is NP-hard. The Steiner Ratio ρ of a vertex set is the length of this Steiner Minimal Tree (SMT) divided by the length of the Minimum Spanning Tree (MST), and is a popular and tractable measure of solution quality. The ℛ-Sausage heuristic described in this paper employs a decomposition technique to explore the point set. The fixed vertices of the set are connected to a set of centroid vertices of Delaunay tetrahedra. The path topology is preserved as far as possible, together with a cycle-prevention rule, where junctions and deviations from the ℛ-Sausage structure occur. Furthermore, repeated sweeps with different root vertices are accommodated. The computational complexity of the heuristic is shown to be O(N²). Experimental results with thousands of vertices are presented. Comparisons with an exponential running time Branch and Bound algorithm are also shown.
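The ℛ-Sausage heuristic itself is not reproduced here; the sketch below only illustrates the quality measure named above, computing the MST length of a terminal set with SciPy and the Steiner ratio once an SMT (or heuristic tree) length is available.

```python
# Sketch of the quality measure only (the R-Sausage heuristic is not reproduced):
# the Steiner ratio is the SMT length divided by the MST length of the terminals.
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(points):
    d = distance_matrix(points, points)
    return float(minimum_spanning_tree(d).sum())

def steiner_ratio(smt_length, points):
    return smt_length / mst_length(points)

if __name__ == "__main__":
    # Equilateral triangle with unit sides: MST = 2, SMT = sqrt(3) via one Steiner point.
    tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    print("MST length   :", round(mst_length(tri), 4))                 # 2.0
    print("Steiner ratio:", round(steiner_ratio(np.sqrt(3), tri), 4))  # ~0.866
```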

15 citations


Journal ArticleDOI
TL;DR: A fast and accurate algorithm for calculating the length of the streamlines, based on a 26-voxel neighborhood method, is presented; the results show that the 3D PDE approach provides a better result than the Euclidean distance.
Abstract: Previous work has shown the importance of thickness measurement in vivo using three-dimensional magnetic resonance imaging (3D MRI). Thickness is defined as the length of trajectories, also called streamlines, which follow the gradient of the solution of the Laplace equation solved between the inner and the outer surfaces of the tissue using Dirichlet conditions.
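In compact form, the thickness definition described above (under the usual Laplace-based formulation, written in our notation) is

$$
\Delta u = 0 \ \text{between the surfaces},\qquad u|_{S_{\mathrm{inner}}}=0,\quad u|_{S_{\mathrm{outer}}}=1,
$$
$$
T(\mathbf{x})=\int_{\gamma_{\mathbf{x}}}\mathrm{d}s,\qquad
\dot{\gamma}_{\mathbf{x}}=\frac{\nabla u}{\|\nabla u\|},
$$

where \(\gamma_{\mathbf{x}}\) is the streamline through \(\mathbf{x}\) that follows the normalized gradient of \(u\) from the inner to the outer surface.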

14 citations


Journal ArticleDOI
TL;DR: Results from numerical simulations for side load generation in rocket nozzles are validated against related data from analytical models that are presently used for rocket engine nozzle design activities to draw conclusions about the validity of the physical model assumptions that the analytical design methods are based on.
Abstract: The present paper validates results from numerical simulations for side load generation in rocket nozzles against related data from analytical models that are presently used for rocket engine nozzle design activities.

12 citations


Journal ArticleDOI
TL;DR: An algorithm for the solution of the one-dimensional atom–dipole interactions problem for a uniform distribution that is based on the combination of the full multigrid and cell multipole methods is presented.
Abstract: We present details of an algorithm for the solution of the one-dimensional atom–dipole interaction problem for a uniform distribution, based on the combination of the full multigrid and cell multipole methods. The rate of convergence of this technique is faster than that of current iterative methods used to solve this problem. It can be extended to a three-dimensional algorithm that will allow polarizable interactions to be included in simulations for a more accurate description of the simulated systems.

9 citations


Journal ArticleDOI
TL;DR: The problem of minimizing the makespan on a batch processing machine, in which jobs are not all compatible, is considered and the NP-hardness of the general problem is established.
Abstract: We consider the problem of minimizing the makespan on a batch processing machine, in which jobs are not all compatible. Only compatible jobs can be included into the same batch. This relation of compatibility is represented by a split graph. All jobs are available at the same date. The capacity of the batch processing machine is finite or infinite. The processing time of a batch is given by the processing time of the longest job in the batch. We establish the NP-hardness of the general problem and present polynomial algorithms for several special cases.
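The paper's polynomial algorithms and NP-hardness proof are not reproduced; the sketch below only encodes the objective as stated: a batching is feasible when every batch fits the capacity and contains pairwise compatible jobs, and its makespan is the sum over batches of the longest processing time in each batch.

```python
# Sketch of the objective only (not the paper's algorithms): the makespan of a
# batching is the sum over batches of the longest processing time in the batch,
# and a batch is feasible only if all its jobs are pairwise compatible.
from itertools import combinations

def makespan(batches, proc_time, compatible, capacity=None):
    total = 0
    for batch in batches:
        if capacity is not None and len(batch) > capacity:
            raise ValueError("batch exceeds machine capacity")
        for a, b in combinations(batch, 2):
            if not compatible(a, b):
                raise ValueError(f"jobs {a} and {b} are not compatible")
        total += max(proc_time[j] for j in batch)
    return total

if __name__ == "__main__":
    proc_time = {1: 4, 2: 7, 3: 2, 4: 5}
    # toy compatibility relation: jobs 1-3 form a clique, job 4 only with job 1
    pairs = {(1, 2), (1, 3), (2, 3), (1, 4)}
    compatible = lambda a, b: (a, b) in pairs or (b, a) in pairs
    print(makespan([[1, 2, 3], [4]], proc_time, compatible))   # 7 + 5 = 12
```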

Journal ArticleDOI
TL;DR: A numerical solution method for Dirichlet boundary value problems in terms of Ito type stochastic differential equations is developed, similar to a recently published approach, but differs primarily in the handling of the boundary.
Abstract: Using an equivalent expression for solutions of second order Dirichlet problems in terms of Ito type stochastic differential equations, we develop a numerical solution method for Dirichlet boundary value problems. It is possible with this idea to solve for solution values of a partial differential equation at isolated points without having to construct any kind of mesh and without knowing approximations for the solution at any other points. Our method is similar to a recently published approach, but differs primarily in the handling of the boundary. Some numerical examples are presented, applying these techniques to model Laplace and Poisson equations on the unit disk.
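A minimal sketch of the probabilistic idea (not the authors' boundary treatment, which is the paper's main contribution): for the Laplace equation on the unit disk, the solution at a point equals the expected boundary value at the exit point of a Brownian motion started there, simulated here with Euler–Maruyama steps.

```python
# Minimal sketch, assuming the classical probabilistic representation: for the
# Laplace equation on the unit disk with boundary data g, u(x) = E[g(B_tau)],
# where B is a Brownian motion started at x and tau its exit time from the disk.
import numpy as np

def laplace_at_point(x0, g, n_paths=2000, dt=1e-3, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        while np.dot(x, x) < 1.0:                      # walk until the path exits
            x += np.sqrt(dt) * rng.standard_normal(2)
        total += g(x / np.linalg.norm(x))              # project exit point onto boundary
    return total / n_paths

if __name__ == "__main__":
    g = lambda p: p[0] ** 2 - p[1] ** 2                # harmonic, so u(x, y) = x^2 - y^2
    x0 = (0.3, 0.2)
    est = laplace_at_point(x0, g)                      # rough Monte Carlo estimate
    print(f"estimate {est:.3f}   exact {x0[0]**2 - x0[1]**2:.3f}")
```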

Journal ArticleDOI
TL;DR: This work develops a computational approach to investigate electromagnetic fields in biological cells exposed to nanopulses, using the finite difference time domain method (FDTD) and a perfectly matched layer to eliminate reflections from the boundary.
Abstract: Short duration, fast rise time ultra-wideband (UWB) electromagnetic pulses (“nanopulses”) are generated by numerous electronic devices in use today. Moreover, many new technologies involving nanopulses are under development and expected to become widely available soon. Study of nanopulse bioeffects is needed to probe their useful range in possible biomedical and biotechnological applications, and to ensure human safety. In this work we develop a computational approach to investigate electromagnetic fields in biological cells exposed to nanopulses. The simulation is based on a z-transformation of the electric displacement and a second-order Taylor approximation of a Cole–Cole expression for the frequency dependence of the dielectric properties of tissues, useful for converting from the frequency domain to the time domain. Maxwell’s equations are then calculated using the finite difference time domain method (FDTD), coupled with a perfectly matched layer to eliminate reflections from the boundary. Numerical results for a biological cell model are presented and discussed.
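For reference, the Cole–Cole dispersion model referred to above is commonly written (in its standard multi-pole form; the paper's fitted tissue parameters are not reproduced here) as

$$
\hat{\varepsilon}(\omega)=\varepsilon_{\infty}
+\sum_{n}\frac{\Delta\varepsilon_{n}}{1+(j\omega\tau_{n})^{1-\alpha_{n}}}
+\frac{\sigma_{i}}{j\omega\varepsilon_{0}},
$$

whose fractional exponents have no simple time-domain counterpart, which is why a z-transform and a second-order Taylor approximation are applied before the FDTD update.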

Journal ArticleDOI
TL;DR: Experimental results give the exact k closest pairs for all the large high-dimensional synthetic and real data sets considered and show that the pruning of the search space is effective.
Abstract: An approximate algorithm to efficiently solve the k-Closest-Pairs problem on large high-dimensional data sets is presented. The algorithm runs, for a suitable choice of the input parameters, in $\mathcal{O}(d^{2}nk)$ time, where d is the dimensionality and n is the number of points of the input data set, and requires linear space in the input size. It performs at most d+1 iterations. At each iteration a shifted version of the data set is sequentially scanned according to the order induced on it by the Hilbert space filling curve and points whose contribution to the solution has already been analyzed are detected and eliminated. The pruning is lossless, in fact the remaining points along with the approximate solution found can be used for the computation of the exact solution. If the data set is entirely pruned, then the algorithm returns the exact solution. We prove that the pruning ability of the algorithm is related to the nearest neighbor distance distribution of the data set and show that there exists a class of data sets for which the method, augmented with a final step that applies an exact method to the reduced data set, calculates the exact solution with the same time requirements. Although we are able to guarantee a $\mathcal{O}(d^{1+{1}/{t}})$ approximation to the solution, where t∈{1,2,. . .,∞} identifies the Minkowski (Lt) metric of interest, experimental results give the exact k closest pairs for all the large high-dimensional synthetic and real data sets considered and show that the pruning of the search space is effective. We present a thorough scaling analysis of the algorithm for in-memory and disk-resident data sets showing that the algorithm scales well in both cases.
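The Hilbert-curve approximation algorithm itself is not reproduced here. As a reference point, the exact k-Closest-Pairs solution that such approximations are checked against can be computed by brute force on small inputs:

```python
# Exact brute-force reference for the k-Closest-Pairs problem (the paper's
# Hilbert-curve approximation is not reproduced). O(n^2) distance evaluations,
# so it is only usable on small data sets.
import heapq
import numpy as np

def k_closest_pairs(points, k, t=2):
    """Return the k pairs (dist, i, j) with smallest Minkowski L_t distance."""
    n = len(points)
    pairs = ((float(np.linalg.norm(points[i] - points[j], ord=t)), i, j)
             for i in range(n) for j in range(i + 1, n))
    return heapq.nsmallest(k, pairs)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.random((200, 8))                 # 200 points in 8 dimensions
    for d, i, j in k_closest_pairs(pts, k=3):
        print(f"pair ({i:3d},{j:3d})  distance {d:.4f}")
```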

Journal ArticleDOI
TL;DR: This paper presents the synthesis of fragmented multi-block data sets and the implementation of an accurate path line integration scheme in order to speed up path line computations and describes a combination of this algorithm with a highly efficient visualization approach of large amounts of particle traces, thus considerably improving interactivity when exploring large scale CFD data sets.
Abstract: The use of Virtual Reality (VR) techniques for the investigation of complex flow phenomena offers distinct advantages in comparison to conventional visualization techniques. Especially for unsteady flows, VR methodology provides an intuitive approach for the exploration of simulated fluid flows. However, the visualization of Computational Fluid Dynamics (CFD) data is often too time-consuming to be carried out in real-time, and thus violates essential constraints concerning real-time interaction and visualization. To overcome this obstacle, we make use of the fact that typically a multi-block approach is employed for domain decomposition, and we use the corresponding data structures for the computation of path lines and for parallelization. In this paper, we present the synthesis of fragmented multi-block data sets and our implementation of an accurate path line integration scheme in order to speed up path line computations. We report on the results of our efforts and describe a combination of this algorithm with a highly efficient visualization approach of large amounts of particle traces, thus considerably improving interactivity when exploring large scale CFD data sets.
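The multi-block data handling, interpolation and parallelisation are the paper's contribution and are not reproduced; the sketch below only shows the core numerical step, path line integration in an unsteady velocity field with classical fourth-order Runge–Kutta, using an analytic stand-in for the interpolated CFD field.

```python
# Minimal sketch of path line integration in an unsteady flow with classical RK4.
# The velocity function is an analytic stand-in for an interpolated CFD field.
import numpy as np

def velocity(x, t):
    # stand-in field: a rotation whose strength oscillates in time
    return np.array([-x[1], x[0]]) * (1.0 + 0.5 * np.sin(t))

def pathline(x0, t0, t1, dt):
    xs, x, t = [np.array(x0, float)], np.array(x0, float), t0
    while t < t1:
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(x + dt * k3, t + dt)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        xs.append(x)
    return np.array(xs)

if __name__ == "__main__":
    trace = pathline(x0=(1.0, 0.0), t0=0.0, t1=6.28, dt=0.01)
    print("trace points:", len(trace), "  final position:", np.round(trace[-1], 3))
```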

Journal ArticleDOI
TL;DR: A technique is presented to generate random data in m-dimensional space such that their convex (or positive) hull contains a specific percentage of extreme points (or vectors), determined by the analyst or generator of the data.
Abstract: This paper presents a technique to generate random data in m-dimensional space such that their convex (or positive) hull contains a specific percentage of extreme points (or vectors), determined by the analyst or generator of the data. The methodology strives to remove symmetry, regularity, or predictability, which may be desirable in data used to test or compare algorithms or heuristics. There are numerous applications for this methodology.
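The generation technique itself is not described in the abstract and is not reproduced here; the sketch below only verifies the controlled property, i.e. the fraction of points that are extreme points (vertices) of the convex hull, using SciPy.

```python
# Sketch of the controlled property only (the paper's generation technique is
# not reproduced): the fraction of data points that are vertices of the hull.
import numpy as np
from scipy.spatial import ConvexHull

def extreme_point_fraction(points):
    hull = ConvexHull(points)
    return len(hull.vertices) / len(points)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    uniform_cube = rng.random((500, 3))                 # points inside the unit cube
    on_sphere = rng.standard_normal((500, 3))
    on_sphere /= np.linalg.norm(on_sphere, axis=1, keepdims=True)   # all extreme
    print("unit cube  :", extreme_point_fraction(uniform_cube))
    print("unit sphere:", extreme_point_fraction(on_sphere))        # close to 1.0
```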

Journal ArticleDOI
TL;DR: Approximate solution of optimization tasks that can be formalized as minimization of error functionals over admissible sets computable by variable-basis functions (i.e., linear combinations of n-tuples of functions from a given basis) is investigated.
Abstract: Approximate solution of optimization tasks that can be formalized as minimization of error functionals over admissible sets computable by variable-basis functions (i.e., linear combinations of n-tuples of functions from a given basis) is investigated. Estimates of rates of decrease of infima of such functionals over sets formed by linear combinations of an increasing number n of elements of the bases are derived, for the case in which such admissible sets consist of Boolean functions. The results are applied to target sets of various types (e.g., sets containing functions representable either by linear combinations of a “small” number of generalized parities or by “small” decision trees, and sets satisfying smoothness conditions defined in terms of Sobolev norms).

Journal ArticleDOI
TL;DR: A new heuristic, a hybrid evolutionary heuristic, is presented and shown to perform much better than the two existing ones; for example, the overall average errors of the existing heuristics are 1.012 and 2.042, while the error of the proposed hybrid evolutionary heuristic is 0.154.
Abstract: This research addresses the scheduling problem of multimedia object requests, which is an important problem in information systems, in particular, for World Wide Web applications. The performance measure considered is the variance of response time which is crucial as end users expect fair treatment to their service requests. This problem is known to be NP-hard. The literature survey indicates that two heuristics have been proposed to solve the problem. In this paper, we present a new heuristic, a hybrid evolutionary heuristic, which is shown to perform much better than the two existing ones, e.g., the overall average errors of the existing ones are 1.012 and 2.042 while the error of the proposed hybrid evolutionary heuristic is 0.154.
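The hybrid evolutionary heuristic is not reproduced here; the sketch below only encodes the performance measure, taking the response time of a request to be its completion time when all requests are available at time zero (our simplifying assumption) and returning the variance of these times for a given service order.

```python
# Sketch of the performance measure only (the hybrid evolutionary heuristic is
# not reproduced): response time of a request = completion time under the
# assumption that all requests arrive at time zero; objective = their variance.
import numpy as np

def response_time_variance(order, service_times):
    completion = np.cumsum([service_times[j] for j in order])
    return float(np.var(completion))

if __name__ == "__main__":
    service_times = {0: 3.0, 1: 8.0, 2: 1.0, 3: 5.0}
    for order in ([2, 0, 3, 1], [1, 3, 0, 2]):
        print(order, "->", round(response_time_variance(order, service_times), 3))
```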

Journal ArticleDOI
TL;DR: An object-oriented programming framework developed by the authors is utilized in the implementation of parallel finite element software for modeling of the resin transfer molding manufacturing process.
Abstract: The use of object-oriented programming techniques in the development of parallel, finite element analysis software enhances code reuse and increases efficiency during application development. In this paper, an object-oriented programming framework developed by the authors is utilized in the implementation of parallel finite element software for modeling of the resin transfer molding manufacturing process. The motivation for choosing the resin transfer molding finite element application and implementing it with the object-oriented framework is that it was originally developed and parallelized in a functional programming paradigm thus offering the possibility of direct comparisons. Discussion of the software development effort and performance results are presented and analyzed.

Journal ArticleDOI
TL;DR: The purpose of this paper is to describe the construction of a new family of stochastic programming test problems based on the work of Martel and Al-Nuaimi (1973), leading to an extension of their models for which their solution procedure does not apply.
Abstract: Ariyawansa and Felt (2001, 2004) have recently created a test problem collection for testing software for stochastic linear programs. This freely-available, web-based collection was originally created with 35 problem instances from 11 problem families representing a variety of application areas. The collection was created with plans for enriching it with problem instances based on different application areas from the research community. The work of Martel and Al-Nuaimi (1973) on manpower planning under uncertain demand represents an application area suitable for creating new problem instances to be added to the collection. The purpose of this paper is to describe the construction of a new family of stochastic programming test problems based on the work of Martel and Al-Nuaimi (1973). As part of our construction, we review the work of Martel and Al-Nuaimi (1973) leading to an extension of their models for which their solution procedure does not apply. The new test problems are based on this extension. We also present solutions to the test problems obtained using the software package CPA (2002) for stochastic programming developed by Ariyawansa, Felt and Sarich.

Journal ArticleDOI
TL;DR: This work presents a dynamic programming algorithm to solve EMCTKP and a heuristic, called Lazy Iterative Arrangement, which reuses previous EMCTKP solutions to solve new instances of the problem.
Abstract: The Tree Knapsack Problem (TKP) is a 0–1 integer programming problem where hierarchy constraints are enforced. If a node is selected for packing into the knapsack, all the ancestor nodes on the path from the root to the selected node are packed as well. One apparent application of this problem is the simplification of computer graphics models. Real applications also use alternative representations of the nodes or whole subtrees, called impostors, to provide simplified trees that are visually acceptable. To account for this simplification, we introduce a generalized TKP, called the Exclusive Multiple Choice Tree Knapsack Problem (EMCTKP). We present a dynamic programming algorithm to solve EMCTKP and a heuristic, called Lazy Iterative Arrangement, which reuses previous EMCTKP solutions to solve new instances of the problem. We show that this algorithm and heuristic significantly reduce the computation time of EMCTKP problems when changes in their parameters have spatial and temporal coherence. We also compare our algorithm with commercial integer programming solvers, and show that in our case the computation time grows linearly with the size of the problem tree and the available resources, while for generic IP solvers it is unpredictable and varies over a wide range of values.
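The EMCTKP algorithm and the Lazy Iterative Arrangement heuristic are not reproduced here. As a sketch of the underlying structure, the following dynamic program solves the basic TKP: nodes are visited in preorder, and at each node one either packs it (and may then continue into its subtree) or skips its entire subtree, so every packed node automatically has all of its ancestors packed.

```python
# Sketch of a dynamic program for the basic Tree Knapsack Problem (not the
# EMCTKP extension or the Lazy Iterative Arrangement heuristic). Preorder DP:
# pack the current node and move to the next preorder index, or skip the node
# together with its whole subtree; the hierarchy constraint holds by construction.
from functools import lru_cache

def tree_knapsack(children, weight, value, capacity, root=0):
    order, size = [], {}

    def dfs(v):                         # preorder listing and subtree sizes
        order.append(v)
        size[v] = 1
        for c in children.get(v, []):
            size[v] += dfs(c)
        return size[v]

    dfs(root)
    n = len(order)

    @lru_cache(maxsize=None)
    def best(i, cap):
        if i >= n:
            return 0
        v = order[i]
        res = best(i + size[v], cap)    # skip v and its entire subtree
        if weight[v] <= cap:            # pack v; ancestors are packed already
            res = max(res, value[v] + best(i + 1, cap - weight[v]))
        return res

    return best(0, capacity)

if __name__ == "__main__":
    children = {0: [1, 2], 1: [3], 2: [4, 5]}
    weight = {0: 1, 1: 2, 2: 2, 3: 4, 4: 3, 5: 1}
    value = {0: 1, 1: 6, 2: 4, 3: 9, 4: 5, 5: 2}
    print("best value:", tree_knapsack(children, weight, value, capacity=8))  # 16
```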

Journal ArticleDOI
TL;DR: The aim of this work is to study the behaviour of a carbon/epoxy post housed in a canine tooth after endodontic treatment in order to support the typical loads present during mastication.

Abstract: The aim of this work is to study the behaviour of a carbon/epoxy post housed in a canine tooth after endodontic treatment in order to support the typical loads present during mastication. The basic three-dimensional design of the dental piece, consisting of tooth plus post, was carried out with a parametric 3D design program. We study the stresses and displacements of the different elements of the dental piece under normal load conditions, and present the results and conclusions.

Journal ArticleDOI
TL;DR: Variographic techniques from spatial statistics are applied to the problem of model selection in local polynomial regression with multivariate data to permit selection of the kernel and smoothing matrix with less computational load and interpretation of the regularity of the regression function in different directions.
Abstract: In this work, we apply variographic techniques from spatial statistics to the problem of model selection in local polynomial regression with multivariate data. These techniques permit selection of the kernel and smoothing matrix with less computational load and interpretation of the regularity of the regression function in different directions. Moreover, they may represent the only feasible alternative for problems of a certain dimensionality.
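The model-selection procedure itself is not reproduced here; the sketch below only computes the basic variographic tool it relies on, an empirical semivariogram, i.e. half the average squared difference of responses for pairs of covariate points binned by distance (an isotropic version; the directional refinement mentioned above is omitted).

```python
# Sketch of the basic tool only (not the paper's selection procedure): an
# empirical isotropic semivariogram of the responses over covariate distance.
import numpy as np

def empirical_variogram(X, y, n_bins=10):
    diff2 = (y[:, None] - y[None, :]) ** 2
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(y), k=1)                  # each pair counted once
    d, g = dist[iu], 0.5 * diff2[iu]
    edges = np.linspace(0, d.max(), n_bins + 1)
    centers, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(g[mask].mean())
    return np.array(centers), np.array(gamma)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.random((300, 2))
    y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(300)
    for h, gm in zip(*empirical_variogram(X, y)):
        print(f"lag {h:.2f}   semivariance {gm:.3f}")
```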

Journal ArticleDOI
TL;DR: A new algorithm based on a new class of spatial estimators and an appropriate reconstruction operator is proposed to analyze the orientation errors of a celestial reference system.
Abstract: The aim of this paper is to analyze the orientation errors of a celestial reference system. These errors can be obtained by analyzing the differences between the observed and calculated positions for a set of selected minor planets. The classical methods do not work well if the sample is non-homogeneous on the region covered by the observations. In this paper a new algorithm based on a new class of spatial estimators and an appropriate reconstruction operator is proposed.

Journal ArticleDOI
TL;DR: In a linear regression framework, structural change models are proposed for the detection of abrupt changes in parameter values and were successfully applied to phase transition identification in cryogenic thermometry.
Abstract: In a linear regression framework, structural change models are proposed for the detection of abrupt changes in parameter values. Two models are discussed: 1) the pure structural change model, where all the components of the parameter \(\beta \in \mathbb{R}^{p,1}\) are allowed to change and are tested all together, and 2) the partial structural change model, where only some of the parameter β components might change. For an on-line implementation, a sliding window algorithm is introduced. The procedure was successfully applied to phase transition identification in cryogenic thermometry.
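As a minimal illustration of the sliding-window idea (not the authors' pure/partial structural change tests), the sketch below computes a Chow-type F statistic for a break at the middle of each window of a simple linear regression; a pronounced peak flags the abrupt parameter change.

```python
# Minimal sketch of the sliding-window idea: a Chow-type F statistic comparing a
# pooled fit with two half-window fits; the authors' exact tests are not reproduced.
import numpy as np

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def chow_statistic(X, y, split):
    p = X.shape[1]
    rss_pooled = rss(X, y)
    rss_split = rss(X[:split], y[:split]) + rss(X[split:], y[split:])
    dof = len(y) - 2 * p
    return ((rss_pooled - rss_split) / p) / (rss_split / dof)

def sliding_window_scan(X, y, window):
    stats = []
    for start in range(len(y) - window + 1):
        sl = slice(start, start + window)
        stats.append(chow_statistic(X[sl], y[sl], split=window // 2))
    return np.array(stats)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n = 200
    x = np.linspace(0, 1, n)
    X = np.column_stack([np.ones(n), x])
    slope = np.where(np.arange(n) < 120, 1.0, 3.0)     # slope jumps at index 120
    y = 0.5 + x * slope + 0.05 * rng.standard_normal(n)
    window = 40
    stats = sliding_window_scan(X, y, window)
    print("change suspected near index:", int(np.argmax(stats)) + window // 2)
```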

Journal ArticleDOI
TL;DR: Reduced chi-squared values of approximately unity indicate that both the analytical and numerical methods used for uncertainty estimation produce uncertainties of reasonable size, and that the size of the analytically determined uncertainties can represent the size of the “true” errors.
Abstract: We have used different multivariate analysis methods to estimate quantities in the fields of food control and atmospheric remote sensing. In order to estimate the uncertainties in these estimates we studied analytical as well as non-parametric numerical methods. The methods have been evaluated by comparison between the obtained results and independent sets of measurements. We present one test case from each field, including results, where these methods have been applied. For the food control test case, reduced chi-squared values \(\chi^{2}_{\nu}\) of approximately unity indicate that both the analytical and numerical methods used for uncertainty estimation produce uncertainties of reasonable size. In the atmospheric remote sensing test case, a \(\chi^{2}_{\nu} = 46\) indicated that the uncertainties from the numerical method were far too small, whereas a \(\chi^{2}_{\nu} = 1.5\) indicated that the size of the analytically determined uncertainties can represent the size of the “true” errors.
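For reference, the reduced chi-squared used above compares residuals with the estimated uncertainties; in generic notation,

$$
\chi^{2}_{\nu}=\frac{1}{\nu}\sum_{i=1}^{N}\frac{\bigl(y_{i}-\hat{y}_{i}\bigr)^{2}}{\sigma_{i}^{2}},
$$

where \(\nu\) is the number of degrees of freedom: values near unity indicate uncertainties \(\sigma_{i}\) of about the right size, while values well above unity indicate that the stated uncertainties are too small.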