
Showing papers by "Center for Discrete Mathematics and Theoretical Computer Science published in 2015"


Journal ArticleDOI
TL;DR: It is proved that the complex-valued projection neural network is globally stable and convergent to the optimal solution of constrained convex optimization problems of real functions with complex variables.
Abstract: In this paper, we present a complex-valued projection neural network for solving constrained convex optimization problems of real functions with complex variables, as an extension of real-valued projection neural networks. Theoretically, by developing results on complex-valued optimization techniques, we prove that the complex-valued projection neural network is globally stable and convergent to the optimal solution. The obtained results are established entirely in the complex domain and thus significantly generalize existing results on real-valued projection neural networks. Numerical simulations are presented to confirm the theoretical results and the effectiveness of the proposed complex-valued projection neural network.
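The projection dynamics analyzed in the paper can be illustrated in a much-simplified, discrete-time form for a single complex variable. The objective, constraint set, and step size below are illustrative choices, not the paper's; its network is continuous-time and far more general.

```python
# Hedged sketch: a discrete-time analogue of a complex-valued projection
# dynamical system for min |z - c|^2 subject to |z| <= 1.
def project_disk(z):
    """Project a complex point onto the closed unit disk."""
    m = abs(z)
    return z if m <= 1.0 else z / m

def projection_dynamics(c, z0=0j, alpha=0.5, steps=200):
    # The Wirtinger gradient of f(z) = |z - c|^2 with respect to conj(z)
    # is proportional to (z - c), so the update is a projected gradient step.
    z = z0
    for _ in range(steps):
        z = project_disk(z - alpha * (z - c))
    return z

# The optimum is the projection of c onto the unit disk:
z_star = projection_dynamics(3 + 4j)   # |3+4j| = 5, so optimum is (3+4j)/5
```

The fixed points of this iteration are exactly the constrained minimizers, which is the discrete shadow of the stability result proved in the paper.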

79 citations


Journal ArticleDOI
TL;DR: Modelling shows that negative outcomes are 4·5 times worse for measles, 2·2 times worse for chickenpox, and 5·8 times worse for rubella than would be expected in a pre-vaccine era in which the average age at infection would have been lower.
Abstract: Summary Background Childhood vaccination remains the focus of heated public debate. Parents struggle to understand the potential risks associated with vaccination, but both parents and physicians assume that they understand the risks associated with infection. This study was done to characterise how modern vaccination practices have altered patient risks from infection. Methods In this modelling study, we use mathematical analysis to explore how modern-era vaccination practices have changed the risks of severe outcomes for some infections by changing the landscape for disease transmission. We show these effects using published data from outbreaks in the USA for measles, chickenpox, and rubella. Risk estimation was the main outcome of this study. Findings Our calculations show that negative outcomes are 4·5 times worse for measles, 2·2 times worse for chickenpox, and 5·8 times worse for rubella than would be expected in a pre-vaccine era in which the average age at infection would have been lower. Interpretation As vaccination makes preventable illness rarer, for some diseases, it also increases the expected severity of each case. Because estimates of case risks rely on data for severity generated during a pre-vaccine era, they underestimate negative outcomes in the modern post-vaccine epidemiological landscape. Physicians and parents should understand when making decisions about their children's health and safety that remaining unvaccinated in a predominantly vaccine-protected community exposes their children to the most severe possible outcomes for many preventable diseases. Funding None.
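The mechanism described here — vaccination shifting the average age at infection upward among the cases that remain — can be sketched with the textbook endemic-equilibrium approximation A ≈ L / (R_eff − 1). This is a standard illustration, not the study's actual model, and the R0, life expectancy, and coverage values below are hypothetical.

```python
def avg_age_at_infection(R0, life_expectancy=75.0, coverage=0.0):
    """Textbook endemic-equilibrium approximation A ~ L / (R_eff - 1),
    where R_eff = R0 * (1 - vaccination coverage). Illustrative only."""
    R_eff = R0 * (1.0 - coverage)
    if R_eff <= 1.0:
        raise ValueError("infection cannot persist endemically (R_eff <= 1)")
    return life_expectancy / (R_eff - 1.0)

# A measles-like R0 of 15: without vaccination infection strikes the young;
# under imperfect coverage, the average age of the remaining cases rises,
# which for some diseases means more severe outcomes per case.
a_pre  = avg_age_at_infection(15.0)                # about 5.4 years
a_post = avg_age_at_infection(15.0, coverage=0.8)  # about 37.5 years
```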

58 citations


Journal ArticleDOI
TL;DR: Theoretically, it is proved that the proposed complex-valued neural dynamical approach is globally stable and convergent to the optimal solution, and significantly generalizes the real-valued nonlinear Lagrange network completely in the complex domain.

53 citations


Journal ArticleDOI
TL;DR: This approach provides a tool that can be used by all managers to provide testable hypotheses regarding the occurrence of ER in declining populations, suggest empirical studies to better parameterize the population genetics and conservation-relevant vital rates, and identify the DIER period during which management strategies will be most effective for species conservation.
Abstract: Ecological factors generally affect population viability on rapid time scales. Traditional population viability analyses (PVA) therefore focus on alleviating ecological pressures, discounting potential evolutionary impacts on individual phenotypes. Recent studies of evolutionary rescue (ER) focus on cases in which severe, environmentally induced population bottlenecks trigger a rapid evolutionary response that can potentially reverse demographic threats. ER models have focused on shifting genetics and resulting population recovery, but no one has explored how to incorporate those findings into PVA. We integrated ER into PVA to identify the critical decision interval for evolutionary rescue (DIER) under which targeted conservation action should be applied to buffer populations undergoing ER against extinction from stochastic events and to determine the most appropriate vital rate to target to promote population recovery. We applied this model to little brown bats (Myotis lucifugus) affected by white-nose syndrome (WNS), a fungal disease causing massive declines in several North American bat populations. Under the ER scenario, the model predicted that the DIER period for little brown bats was within 11 years of initial WNS emergence, after which they stabilized at a positive growth rate (λ = 1.05). By comparing our model results with population trajectories of multiple infected hibernacula across the WNS range, we concluded that ER is a potential explanation of observed little brown bat population trajectories across multiple hibernacula within the affected range. Our approach provides a tool that can be used by all managers to provide testable hypotheses regarding the occurrence of ER in declining populations, suggest empirical studies to better parameterize the population genetics and conservation-relevant vital rates, and identify the DIER period during which management strategies will be most effective for species conservation.
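The shape of the trajectory described here — steep decline until evolutionary rescue takes hold, then stabilization at a positive growth rate — can be sketched with a toy deterministic projection. This is illustrative only: the decline rate, starting population, and quasi-extinction threshold are hypothetical, and the paper's model is a structured, stochastic PVA, not this two-phase geometric growth.

```python
def project_population(n0, lam_decline, lam_recover, switch_year, years, threshold):
    """Toy projection: decline at lam_decline per year until evolutionary
    rescue takes hold at switch_year, then grow at lam_recover. Returns the
    trajectory and whether it ever falls below a quasi-extinction threshold."""
    traj = [float(n0)]
    for t in range(1, years + 1):
        lam = lam_decline if t <= switch_year else lam_recover
        traj.append(traj[-1] * lam)
    return traj, min(traj) < threshold

# WNS-like scenario: steep decline for 11 years, then lambda = 1.05 as in
# the paper's stabilized growth rate; other numbers are made up.
traj, at_risk = project_population(10000, 0.7, 1.05, 11, 30, 100)
```

The point of the DIER concept is visible even in this caricature: the population bottoms out at the switch year, so that is the window in which stochastic events are most dangerous and management action matters most.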

50 citations


Journal ArticleDOI
TL;DR: Traditional probability theory and a “less traditional” computational approach are applied to the case where permutations are drawn from a set of pattern avoiders, producing many empirical moments and mixed moments; the data suggest that some random variables are not asymptotically normal in this setting.
Abstract: We study statistical properties of the random variables Xσ(π), the number of occurrences of the pattern σ in the permutation π. We present two contrasting approaches to this problem: traditional probability theory and the “less traditional” computational approach. Through the perspective of the first one, we prove that for any pair of patterns σ and τ , the random variables Xσ and Xτ are jointly asymptotically normal (when the permutation is chosen from Sn). From the other perspective, we develop algorithms that can show asymptotic normality and joint asymptotic normality (up to a point) and derive explicit formulas for quite a few moments and mixed moments empirically, yet rigorously. The computational approach can also be extended to the case where permutations are drawn from a set of pattern avoiders to produce many empirical moments and mixed moments. This data suggests that some random variables are not asymptotically normal in this setting.
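The random variable X_σ(π) has a direct brute-force implementation, useful for reproducing small empirical moments of the kind computed in the paper (the paper's own algorithms are symbolic and far more efficient; this is only a sketch):

```python
from itertools import combinations

def pattern_occurrences(sigma, pi):
    """Count occurrences of the pattern sigma in the permutation pi:
    index subsets whose values are order-isomorphic to sigma.
    Assumes sigma and pi have distinct entries; brute force, small n only."""
    k = len(sigma)
    rank = {v: i for i, v in enumerate(sorted(sigma))}
    target = tuple(rank[v] for v in sigma)
    count = 0
    for idx in combinations(range(len(pi)), k):
        vals = [pi[i] for i in idx]
        r = {v: i for i, v in enumerate(sorted(vals))}
        if tuple(r[v] for v in vals) == target:
            count += 1
    return count

# X_21(pi) counts inversions: pi = (3,1,2) has the inversions (3,1) and (3,2).
inv = pattern_occurrences((2, 1), (3, 1, 2))   # 2
```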

45 citations


Journal ArticleDOI
TL;DR: The two proposed frequency-domain estimators are maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators, respectively, which significantly generalize two single-channel optimal frequency-domain estimators of the magnitude-squared spectrum.

12 citations


Journal ArticleDOI
TL;DR: This work proposes a novel discrete artificial bee colony algorithm for constructing an obstacle-avoiding rectilinear Steiner tree and employs a modified classic heuristic as the encoder that can produce a feasible solution.
Abstract: The obstacle-avoiding rectilinear Steiner minimal tree (OARSMT) problem is a fundamental problem in very large-scale integrated circuit physical design and can be reduced to the Steiner tree problem in graphs (GSTP), which can be solved by using three types of common methods: classic heuristics, local search algorithms, or computational intelligence algorithms. However, classic heuristics yield poor solution quality; local search algorithms easily become trapped in local optima; and existing computational intelligence algorithms search ineffectively on this problem. In order to improve the solution quality, we propose a novel discrete artificial bee colony algorithm for constructing an obstacle-avoiding rectilinear Steiner tree. We first generate the escape graph for the OARSMT problem. Then, we search for a near-optimal solution consisting of some edges of the escape graph by using the discrete ABC algorithm. We apply a key-node neighborhood configuration for the local search strategy and introduce two local search operators. We then naturally use a key-node-based encoding scheme for representing the feasible solution and obtain a tight searching scope. We employ a modified classic heuristic as the encoder that can produce a feasible solution. Experiments conducted on both general GSTP and very large-scale integrated circuit instances reveal the superior performance of the proposed method in terms of solution quality compared with state-of-the-art algorithms.

11 citations



Journal ArticleDOI
TL;DR: An improvement of the penalty decomposition (PD) method is proposed for the sparse optimization problem; it embeds the AIHT method into the PD method while avoiding the disadvantages of both.

8 citations


Journal ArticleDOI
TL;DR: Comparisons of experimental results on the International Symposium on Physical Design (ISPD) 2005 and 2006 benchmarks show that the global placement method is promising.
Abstract: The common objective of very large-scale integration (VLSI) placement problem is to minimize the total wirelength, which is calculated by the total half-perimeter wirelength (HPWL). Since the HPWL is not differentiable, various differentiable wirelength approximation functions have been proposed in analytical placement methods. In this paper, we reformulate the HPWL as an $l_{1}$ -norm model of the wirelength function, which is exact but nonsmooth. Based on the $l_{1}$ -norm wirelength model and exact calculation of overlapping areas between cells and bins, a nonsmooth optimization model is proposed for the VLSI global placement problem, and a subgradient method is proposed for solving the nonsmooth optimization problem. Moreover, local convergence of the subgradient method is proved under some suitable conditions. In addition, two enhanced techniques, i.e., an adaptive parameter to control the step size and a cautious strategy for increasing the penalty parameter, are also used in the nonsmooth optimization method. In order to make the placement method scalable, a multilevel framework is adopted. In the clustering stage, the best choice clustering algorithm is modified according to the $l_{1}$ -norm wirelength model to cluster the cells, and the nonsmooth optimization method is recursively used in the declustering stage. Comparisons of experimental results on the International Symposium on Physical Design (ISPD) 2005 and 2006 benchmarks show that the global placement method is promising.
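The exact HPWL objective that the l1-norm model reformulates is straightforward to compute: for each net, half the perimeter of the bounding box of its pins. A minimal sketch (the nonsmooth subgradient optimization itself is not reproduced; the net list and positions are made up):

```python
def hpwl(nets, pos):
    """Exact half-perimeter wirelength. nets: list of nets, each a list of
    cell names; pos: dict mapping cell name -> (x, y). This is the exact
    but nonsmooth objective the paper's l1-norm model captures."""
    total = 0.0
    for net in nets:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

pos = {"a": (0, 0), "b": (2, 1), "c": (1, 3)}
w = hpwl([["a", "b"], ["a", "b", "c"]], pos)   # (2 + 1) + (2 + 3) = 8
```

The max/min terms are why HPWL is not differentiable: the objective is piecewise linear in the cell coordinates, which is exactly what motivates the paper's subgradient approach in place of smooth approximations.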

8 citations


Journal ArticleDOI
TL;DR: It is proved that, at the proximate level, size alone is insufficient to explain the tendency for a pair of prospective copulants to elect the male sexual role by virtue of the disparity in the energetic costs of eggs and sperm.
Abstract: We investigate the existence and stability of sexual strategies (sequential hermaphrodite, successive hermaphrodite or gonochore) at a proximate level. To accomplish this, we constructed and analyzed a general dynamical game model structured by size and sex. Our main objective is to study how costs of changing sex and of sexual competition should shape the sexual behavior of a hermaphrodite. We prove that, at the proximate level, size alone is insufficient to explain the tendency for a pair of prospective copulants to elect the male sexual role by virtue of the disparity in the energetic costs of eggs and sperm. In fact, we show that the stability of sequential vs. simultaneous hermaphrodite depends on sex change costs, while the stability of protandrous vs. protogynous strategies depends on competition cost.

Proceedings ArticleDOI
01 Aug 2015
TL;DR: This paper describes a fully nonparametric, scalable, distributed detection algorithm for intrusion/anomaly detection in networks and discusses how this approach addresses a growing trend in distributed attacks while also providing solutions to problems commonly associated with distributed detection systems.
Abstract: In this paper, we describe a fully nonparametric, scalable, distributed detection algorithm for intrusion/anomaly detection in networks. We discuss how this approach addresses a growing trend in distributed attacks while also providing solutions to problems commonly associated with distributed detection systems. We explore the impacts on detection performance from network topology, from the defined range of distributed communication for each node, and from involving only a small percentage of the total nodes in the network in the distributed detection communication. We evaluate our algorithm using a software-based testing implementation, and demonstrate up to 20% improvement in detection capability over parallel, isolated anomaly detectors for both stealthy port scans and DDoS attacks.

Posted Content
TL;DR: The hardness of k-EvenSet is used to show that, for any constant d, unless k-Clique can be solved in n^{o(k)} time, there is no poly(m, n)2^{o(sqrt{k})} time algorithm to decide whether a given set of m points in F_2^n admits a nontrivial k-sparse homogeneous linear form vanishing on all points, or instead fools every nontrivial degree-d polynomial supported on at most k variables.
Abstract: This work investigates the hardness of computing sparse solutions to systems of linear equations over F_2. Consider the k-EvenSet problem: given a homogeneous system of linear equations over F_2 on n variables, decide if there exists a nonzero solution of Hamming weight at most k (i.e. a k-sparse solution). While there is a simple O(n^{k/2})-time algorithm for it, establishing fixed parameter intractability for k-EvenSet has been a notorious open problem. Towards this goal, we show that unless k-Clique can be solved in n^{o(k)} time, k-EvenSet has no poly(n)2^{o(sqrt{k})} time algorithm and no polynomial time algorithm when k = (log n)^{2+eta} for any eta > 0. Our work also shows that the non-homogeneous generalization of the problem -- which we call k-VectorSum -- is W[1]-hard on instances where the number of equations is O(k log n), improving on previous reductions which produced Omega(n) equations. We also show that for any constant eps > 0, given a system of O(exp(O(k))log n) linear equations, it is W[1]-hard to decide if there is a k-sparse linear form satisfying all the equations or if every function on at most k variables (k-junta) satisfies at most a (1/2 + eps)-fraction of the equations. In the setting of computational learning, this shows hardness of approximate non-proper learning of k-parities. In a similar vein, we use the hardness of k-EvenSet to show that, for any constant d, unless k-Clique can be solved in n^{o(k)} time, there is no poly(m, n)2^{o(sqrt{k})} time algorithm to decide whether a given set of m points in F_2^n satisfies: (i) there exists a non-trivial k-sparse homogeneous linear form evaluating to 0 on all the points, or (ii) any non-trivial degree d polynomial P supported on at most k variables evaluates to zero on approximately a Pr_{F_2^n}[P(z) = 0] fraction of the points, i.e., P is fooled by the set of points.
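For intuition, k-EvenSet can be decided naively by enumerating candidate supports. The sketch below is exponential in k and is only an illustration; the O(n^{k/2}) algorithm mentioned in the abstract uses a meet-in-the-middle refinement instead.

```python
from itertools import combinations

def has_sparse_solution(rows, n, k):
    """Naive search for a nonzero solution of Hamming weight <= k to a
    homogeneous linear system over F_2. rows: coefficient vectors as 0/1
    lists of length n. A support S works iff every row has an even number
    of ones on S."""
    for w in range(1, k + 1):
        for support in combinations(range(n), w):
            if all(sum(row[j] for j in support) % 2 == 0 for row in rows):
                return True
    return False

# x1 + x2 = 0 and x2 + x3 = 0 over F_2: the only nonzero solution is (1,1,1).
rows = [[1, 1, 0], [0, 1, 1]]
ok2 = has_sparse_solution(rows, 3, 2)   # False: no nonzero solution of weight <= 2
ok3 = has_sparse_solution(rows, 3, 3)   # True: (1,1,1)
```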

01 Jan 2015
TL;DR: A model is presented that allows the USCG to assess novel boat allocations in which stations may “share” boats, reducing the total number of boats required and hence the total cost.
Abstract: The United States Coast Guard (USCG) presently allocates resources, such as small boats, to fixed locations (stations) for periods of one year or longer. We present a model that allows the USCG to assess novel boat allocations in which stations may “share” boats, thus cutting down on the total number of boats required. A shared boat is assigned to one particular station for each sub-portion of the year. A key innovation of the analysis is to characterize the problem in terms of “sharing paths,” rather than modeling individual boats. The model uses Mixed Integer Programming to capture a subtle set of constraints and business rules such as boat and mission requirements for individual stations, preferred amount of usage (mission hours) for the boats, and limitations on how much sharing should be allowed. The model finds a boat assignment plan (with sharing) that can minimize either the number of boats or the total cost, subject to these various constraints. The scale of operations that is meaningful to the USCG permits adequate solutions to be found on a large laptop computer.

01 Jan 2015
TL;DR: This poster presents a probabilistic model of the response of the immune system to repeated exposure to carbon dioxide and shows clear trends in the number of immune checkpoints and in the patterns of response to carbon monoxide.
Abstract: ID: 1750 Paul Kantor, Christie Nelson, Fred Roberts, William M. Pottenger CCICADA, Rutgers University Piscataway, New Jersey

Proceedings ArticleDOI
01 Nov 2015
TL;DR: In the solver, an alternating direction method combined with a proximal point algorithm is used to optimize the VLSI placement problem according to its features, and local convergence of the PADM is proved under some conditions.
Abstract: In nonlinear global placement methods, the very large scale integration (VLSI) global placement problem is formulated as a nonlinear mathematical programming problem that contains the wirelength objective and the non-overlapping constraints, and it is usually solved by the penalty function method. In this paper, instead of using the penalty function method, a solver based on a proximal alternating direction method (PADM) is proposed for the VLSI global placement problem. In the solver, an alternating direction method combined with a proximal point algorithm is used to optimize the VLSI placement problem according to its features. Moreover, local convergence of the PADM is proved under some conditions. In addition, the multilevel framework is adopted to make the PADM-based solver scalable. Preliminary numerical results on the IBM standard cell benchmarks show that the proposed solver is promising.

Journal ArticleDOI
TL;DR: The proposed memetic algorithm uses a local search procedure and a new crossover operator, based on the encoding characteristic of the max-cut problem, to generate new offspring, and takes into account both the solution quality and the diversity of the population to control population updating.
Abstract: Given an undirected graph G = (V, E) with a set V of vertices and a set E of edges with weights, the max-cut problem consists of partitioning all vertices into two independent sets such that the sum of the weights of the edges between different sets is maximised. The max-cut problem is an NP-hard problem. An efficient memetic algorithm is proposed in this paper for the problem. The proposed memetic algorithm uses a local search procedure and a new crossover operator based on the encoding characteristic of the max-cut problem to generate new offspring. Then the algorithm uses a function, which takes into account both the solution quality and the diversity of the population, to control the population updating. Experiments were performed on three sets of benchmark instances of size up to 10,000 vertices. Experimental results and comparisons demonstrate the effectiveness of the proposed algorithm in both solution quality and computational time.
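A one-flip local search of the kind such memetic algorithms typically embed can be sketched as follows; this is a simplified illustration, not the paper's exact procedure.

```python
def maxcut_local_search(n, edges, side):
    """One-flip local search for max-cut: repeatedly move a vertex to the
    other side while that strictly improves the cut. edges: list of
    (u, v, weight) triples; side: initial 0/1 labels, modified in place."""
    def cut_value():
        return sum(w for u, v, w in edges if side[u] != side[v])

    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Flipping v turns each incident same-side edge into a cut edge
            # (+w) and each incident cut edge into a same-side edge (-w).
            gain = sum(w if side[a] == side[b] else -w
                       for a, b, w in edges if v in (a, b))
            if gain > 0:
                side[v] ^= 1
                improved = True
    return side, cut_value()

# Triangle with unit weights: the best cut isolates one vertex, value 2.
side, val = maxcut_local_search(3, [(0, 1, 1), (1, 2, 1), (0, 2, 1)], [0, 0, 0])
```

A memetic algorithm wraps a move like this inside a population loop, using crossover to escape the local optima where the one-flip search alone gets stuck.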