
Showing papers by "Courant Institute of Mathematical Sciences published in 2001"


Journal ArticleDOI
TL;DR: It has long been assumed that sensory neurons are adapted to the statistical properties of the signals to which they are exposed, but recent developments in statistical modeling have enabled researchers to study more sophisticated statistical models for visual images, to validate these models empirically against large sets of data, and to begin experimentally testing the efficient coding hypothesis.
Abstract: It has long been assumed that sensory neurons are adapted, through both evolutionary and developmental processes, to the statistical properties of the signals to which they are exposed. Attneave (1954) and Barlow (1961) proposed that information theory could provide a link between environmental statistics and neural responses through the concept of coding efficiency. Recent developments in statistical modeling, along with powerful computational tools, have enabled researchers to study more sophisticated statistical models for visual images, to validate these models empirically against large sets of data, and to begin experimentally testing the efficient coding hypothesis for both individual neurons and populations of neurons.

2,280 citations


Journal ArticleDOI
TL;DR: In this paper, a modified version of the minimum compliance topology optimization problem is studied, where the direct dependence of the material properties on its pointwise density is replaced by a regularization of the density field by the mean of a convolution operator.
Abstract: In this article, a modified (‘filtered’) version of the minimum compliance topology optimization problem is studied. The direct dependence of the material properties on its pointwise density is replaced by a regularization of the density field by the mean of a convolution operator. In this setting it is possible to establish the existence of solutions. Moreover, convergence of an approximation by means of finite elements can be obtained. This is illustrated through some numerical experiments. The ‘filtering’ technique is also shown to cope with two important numerical problems in topology optimization, checkerboards and mesh dependent designs. Copyright © 2001 John Wiley & Sons, Ltd.
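The filtering idea in this abstract, replacing the pointwise density by a local convolution average, can be sketched in a few lines. The following is an illustrative 1-D sketch with a linear (cone-shaped) kernel; the function name, kernel choice, and discretization are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def filter_density(rho, rmin, dx=1.0):
    """Regularize a 1-D density field by a normalized convolution with a
    linear (cone) kernel of radius rmin, in the spirit of 'filtered'
    minimum compliance topology optimization."""
    n = len(rho)
    out = np.empty(n)
    for i in range(n):
        acc, wsum = 0.0, 0.0
        for j in range(n):
            w = max(0.0, rmin - abs(i - j) * dx)  # cone weight, zero beyond rmin
            acc += w * rho[j]
            wsum += w
        out[i] = acc / wsum  # normalized local average
    return out
```

Because the filter is a convex average, the filtered field stays within the range of the raw field, which is one reason checkerboard patterns (rapid 0/1 oscillations) cannot survive the regularization.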

920 citations


Book
01 Jan 2001
TL;DR: Topological approach: Finite dimensions Topological degree in Banach space Bifurcation theory Further topological methods Monotone operators and the min-max theorem Generalized implicit function theorems Bibliography as mentioned in this paper
Abstract: Topological approach: Finite dimensions Topological degree in Banach space Bifurcation theory Further topological methods Monotone operators and the min-max theorem Generalized implicit function theorems Bibliography.

905 citations


Journal ArticleDOI
TL;DR: Here the authors obtain P(d) asymptotically for all d ≤ n^(1/15), where n is the number of vertices, proving as a consequence that γ = 3.
Abstract: Recently, Barabasi and Albert [2] suggested modeling complex real-world networks such as the worldwide web as follows: consider a random graph process in which vertices are added to the graph one at a time and joined to a fixed number of earlier vertices, selected with probabilities proportional to their degrees. In [2] and, with Jeong, in [3], Barabasi and Albert suggested that after many steps the proportion P(d) of vertices with degree d should obey a power law P(d) ∝ d^(-γ). They obtained γ = 2.9 ± 0.1 by experiment and gave a simple heuristic argument suggesting that γ = 3. Here we obtain P(d) asymptotically for all d ≤ n^(1/15), where n is the number of vertices, proving as a consequence that γ = 3. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 18, 279-290, 2001
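The random graph process described above is easy to simulate. Below is a minimal sketch of preferential attachment with one edge per new vertex; the function name and the stub-list trick (listing each vertex once per unit of degree so that a uniform draw is degree-proportional) are illustrative choices, not the paper's construction.

```python
import random

def ba_degrees(n, seed=0):
    """Preferential-attachment process: each new vertex joins one earlier
    vertex chosen with probability proportional to its current degree.
    Returns the degree of every vertex."""
    random.seed(seed)
    # Start from a single edge 0-1; 'stubs' holds each vertex once per
    # unit of degree, so random.choice(stubs) is a degree-biased draw.
    stubs = [0, 1]
    degree = {0: 1, 1: 1}
    for v in range(2, n):
        u = random.choice(stubs)
        degree[v] = 1
        degree[u] += 1
        stubs += [v, u]
    return degree
```

A histogram of the resulting degrees has a heavy tail consistent with the P(d) ∝ d^(-3) law proved in the paper, though convergence is slow for small n.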

891 citations


Journal ArticleDOI
TL;DR: It is shown that this decomposition, with parameters optimized for the statistics of a generic ensemble of natural images or sounds, provides a good characterization of the nonlinear response properties of typical neurons in primary visual cortex or auditory nerve, respectively.
Abstract: We describe a form of nonlinear decomposition that is well-suited for efficient encoding of natural signals. Signals are initially decomposed using a bank of linear filters. Each filter response is then rectified and divided by a weighted sum of rectified responses of neighboring filters. We show that this decomposition, with parameters optimized for the statistics of a generic ensemble of natural images or sounds, provides a good characterization of the nonlinear response properties of typical neurons in primary visual cortex or auditory nerve, respectively. These results suggest that nonlinear response properties of sensory neurons are not an accident of biological implementation, but have an important functional role.
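The rectify-and-divide step described in this abstract (divisive normalization) can be written compactly. This is a hedged sketch: the function name, half-squaring rectifier, and constant σ are illustrative assumptions; the paper optimizes the weights for natural-signal statistics, which is not done here.

```python
import numpy as np

def divisive_normalization(responses, weights, sigma=0.1):
    """Rectify each linear filter response, then divide by a weighted sum
    of the rectified responses of neighboring filters plus a constant."""
    r = np.square(responses)           # half-squaring rectification
    return r / (sigma**2 + weights @ r)  # divisive pooling over neighbors
```

The division makes the output gain depend on overall stimulus contrast: scaling the input up by a large factor increases each output by much less than that factor, the compressive behavior observed in V1 neurons.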

848 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: In this article, the authors describe an attempt at the construction of such algorithms and its implementation; using a combination of data structures, application-specific caching policies, and application-specific query processing, the system can handle 600 events per second for a typical workload containing 6 million subscriptions.
Abstract: Publish/Subscribe is the paradigm in which users express long-term interests (“subscriptions”) and some agent “publishes” events (e.g., offers). The job of Publish/Subscribe software is to send events to the owners of subscriptions satisfied by those events. For example, a user subscription may consist of an interest in an airplane of a certain type, not to exceed a certain price. A published event may consist of an offer of an airplane with certain properties including price. Each subscription consists of a conjunction of (attribute, comparison operator, value) predicates. A subscription closely resembles a trigger in that it is a long-lived conditional query associated with an action (usually, informing the subscriber). However, it is less general than a trigger so novel data structures and implementations may enable the creation of more scalable, high performance publish/subscribe systems. This paper describes an attempt at the construction of such algorithms and its implementation. Using a combination of data structures, application-specific caching policies, and application-specific query processing our system can handle 600 events per second for a typical workload containing 6 million subscriptions.
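The subscription model described here, a conjunction of (attribute, comparison operator, value) predicates checked against each event, can be sketched directly. This is the naive per-subscription check, not the paper's scalable data structures; the function and operator table are illustrative assumptions.

```python
import operator

# Comparison operators a predicate may use.
OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}

def match(event, subscriptions):
    """Return the indices of subscriptions satisfied by the event.
    Each subscription is a list of (attribute, op, value) predicates,
    all of which must hold (a conjunction)."""
    hits = []
    for i, preds in enumerate(subscriptions):
        if all(attr in event and OPS[op](event[attr], val)
               for attr, op, val in preds):
            hits.append(i)
    return hits
```

A production system would instead index predicates so that each event touches only the predicates it can satisfy; the naive loop above is linear in the total number of predicates, which is exactly what the paper's caching and query-processing machinery avoids.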

587 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic mathematical strategy for stochastic climate modeling is developed, and some of the new phenomena in the resulting equations for the climate variables alone are explored, such as the emergence of both unstable linear Langevin models and the need to incorporate both suitable nonlinear effects and multiplicative noise under appropriate circumstances.
Abstract: There has been a recent burst of activity in the atmosphere-ocean sciences community in utilizing stable linear Langevin stochastic models for the unresolved degrees of freedom in stochastic climate prediction. Here a systematic mathematical strategy for stochastic climate modeling is developed, and some of the new phenomena in the resulting equations for the climate variables alone are explored. The new phenomena include the emergence of both unstable linear Langevin stochastic models for the climate mean variables and the need to incorporate both suitable nonlinear effects and multiplicative noise in stochastic models under appropriate circumstances. All of these phenomena are derived from a systematic self-consistent mathematical framework for eliminating the unresolved stochastic modes that is mathematically rigorous in a suitable asymptotic limit. The theory is illustrated for general quadratically nonlinear equations where the explicit nature of the stochastic climate modeling procedure can be elucidated. The feasibility of the approach is demonstrated for the truncated equations for barotropic flow with topography. Explicit concrete examples with the new phenomena are presented for the stochastically forced three-mode interaction equations. The conjecture of Smith and Waleffe [Phys. Fluids 11 (1999), 1608-1622] for stochastically forced three-wave resonant equations in a suitable regime of damping and forcing is solved as a byproduct of the approach. Examples of idealized climate models arising from the highly inhomogeneous equilibrium statistical mechanics for geophysical flows are also utilized to demonstrate self-consistency of the mathematical approach with the predictions of equilibrium statistical mechanics. In particular, for these examples, the reduced stochastic modeling procedure for the climate variables alone is designed to reproduce both the climate mean and the energy spectrum of the climate variables. © 2001 John Wiley & Sons, Inc.

403 citations


Journal ArticleDOI
TL;DR: This work develops an efficient technique for estimation, and demonstrates in a denoising application that it preserves natural image structure (e.g., edges) and generates global yet structured image models, thereby providing a unified basis for a variety of applications in signal and image processing.

226 citations



Journal ArticleDOI
TL;DR: In this article, the existence of attractors with a single direction of instability and certain controlled behaviors was shown to exist for strongly dissipative maps with d = 2 phase-dimensions.
Abstract: We give simple conditions that guarantee, for strongly dissipative maps, the existence of strange attractors with a single direction of instability and certain controlled behaviors. Only the d = 2 case is treated in this paper, although our approach is by no means limited to two phase-dimensions. We develop a dynamical picture for the attractors in this class, proving they have many of the statistical properties associated with chaos: positive Lyapunov exponents, existence of SRB measures, and exponential decay of correlations. Other results include the geometry of fractal critical sets, nonuniform hyperbolic behavior, symbolic coding of orbits, and formulas for topological entropy.

202 citations


Journal ArticleDOI
TL;DR: 4-8 subdivision is introduced, a new scheme that generalizes the four-directional box spline of class C^4 to surfaces of arbitrary topological type, and it is shown that the subdivision surfaces produced by the 4-8 scheme are C^4-continuous almost everywhere, except at extraordinary vertices, where they are C^1-continuous.

Journal ArticleDOI
TL;DR: Several types of validation operators are presented, including rule grouping, filtering, browsing, and redundant rule elimination operators, that allow a human expert to validate many individual rules at a time and to validate a significant part of all the initially discovered rules in an acceptable time period.
Abstract: In many e-commerce applications, ranging from dynamic Web content presentation, to personalized ad targeting, to individual recommendations to the customers, it is important to build personalized profiles of individual users from their transactional histories. These profiles constitute models of individual user behavior and can be specified with sets of rules learned from user transactional histories using various data mining techniques. Since many discovered rules can be spurious, irrelevant, or trivial, one of the main problems is how to perform post-analysis of the discovered rules, i.e., how to validate user profiles by separating “good” rules from the “bad.” This validation process should be done with an explicit participation of the human expert. However, complications may arise because there can be very large numbers of rules discovered in the applications that deal with many users, and the expert cannot perform the validation on a rule-by-rule basis in a reasonable period of time. This paper presents a framework for building behavioral profiles of individual users. It also introduces a new approach to expert-driven validation of a very large number of rules pertaining to these users. In particular, it presents several types of validation operators, including rule grouping, filtering, browsing, and redundant rule elimination operators, that allow a human expert to validate many individual rules at a time. By iteratively applying such operators, the human expert can validate a significant part of all the initially discovered rules in an acceptable time period. These validation operators were implemented as a part of a one-to-one profiling system. The paper also presents a case study of using this system for validating individual user rules discovered in a marketing application.

Proceedings Article
26 Mar 2001
TL;DR: The CANS architecture is described, and a case study involving a shrink-wrapped client application in a dynamically changing network environment where CANS improves overall user experience is described.
Abstract: Ubiquitous access to sophisticated internet services from diverse end devices across heterogeneous networks requires the injection of additional functionality into the network to handle protocol conversion, data transcoding, and in general bridge disparate network portions. Several researchers have proposed infrastructures for injecting such functionality; however, many challenges remain before these can be widely deployed. CANS is an application-level infrastructure for injecting application-specific components into the network that focuses on three such challenges: (a) efficient and dynamic composition of individual components; (b) distributed adaptation of injected components in response to system conditions; and (c) support for legacy applications and services. The CANS network view comprises applications, stateful services, and data paths built from mobile soft-state objects called drivers. Both services and data paths can be dynamically created and reconfigured: a planning and event propagation model assists in distributed adaptation, and a flexible type-based composition model dictates how new services and drivers are integrated with existing components. Legacy components plug into CANS using an interception layer that virtualizes network bindings and a delegation model. This paper describes the CANS architecture, and a case study involving a shrink-wrapped client application in a dynamically changing network environment where CANS improves overall user experience.

Journal ArticleDOI
TL;DR: Quantum effects on proton transfer through barriers of several kcal/mol in hydrogen bonds are investigated theoretically in malonaldehyde and the quantum nature of the heavy molecular frame atoms is found to substantially enhance proton tunneling.
Abstract: Quantum effects on proton transfer through barriers of several kcal/mol in hydrogen bonds are investigated theoretically in malonaldehyde. Such "intermediate-barrier" proton transfer processes play a key role in the catalytic activity of some enzymes. Tunneling is shown to be significant in this reaction even at room temperature. More importantly, the quantum nature of the heavy molecular frame atoms is found to substantially enhance proton tunneling. These findings have far-reaching implications for common modeling strategies of proton transfer in complex systems such as biomolecules.

Journal ArticleDOI
TL;DR: Numerical results show that the balancing Neumann‐Neumann method for incompressible Stokes equations discretized with mixed finite or spectral elements with discontinuous pressures is quite fast and fully consistent with the theory.
Abstract: Balancing Neumann-Neumann methods are introduced and studied for incompressible Stokes equations discretized with mixed finite or spectral elements with discontinuous pressures. After decomposing the original domain of the problem into nonoverlapping subdomains, the interior unknowns of each subdomain problem, which are the interior velocity components and all except the constant pressure component, are implicitly eliminated. The resulting saddle point Schur complement is solved with a Krylov space method with a balancing Neumann-Neumann preconditioner based on the solution of a coarse Stokes problem with a few degrees of freedom per subdomain and on the solution of local Stokes problems with natural (Neumann) velocity and essential boundary conditions on the subdomains. This preconditioner is of hybrid form, in which the coarse problem is treated multiplicatively while the local problems are treated additively. The condition number of the preconditioned operator is independent of the number of subdomains and is bounded from above by the product of the square of the logarithm of the local number of unknowns in each subdomain and the inverse of the inf-sup constants of the discrete problem and of the coarse subproblem. Numerical results show that the method is quite fast; they are also fully consistent with the theory.

Journal ArticleDOI
TL;DR: Nonlinear simulations of the flow-induced deformation of a high-aspect-ratio elastic filament show that in dilute solutions, flexibility of the fibers causes both increased shear thinning as well as significant nonzero first-normal-stress differences.
Abstract: Using slender-body hydrodynamics, we study the flow-induced deformation of a high-aspect-ratio elastic filament. For a filament of zero rest curvature rotating in a viscous linear shear flow, our model predicts a bifurcation to shape instabilities due to compression by the flow, in agreement with experimental observations. Further, nonlinear simulations of this shape instability show that in dilute solutions, flexibility of the fibers causes both increased shear thinning as well as significant nonzero first-normal-stress differences. These stress differences are positive for small-to-moderate deformations, but negative for large-amplitude flexing of the fibers.

Journal ArticleDOI
TL;DR: In this paper, a simplified intermediate model for analyzing and parameterizing convectively coupled tropical waves is introduced, which has two baroclinic modes of vertical structure: a direct heating mode and a stratiform mode.
Abstract: A simplified intermediate model for analyzing and parameterizing convectively coupled tropical waves is introduced here. This model has two baroclinic modes of vertical structure: a direct heating mode and a stratiform mode. The key essential parameter in these models is the area fraction occupied by deep convection, σc. The unstable convectively coupled waves that emerge from perturbation of a radiative convective equilibrium are discussed in detail through linearized stability analysis. Without any mean flow, for an overall cooling rate of 1 K day^(-1), as the area fraction parameter increases from σc = 0.0010 to σc = 0.0014 the waves pass from a regime with stable moist convective damping to a regime of "stratiform" instability with convectively coupled waves propagating at speeds of roughly 15 m s^(-1); instabilities for a band of wavelengths in the supercluster regime, O(1000)-O(2000) km; and a vertical structure with a "wave tilt" where the temperature structure in the upper troposphere lags behind that in the lower troposphere. Thus, these convectively coupled waves in the model reproduce several key features of convectively coupled waves in the troposphere processed from recent observational data by Wheeler and Kiladis. As the parameter σc is increased further to values such as σc = 0.01, the band of unstable waves increases and spreads toward a mesoscale wavelength of O(100) km while the same wave structure and quantitative features mentioned above are retained for O(1000) km. A detailed analysis of the temporal development of instability of these convectively coupled waves is presented here. In the first stage of instability, a high convective available potential energy (CAPE) region generates deep convection and a front-to-rear ascending flow with enhanced vertical shear in a stratiform wake region.
Thus, these intermediate models may be useful prototypes for studying the parameterization of upscale convective momentum transport due to organized convection. In the second stage of instability, detailed analysis of the CAPE budget establishes that the effects of the second baroclinic mode in the stratiform wake produce new CAPE, which regenerates the first half of the wave cycle. Finally, since these convectively coupled stratiform waves do not require a barotropic mean flow, a barotropic mean flow, which alters the surface fluxes, is added to study its effect on their stability. These effects of a barotropic mean flow are secondary; an easterly mean flow enhances instability of the eastward-propagating convectively coupled waves and diminishes the instability of the westward-propagating waves through a wind-induced surface heat exchange mechanism.

Journal ArticleDOI
TL;DR: A large-scale neuronal network model of layer 4Cα in V1 of the macaque cortex that is based on, and constrained by, realistic cortical anatomy and physiology is developed and shows that neurons in the model respond like simple cells.
Abstract: Simple cells in the striate cortex respond to visual stimuli in an approximately linear manner, although the LGN input to the striate cortex, and the cortical network itself, are highly nonlinear. Although simple cells are vital for visual perception, there has been no satisfactory explanation of how they are produced in the cortex. To examine this question, we have developed a large-scale neuronal network model of layer 4Cα in V1 of the macaque cortex that is based on, and constrained by, realistic cortical anatomy and physiology. This paper has two aims: (1) to show that neurons in the model respond like simple cells, and (2) to identify how the model generates this linearized response in a nonlinear network. Each neuron in the model receives nonlinear excitation from the lateral geniculate nucleus (LGN). The cells of the model receive strong (nonlinear) lateral inhibition from other neurons in the model cortex. Mathematical analysis of the dependence of membrane potential on synaptic conductances, and computer simulations, reveal that the nonlinearity of corticocortical inhibition cancels the nonlinear excitatory input from the LGN. This interaction produces linearized responses that agree with both extracellular and intracellular measurements. The model correctly accounts for experimental results about the time course of simple cell responses and also generates testable predictions about variation in linearity with position in the cortex, and the effect on the linearity of signal summation, caused by unbalancing the relative strengths of excitation and inhibition pharmacologically or with extrinsic current.

Journal ArticleDOI
TL;DR: It is demonstrated that the duration of an episode of activity depends on the network excitability at the beginning of the episode, and a positive correlation between episode duration and the preceding inter-episode interval, but not with the following interval, suggesting that episode onset is stochastic whereas episode termination occurs deterministically, when network excitable falls to a fixed level.
Abstract: Spontaneous episodic activity occurs throughout the developing nervous system because immature circuits are hyperexcitable. It is not fully understood how the temporal pattern of this activity is regulated. Here, we study the role of activity-dependent depression of network excitability in the generation and regulation of spontaneous activity in the embryonic chick spinal cord. We demonstrate that the duration of an episode of activity depends on the network excitability at the beginning of the episode. We found a positive correlation between episode duration and the preceding inter-episode interval, but not with the following interval, suggesting that episode onset is stochastic whereas episode termination occurs deterministically, when network excitability falls to a fixed level. This is true over a wide range of developmental stages and under blockade of glutamatergic or GABAergic/glycinergic synapses. We also demonstrate that during glutamatergic blockade the remaining part of the network becomes more excitable, compensating for the loss of glutamatergic synapses and allowing spontaneous activity to recover. This compensatory increase in the excitability of the remaining network reflects the progressive increase in synaptic efficacy that occurs in the absence of activity. Therefore, the mechanism responsible for the episodic nature of the activity automatically renders this activity robust to network disruptions. The results are presented using the framework of our computational model of spontaneous activity in the developing cord. Specifically, we show how they follow logically from a bistable network with a slow activity-dependent depression switching periodically between the active and inactive states.

Journal ArticleDOI
TL;DR: In this article, the results of first-principles molecular-dynamics simulations of molten silicates, based on the density functional formalism, are compared to those of a silica melt.
Abstract: We present the results of first-principles molecular-dynamics simulations of molten silicates, based on the density functional formalism. In particular, the structural properties of a calcium aluminosilicate (CaO-Al2O3-SiO2) melt are compared to those of a silica melt. The local structures of the two melts are in good agreement with the experimental understanding of these systems. In the calcium aluminosilicate melt, the number of nonbridging oxygens found is in excess of the number obtained from a simple stoichiometric prediction. In addition, the aluminum avoidance principle, which states that links between AlO4 tetrahedra are absent or rare, is found to be violated. Defects such as two-fold rings and five-fold coordinated silicon atoms are found in comparable proportions in both liquids. However, in the calcium aluminosilicate melt, a larger proportion of oxygen atoms are three-fold coordinated. In addition, five-fold coordinated aluminum atoms are observed. Finally, evidence of creation and annihilation of nonbridging oxygens is observed, with these oxygens being mostly connected to Si tetrahedra.

Journal ArticleDOI
TL;DR: It is shown how both primal and dual quadrilateral schemes can be implemented in the same code, opening up new possibilities for more flexible geometric modeling applications and p-versions of the Subdivision Element Method.

Journal ArticleDOI
TL;DR: In this article, the authors considered the question of finite-size scaling in percolation by studying a finite box of side length n, both in two and in higher dimensions, and established the same results under a set of hypotheses related to scaling and hyperscaling.
Abstract: We address the question of finite-size scaling in percolation by studying bond percolation in a finite box of side length n, both in two and in higher dimensions. In dimension d = 2, we obtain a complete characterization of finite-size scaling. In dimensions d > 2, we establish the same results under a set of hypotheses related to so-called scaling and hyperscaling postulates which are widely believed to hold up to d = 6.

Journal ArticleDOI
TL;DR: Experimental results show the high precision of the proposed classifier and the complementarity of the bioinformatics tools studied in the paper.
Abstract: In this paper we propose new techniques to extract features from protein sequences. We then use the features as inputs for a Bayesian neural network (BNN) and apply the BNN to classifying protein sequences obtained from the PIR (Protein Information Resource) database maintained at the National Biomedical Research Foundation. To evaluate the performance of the proposed approach, we compare it with other protein classifiers built based on sequence alignment and machine learning methods. Experimental results show the high precision of the proposed classifier and the complementarity of the bioinformatics tools studied in the paper.

Book ChapterDOI
16 Nov 2001
TL;DR: This paper proposes to extend traditional two-dimensional user/item recommender systems to support multiple dimensions, as well as comprehensive profiling and hierarchical aggregation (OLAP) capabilities.
Abstract: In this paper, we present a new data-warehousing-based approach to recommender systems. In particular, we propose to extend traditional two-dimensional user/item recommender systems to support multiple dimensions, as well as comprehensive profiling and hierarchical aggregation (OLAP) capabilities. We also introduce a new recommendation query language RQL that can express complex recommendations taking into account the proposed extensions. We describe how these extensions are integrated into a framework that facilitates more flexible and comprehensive user interactions with recommender systems.

Journal ArticleDOI
TL;DR: A macroscopic computer model describing the mechanics of the chromatin fiber on the polymer level shows good agreement with experimental diffusion coefficients and small-angle X-ray scattering data and shows that oligonucleosomes condense and unfold in a salt-dependent manner analogous to the chromatin fiber.

Journal ArticleDOI
TL;DR: In this article, the authors studied numerically a one-dimensional model of dispersive wave turbulence, which admits a very precise and detailed realization of these turbulent cycles and their components.

Journal ArticleDOI
01 Jun 2001-Science
TL;DR: A sea-ice–ocean computer model simulation vividly demonstrates how a mechanism by which modest variations in the large-scale oceanic flow past the Maud Rise seamount cause a horizontal cyclonic eddy to be shed from its northeast flank fully accounts for the characteristics that mark Weddell Polynya events.
Abstract: Satellite observations have shown the occasional occurrence of a large opening in the sea-ice cover of the Weddell Sea, Antarctica, a phenomenon known as the Weddell Polynya. The transient appearance, position, size, and shape of the polynya is explained here by a mechanism by which modest variations in the large-scale oceanic flow past the Maud Rise seamount cause a horizontal cyclonic eddy to be shed from its northeast flank. The shed eddy transmits a divergent Ekman stress into the sea ice, leading to a crescent-shaped opening in the pack. Atmospheric thermodynamical interaction further enhances the opening by inducing oceanic convection. A sea-ice-ocean computer model simulation vividly demonstrates how this mechanism fully accounts for the characteristics that mark Weddell Polynya events.

Journal ArticleDOI
TL;DR: Since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, the modified Runge-Kutta fourth-order scheme can attain fourth- order accuracy with computational costs normally associated with second-order schemes.
Abstract: To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10^(-3) seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10^(-5) seconds or 10^(-9) seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
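The second-order ingredient of the scheme (RK2 stepping, a linear interpolant for the spike time, and recalibration of the postspike potential) can be sketched for a single leaky integrate-and-fire neuron with constant input. This is an illustrative single-neuron sketch with made-up parameters, not the authors' conductance-based network code.

```python
def lif_rk2(I, dt, T, tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """RK2 integration of dV/dt = (v_rest - V)/tau + I with threshold v_th.
    Spike times are found by linear interpolation inside the step, and the
    remainder of the step is re-integrated from the reset potential."""
    def f(v):
        return (v_rest - v) / tau + I
    t, v, spikes = 0.0, v_rest, []
    while t < T:
        k1 = f(v)
        k2 = f(v + dt * k1)
        v_new = v + 0.5 * dt * (k1 + k2)
        if v_new >= v_th:
            # linear interpolant for the threshold-crossing time
            ts = t + dt * (v_th - v) / (v_new - v)
            spikes.append(ts)
            # recalibrate: integrate the rest of the step from v_reset
            rem = t + dt - ts
            k1r = f(v_reset)
            k2r = f(v_reset + rem * k1r)
            v_new = v_reset + 0.5 * rem * (k1r + k2r)
        v, t = v_new, t + dt
    return spikes
```

Without the interpolation and recalibration, the spike time would be quantized to the grid and the reset would inject an O(dt) error each spike, which is exactly the degradation the paper's modified schemes avoid.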

Journal ArticleDOI
TL;DR: A one-dimensional model of the population density method for neurons with arbitrary synaptic kinetics is reported, which is more accurate than the mean-field method in the steady state, where the mean-field approximation works best, and also under dynamic-stimulus conditions.
Abstract: Population density methods provide promising time-saving alternatives to direct Monte Carlo simulations of neuronal network activity, in which one tracks the state of thousands of individual neurons and synapses. A population density method has been found to be roughly a hundred times faster than direct simulation for various test networks of integrate-and-fire model neurons with instantaneous excitatory and inhibitory post-synaptic conductances. In this method, neurons are grouped into large populations of similar neurons. For each population, one calculates the evolution of a probability density function (PDF) which describes the distribution of neurons over state space. The population firing rate is then given by the total flux of probability across the threshold voltage for firing an action potential. Extending the method beyond instantaneous synapses is necessary for obtaining accurate results, because synaptic kinetics play an important role in network dynamics. Embellishments incorporating more realistic synaptic kinetics for the underlying neuron model increase the dimension of the PDF, which was one-dimensional in the instantaneous synapse case. This increase in dimension causes a substantial increase in computation time to find the exact PDF, decreasing the computational speed advantage of the population density method over direct Monte Carlo simulation. We report here on a one-dimensional model of the PDF for neurons with arbitrary synaptic kinetics. The method is more accurate than the mean-field method in the steady state, where the mean-field approximation works best, and also under dynamic-stimulus conditions. The method is much faster than direct simulations. Limitations of the method are demonstrated, and possible improvements are discussed.
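The core bookkeeping of a population density method can be sketched in the simplest one-dimensional setting: a density of membrane potentials advected by a deterministic drift, with the population firing rate read off as the probability flux across threshold. This is a toy illustration under assumed parameters, not the paper's method (which handles synaptic kinetics and stochastic input); the grid, drift, and names are assumptions:

```python
import numpy as np

def population_density_rate(I=2.0, n_bins=400, t_end=20.0):
    """Evolve the membrane-potential density p(v, t) of an LIF
    population with deterministic drift a(v) = I - v on v in [0, 1].
    The firing rate is the probability flux across threshold v = 1;
    fired mass is reinjected at the reset potential v = 0."""
    dv = 1.0 / n_bins
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    a = I - edges                  # drift at cell edges (a > 0 everywhere)
    p = np.full(n_bins, 1.0)       # uniform initial density, integral = 1
    dt = 0.3 * dv / a.max()        # CFL-stable step for upwind advection
    T = np.log(I / (I - 1.0))      # deterministic passage time, 0 -> 1
    times, rates = [], []
    t = 0.0
    while t < t_end:
        flux = np.zeros(n_bins + 1)
        flux[1:] = a[1:] * p       # upwind fluxes (drift is rightward)
        rate = flux[-1]            # flux across threshold = firing rate
        p = p - (dt / dv) * (flux[1:] - flux[:-1])
        p[0] += (dt / dv) * rate   # reinject fired mass at reset
        t += dt
        times.append(t)
        rates.append(rate)
    times, rates = np.array(times), np.array(rates)
    # average over the final passage period to smooth residual oscillation
    return float(rates[times > t_end - T].mean())
```

The scheme conserves total probability exactly (the outflux at threshold is reinjected at reset), and the time-averaged rate approaches 1/T, the reciprocal of the passage time from reset to threshold, which is what the flux-across-threshold definition predicts for purely deterministic drift.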

Journal ArticleDOI
TL;DR: A new algorithm, DiSCO (Discrete Surface Charge Optimization), is presented, for economically describing the electrostatic field predicted by Poisson-Boltzmann theory using a discrete set of Debye-Hückel charges distributed on a virtual surface enclosing the macromolecule.
Abstract: Much progress has been achieved on quantitative assessment of electrostatic interactions on the all-atom level by molecular mechanics and dynamics, as well as on the macroscopic level by models of continuum solvation. Bridging of the two representations, an area of active research, is necessary for studying integrated functions of large systems of biological importance. Following perspectives of both discrete (N-body) interaction and continuum solvation, we present a new algorithm, DiSCO (Discrete Surface Charge Optimization), for economically describing the electrostatic field predicted by Poisson-Boltzmann theory using a discrete set of Debye-Hückel charges distributed on a virtual surface enclosing the macromolecule. The procedure in DiSCO relies on the linear behavior of the Poisson-Boltzmann equation in the far zone; thus contributions from a number of molecules may be superimposed, and the electrostatic potential, or equivalently the electrostatic field, may be quickly and efficiently approximated by the summation of contributions from the set of charges. The desired accuracy of this approximation is achieved by minimizing the difference between the Poisson-Boltzmann electrostatic field and that produced by the linearized Debye-Hückel approximation using our truncated Newton optimization package. DiSCO is applied here to describe the salt-dependent electrostatic environment of the nucleosome core particle in terms of several hundred surface charges. This representation forms the basis for modeling, by dynamic simulations (or Monte Carlo), the folding of chromatin. DiSCO can be applied more generally to many macromolecular systems whose size and complexity warrant a model resolution between the all-atom and macroscopic levels.