
Showing papers in "BioSystems in 2005"


Journal ArticleDOI
Tong Zhou, Wanjun Gu, Jianmin Ma, Xiao Sun, Zuhong Lu
TL;DR: It is observed that the codon usage pattern of H5N1 virus is similar to those of other influenza A viruses, but not influenza B virus, and that synonymous codon usage in influenza A virus genes is phylogenetically conservative, but not strain-specific.
Abstract: In this study, we calculated the codon usage bias in H5N1 virus and performed a comparative analysis of synonymous codon usage patterns in H5N1 virus, five other evolutionarily related influenza A viruses and an influenza B virus. Codon usage bias in the H5N1 genome is relatively slight, and is mainly determined by the base composition at the third codon position. By comparing synonymous codon usage patterns in different viruses, we observed that the codon usage pattern of H5N1 virus is similar to those of the other influenza A viruses, but not influenza B virus, and that synonymous codon usage in influenza A virus genes is phylogenetically conservative, but not strain-specific. Synonymous codon usage in genes encoded by different influenza A viruses is conserved at the genus level. Compositional constraints can explain most of the variation in synonymous codon usage among these virus genes, while gene function is also correlated with synonymous codon usage to a certain extent. However, translational selection and gene length have no effect on the variation in synonymous codon usage in these virus genes.
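The codon-bias calculation summarized above is not spelled out in the abstract; a standard measure that such studies typically use is the relative synonymous codon usage (RSCU), sketched below as an assumed illustration (the codon table is deliberately truncated to three amino-acid families):

```python
from collections import Counter

# Synonymous codon families for a few amino acids (standard genetic code);
# a full table would cover all 59 synonymous codons.
SYNONYMS = {
    "F": ["TTT", "TTC"],
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
    "K": ["AAA", "AAG"],
}

def rscu(seq):
    """Relative synonymous codon usage: observed count of a codon divided
    by the mean count over its synonymous family. RSCU = 1 means no bias."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    counts = Counter(codons)
    out = {}
    for aa, fam in SYNONYMS.items():
        total = sum(counts[c] for c in fam)
        if total == 0:
            continue  # amino acid absent from this sequence
        for c in fam:
            out[c] = counts[c] * len(fam) / total
    return out

# A toy coding sequence using only lysine codons: 3 AAA and 1 AAG.
values = rscu("AAA" * 3 + "AAG")
print(values["AAA"], values["AAG"])  # 1.5 0.5
```

An RSCU well above 1 for G/C-ending codons in a gene set would correspond to the GC3 bias the abstract describes.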

151 citations


Journal ArticleDOI
TL;DR: It is possible to suggest a mechanism that makes compatible the different theories of the origin of the code, even if these are based on a historical or physicochemical determinism and thus appear incompatible by definition.
Abstract: A review of the main theories proposed to explain the origin of the genetic code is presented. I analyze arguments and data in favour of different theories proposed to explain the origin of the organization of the genetic code. It is possible to suggest a mechanism that makes compatible the different theories of the origin of the code, even if these are based on a historical or physicochemical determinism and thus appear incompatible by definition. Finally, I discuss the question of why a given number of synonymous codons was attributed to the amino acids in the genetic code.

147 citations


Journal ArticleDOI
TL;DR: The results show that the synaptic pruning that occurs in large networks of simulated spiking neurons in the absence of specific input patterns of activity is capable of generating spontaneously emergent cell assemblies.
Abstract: Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. This article studies the synaptic pruning that occurs in large networks of simulated spiking neurons in the absence of specific input patterns of activity. The evolution of connections between neurons was governed by an original bioinspired spike-timing-dependent synaptic plasticity (STDP) modification rule which included a slow decay term. The network reached a steady state with a bimodal distribution of the synaptic weights, which were either incremented to the maximum value or decremented to the lowest value. After 1 × 10^6 time steps, the final number of synapses that remained active was below 10% of the number of initially active synapses, independently of network size. The synaptic modification rule did not introduce spurious biases in the geometrical distribution of the remaining active projections. The results show that, under certain conditions, the model is capable of generating spontaneously emergent cell assemblies.
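The pruning dynamics described above can be caricatured with a much-simplified additive STDP rule with a slow decay term; the potentiation/depression magnitudes, decay constant, and per-synapse timing statistics below are invented for illustration and are not the paper's parameters:

```python
import random

random.seed(1)

W_MAX = 1.0          # upper weight bound
LTP = LTD = 0.01     # potentiation / depression step (illustrative)
DECAY = 0.0005       # slow decay term, as in the modification rule

def update(w, p_causal):
    """One simplified STDP event: potentiate with probability p_causal
    (pre spike precedes post spike), otherwise depress; always decay."""
    w += LTP if random.random() < p_causal else -LTD
    w -= DECAY
    return min(max(w, 0.0), W_MAX)  # clip to the allowed range

# Each synapse sees slightly different spike-timing statistics.
p = [random.uniform(0.35, 0.65) for _ in range(300)]
w = [0.5] * 300
for _ in range(4000):
    w = [update(wi, pi) for wi, pi in zip(w, p)]

low = sum(1 for wi in w if wi < 0.1)    # effectively pruned
high = sum(1 for wi in w if wi > 0.9)   # saturated (retained)
print(low, high, len(w))
```

Most weights end up near one bound or the other, reproducing in miniature the bimodal steady state the paper reports; the decay term biases weakly driven synapses toward pruning.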

116 citations


Journal ArticleDOI
TL;DR: A way of holding a key in common between two specific parties is shown, constituting a novel method for key distribution based on a public-key system using DNA.
Abstract: A novel public-key system using DNA has been developed. To solve the key distribution problem, public-key cryptography systems based on one-way functions have been developed. The message-encoded DNA hidden among dummies can be restored by PCR amplification, followed by sequencing. We used these operations as a one-way function, and constituted a novel method for key distribution based on a public-key system using DNA. We show a way of holding a key in common between two specific parties.

101 citations


Journal ArticleDOI
TL;DR: Looking at sustainable systems as organisms provides fresh insights on sustainability, and offers diagnostic criteria for sustainability that reflect the system's health, as well as exploring the common features between organisms and ecosystems.
Abstract: Schrodinger [Schrodinger, E., 1944. What is Life? Cambridge University Press, Cambridge] marvelled at how the organism is able to use metabolic energy to maintain and even increase its organisation, which could not be understood in terms of classical statistical thermodynamics. Ho [Ho, M.W., 1993. The Rainbow and the Worm, The Physics of Organisms, World Scientific, Singapore; Ho, M.W., 1998a. The Rainbow and the Worm, The Physics of Organisms, 2nd (enlarged) ed., reprinted 1999, 2001, 2003 (available online from ISIS website www.i-sis.org.uk)] outlined a novel "thermodynamics of organised complexity" based on a nested dynamical structure that enables the organism to maintain its organisation and simultaneously achieve non-equilibrium and equilibrium energy transfer at maximum efficiency. This thermodynamic model of the organism is reminiscent of the dynamical structure of steady state ecosystems identified by Ulanowicz [Ulanowicz, R.E., 1983. Identifying the structure of cycling in ecosystems. Math. Biosci. 65, 210-237; Ulanowicz, R.E., 2003. Some steps towards a central theory of ecosystem dynamics. Comput. Biol. Chem. 27, 523-530]. The healthy organism excels in maintaining its organisation and keeping away from thermodynamic equilibrium--death by another name--and in reproducing and providing for future generations. In those respects, it is the ideal sustainable system. We propose therefore to explore the common features between organisms and ecosystems, to see how far we can analyse sustainable systems in agriculture, ecology and economics as organisms, and to extract indicators of the system's health or sustainability. We find that looking at sustainable systems as organisms provides fresh insights on sustainability, and offers diagnostic criteria for sustainability that reflect the system's health. 
In the case of ecosystems, those diagnostic criteria of health translate into properties such as biodiversity and productivity, the richness of cycles, the efficiency of energy use and minimum dissipation. In the case of economic systems, they translate into space-time differentiation or organised heterogeneity, local autonomy and sufficiency at appropriate levels, reciprocity and equality of exchange, and most of all, balancing the exploitation of natural resources--real input into the system--against the ability of the ecosystem to regenerate itself.

101 citations


Journal ArticleDOI
TL;DR: This paper uses mathematical modeling to investigate a novel type of combination therapy in which multiple nodes in a signaling cascade are targeted simultaneously with selective inhibitors, pursuing the hypothesis that such an approach may induce the desired signal attenuation with lower doses of the necessary agents than when one node is targeted in isolation.
Abstract: An increasing awareness of the significance of abnormal signal transduction in tumors and the concomitant development of target-based drugs to selectively modulate aberrantly-activated signaling pathways has given rise to a variety of promising new strategies in cancer treatment. This paper uses mathematical modeling to investigate a novel type of combination therapy in which multiple nodes in a signaling cascade are targeted simultaneously with selective inhibitors, pursuing the hypothesis that such an approach may induce the desired signal attenuation with lower doses of the necessary agents than when one node is targeted in isolation. A mathematical model is presented which builds upon previous theoretical work on EGFR signaling, simulating the effect of administering multiple kinase inhibitors in various combinations. The model demonstrates that attenuation of biochemical signals is significantly enhanced when multiple upstream processes are inhibited, in comparison with the inhibition of a single upstream process. Moreover, this enhanced attenuation is most pronounced in signals downstream of serially-connected target points. In addition, the inhibition of serially-connected processes appears to have a supra-additive (synergistic) effect on the attenuation of downstream signals, owing to the highly non-linear relationships between network parameters and signals.

97 citations


Journal ArticleDOI
TL;DR: Values of the local variation LV and a conventional coefficient of variation CV are obtained for a variety of model point processes.
Abstract: It has been revealed in our recent study that cortical neurons are categorized into distinct types, according to a new measure of the local variation of inter-spike intervals, LV. In this paper, we obtain values of the local variation LV and a conventional coefficient of variation CV for a variety of model point processes. While the value of CV undergoes large changes by rate fluctuation of the point processes, the value of LV does not undergo large changes by rate fluctuation, and is principally determined by the form of intrinsic interval distribution of the original model point process.
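The contrast between CV and LV can be made concrete. The LV formula below follows the standard definition from the spike-train literature (LV compares each inter-spike interval only with its successor, so slow rate fluctuations largely cancel); the example spike trains are hypothetical:

```python
import math

def cv(intervals):
    """Coefficient of variation: std of the ISIs divided by their mean."""
    n = len(intervals)
    mean = sum(intervals) / n
    var = sum((t - mean) ** 2 for t in intervals) / n
    return math.sqrt(var) / mean

def lv(intervals):
    """Local variation: 3/(n-1) * sum of squared normalized differences
    between consecutive intervals; insensitive to slow rate changes."""
    n = len(intervals)
    s = sum((t1 - t2) ** 2 / (t1 + t2) ** 2
            for t1, t2 in zip(intervals, intervals[1:]))
    return 3.0 * s / (n - 1)

regular = [1.0] * 10
print(cv(regular), lv(regular))  # both 0 for a perfectly regular train

# A slowly rate-modulated but locally regular train: CV is inflated by
# the rate change, while LV barely moves.
modulated = [1.0] * 10 + [2.0] * 10
print(cv(modulated), lv(modulated))
```

This is exactly the behavior the abstract describes: CV changes substantially under rate fluctuation, whereas LV mainly reflects the intrinsic interval distribution.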

72 citations


Journal ArticleDOI
TL;DR: This analysis suggests that the current definition of TFBS core regions in TRANSFAC should be re-examined so as to capture a more precise notion of "cores", and the revised definitions provide a better understanding of the nature of transcription factor-DNA binding.
Abstract: Transcription factors are key regulatory elements that control gene expression. The TRANSFAC database represents the largest repository for experimentally derived transcription factor binding sites (TFBS). Understanding TFBS, which are typically conserved during evolution, helps us identify genomic regions related to human health and disease, and regions that might be predictive of patient outcomes. Here we present a statistical analysis of all TFBS in the TRANSFAC database. Our analysis suggests that the current definition of TFBS core regions in TRANSFAC should be re-examined so as to capture a more precise notion of "cores." We offer insight into more appropriate definitions of TFBS consensus sequences and core regions. These revised definitions provide a better understanding of the nature of transcription factor-DNA binding and assist with developing algorithms for de novo TFBS discovery as well as finding novel variants of known TFBS.

59 citations


Journal ArticleDOI
TL;DR: A technique to predict an equation using genetic programming that can search topology and numerical parameters of mathematical expression simultaneously and can be applied to identify metabolic reactions from observable time-courses is presented.
Abstract: Increased research aimed at simulating biological systems requires sophisticated parameter estimation methods. All current approaches, including genetic algorithms, need pre-existing equations to be functional. A generalized approach to predict not only parameters but also biochemical equations from only observable time-course information must be developed, along with a computational method to generate arbitrary equations without knowledge of biochemical reaction mechanisms. We present a technique to predict an equation using genetic programming. Our technique can search the topology and numerical parameters of a mathematical expression simultaneously. To improve the search ability for numeric constants, we added numeric mutation to the conventional procedure. As case studies, we predicted two equations of enzyme-catalyzed reactions regarding adenylate kinase and phosphofructokinase. Our numerical experimental results showed that our approach could obtain the correct topology and parameters close to the originals. The mean errors between given and simulation-predicted time-courses were 1.6 × 10^−5% and 2.0 × 10^−3%, respectively. Our equation prediction approach can be applied to identify metabolic reactions from observable time-courses.

59 citations


Journal ArticleDOI
TL;DR: It is shown how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer.
Abstract: The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and are beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems, can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the unified modeling language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.

54 citations


Journal ArticleDOI
TL;DR: This paper proposes a DNA algorithm and demonstrates that if the size of a reduced NP-complete or NP-hard problem is equal to or less than that of the vertex-cover problem, then the proposed algorithm can be directly used for solving the reduction of that problem and Cook's Theorem is correct on DNA-based computing.
Abstract: Cook's Theorem [Cormen, T.H., Leiserson, C.E., Rivest, R.L., 2001. Introduction to Algorithms, second ed., The MIT Press; Garey, M.R., Johnson, D.S., 1979. Computers and Intractability, Freeman, San Francisco, CA] states that if an algorithm is developed for one NP-complete or NP-hard problem, then other problems can be solved by means of reduction to that problem. Cook's Theorem has been demonstrated to be correct in a general digital electronic computer. In this paper, we first propose a DNA algorithm for solving the vertex-cover problem. Then, we demonstrate that if the size of a reduced NP-complete or NP-hard problem is equal to or less than that of the vertex-cover problem, the proposed algorithm can be directly used for solving the reduced NP-complete or NP-hard problem, and Cook's Theorem holds on DNA-based computing. Otherwise, a new DNA algorithm for the optimal solution of a reduced NP-complete or NP-hard problem should be developed from the characteristics of NP-complete and NP-hard problems.

Journal ArticleDOI
TL;DR: It is shown that an Evolutionary Turing Machine is able to solve nonalgorithmically the halting problem of the Universal Turing Machine and, asymptotically, the best evolutionary algorithm problem, suggesting that the best evolutionary algorithm does not exist, but that it can be indefinitely approximated using evolutionary techniques.
Abstract: We outline a theory of evolutionary computation using a formal model of evolutionary computation, the Evolutionary Turing Machine, introduced as an extension of the Turing Machine model. Evolutionary Turing Machines provide a better and more complete model for evolutionary computing than conventional Turing Machines, algorithms, and Markov chains. The convergence and convergence rate are defined and investigated in terms of this new model. The sufficient conditions needed for the completeness and optimality of evolutionary search are investigated. In particular, the notion of total optimality as an instance of the multiobjective optimization of the Universal Evolutionary Turing Machine is introduced. This provides an automatic way to deal with the intractability of evolutionary search by optimizing the quality of solutions and search costs simultaneously. Based on this new model, a very flexible classification of optimization-problem hardness for evolutionary techniques is proposed. The expressiveness of evolutionary computation is investigated. We show that the problem of the best evolutionary algorithm is undecidable independently of whether the fitness function is time dependent or fixed. It is demonstrated that the evolutionary computation paradigm is more expressive than Turing Machines, and thus than the conventional computer science based on them. We show that an Evolutionary Turing Machine is able to solve nonalgorithmically the halting problem of the Universal Turing Machine and, asymptotically, the best evolutionary algorithm problem. In other words, the best evolutionary algorithm does not exist, but it can be indefinitely approximated using evolutionary techniques.

Journal ArticleDOI
TL;DR: One group of genes, mainly belonging to the "METABOLISM" category, tended to use G- and/or C-ending codons while the other was more biased toward codons ending with A and/or U, indicating large differences among functional categories both in which codons were selected and in selection intensity.
Abstract: The relationship between codon usage and gene function was investigated in a dataset of 2106 nuclear genes of Oryza sativa. The results of a standard χ2 test and F-statistic showed that, across the 59 synonymous codons, a strongly significant association with gene functional categories existed in rice, indicating that codon usage was generally coordinated with gene function, whether at the level of individual amino acids or at the level of nucleotides. However, it cannot be directly concluded that the use of every codon differed significantly between any two functional categories. Notably, there were large differences among functional categories both in which codons were selected and in selection intensity. Accordingly, we identified at least two classes of genes: one group, mainly belonging to the “METABOLISM” category, tended to use G- and/or C-ending codons, while the other was more biased toward codons ending with A and/or U. The latter group contained genes of various functions, especially genes classified into the “Nuclear Structure” category. These observations are important for molecular genetic engineering and genome functional annotation.

Journal ArticleDOI
TL;DR: A combination of the cognitive approach with information paradigms to study landscapes opens new perspectives in the interpretation of ecological complexity.
Abstract: Landscape ecology deals with ecological processes in their spatial context. It shares with ecosystem ecology the primacy among emergent ecological disciplines. The aim of this contribution is to approach the definition of landscapes using cognitive paradigms. The neutral-based landscape (NbL), individual-based landscape (IbL) and observer-based landscape (ObL) are defined to explore the cognitive mechanisms. NbL represents the undecoded component of the cognitive matrix. The IbL is the portion of landscape perceived by the biological sensors. ObL is the part of the cognitive matrix perceived using the cultural background of the observer. The perceived landscape (PL) is composed of the sum of these three approaches to landscape perception. Two further types of information (sensu Stonier) are recognized in this process of perception: the compressed information, as it is present inside the cognitive matrix, and the decompressed information that structures the PL when a semiotic relationship operates between the organisms and the cognitive matrix. Scaling properties of these three PL components are recognized in space and time. In NbL, scale seems irrelevant; in IbL, perception is filtered by organismic scaling; and in ObL, the spatio-temporal scale seems of major importance. Ultimately, perception is scale-dependent. A combination of the cognitive approach with information paradigms to study landscapes opens new perspectives in the interpretation of ecological complexity.

Journal ArticleDOI
TL;DR: A new PMBGA-based method for identification of informative genes from microarray data is proposed and it is demonstrated that the gene subsets selected with the technique yield better classification accuracy.
Abstract: Recently, DNA microarray-based gene expression profiles have been used to correlate the clinical behavior of cancers with the differential gene expression levels in cancerous and normal tissues. To this end, after selection of some predictive genes based on the signal-to-noise (S2N) ratio, unsupervised learning such as clustering and supervised learning such as the k-nearest neighbor (kNN) classifier are widely used. Instead of the S2N ratio, adaptive searches like the Probabilistic Model Building Genetic Algorithm (PMBGA) can be applied to select a smaller gene subset that classifies patient samples more accurately. In this paper, we propose a new PMBGA-based method for identifying informative genes from microarray data. By applying the proposed method to the classification of three microarray data sets of binary and multi-type tumors, we demonstrate that the gene subsets selected with our technique yield better classification accuracy.
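A PMBGA in its simplest form (a univariate marginal model, as in UMDA) can be sketched as follows; the toy fitness function, gene count, and population settings are invented stand-ins for a real classifier-accuracy score on microarray data:

```python
import random

random.seed(3)

N_GENES, POP, ELITE, GENS = 30, 60, 15, 40
informative = set(range(5))  # hypothetical truly predictive genes

def fitness(mask):
    """Toy surrogate for classification accuracy: reward covering the
    informative genes, penalize large subsets (parsimony pressure)."""
    hits = sum(1 for g in informative if mask[g])
    return hits - 0.1 * sum(mask)

# Probabilistic model: one inclusion probability per gene.
p = [0.5] * N_GENES
for _ in range(GENS):
    # sample a population of gene subsets from the model
    pop = [[1 if random.random() < pi else 0 for pi in p] for _ in range(POP)]
    pop.sort(key=fitness, reverse=True)
    elite = pop[:ELITE]
    # re-estimate each gene's marginal probability from the elite
    p = [sum(ind[g] for ind in elite) / ELITE for g in range(N_GENES)]
    p = [min(max(pi, 0.02), 0.98) for pi in p]  # keep model off 0/1

selected = [g for g, pi in enumerate(p) if pi > 0.5]
print(selected)
```

Unlike a one-shot S2N ranking, the model-building loop evaluates gene subsets jointly, which is what lets it find smaller, more accurate subsets.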

Journal ArticleDOI
TL;DR: A significant difference between deviant and standard evoked potentials was noted for the freely moving animals in the 100-200 ms range following stimulus onset, but no such difference was found in the anesthetized animals.
Abstract: Evoked potentials were recorded from the auditory cortex of both freely moving and anesthetized rats when deviant sounds were presented in a homogenous series of standard sounds (oddball condition). A component of the evoked response to deviant sounds, the mismatch negativity (MMN), may underlie the ability to discriminate acoustic differences, a fundamental aspect of auditory perception. Whereas most MMN studies in animals have been done using simple sounds, this study involved a more complex set of sounds (synthesized vowels). The freely moving rats had previously undergone behavioral training in which they learned to respond differentially to these sounds. Although we found little evidence in this preparation for the typical, epidurally recorded, MMN response, a significant difference between deviant and standard evoked potentials was noted for the freely moving animals in the 100-200 ms range following stimulus onset. No such difference was found in the anesthetized animals.

Journal ArticleDOI
TL;DR: It is demonstrated that QSBs can shift the system to a low steady state, corresponding to an uninduced state, suggesting that the use of 3O-C12-HSL antagonists may constitute a promising therapeutic approach against infections involving P. aeruginosa.
Abstract: Pseudomonas aeruginosa is a gram-negative bacterium that causes serious illnesses, particularly in immunocompromised individuals, often with a fatal outcome. The finding that the acylated homoserine lactone quorum sensing (QS) system controls the production of virulence factors in P. aeruginosa makes this system a possible target for antimicrobial therapy. It has been suggested that an N-(3-oxododecanoyl)-homoserine lactone (3O-C12-HSL) antagonist, a QS blocker (QSB), would interfere efficiently with the quorum sensing system in P. aeruginosa and thus reduce the virulence of this pathogen. In this work, a mathematical model of the QS system in P. aeruginosa has been developed. The model was used to virtually add 3O-C12-HSL antagonists that differed in their affinity for the receptor protein and in their ability to mediate degradation of the receptor. The model suggests that very small differences in these parameters for different 3O-C12-HSL antagonists can greatly affect the success of QSB-based inhibition of the QS system in P. aeruginosa. Most importantly, it is proposed that the ability of the 3O-C12-HSL antagonist to mediate degradation of LasR is the core parameter for successful QSB-based inhibition of the QS system in P. aeruginosa. Finally, this study demonstrates that QSBs can shift the system to a low steady state, corresponding to an uninduced state, and thus suggests that the use of 3O-C12-HSL antagonists may constitute a promising therapeutic approach against infections involving P. aeruginosa.

Journal ArticleDOI
TL;DR: It was shown that the long-range scaling exponent, derived from the DFA, was most significantly correlated with the age, suggesting that it could be a robust index to characterize the development of preterm infants.
Abstract: Heartbeat intervals, which are determined basically by regular excitations of the sinoatrial node, show significant fluctuation, referred to as heart rate variability (HRV). The HRV is mostly due to nerve activities through the sympathetic and parasympathetic branches of the autonomic nervous system (ANS). In recent years, it has been recognized that the HRV shows a greater complexity than previously expected, suggesting that it carries much information about ANS activities. In this study, we investigated the relationship between HRV and development in preterm infants. To this end, heartbeat intervals were continuously recorded from 11 preterm infants in the NICU. The recording periods ranged from several days to weeks depending on the individual. The HRV at various ages was then characterized by several indices, including the power spectrum as well as the mean and standard deviation of the series. For the power spectrum, the low-frequency band power (LF), the high-frequency band power (HF), the LF/HF ratio, and β (the scaling exponent of the spectrum) were estimated. Detrended fluctuation analysis (DFA) was also employed to obtain short- and long-range scaling exponents. Each of these indices showed a correlation with age. We showed that the long-range scaling exponent, derived from the DFA, was the most significantly correlated with age, suggesting that it could be a robust index for characterizing the development of preterm infants.
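The DFA index highlighted above can be computed with a short first-order implementation; the box scales and the white-noise test signal are illustrative choices, not the study's settings:

```python
import math
import random

def dfa(x, scales):
    """First-order detrended fluctuation analysis: integrate the series,
    split the profile into boxes of each scale, remove a linear trend per
    box, and regress log F(n) on log n to get the scaling exponent."""
    mean = sum(x) / len(x)
    y, s = [], 0.0
    for v in x:
        s += v - mean
        y.append(s)  # cumulative sum (the "profile")
    logs_n, logs_f = [], []
    for n in scales:
        resid, boxes = 0.0, len(y) // n
        t = list(range(n))
        tm = sum(t) / n
        denom = sum((ti - tm) ** 2 for ti in t)
        for b in range(boxes):
            seg = y[b * n:(b + 1) * n]
            sm = sum(seg) / n
            # least-squares linear detrending within the box
            slope = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg)) / denom
            resid += sum((si - (sm + slope * (ti - tm))) ** 2
                         for ti, si in zip(t, seg))
        logs_n.append(math.log(n))
        logs_f.append(math.log(math.sqrt(resid / (boxes * n))))
    # slope of log F versus log n is the scaling exponent alpha
    lm, fm = sum(logs_n) / len(logs_n), sum(logs_f) / len(logs_f)
    num = sum((a - lm) * (b - fm) for a, b in zip(logs_n, logs_f))
    den = sum((a - lm) ** 2 for a in logs_n)
    return num / den

random.seed(42)
white = [random.gauss(0, 1) for _ in range(4000)]
alpha = dfa(white, [8, 16, 32, 64, 128])
print(alpha)  # ~0.5 for uncorrelated noise
```

Applied to interbeat-interval series, exponents above 0.5 at long scales indicate persistent long-range correlations, which is the quantity the study tracks against age.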

Journal ArticleDOI
TL;DR: Two hypotheses are presented that account for the fact that the exchange of water is so fast in RBCs, based on the known rapid exchange across the RBC membrane of ions such as Cl- and HCO3- and solutes such as glucose, all of whose molecular volumes are significantly greater than that of water.
Abstract: Aquaporins are now known to mediate the rapid exchange of water across the plasma membranes of diverse cell types. This exchange has been studied and kinetically characterized in red blood cells (erythrocytes; RBC) from many animal species. In recent years, a favoured method has been one based on NMR spectroscopy. Despite knowledge of their molecular structure, the physiological raison d'être of aquaporins in RBCs is still only speculated upon. Here, we present two hypotheses that account for the fact that the exchange of water is so fast in RBCs. The first, denoted the "oscillating sieve" hypothesis, posits that known membrane undulations at frequencies up to 30 Hz with displacements up to 0.3 microm are energetically favoured by the high water permeability of the membrane. The second, denoted the "water displacement" hypothesis, is based on the known rapid exchange across the RBC membrane of ions such as Cl- and HCO3- and solutes such as glucose, all of whose molecular volumes are significantly greater than that of water. The ideas are generalizable to other cell types and organelles.

Journal ArticleDOI
TL;DR: It is shown here that enough quantitative information is available in the neuroanatomical literature to construct neural networks derived from accurate models of cellular connectivity, and a novel putative role for feedforward inhibitory neurons is uncovered.
Abstract: The specific connectivity patterns among neuronal classes can play an important role in the regulation of firing dynamics in many brain regions. Yet most neural network models are built based on vastly simplified connectivity schemes that do not accurately reflect the biological complexity. Taking the rat hippocampus as an example, we show here that enough quantitative information is available in the neuroanatomical literature to construct neural networks derived from accurate models of cellular connectivity. Computational simulations based on this approach lend themselves to a direct investigation of the potential relationship between cellular connectivity and network activity. We define a set of fundamental parameters to characterize cellular connectivity, and are collecting the related values for the rat hippocampus from published reports. Preliminary simulations based on these data uncovered a novel putative role for feedforward inhibitory neurons. In particular, "mopp" cells in the dentate gyrus are suitable to help maintain the firing rate of granule cells within physiological levels in response to a plausibly noisy input from the entorhinal cortex. The stabilizing effect of feedforward inhibition is further shown to depend on the particular ratio between the relative threshold values of the principal cells and the interneurons. We are freely distributing the connectivity data on which this study is based through a publicly accessible web archive (http://www.krasnow.gmu.edu/L-Neuron).

Journal ArticleDOI
TL;DR: In this paper, a standard Hodgkin-Huxley model neuron with a Gaussian white noise input current with drift parameter mu and variance parameter sigma is considered, and partial differential equations of second order are obtained for the first two moments of the time taken to spike from (any) initial state, as functions of the initial values.
Abstract: We consider a standard Hodgkin-Huxley model neuron with a Gaussian white noise input current with drift parameter mu and variance parameter sigma^2. Partial differential equations of second order are obtained for the first two moments of the time taken to spike from (any) initial state, as functions of the initial values. The analytical theory for a 2-component (V,m) approximation is also considered. Let mu_c (approximately 4.15) be the critical value of mu for firing when noise is absent. Large sample simulation results are obtained for mu below and above mu_c, for many values of sigma between 0 and 2.5. For the time to spike, the 2-component approximation is accurate for all sigma when mu=10, for sigma > 0.7 when mu=5, and only when sigma > 1.5 when mu=2. When mu > mu_c, most paths show similar behavior and the moments exhibit smoothly changing behavior as sigma increases. Thus there are a different number of regimes depending on the magnitude of mu relative to mu_c: one when mu is small and when mu is large, but three when mu is close to and above mu_c. Both for the Hodgkin-Huxley (HH) system and the 2-component approximation, and regardless of the value of mu, the CV tends to about 1.3 at the largest value (2.5) of sigma considered. We also discuss in detail the problem of determining the interspike interval and give an accurate method for estimating this random variable by decomposing the interval into stochastic and almost deterministic components.
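The moments of the time to spike can also be estimated by direct Monte Carlo simulation rather than by solving the moment PDEs. Since the full Hodgkin-Huxley system is lengthy, the sketch below substitutes a leaky integrate-and-fire neuron driven by the same kind of Gaussian white noise input (drift mu, noise amplitude sigma); all parameter values are illustrative:

```python
import math
import random

random.seed(7)

def time_to_spike(mu, sigma, theta=1.0, tau=1.0, dt=0.001, t_max=50.0):
    """Euler-Maruyama integration of dV = (-V/tau + mu) dt + sigma dW,
    a leaky integrate-and-fire stand-in for the Hodgkin-Huxley neuron:
    returns the first time V crosses the threshold theta."""
    v, t = 0.0, 0.0
    sq = sigma * math.sqrt(dt)  # noise increment scale per step
    while t < t_max:
        v += (-v / tau + mu) * dt + sq * random.gauss(0, 1)
        t += dt
        if v >= theta:
            return t
    return t_max  # did not spike within the simulated window

# First two moments of the firing time from repeated trials, analogous
# to the paper's first-passage-time analysis (here mu is suprathreshold).
times = [time_to_spike(mu=2.0, sigma=0.5) for _ in range(200)]
mean = sum(times) / len(times)
var = sum((t - mean) ** 2 for t in times) / len(times)
print(mean, math.sqrt(var) / mean)  # mean spike time and its CV
```

Sweeping mu across its critical value in such a simulation is the cheap way to see the regime changes the abstract describes, at the cost of sampling error that the PDE approach avoids.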

Journal ArticleDOI
TL;DR: This work examines the ISI statistics and discusses these views in a recently published model of interacting cortical areas and shows that temporally modulated inputs lead to ISI statistics which fit better to the neurophysiological data than alternative mechanisms.
Abstract: The response of a cortical neuron to a stimulus can show a very large variability when repeatedly stimulated by exactly the same stimulus. This has been quantified in terms of inter-spike-interval (ISI) statistics by several researchers (e.g., [Softky, W., Koch, C., 1993. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci. 13(1), 334-350.]). The common view is that this variability reflects noisy information processing based on redundant representation in large neuron populations. This view has been challenged by the idea that the apparent noise inherent in brain activity that is not strictly related or temporally coupled to the experiment could be functionally significant. In this work we examine the ISI statistics and discuss these views in a recently published model of interacting cortical areas [Knoblauch, A., Palm, G., 2002. Scene segmentation by spike synchronization in reciprocally connected visual areas. I. Local effects of cortical feedback. Biol. Cybernet. 87(3), 151-167.]. From the results of further single neuron simulations we can isolate temporally modulated synaptic input as a main contributor for high ISI variability in our model and possibly in real neurons. In contrast to alternative mechanisms, our model suggests a function of the temporal modulations for short-term binding and segmentation of figures from background. Moreover, we show that temporally modulated inputs lead to ISI statistics which fit better to the neurophysiological data than alternative mechanisms.
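The ISI variability discussed above is usually quantified by the coefficient of variation (CV) of the interspike intervals. The snippet below is our own illustration, not the model from the paper: it computes the CV and contrasts a Poisson-like irregular train (CV near 1, as reported for cortical cells) with a jittered regular train (CV near 0, as expected from a plain temporal integrator).

```python
import random, math

def isi_cv(spike_times):
    """Coefficient of variation of the interspike intervals."""
    isis = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean

random.seed(1)

# Poisson-like train: exponential ISIs -> CV near 1 (irregular, cortex-like)
t, poisson_train = 0.0, []
for _ in range(5000):
    t += random.expovariate(20.0)   # rate 20 Hz, times in seconds
    poisson_train.append(t)

# Regular train with mild jitter -> CV near 0 (integrator-like)
regular_train = [0.05 * i + random.gauss(0.0, 0.001) for i in range(5000)]

cv_poisson = isi_cv(poisson_train)
cv_regular = isi_cv(regular_train)
```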

Journal ArticleDOI
TL;DR: This study investigates the utility of a multi-objective evolutionary algorithm (MOEA) for extracting comprehensible and general classifiers from data in the form of rule systems and shows that the algorithm produces less complex classifiers that perform well on unseen data.
Abstract: Extracting comprehensible and general classifiers from data in the form of rule systems is an important task in many problem domains. This study investigates the utility of a multi-objective evolutionary algorithm (MOEA) for this task. Multi-objective evolutionary algorithms are capable of finding several trade-off solutions between different objectives in a single run. In the context of the present study, the objectives to be optimised are the complexity of the rule systems, and their fit to the data. Complex rule systems are required to fit the data well. However, overly complex rule systems often generalise poorly on new data. In addition they tend to be incomprehensible. It is, therefore, important to obtain trade-off solutions that achieve the best possible fit to the data with the lowest possible complexity. The rule systems produced by the proposed multi-objective evolutionary algorithm are compared with those produced by several other existing approaches for a number of benchmark datasets. It is shown that the algorithm produces less complex classifiers that perform well on unseen data.
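The trade-off surface that such a MOEA searches can be made concrete with a tiny Pareto-dominance filter over (complexity, fit) pairs. The candidate rule systems below are made-up numbers for illustration, not results from the study.

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and differs."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(solutions):
    """Non-dominated (complexity, error) pairs, both objectives minimised."""
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]

# Hypothetical rule systems: (number of rules, error on held-out data)
candidates = [(1, 0.30), (3, 0.15), (5, 0.10), (7, 0.10), (4, 0.20)]
front = pareto_front(candidates)
```

Here (7, 0.10) is dropped because (5, 0.10) matches its fit with fewer rules, and (4, 0.20) is dropped in favour of (3, 0.15) — exactly the parsimony pressure described above. A single MOEA run returns the whole front rather than one compromise solution.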

Journal ArticleDOI
TL;DR: A simple mathematical approach to the cooperativity in RMs formed by dimers of identical receptors and/or by iso-receptors is proposed and the so-called "symmetry rule" has been considered.
Abstract: The phenomenon of receptor-receptor interactions was hypothesized about 20 years ago. It has by now been demonstrated that receptor-receptor interactions between G-protein coupled receptors (GPCRs) occur at the plasma membrane level and result in the reciprocal modulation of their binding characteristics (i.e., cooperativity). One of the most important features of this phenomenon is the concept of a cluster of receptors, or receptor mosaic (RM). However, no proper mathematical approach has yet been available to characterize RMs in terms of their receptor composition, receptor topography and order of receptor activation inside the RM. This paper tries to fill that gap. A simple mathematical approach to the cooperativity in RMs formed by dimers of identical receptors and/or by iso-receptors is proposed. To this aim the so-called "symmetry rule" has been considered. This approach allows one to describe, by means of a simple energy function, the effects of receptor composition (number of dimers), spatial organisation (respective location of the dimers) and order of activation (order according to which the single receptors are ligated) on the integrative cooperativity (index) of the RMs.

Journal ArticleDOI
TL;DR: An empiric rule could be formulated, that life span is a time interval for which the total metabolic energy per life span becomes proportional to body mass of animals and power coefficient k becomes approximately 1.0.
Abstract: A linear relationship exists between total metabolic energy per life span P·T_ls (kJ) and body mass M (kg) of 54 poikilothermic species (Protozoa, Nematoda, Mollusca, Asteroidae, Insecta, Arachnoidae, Crustacea, Pisces, Amphibia, Reptilia and Snakes): P·T_ls = A_ls* · M^1.0838, where P (kJ/day) is the rate of metabolism and T_ls (days) is the life span of the animals. The linear coefficient A_ls* = 3.7 × 10^5 kJ/kg is the total metabolic energy exhausted during the life span per 1 kg body mass. This coefficient can be regarded as a relatively constant metabolic parameter for poikilothermic organisms, ranging from 0.1 × 10^5 to 5.5 × 10^5 kJ/kg, in spite of differences of 17 orders of magnitude in the metabolic rate and body mass of the animals. The linear relationship between total metabolic energy per life span and body mass of only the 48 poikilothermic multicellular animals (without Protozoa) is: P·T_ls = A_ls* · M^0.9692, with linear coefficient A_ls* = 2.34 × 10^5 kJ/kg. Since a power relationship of the type P = aM^k (a and k are the allometric constants) exists between the rate of metabolism and body mass of animals, an empirical rule can be formulated: the life span is the time interval over which the total metabolic energy expended becomes proportional to the body mass of the animal, so that the power coefficient k becomes ≈1.0.
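Allometric relationships of the form P = aM^k are conventionally fitted by ordinary least squares on log-log axes. The sketch below recovers a known exponent from synthetic data; the exponent 0.75, the prefactor 5.0 and the mass range are illustrative assumptions, not the paper's measurements.

```python
import math, random

def fit_power_law(masses, rates):
    """Fit P = a * M**k by least squares in log-log coordinates."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(p) for p in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - k * mx)
    return a, k

random.seed(2)
# Synthetic species spanning many orders of magnitude in body mass (kg)
masses = [10 ** random.uniform(-9, 2) for _ in range(200)]
# Metabolic rates following a known power law with multiplicative noise
rates = [5.0 * m ** 0.75 * math.exp(random.gauss(0.0, 0.1)) for m in masses]

a, k = fit_power_law(masses, rates)
```

The wide mass range is what makes the fit stable: with many decades on the log-mass axis, even noisy rates pin down the exponent tightly.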

Journal ArticleDOI
TL;DR: The development and use of a synthetic, discrete event, discrete space model that functions as an epithelio-mimetic device (EMD) is reported, intended to facilitate the study of intestinal transport of drug-like compounds.
Abstract: We report the development and use of a synthetic, discrete event, discrete space model that functions as an epithelio-mimetic device (EMD). It is intended to facilitate the study of intestinal transport of drug-like compounds. We represent passive paracellular and transcellular transport, carrier-mediated transport and active efflux using stand-alone components. Systematic verification of the EMD over a wide physiologically realistic range is essential before we can use it to address questions regarding the details of the interacting mechanisms that are believed to influence absorption. We report details of key verification experiments. We demonstrate that this device can generate behaviors similar to those observed in the in vitro Caco-2 transwell system. To do that we used a series of hypothetical drugs and we simulated behaviors for two clinically used drugs, alfentanil and digoxin. The results support the feasibility and practicability of the EMD as a tool to expand the experimental options for better understanding the biological processes involved in intestinal transport and absorption of compounds of interest.

Journal ArticleDOI
TL;DR: A mathematical model consisting of a two harmful phytoplankton and zooplankton system will be discussed and the analytical findings will be verified through experimental observations which were carried out on the eastern part of the Bay of Bengal for the last three years.
Abstract: Plankton is the basis of the entire aquatic food chain. Phytoplankton, in particular, occupies the first trophic level. Plankton performs services for the Earth: it serves as food for marine life, gives off oxygen and also absorbs half of the carbon dioxide from the Earth's atmosphere. The dynamics of a rapid (or massive) increase or decrease of plankton populations is an important subject in marine plankton ecology and is generally termed a 'bloom'. Harmful algal blooms (HABs) have adverse effects on human health, fishery, tourism, and the environment. In recent years, considerable scientific attention has been given to HABs. Toxic substances released by harmful plankton play an important role in this context. In this paper, a mathematical model of a system consisting of two harmful phytoplankton populations and a zooplankton population will be discussed. The analytical findings will be verified through our experimental observations, which were carried out on the eastern part of the Bay of Bengal over the last three years.
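The abstract does not reproduce the paper's equations, so as a generic stand-in the sketch below integrates a toy two-phytoplankton/one-zooplankton system in which toxin release by the harmful species adds an extra zooplankton mortality term. Every equation and parameter value here is our own illustrative assumption, not the authors' model.

```python
def simulate_plankton(t_max=100.0, dt=0.01):
    """Euler integration of a toy model: two harmful phytoplankton
    species P1, P2 sharing a carrying capacity, grazed by zooplankton Z,
    whose mortality is increased by phytoplankton toxin release.
    All parameters are illustrative assumptions."""
    r1, r2, k = 1.0, 0.8, 1.0          # growth rates, shared capacity
    a1, a2, e = 0.7, 0.7, 0.6          # grazing rates, conversion efficiency
    d, th1, th2 = 0.2, 0.1, 0.1        # death rate, toxin coefficients
    p1, p2, z = 0.3, 0.2, 0.1
    history = []
    for _ in range(int(t_max / dt)):
        s = p1 + p2
        dp1 = r1 * p1 * (1.0 - s / k) - a1 * p1 * z
        dp2 = r2 * p2 * (1.0 - s / k) - a2 * p2 * z
        dz = e * (a1 * p1 + a2 * p2) * z - d * z - (th1 * p1 + th2 * p2) * z
        p1 += dt * dp1
        p2 += dt * dp2
        z += dt * dz
        history.append((p1, p2, z))
    return history

traj = simulate_plankton()
final_p1, final_p2, final_z = traj[-1]
```

Because every rate term is proportional to its own population, the populations stay positive, and the shared carrying capacity keeps the system bounded; the toxin terms th1, th2 damp the grazer and can sustain phytoplankton at bloom-like levels.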

Journal ArticleDOI
TL;DR: An improved method to infer gene regulatory networks from time-series gene expression data sets, using sparse graph theory to overcome the excessive-parameter problem with an adaptive-connectivity model and fitting algorithm and guarantees that the most parsimonious network structure will be found with its incremental adaptive fitting process.
Abstract: Reverse-engineering of gene networks using linear models often results in an underdetermined system because of excessive unknown parameters. In addition, the practical utility of linear models has remained unclear. We address these problems by developing an improved method, EXpression Array MINing Engine (EXAMINE), to infer gene regulatory networks from time-series gene expression data sets. EXAMINE takes advantage of sparse graph theory to overcome the excessive-parameter problem with an adaptive-connectivity model and fitting algorithm. EXAMINE also guarantees that the most parsimonious network structure will be found with its incremental adaptive fitting process. Compared to previous linear models, where a fully connected model is used, EXAMINE reduces the number of parameters by O(N), thereby increasing the chance of recovering the underlying regulatory network. The fitting algorithm increments the connectivity during the fitting process until a satisfactory fit is obtained. We performed a systematic study to explore the data mining ability of linear models. A guideline for using linear models is provided: if the system is small (3-20 elements), more than 90% of the regulation pathways can be determined correctly. For a large-scale system, either clustering is needed or it is necessary to integrate information in addition to the expression profiles. Coupled with the clustering method, we applied EXAMINE to rat central nervous system (CNS) development data with 112 genes. We were able to efficiently generate regulatory networks with statistically significant pathways that have been predicted previously.
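The incremental, parsimony-driven fitting idea can be sketched as forward-stepwise regression on a linear model x(t+1) = W x(t): connections are added one at a time, only while each addition removes a substantial share of the squared error. This is our own toy reconstruction of the idea, not the published EXAMINE algorithm; the three-gene system and all thresholds below are invented for illustration.

```python
import random

def fit_sparse_row(targets, regressors, rel_drop=0.1, max_conn=3):
    """Forward-stepwise sketch: add one regulator at a time to the model
    y = sum_j w_j * x_j, only while the best single addition removes at
    least rel_drop of the initial squared error (parsimony pressure)."""
    weights = {}
    residual = list(targets)
    sse0 = sum(r * r for r in residual)
    for _ in range(max_conn):
        best = None
        for j, xs in enumerate(regressors):
            sxx = sum(v * v for v in xs)
            if sxx == 0.0:
                continue
            w = sum(v * r for v, r in zip(xs, residual)) / sxx
            drop = w * w * sxx            # squared-error reduction
            if best is None or drop > best[2]:
                best = (j, w, drop)
        if best is None or best[2] < rel_drop * sse0:
            break                         # no worthwhile connection left
        j, w, _ = best
        weights[j] = weights.get(j, 0.0) + w
        residual = [r - w * v for r, v in zip(residual, regressors[j])]
    return weights

# Noise-driven toy network: gene i is regulated by exactly one parent.
random.seed(3)
true_net = {0: (1, 0.8), 1: (2, -0.5), 2: (0, 0.6)}   # i <- (parent, weight)
steps, state, states = 200, [0.0, 0.0, 0.0], []
for _ in range(steps + 1):
    states.append(state)
    state = [true_net[i][1] * state[true_net[i][0]] + random.gauss(0.0, 1.0)
             for i in range(3)]

recovered = {}
for i in range(3):
    targets = [states[t + 1][i] for t in range(steps)]
    regs = [[states[t][j] for t in range(steps)] for j in range(3)]
    recovered[i] = fit_sparse_row(targets, regs)
```

The relative-drop stopping rule is what enforces parsimony: the one true regulator per gene survives it, while chance correlations fall below the threshold and no spurious edge is added.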

Journal ArticleDOI
TL;DR: In this paper, it was shown that the wave function of any superposition photon state or states is always objectively and stochastically changed within the complex architecture of the eye in a continuous linear process initially for most of the superposed photons, followed by a discontinuous nonlinear collapse process later for any remaining superposition photons, thereby guaranteeing that only final, measured information is presented to the brain, mind or consciousness.
Abstract: An analysis has been performed of the theories and postulates advanced by von Neumann, London and Bauer, and Wigner, concerning the role that consciousness might play in the collapse of the wave function, which has become known as the measurement problem. This reveals that an error may have been made by them in the area of biology and its interface with quantum mechanics when they called for the reduction of any superposition states in the brain through the mind or consciousness. Many years later Wigner changed his mind to reflect a simpler and more realistic objective position which appears to offer a way to resolve this issue. The argument is therefore made that the wave function of any superposed photon state or states is always objectively and stochastically changed within the complex architecture of the eye in a continuous linear process initially for most of the superposed photons, followed by a discontinuous nonlinear collapse process later for any remaining superposed photons, thereby guaranteeing that only final, measured information is presented to the brain, mind or consciousness. An experiment to be conducted in the near future may enable us to simultaneously resolve the measurement problem and also determine if the linear nature of quantum mechanics is violated by the perceptual process.

Journal ArticleDOI
TL;DR: The Gompertz function is a solution of the operator differential equation with the Morse-like anharmonic potential, which indicates that distribution of intrasystemic forces is both non-linear and asymmetric.
Abstract: The emergence of Gompertzian dynamics at the macroscopic, tissue level during growth and self-organization is determined by the existence of fractal-stochastic dualism at the microscopic level of the supramolecular, cellular system. On one hand, Gompertzian dynamics results from the complex coupling of at least two antagonistic, stochastic processes at the molecular cellular level. It is shown that the Gompertz function is a probability function, its derivative is a probability density function, and the Gompertzian distribution of probability is of non-Gaussian type. On the other hand, the Gompertz function is a contraction mapping and defines fractal dynamics in time-space; a prerequisite condition for the coupling of processes. Furthermore, the Gompertz function is a solution of the operator differential equation with the Morse-like anharmonic potential. This relationship indicates that the distribution of intrasystemic forces is both non-linear and asymmetric. The anharmonic potential is a measure of the intrasystemic interactions. It attains a minimum point (U_0, t_0) along with a change of both complexity and connectivity during growth and self-organization. It can also be modified by certain factors, such as retinoids.
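The probabilistic reading of the Gompertz function stated above is easy to check numerically: G(t) = exp(-b·e^(-c·t)) rises monotonically from 0 to 1 like a cumulative distribution function, its derivative is a right-skewed (hence non-Gaussian) density, and the mean and mode of that density differ. The parameter values b = 5, c = 1 below are arbitrary illustrations.

```python
import math

def gompertz(t, b=5.0, c=1.0):
    """Gompertz function G(t) = exp(-b * exp(-c t)); monotone from 0 to 1."""
    return math.exp(-b * math.exp(-c * t))

def gompertz_pdf(t, b=5.0, c=1.0):
    # dG/dt = b c exp(-c t) G(t): the density in the abstract's sense
    return b * c * math.exp(-c * t) * gompertz(t, b, c)

# Numeric checks: the derivative integrates to G(+inf) - G(-inf) ~ 1,
# and the density is asymmetric (mean != mode), i.e. non-Gaussian.
dt = 0.001
ts = [-10.0 + i * dt for i in range(30000)]   # grid covering -10 .. 20
area = sum(gompertz_pdf(t) * dt for t in ts)
mode = max(ts, key=gompertz_pdf)
mean = sum(t * gompertz_pdf(t) * dt for t in ts)
```

Analytically the mode sits at t = ln(b)/c while the mean lies to its right, so mean > mode: the asymmetry that marks the distribution as non-Gaussian.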