
Showing papers presented at "Computational Methods in Systems Biology in 2012"


Book ChapterDOI
03 Oct 2012
TL;DR: In this article, the authors present a case study on the use of robustness-guided and statistical model checking approaches for simulating risks due to insulin infusion pump usage by diabetic patients.
Abstract: We present a case study on the use of robustness-guided and statistical model checking approaches for simulating risks due to insulin infusion pump usage by diabetic patients. Insulin infusion pumps allow for a continuous delivery of insulin with varying rates and delivery profiles to help patients self-regulate their blood glucose levels. However, the use of infusion pumps and continuous glucose monitors can pose risks to the patient including chronically elevated blood glucose levels (hyperglycemia) or dangerously low glucose levels (hypoglycemia). In this paper, we use mathematical models of the basic insulin-glucose regulatory system in a diabetic patient, insulin infusion pumps, and the user's interaction with these pumps defined by commonly used insulin infusion strategies for maintaining normal glucose levels. These strategies include common guidelines taught to patients by physicians and certified diabetes educators and have been implemented in commercially available insulin bolus calculators. Furthermore, we model the failures in the devices themselves along with common errors in the usage of the pump. We compose these models together and analyze them using two related techniques: (a) robustness guided state-space search to explore worst-case scenarios and (b) statistical model checking techniques to assess the probabilities of hyper- and hypoglycemia risks. Our technique can be used to identify the worst-case effects of the combination of many different kinds of failures and place high confidence bounds on their probabilities.
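As a rough illustration of the statistical model checking step described above, the sketch below estimates the probability of a hyperglycemia event by Monte Carlo simulation of a toy glucose model with random pump failures, and attaches a Hoeffding confidence half-width to the estimate. All dynamics, rates, and thresholds here are invented for illustration and bear no relation to the clinical models used in the paper.

```python
import math
import random

def simulate_glucose(hours=24.0, dt=0.1, basal_rate=1.0, fail_prob=0.01):
    """Toy stochastic simulation of blood glucose (mg/dL) under basal
    insulin delivery, with random pump failure/recovery events.
    All dynamics and parameters are invented for illustration."""
    g, t, pump_on = 120.0, 0.0, True
    while t < hours:
        if random.random() < fail_prob * dt:  # rare pump fault toggles delivery
            pump_on = not pump_on
        insulin = basal_rate if pump_on else 0.0
        # endogenous glucose production minus insulin-dependent uptake
        g += (4.0 - 2.0 * insulin * (g / 120.0)) * dt
        g += random.gauss(0.0, 1.0) * math.sqrt(dt)  # meal/sensor noise
        t += dt
    return g

def estimate_risk(n_runs=2000, threshold=180.0, delta=1e-3):
    """Monte Carlo estimate of P(final glucose > threshold), with a
    Hoeffding half-width eps: the true probability lies within
    [p_hat - eps, p_hat + eps] with probability at least 1 - delta."""
    hits = sum(simulate_glucose() > threshold for _ in range(n_runs))
    p_hat = hits / n_runs
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n_runs))
    return p_hat, eps
```

Increasing `n_runs` shrinks the half-width as 1/sqrt(n), which is how statistical model checkers trade simulation budget for confidence.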

35 citations


Book ChapterDOI
03 Oct 2012
TL;DR: A new methodology, supported by a tool chain, is proposed for the identification and analysis of discrete gene networks as defined by René Thomas, together with two ways of visualising parametrization dynamics with respect to time-series data.
Abstract: We propose a new methodology for the identification and analysis of discrete gene networks as defined by René Thomas, supported by a tool chain: (i) given a Thomas network with partially known kinetic parameters, we reduce the number of acceptable parametrizations to those that fit time-series measurements and reflect other known constraints, using an improved technique of coloured LTL model checking that performs efficiently on Thomas networks in a distributed environment; (ii) we introduce a classification of acceptable parametrizations to identify the most suitable ones; (iii) we propose two ways of visualising parametrization dynamics with respect to time-series data. Finally, computational efficiency is evaluated and the methodology is validated on a bacteriophage λ case study.

31 citations


Book ChapterDOI
03 Oct 2012
TL;DR: Within the logical (Boolean or multi-valued) asynchronous framework, a reduction strategy for large signalling and regulatory networks is delineated; reduction methods that preserve reachability of the attractors are introduced and combined with model-checking approaches.
Abstract: Considering the logical (Boolean or multi-valued) asynchronous framework, we delineate a reduction strategy for large signalling and regulatory networks. Consequently, focusing on the core network that drives the whole dynamics, we can check which attractors are reachable from given initial conditions, under fixed or varying environmental conditions. More specifically, the dynamics of logical models are represented by (asynchronous) state transition graphs that grow exponentially with the number of model components. We introduce adequate reduction methods (preserving reachability of the attractors) and proceed with model-checking approaches. Input nodes (which generally represent receptors) and output nodes (which constitute readouts of network behaviours) are each specifically processed to reduce the state space. The proposed approach is made available within GINsim, our software dedicated to the definition and analysis of logical models. The new GINsim functionalities consist of a proper reduction of output components, as well as the corresponding symbolic encoding of logical models for the NuSMV model checker. This encoding also includes a reduction over input components (transferring their values from states to transition labels). Finally, we demonstrate the interest of the proposed methods through their application to a published large-scale model of the signalling pathway involved in T cell activation.
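For readers unfamiliar with the logical asynchronous framework, the toy sketch below builds the full asynchronous state transition graph of a two-component Boolean network and reads off its stable states. It is purely illustrative: the paper's reduction methods and NuSMV encoding exist precisely to avoid enumerating this exponentially large graph.

```python
from itertools import product

def async_stg(rules):
    """Asynchronous state transition graph of a Boolean network.
    `rules` maps each component index to an update function over the
    full state tuple; in the asynchronous semantics, each transition
    flips exactly one component."""
    n = len(rules)
    edges = {}
    for state in product((0, 1), repeat=n):
        succ = []
        for i, f in rules.items():
            new = f(state)
            if new != state[i]:  # component i is unstable and may flip
                succ.append(state[:i] + (new,) + state[i + 1:])
        edges[state] = succ
    return edges

# Two mutually repressing components: x1 = not x2, x2 = not x1.
rules = {0: lambda s: 1 - s[1], 1: lambda s: 1 - s[0]}
stg = async_stg(rules)
attractors = [s for s, succ in stg.items() if not succ]  # stable states only
print(sorted(attractors))  # -> [(0, 1), (1, 0)]
```

With n components the graph has 2^n states, which is why reductions over input and output nodes matter for large signalling networks.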

28 citations


Book ChapterDOI
03 Oct 2012
TL;DR: A novel simulator for PDP systems, accelerated by the computational power of GPUs, is introduced, achieving up to a 7x speedup on an NVIDIA Tesla C1060 compared to an optimized multicore version on an Intel 4-core i5 Xeon for large systems.
Abstract: Population Dynamics P systems (PDP systems, in short) provide a new formal bio-inspired modeling framework, which has been successfully used by ecologists. These models are validated using software tools against actual measurements. The goal is to use P systems simulations to adopt a priori management strategies for real ecosystems. Software for PDP systems is still at an early stage, and the simulation of PDP systems is both computationally and data intensive for large models; the development of efficient simulators is therefore needed in this field. In this paper, we introduce a novel simulator for PDP systems accelerated by the computational power of GPUs. We discuss the implementation of each part of the simulator, and show how to achieve up to a 7x speedup on an NVIDIA Tesla C1060 compared to an optimized multicore version on an Intel 4-core i5 Xeon for large systems. Other results and testing methodologies are also included.

22 citations


Book ChapterDOI
03 Oct 2012
TL;DR: In this paper, the authors solve the problem of training Boolean logic models to data using Answer Set Programming (ASP), a declarative problem-solving paradigm in which a problem is encoded as a logical program whose answer sets represent solutions.
Abstract: A fundamental question in systems biology is the construction of mathematical models and their training to data. Logic formalisms have become very popular for modelling signaling networks because their simplicity allows us to model large systems encompassing hundreds of proteins. An approach to train (Boolean) logic models to high-throughput phospho-proteomics data was recently introduced and solved using optimization heuristics based on stochastic methods. Here we demonstrate how this problem can be solved using Answer Set Programming (ASP), a declarative problem-solving paradigm in which a problem is encoded as a logical program such that its answer sets represent solutions to the problem. ASP offers significant improvements over heuristic methods in terms of efficiency and scalability: it guarantees global optimality of solutions and provides the complete set of solutions. We illustrate the application of ASP with in silico cases based on realistic networks and data.
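The brute-force Python sketch below illustrates, on a deliberately tiny scale, the guarantees the abstract attributes to ASP: it enumerates every candidate Boolean rule for one node and returns the complete set of globally optimal fits to the data. The real approach encodes the whole training problem for an ASP solver (e.g. clingo); this toy enumeration is only meant to make "global optimality" and "complete solution set" tangible, and the rule names and data are invented.

```python
# Candidate Boolean update rules for one node, each a function of two inputs.
RULES = {
    "AND": lambda a, b: a and b,
    "OR":  lambda a, b: a or b,
    "A":   lambda a, b: a,
    "B":   lambda a, b: b,
}

def fit_node(data):
    """Return ALL rules minimising mismatches with the
    (input_a, input_b, observed_output) triples -- a complete,
    globally optimal solution set, unlike a stochastic heuristic."""
    scores = {name: sum(int(f(a, b)) != y for a, b, y in data)
              for name, f in RULES.items()}
    best = min(scores.values())
    return sorted(name for name, s in scores.items() if s == best), best

data = [(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 0)]  # consistent with AND
print(fit_node(data))  # -> (['AND'], 0)
```

An ASP solver performs the analogous exhaustive-but-pruned search over entire networks, which is where the scalability gains over stochastic heuristics come from.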

19 citations


Book ChapterDOI
03 Oct 2012
TL;DR: BioModelKit, as discussed by the authors, is a modular modelling approach permitting curation, updating, and distributed development of modules through joint community effort, overcoming the problem of keeping a combinatorially exploding number of monolithic models up to date.
Abstract: We describe a modular modelling approach permitting curation, updating, and distributed development of modules through joint community effort, overcoming the problem of keeping a combinatorially exploding number of monolithic models up to date. For this purpose, the effects of genes and their mutated alleles on downstream components are modelled by composable, metadata-containing Petri net models organized in a database with version control, accessible through a web interface (www.biomodelkit.org). Gene modules can be coupled to protein modules through mRNA modules by specific interfaces designed for automatic, database-assisted composition. Automatically assembled executable models may then consider cell type-specific gene expression patterns and the resulting protein concentrations. Gene modules and allelic interference modules may represent effects of gene mutation and predict their pleiotropic consequences or uncover complex genotype/phenotype relationships. Forward- and reverse-engineered modules are fully compatible.

18 citations


Book ChapterDOI
03 Oct 2012
TL;DR: This work emphasizes the ability of PH to deal with large BRNs with incomplete knowledge of cooperations, where Thomas' approach fails because of the combinatorics of parameters.
Abstract: The Process Hitting (PH) is a recently introduced framework to model concurrent processes. Its major originality lies in a specific restriction on the causality of actions, which makes the formal analysis of very large systems tractable. PH is suitable for modelling Biological Regulatory Networks (BRNs) with complete or partial knowledge of cooperations between regulators, by defining the most permissive dynamics with respect to these constraints. On the other hand, the qualitative modelling of BRNs has been widely addressed using René Thomas' formalism, leading to numerous theoretical works and practical tools to understand emerging behaviours. Given a PH model of a BRN, we first tackle the inference of the underlying Interaction Graph between components. Then the inference of corresponding Thomas' models is provided using Answer Set Programming, which notably allows an efficient enumeration of (possibly numerous) compatible parametrizations. In addition to giving a formal link between different approaches to qualitative BRN modelling, this work emphasizes the ability of PH to deal with large BRNs with incomplete knowledge of cooperations, where Thomas' approach fails because of the combinatorics of parameters.

17 citations


Book ChapterDOI
03 Oct 2012
TL;DR: This work proposes a computational framework to design in silico robust bacteria able to overproduce multiple metabolites and introduces a multi-objective optimisation algorithm, called Genetic Design through Multi-Objective (GDMO), and test it in several organisms to maximise the production of key intermediate metabolites.
Abstract: In this work, we propose a computational framework to design in silico robust bacteria able to overproduce multiple metabolites. To this end, we search for the optimal genetic manipulations, in terms of knockouts, that also guarantee the growth of the organism. We introduce a multi-objective optimisation algorithm, called Genetic Design through Multi-Objective (GDMO), and test it in several organisms to maximise the production of key intermediate metabolites such as succinate and acetate. We obtain a vast set of Pareto-optimal solutions, each of which represents an organism strain. For each solution, we evaluate the fragility by calculating three robustness indexes and by exploring reaction and metabolite interactions. Finally, we perform a sensitivity analysis of the metabolic model, which finds the inputs with the highest influence on the outputs of the model. We show that our methodology provides an effective view of the achievable synthetic-strain landscape and a powerful design pipeline.

14 citations


Book ChapterDOI
03 Oct 2012
TL;DR: It is shown that in the context of the Iyer et al. 67-variable cardiac myocyte model (IMW), it is possible to replace the detailed 13-state probabilistic model of the sodium channel dynamics with a much simpler Hodgkin-Huxley (HH)-like two-state sodium channel model, while only incurring a bounded approximation error.
Abstract: We show that in the context of the Iyer et al. 67-variable cardiac myocyte model (IMW), it is possible to replace the detailed 13-state probabilistic model of the sodium channel dynamics with a much simpler Hodgkin-Huxley (HH)-like two-state sodium channel model, while only incurring a bounded approximation error. The technical basis for this result is the construction of an approximate bisimulation between the HH and IMW sodium channel models, both of which are input-controlled (by voltage in this case) CTMCs. The construction of the appropriate approximate bisimulation, as well as the overall result regarding the behavior of this modified IMW model, involves: (1) identification of the voltage-dependent parameters of the m and h gates in the HH-type channel via a two-step fitting process, carried out over more than 22,000 representative observational traces of the IMW channel; (2) proving that the distance between observations of the two channels is bounded; (3) exploring the sensitivity of the overall IMW model to the HH-type sodium-channel approximation. Our extensive simulation results experimentally validate our findings for varying IMW-type input stimuli.

12 citations


Book ChapterDOI
03 Oct 2012
TL;DR: In this paper, the authors use high-resolution MRI and block-face images to provide supporting volumetric datasets to guide spatial reintegration of 2D histological section data, and present recent developments in sample preparation, data acquisition, and image processing.
Abstract: Cardiac histo-anatomical structure is a key determinant in all aspects of cardiac function. While some characteristics of micro- and macrostructure can be quantified using non-invasive imaging methods, histology is still the modality that provides the best combination of resolution and identification of cellular/sub-cellular substrate identities. The main limitation of histology is that it does not provide inherently consistent three-dimensional (3D) volume representations. This paper presents methods developed within our group to reconstruct 3D histological datasets. It includes the use of high-resolution MRI and block-face images to provide supporting volumetric datasets to guide spatial reintegration of 2D histological section data, and presents recent developments in sample preparation, data acquisition, and image processing.

10 citations


Book ChapterDOI
03 Oct 2012
TL;DR: A discrete theoretical framework based on the analysis of the asymptotic dynamics of biological interaction networks is developed; it enables a finer analysis, providing a decomposition into elementary modules, possibly smaller than strongly connected components.
Abstract: This paper investigates questions related to modularity in biological interaction networks. We develop a discrete theoretical framework based on the analysis of the asymptotic dynamics of biological interaction networks. More precisely, we exhibit formal conditions under which agents of interaction networks can be grouped into modules, forming a modular organisation. Our main result is that the conventional decomposition into strongly connected components fulfils the formal conditions of being a modular organisation. We also propose a modular and incremental algorithm for efficient equilibria computation. Furthermore, we point out that our framework enables a finer analysis, providing a decomposition into elementary modules, possibly smaller than strongly connected components.
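Since the paper's main result concerns the decomposition into strongly connected components, the sketch below computes that decomposition for a toy directed interaction graph using Kosaraju's algorithm. The graph and node names are invented for illustration; the paper's contribution is the formal proof that such components qualify as modules, not the SCC algorithm itself.

```python
def sccs(graph):
    """Kosaraju's algorithm: decompose a directed interaction graph
    (dict: node -> successor list) into strongly connected components,
    the candidate modules of a modular organisation."""
    order, seen = [], set()

    def forward(v):
        seen.add(v)
        for w in graph.get(v, []):
            if w not in seen:
                forward(w)
        order.append(v)  # record post-order finishing time

    for v in graph:
        if v not in seen:
            forward(v)

    rev = {v: [] for v in graph}  # reversed graph
    for v, succs in graph.items():
        for w in succs:
            rev.setdefault(w, []).append(v)

    comps, done = [], set()

    def backward(v, comp):
        done.add(v)
        comp.append(v)
        for w in rev.get(v, []):
            if w not in done:
                backward(w, comp)

    for v in reversed(order):  # highest finishing time first
        if v not in done:
            comp = []
            backward(v, comp)
            comps.append(sorted(comp))
    return comps

# Toy gene interaction graph: a <-> b form one module; c is downstream.
g = {"a": ["b"], "b": ["a", "c"], "c": []}
print(sccs(g))  # -> [['a', 'b'], ['c']]
```

The components come out in reverse topological order, which is convenient for the kind of incremental, module-by-module equilibria computation the abstract mentions.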

Book ChapterDOI
03 Oct 2012
TL;DR: An abstraction technique to divide a graph into local structures is introduced and it is concluded that abstraction is a useful tool to analyze complex molecular reaction systems and measure their complexity.
Abstract: We propose a technique to simulate molecular reaction systems efficiently by abstracting graph models. Graphs (or networks) and their transitions give rise to simple but powerful models for molecules and their chemical reactions. Depending on the purpose of a graph-based model, nodes and edges of a graph may correspond to molecular units and chemical bonds, respectively. This kind of model provides naive simulations of molecular reaction systems by applying chemical kinetics to graph transition. Such naive models, however, can immediately cause a combinatorial explosion of the number of molecular species because combination of chemical bonds is usually unbounded, which makes simulation intractable. To overcome this problem, we introduce an abstraction technique to divide a graph into local structures. New abstracted models for simulating DNA hybridization systems and RNA interference are explained as case studies to show the effectiveness of our abstraction technique. We then discuss the trade-off between the efficiency and exactness of our abstracted models from the aspect of the number of structures and simulation error. We classify molecular reaction systems into three groups according to the assumptions on reactions. The first one allows efficient and exact abstraction, the second one allows efficient but approximate abstraction, and the third one does not reduce the number of structures by abstraction. We conclude that abstraction is a useful tool to analyze complex molecular reaction systems and measure their complexity.

Book ChapterDOI
03 Oct 2012
TL;DR: The potential of HASL-based verification is demonstrated in the context of genetic circuits, allowing the "performances" of a biological system to be assessed beyond the capability of other stochastic logics.
Abstract: The recently introduced Hybrid Automata Stochastic Logic (HASL) establishes a powerful framework for the analysis of a broad class of stochastic processes, namely Discrete Event Stochastic Processes (DESPs). Here we demonstrate the potential of HASL-based verification in the context of genetic circuits. To this aim, we consider the analysis of a model of gene expression with delayed stochastic dynamics, a class of systems whose dynamics includes both Markovian and non-Markovian events. We identify a number of relevant properties related to this model, formally express them in HASL terms, and assess them with COSMOS, a statistical model checker for HASL model checking. We demonstrate that this allows assessing the "performances" of a biological system beyond the capability of other stochastic logics.

Book ChapterDOI
03 Oct 2012
TL;DR: This work presents the Evolving Process Algebra EPA framework which combines an evolutionary computation approach with process algebra modelling to produce parameter distribution data that provides insight into the parameter space of the biological system under investigation.
Abstract: Process algebras are an effective method for defining models of complex interacting biological processes, but defining a model requires expertise from both modeller and domain expert. In addition, even with the right model, tuning parameters to allow model outputs to match experimental data can be difficult. This is the well-known parameter fitting problem. Evolutionary algorithms provide effective methods for finding solutions to optimisation problems with large search spaces and are well suited to investigating parameter fitting problems. We present the Evolving Process Algebra EPA framework which combines an evolutionary computation approach with process algebra modelling to produce parameter distribution data that provides insight into the parameter space of the biological system under investigation. The EPA framework is demonstrated through application to a novel example: T helper cell activation in the immune system in the presence of co-infection.

Book ChapterDOI
03 Oct 2012
TL;DR: It is shown that the time to reach the bimodal distribution depends on the magnitude of cell-to-cell variability, and this time is quantified using the Kullback-Leibler divergence.
Abstract: Bimodal distributions of protein activities in signaling systems are often interpreted as indicators of underlying switch-like responses and bistable dynamics. We investigate the emergence of bimodal protein distributions by analyzing a less appreciated mechanism: oscillating signaling systems with varying amplitude, phase and frequency due to cell-to-cell variability. We support our analysis by analytical derivations for basic oscillators and numerical simulations of a signaling cascade, which displays sustained oscillations in protein activities. Importantly, we show that the time to reach the bimodal distribution depends on the magnitude of cell-to-cell variability. We quantify this time using the Kullback-Leibler divergence. The implications of our findings for single-cell experiments are discussed.
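To make the quantification concrete, the illustrative sketch below uses the discrete Kullback-Leibler divergence to measure how a population of sine oscillators with cell-to-cell frequency variability approaches its decohered, bimodal limit distribution over time. The oscillator and all parameters are toy stand-ins for the signalling cascades analysed in the paper.

```python
import math
import random

def kl(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) in nats,
    with a small eps to guard against empty histogram bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def snapshot_hist(t, n_cells=5000, bins=40):
    """Histogram of sin-oscillator activities across a cell population
    with random per-cell frequency (cell-to-cell variability), observed
    at time t. Purely illustrative toy model."""
    counts = [0] * bins
    for _ in range(n_cells):
        omega = random.gauss(1.0, 0.1)   # per-cell frequency
        x = math.sin(omega * t)          # activity in [-1, 1]
        b = min(int((x + 1.0) / 2.0 * bins), bins - 1)
        counts[b] += 1
    return [c / n_cells for c in counts]

random.seed(0)
early, late = snapshot_hist(t=1.0), snapshot_hist(t=200.0)
# At late times the phases have decohered and the population distribution
# approaches a bimodal (arcsine-like) limit; the KL divergence to a
# long-time reference snapshot therefore shrinks over time.
limit = snapshot_hist(t=10000.0)
print(kl(early, limit) > kl(late, limit))  # -> True
```

The time at which the divergence to the limit becomes small plays the role of the "time to reach the bimodal distribution", and grows as the frequency spread (here 0.1) is reduced.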

Book ChapterDOI
03 Oct 2012
TL;DR: In this paper, the authors proposed a reduction for cell-to-cell communication based on symmetries of the underlying reaction network, which is applicable to a broad range of highly regular systems.
Abstract: For models of cell-to-cell communication, with many reactions and species per cell, the computational cost of stochastic simulation soon becomes intractable. Deterministic methods, while computationally more efficient, may fail to contribute reliable approximations for those models. In this paper, we suggest a reduction for models of cell-to-cell communication, based on symmetries of the underlying reaction network. To carry out a stochastic analysis that otherwise comes at an excessive computational cost, we apply a moment closure (MC) approach. We illustrate with a community effect, that allows synchronization of a group of cells in animal development. Comparing the results of stochastic simulation with deterministic and MC approximation, we show the benefits of our approach. The reduction presented here is potentially applicable to a broad range of highly regular systems.

Book ChapterDOI
03 Oct 2012
TL;DR: A model of a minimal synthetic gene circuit is developed that describes part of the gene expression machinery in Escherichia coli, and enables the control of the growth rate of the cells during the exponential phase, and is validated in silico with synthetic measurements.
Abstract: We develop and analyze a model of a minimal synthetic gene circuit that describes part of the gene expression machinery in Escherichia coli and enables the control of the growth rate of the cells during the exponential phase. This model is a piecewise non-linear system with two variables (the concentrations of two gene products) and an input (an inducer). We study the qualitative dynamics of the model and the bifurcation diagram with respect to the input. Moreover, an analytic expression of the growth rate during the exponential phase as a function of the input is derived. A relevant problem is the identifiability of the parameters of this expression given noisy measurements of the exponential growth rate. We present such an identifiability study and validate it in silico with synthetic measurements.

Book ChapterDOI
03 Oct 2012
TL;DR: Before fixing kinetics and parameter values, various complementary analyses can be performed on the structure of an ODE model; these tools do not rely on kinetic information, but on the structure of the model's reactions.
Abstract: Many models in Systems Biology are described as Ordinary Differential Equations (ODEs), which allows for numerical integration, bifurcation analyses, parameter sensitivity analyses, etc. Before fixing the kinetics and parameter values, however, various analyses can be performed on the structure of the model. This approach has developed rapidly in Systems Biology over the last decade, with, for instance, the analysis of structural invariants in Petri net representations [4], model reductions by subgraph epimorphisms [2], qualitative attractors in logical dynamics, or temporal logic properties by analogy to circuit and program verification. These complementary analysis tools do not rely on kinetic information, but on the structure of the model's reactions.

Book ChapterDOI
03 Oct 2012
TL;DR: ManyCell, as discussed by the authors, is a multiscale simulation software environment for efficient simulation of cellular systems, which not only allows the integration and simulation of models from different biological scales, but also combines innovative multiscale methods with distributed computing approaches to accelerate the simulation of large-scale multiscale agent-based models.
Abstract: The emergent properties of multiscale biological systems are driven by the complex interactions of their internal compositions, usually organized in hierarchical scales. A common representation takes cells as the basic units, organized in larger structures: cultures, tissues and organs. Within cells there is also a great deal of organization, both structural (organelles) and biochemical (pathways). A software environment capable of minimizing the computational cost of simulating large-scale multiscale models is required to help understand the functional behaviours of these systems. Here we present ManyCell, a multiscale simulation software environment for efficient simulation of such cellular systems. ManyCell not only allows the integration and simulation of models from different biological scales, but also combines innovative multiscale methods with distributed computing approaches to accelerate the process of simulating large-scale multiscale agent-based models, thereby opening up the possibility of understanding the functional behaviour of cellular systems in an efficient way.

Book ChapterDOI
03 Oct 2012
TL;DR: A detailed model of the JAK-STAT pathway in IL-6 signaling is presented as a non-trivial case study demonstrating a new database-supported modular modelling method that allows one to easily generate and modify coherent, executable models composed from a collection of modules.
Abstract: We present a detailed model of the JAK-STAT pathway in IL-6 signaling as a non-trivial case study demonstrating a new database-supported modular modelling method. A module is a self-contained and autonomous Petri net, centred around an individual protein. The modelling approach allows one to easily generate and modify coherent, executable models composed from a collection of modules and provides numerous options for advanced biomodel engineering.

Book ChapterDOI
03 Oct 2012
TL;DR: GeneFuncster is a tool that analyses functional enrichment in both short filtered gene lists and full unfiltered gene lists against both GO and KEGG, and provides comprehensive result visualisation for both databases.
Abstract: Many freely available tools exist for analysing functional enrichment among short filtered or long unfiltered gene lists. These analyses are typically performed against either the Gene Ontology (GO) or the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway database. The functionality to carry out these various analyses is currently scattered across different tools, many of which are also very limited regarding result visualisation. GeneFuncster is a tool that can analyse functional enrichment in both short filtered gene lists and full unfiltered gene lists against both GO and KEGG, and provide comprehensive result visualisation for both databases. GeneFuncster is a simple-to-use, publicly available web tool accessible at http://bioinfo.utu.fi/GeneFuncster .

Book ChapterDOI
03 Oct 2012
TL;DR: This study explores the disjoint and overlapping community structure of an integrated network for a major fungal pathogen of many cereal crops, Fusarium graminearum and shows that genes that lie at the intersection of communities tend to be highly connected and multifunctional.
Abstract: Exploring the community structure of biological networks can reveal the roles of individual genes in the context of the entire biological system, so as to understand the underlying mechanisms of interaction. In this study we explore the disjoint and overlapping community structure of an integrated network for Fusarium graminearum, a major fungal pathogen of many cereal crops. The network was generated by combining sequence, protein interaction and co-expression data. We examine the functional characteristics of communities, the connectivity and multi-functionality of genes, and explore the contribution of known virulence genes to community structure. Disjoint community structure is detected using a greedy agglomerative method based on modularity optimisation. The disjoint partition is then converted to a set of overlapping communities, where genes are allowed to belong to more than one community, through the application of a mathematical programming method. We show that genes that lie at the intersection of communities tend to be highly connected and multifunctional. Overall, we consider the topological and functional properties of proteins in the context of the community structure and try to make a connection between virulence genes and features of community structure. Such studies may have the potential to identify functionally important nodes and help to gain a better understanding of the phenotypic features of a system.

Book ChapterDOI
03 Oct 2012
TL;DR: The aim is to assess the suitability of Bio- PEPA for more detailed modelling of Src trafficking than that of a previous simpler Bio-PEPA model.
Abstract: Bio-PEPA [2], a process algebra developed from PEPA [5], is used to model a process occurring in mammalian cells whereby the Src oncoprotein is trafficked between different parts of the cell [9]. Src is associated with cell movement and adhesion between cells, which is linked to tumour formation [9]. A useful model of the protein's behaviour can provide predictions for new experimental hypotheses which may improve our understanding and, in time, lead to new therapies for cancer. The aim is to assess the suitability of Bio-PEPA for more detailed modelling of Src trafficking than that of a previous, simpler Bio-PEPA model [4].

Book ChapterDOI
03 Oct 2012
TL;DR: A core Ontology of Biomodelling is presented, which formally defines the principal entities of the modelling of biological systems and follows a structural approach for the engineering of biochemical network models.
Abstract: We present a core Ontology of Biomodelling (OBM), which formally defines the principal entities of the modelling of biological systems and follows a structural approach for the engineering of biochemical network models. OBM is fully interoperable with relevant resources, e.g. GO, SBML and ChEBI, and the recording of biomodelling knowledge with the Ontology for Biomedical Investigations (OBI) ensures efficient sharing and re-use of information, reproducibility of developed biomodels, and retrieval of information regarding tools, methods, tasks, biomodels and their parts. An initial version of OBM is available at disc.brunel.ac.uk/obm .

Book ChapterDOI
03 Oct 2012
TL;DR: The first results of ongoing work investigating two models of the artificial inducible promoter Tet-On that include epigenetic regulation are presented, considering chromatin states and 1D diffusion of transcription factors that reveal stochastic noise and a memory effect.
Abstract: We present the first results of ongoing work investigating two models of the artificial inducible promoter Tet-On that include epigenetic regulation. We consider chromatin states and 1D diffusion of transcription factors that reveal, respectively, stochastic noise and a memory effect.

Book ChapterDOI
03 Oct 2012
TL;DR: A three-gene genetic oscillator is used as a model of oscillators coupled by quorum sensing, implemented as the production of a diffusive molecule, an autoinducer, which stimulates expression of the target gene within the oscillator's core, providing a positive feedback.
Abstract: We used a three-gene genetic oscillator as a model of oscillators coupled by quorum sensing, implemented as the production of a diffusive molecule, an autoinducer. The autoinducer stimulates expression of the target gene within the oscillator's core, providing a positive feedback. Previous studies suggest that there is a hysteresis in the system between the oscillatory (OS) and stationary (SS) dynamical solutions. We question the robustness of these attractors in the presence of molecular noise, which exists due to the small number of molecules in the characteristic processes of gene expression. We show distributions of return times of the OS near and within the hysteresis region. The SS is revealed by the increase in return-time duration as the system approaches hysteresis. Moreover, the amplitude of stochastic oscillations is larger because of the sensitivity of the system to the steady state even outside of the hysteresis. The sensitivity is caused by stochastic drift in the parameter space.

Book ChapterDOI
03 Oct 2012
TL;DR: Using a stochastic model of transcription and translation at the nucleotide and codon levels, it is found that the ribosome binding site region sequence affects mean expression rates and in the genetic toggle switch, the sequence is shown to affect the switching frequency.
Abstract: The sequence of a gene determines the protein sequence and structure, but to some extent also the kinetics of protein production. Namely, the DNA and the codon sequence affect the kinetics of transcription and translation elongation, respectively. Here, using a stochastic model of transcription and translation at the nucleotide and codon levels, we investigate the effects of the codon sequence on the dynamics of single gene expression and of a genetic switch. We find that the ribosome binding site region sequence affects mean expression rates. In the genetic toggle switch, the sequence is shown to affect the switching frequency.
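As a rough companion to the abstract above, the sketch below runs a Gillespie simulation of a coarse-grained two-protein toggle switch and reports its switching frequency. It operates at the protein level only, whereas the paper models transcription and translation at nucleotide and codon resolution, and all rate constants here are invented for illustration.

```python
import random

def toggle_ssa(t_end=2000.0, k_prod=10.0, k_deg=0.1, n_hill=2, K=15.0):
    """Gillespie (SSA) simulation of a two-gene toggle switch in which
    each protein represses the other's production via a Hill function.
    Returns the observed switching frequency (flips of the dominant
    protein per unit time). Rates are illustrative only."""
    a, b = 30, 0                      # start in the 'A high' state
    t, switches, state = 0.0, 0, True  # state == (a > b)
    while t < t_end:
        rates = [
            k_prod * K**n_hill / (K**n_hill + b**n_hill),  # produce A
            k_prod * K**n_hill / (K**n_hill + a**n_hill),  # produce B
            k_deg * a,                                     # degrade A
            k_deg * b,                                     # degrade B
        ]
        total = sum(rates)
        t += random.expovariate(total)      # time to next reaction
        r, pick = random.random() * total, 0
        while r > rates[pick]:              # choose reaction by weight
            r -= rates[pick]
            pick += 1
        a += (pick == 0) - (pick == 2)
        b += (pick == 1) - (pick == 3)
        if (a > b) != state:                # dominant protein flipped
            state = a > b
            switches += 1
    return switches / t_end
```

In the paper, changing the ribosome binding site or codon sequence effectively reshapes the production kinetics (roughly, `k_prod` and its burst structure here), which is what shifts the switching frequency.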

Book ChapterDOI
Denis Noble
03 Oct 2012
TL;DR: This lecture uses an integrative systems biological view of the relationship between genotypes and phenotypes to clarify some conceptual problems in biological debates about causality and highlights the role of non-DNA forms of inheritance.
Abstract: This lecture uses an integrative systems biological view of the relationship between genotypes and phenotypes to clarify some conceptual problems in biological debates about causality. The differential (gene-centric) view is incomplete in a sense analogous to using differentiation without integration in mathematics. Differences in genotype are frequently not reflected in significant differences in phenotype as they are buffered by networks of molecular interactions capable of substituting an alternative pathway to achieve a given phenotype characteristic when one pathway is removed. Those networks integrate the influences of many genes on each phenotype so that the effect of a modification in DNA depends on the context in which it occurs. Mathematical modelling of these interactions can help to understand the mechanisms of buffering and the contextual-dependence of phenotypic outcome, and so to represent correctly and quantitatively the relations between genomes and phenotypes. By incorporating all the causal factors in generating a phenotype, this approach also highlights the role of non-DNA forms of inheritance, and of the interactions at multiple levels.