scispace - formally typeset
Author

Antonella Iuliano

Other affiliations: University of Cambridge, IAC
Bio: Antonella Iuliano is an academic researcher from the University of Salerno. The author has contributed to research in the topics of Medicine and the telegraph process, has an h-index of 7, and has co-authored 22 publications receiving 151 citations. Previous affiliations of Antonella Iuliano include the University of Cambridge and the IAC.

Papers
Journal ArticleDOI
TL;DR: The data support the use of split intein–mediated protein trans-splicing in combination with AAV subretinal delivery for gene therapy of inherited blindness due to mutations in large genes.
Abstract: Retinal gene therapy with adeno-associated viral (AAV) vectors holds promise for treating inherited and noninherited diseases of the eye. Although clinical data suggest that retinal gene therapy is safe and effective, delivery of large genes is hindered by the limited AAV cargo capacity. Protein trans-splicing mediated by split inteins is used by single-cell organisms to reconstitute proteins. Here, we show that delivery of multiple AAV vectors each encoding one of the fragments of target proteins flanked by short split inteins results in protein trans-splicing and full-length protein reconstitution in the retina of mice and pigs and in human retinal organoids. The reconstitution of large therapeutic proteins using this approach improved the phenotype of two mouse models of inherited retinal diseases. Our data support the use of split intein–mediated protein trans-splicing in combination with AAV subretinal delivery for gene therapy of inherited blindness due to mutations in large genes.

81 citations

Journal ArticleDOI
TL;DR: In this article, the authors considered a generalized telegraph process which follows an alternating renewal process and is subject to random jumps; they derived the distribution of the location of the particle at an arbitrary fixed time t and studied this distribution under the assumption of exponentially distributed alternating random times.
Abstract: We consider a generalized telegraph process which follows an alternating renewal process and is subject to random jumps. More specifically, consider a particle at the origin of the real line at time t=0. Then it goes along two alternating velocities with opposite directions, and performs a random jump toward the alternating direction at each velocity reversal. We develop the distribution of the location of the particle at an arbitrary fixed time t, and study this distribution under the assumption of exponentially distributed alternating random times. The cases of jumps having exponential distributions with constant rates and with linearly increasing rates are treated in detail.
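The dynamics described above can be illustrated with a minimal Monte Carlo sketch, assuming constant velocity, exponential intertimes, and exponentially distributed jumps toward the new direction (all parameter values here are illustrative, not taken from the paper):

```python
import random

def telegraph_with_jumps(t, v=1.0, rate=1.0, jump_rate=1.0):
    """Simulate one path of a jump-telegraph process up to time t.

    The particle starts at the origin moving right, alternates between
    velocities +v and -v at exponential(rate) intertimes, and at each
    velocity reversal performs an exponential(jump_rate) jump toward
    the new direction of motion.
    """
    x, now, direction = 0.0, 0.0, 1
    while True:
        dt = random.expovariate(rate)            # time until next reversal
        if now + dt >= t:                        # reversal falls past the horizon
            return x + direction * v * (t - now)
        x += direction * v * dt                  # drift until the reversal
        now += dt
        direction = -direction                   # reverse the velocity
        x += direction * random.expovariate(jump_rate)  # jump toward new direction

random.seed(0)
positions = [telegraph_with_jumps(5.0) for _ in range(20000)]
mean_pos = sum(positions) / len(positions)
```

Because the motion starts rightward but jumps alternate in sign, the empirical mean position stays close to zero; the closed-form distribution derived in the paper is not reproduced here.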

27 citations

Journal ArticleDOI
TL;DR: It is demonstrated that paraplegin is required for efficient transient opening of the mPTP, which is impaired both in fibroblasts derived from SPG7 patients and in primary neurons from Spg7−/− mice; dysregulation of mPTP opening at the pre-synaptic terminal impairs neurotransmitter release, leading to ineffective synaptic transmission.

22 citations

Journal ArticleDOI
TL;DR: A clear methodological framework is provided on how network-based Cox regression models can be used to integrate biological knowledge and available data for the analysis of survival data, together with a clear methodological and computational approach for investigating cancer regulatory networks.
Abstract: International initiatives such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) are collecting multiple datasets at different genome-scales with the aim of identifying novel cancer biomarkers and predicting patient survival. To analyze such data, several statistical methods have been applied, among them Cox regression models. Although these models provide a good statistical framework for analyzing omic data, there is still a lack of studies illustrating the advantages and drawbacks of integrating biological information and selecting groups of biomarkers. In fact, classical Cox regression algorithms focus on the selection of single biomarkers, without taking into account the strong correlation between genes. Even though network-based Cox regression algorithms overcome such drawbacks, these network-based approaches are less widely used within the life science community. In this article, we aim to provide a clear methodological framework on the use of such approaches in order to turn cancer research results into clinical applications. Therefore, we first discuss the rationale and the practical usage of three recently proposed network-based Cox regression algorithms (i.e., Net-Cox, AdaLnet and fastcox). Then, we show how to combine existing biological knowledge and available data with such algorithms to identify networks of cancer biomarkers and to estimate patient survival. Finally, we describe in detail a new permutation-based approach to better validate the significance of the selection in terms of cancer gene signatures and pathway/network identification. We illustrate the proposed methodology by means of both simulations and real case studies. Overall, the aim of our work is two-fold: firstly, to show how network-based Cox regression models can be used to integrate biological knowledge (e.g., multi-omics data) for the analysis of survival data; secondly, to provide a clear methodological and computational approach for investigating cancer regulatory networks.
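The permutation idea can be illustrated with a minimal sketch, using a simple univariate Cox score statistic rather than the network-based estimators (Net-Cox, AdaLnet, fastcox) discussed in the paper; this is not the authors' implementation, and the data are simulated:

```python
import numpy as np

def cox_score(x, time, event):
    """Score statistic of a univariate Cox model evaluated at beta = 0:
    at each event time, the covariate of the failing subject minus the
    mean covariate over the risk set (assumes no tied event times)."""
    order = np.argsort(time)
    x, event = x[order], event[order]
    u = 0.0
    for i in range(len(x)):
        if event[i]:
            u += x[i] - x[i:].mean()   # risk set = subjects still under observation
    return u

def permutation_pvalue(x, time, event, n_perm=500, seed=1):
    """Two-sided permutation p-value for the association between a
    biomarker x and survival, obtained by shuffling the biomarker."""
    rng = np.random.default_rng(seed)
    obs = abs(cox_score(x, time, event))
    hits = sum(abs(cox_score(rng.permutation(x), time, event)) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Toy data: higher biomarker values imply shorter survival times.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
time = rng.exponential(scale=np.exp(-x))   # hazard increases with x
event = np.ones(100, dtype=bool)           # no censoring in this sketch
p = permutation_pvalue(x, time, event)
```

With a genuine effect in the simulated data, the permutation p-value is small; in a real multi-omics analysis the statistic being permuted would instead come from the fitted network-based model.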

20 citations

Journal ArticleDOI
TL;DR: In this paper, the authors considered a random trial-based telegraph process, which describes a motion on the real line with two constant velocities along opposite directions, and investigated the probability law of the process and the mean of the velocity of the moving particle.
Abstract: We consider a random trial-based telegraph process, which describes a motion on the real line with two constant velocities along opposite directions. At each epoch of the underlying counting process the new velocity is determined by the outcome of a random trial. Two schemes are taken into account: Bernoulli trials and classical Polya urn trials. We investigate the probability law of the process and the mean of the velocity of the moving particle. We finally discuss two cases of interest: (i) the case of Bernoulli trials and intertimes having exponential distributions with linear rates (in which, interestingly, the process exhibits a logistic stationary density with non-zero mean), and (ii) the case of Polya trials and intertimes having first Gamma and then exponential distributions with constant rates.
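The Bernoulli-trial scheme can be sketched as a simulation with constant exponential intertimes (parameter values are illustrative; the paper's linear-rate and Polya-urn cases are not covered here):

```python
import random

def bernoulli_telegraph(t, v=1.0, rate=1.0, p=0.5):
    """Position at time t of a random-trial telegraph motion: at each
    epoch of a Poisson(rate) counting process the velocity is redrawn
    as +v with probability p and -v otherwise."""
    x, now = 0.0, 0.0
    vel = v if random.random() < p else -v       # initial Bernoulli trial
    while True:
        dt = random.expovariate(rate)
        if now + dt >= t:
            return x + vel * (t - now)
        x += vel * dt
        now += dt
        vel = v if random.random() < p else -v   # Bernoulli trial at each epoch

random.seed(42)
sample = [bernoulli_telegraph(10.0) for _ in range(20000)]
mean_v = sum(sample) / (len(sample) * 10.0)      # empirical mean velocity
```

With p = 0.5 the two directions are balanced, so the empirical mean velocity is close to zero; asymmetric choices of p tilt the drift, as in the non-zero-mean stationary case discussed in the paper.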

18 citations


Cited by
Journal ArticleDOI
TL;DR: Genetically modifying AAV vectors to increase their transduction efficiency, vector tropism and ability to avoid the host immune response may further increase the success of AAV gene therapy.
Abstract: Adeno-associated virus (AAV) vector-mediated gene delivery was recently approved for the treatment of inherited blindness and spinal muscular atrophy, and long-term therapeutic effects have been achieved for other rare diseases, including haemophilia and Duchenne muscular dystrophy. However, current research indicates that the genetic modification of AAV vectors may further facilitate the success of AAV gene therapy. Vector engineering can increase AAV transduction efficiency (by optimizing the transgene cassette), vector tropism (using capsid engineering) and the ability of the capsid and transgene to avoid the host immune response (by genetically modifying these components), as well as optimize the large-scale production of AAV.

487 citations

Journal ArticleDOI
TL;DR: In recent decades, transcriptome profiling has been one of the most widely used approaches to investigating human diseases at the molecular level, and total RNA sequencing has revolutionized transcriptome analysis by allowing the quantification of gene expression levels and allele-specific expression in a single experiment.
Abstract: In recent decades, transcriptome profiling has been one of the most widely used approaches to investigating human diseases at the molecular level. Through expression studies, many molecular biomarkers and therapeutic targets have been found for several human pathologies, and this number is continuously increasing thanks to total RNA sequencing. Indeed, this technology has revolutionized transcriptome analysis, allowing the quantification of gene expression levels and allele-specific expression in a single experiment, as well as the identification of novel genes, splice isoforms, and fusion transcripts, and the investigation of the world of non-coding RNA at an unprecedented level. RNA sequencing has also been employed in important projects, such as ENCODE (Encyclopedia of DNA Elements) and TCGA (The Cancer Genome Atlas), to provide a snapshot of the transcriptome of dozens of cell lines and thousands of primary tumor specimens. Moreover, these studies have paved the way for the development of data integration approaches, in order to facilitate the management and analysis of data and to identify novel disease markers and molecular targets for use in the clinic. In this scenario, several ongoing clinical trials utilize transcriptome profiling through RNA sequencing strategies as an important instrument in the diagnosis of numerous human pathologies.

139 citations

01 Jan 1977
TL;DR: The fourth edition of this biostatistics textbook covers the design of biomedical studies and sampling methods, descriptive statistics, estimation and hypothesis testing for means, variances, and proportions, the analysis of categorical data, regression and correlation, nonparametric statistics, and an introduction to survival analysis.
Abstract: Preface to the Fourth Edition.
1 Initial Steps: 1.1 Reasons for Studying Biostatistics. 1.2 Initial Steps in Designing a Biomedical Study. 1.3 Common Types of Biomedical Studies. Problems. References.
2 Populations and Samples: 2.1 Basic Concepts. 2.2 Definitions of Types of Samples. 2.3 Methods of Selecting Simple Random Samples. 2.4 Application of Sampling Methods in Biomedical Studies. Problems. References.
3 Collecting and Entering Data: 3.1 Initial Steps. 3.2 Data Entry. 3.3 Screening the Data. 3.4 Code Book. Problems. References.
4 Frequency Tables and Their Graphs: 4.1 Numerical Methods of Organizing Data. 4.2 Graphs. Problems. References.
5 Measures of Location and Variability: 5.1 Measures of Location. 5.2 Measures of Variability. 5.3 Sampling Properties of the Mean and Variance. 5.4 Considerations in Selecting Appropriate Statistics. 5.5 A Common Graphical Method for Displaying Statistics. Problems. References.
6 The Normal Distribution: 6.1 Properties of the Normal Distribution. 6.2 Areas Under the Normal Curve. 6.3 Importance of the Normal Distribution. 6.4 Examining Data for Normality. 6.5 Transformations. Problems. References.
7 Estimation of Population Means: Confidence Intervals: 7.1 Confidence Intervals. 7.2 Sample Size Needed for a Desired Confidence Interval. 7.3 The t Distribution. 7.4 Confidence Interval for the Mean, Using the t Distribution. 7.5 Estimating the Difference Between Two Means: Unpaired Data. 7.6 Estimating the Difference Between Two Means: Paired Comparison. Problems. References.
8 Tests of Hypotheses on Population Means: 8.1 Tests of Hypotheses for a Single Mean. 8.2 Tests for Equality of Two Means: Unpaired Data. 8.3 Testing for Equality of Means: Paired Data. 8.4 Concepts Used in Statistical Testing. 8.5 Sample Size. 8.6 Confidence Intervals Versus Tests. 8.7 Correcting for Multiple Testing. 8.8 Reporting the Results. Problems. References.
9 Variances: Estimation and Tests: 9.1 Point Estimates for Variances and Standard Deviations. 9.2 Testing Whether Two Variances Are Equal: F Test. 9.3 Approximate t Test. 9.4 Other Tests. Problems. References.
10 Categorical Data: Proportions: 10.1 Single Population Proportion. 10.2 Samples from Categorical Data. 10.3 The Normal Approximation to the Binomial. 10.4 Confidence Intervals for a Single Population Proportion. 10.5 Confidence Intervals for the Difference in Two Proportions. 10.6 Tests of Hypothesis for Population Proportions. 10.7 Sample Size for Testing Two Proportions. 10.8 Data Entry and Analysis Using Statistical Programs. Problems. References.
11 Categorical Data: Analysis of Two-Way Frequency Tables: 11.1 Different Types of Tables. 11.2 Relative Risk and Odds Ratio. 11.3 Chi-Square Tests for Frequency Tables: Two-by-Two Tables. 11.4 Chi-Square Tests for Larger Tables. 11.5 Remarks. Problems. References.
12 Regression and Correlation: 12.1 The Scatter Diagram: Single Sample. 12.2 Linear Regression: Single Sample. 12.3 The Correlation Coefficient for Two Variables from a Single Sample. 12.4 Linear Regression Assuming the Fixed-X Model. 12.5 Other Topics in Linear Regression. Problems. References.
13 Nonparametric Statistics: 13.1 The Sign Test. 13.2 The Wilcoxon Signed Rank Test. 13.3 The Wilcoxon-Mann-Whitney Test. 13.4 Spearman's Rank Correlation. Problems. References.
14 Introduction to Survival Analysis: 14.1 Survival Analysis Data. 14.2 Survival Functions. 14.3 Computing Estimates of f(t), S(t), and h(t). 14.4 Comparison of Clinical Life Tables and the Kaplan-Meier Method. 14.5 Additional Analyses Using Survival Data. Problems. References.
Appendix A: Statistical Tables. Appendix B: Answers to Selected Problems. Appendix C: Computer Statistical Program Resources: C.1 Computer Systems for Biomedical Education and Research. C.2 A Brief Indication of Statistics Computer Program Advances and Some Relevant Publications Since 2000. C.3 Choices of Computer Statistical Software.
Bibliography. Index.
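As a companion to the survival analysis material in Chapter 14, the Kaplan-Meier estimator can be sketched in a few lines of Python (a minimal illustration with invented toy data, not an example taken from the book):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t).

    times  : observed follow-up times
    events : True if the time is an event, False if right-censored
    Returns a list of (event_time, S(t)) steps.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        while i < len(data) and data[i][0] == t:   # group tied times
            d += data[i][1]                        # events at time t
            n += 1                                 # subjects leaving the risk set at t
            i += 1
        if d:
            s *= 1.0 - d / n_at_risk               # multiplicative KM step
            curve.append((t, s))
        n_at_risk -= n
    return curve

# Toy data: 6 subjects, with censored observations at times 3 and 5.
curve = kaplan_meier([1, 2, 3, 4, 5, 6],
                     [True, True, False, True, False, True])
```

Each censored subject reduces the risk set without adding a step to the curve, which is exactly how censoring enters the clinical life-table comparison of Section 14.4.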

126 citations

Journal ArticleDOI
TL;DR: This review provides a comprehensive survey of the various chemical and enzymatic methods now available for the manufacture of custom proteins containing noncoded elements and discusses the commonalities that exist between these seemingly disparate methods.
Abstract: Protein semisynthesis, defined herein as the assembly of a protein from a combination of synthetic and recombinant fragments, is a burgeoning field of chemical biology that has impacted many areas in the life sciences. In this review, we provide a comprehensive survey of this area. We begin by discussing the various chemical and enzymatic methods now available for the manufacture of custom proteins containing noncoded elements. This section begins with a discussion of methods that are more chemical in origin and ends with those that employ biocatalysts. We also illustrate the commonalities that exist between these seemingly disparate methods and show how this is allowing for the development of integrated chemoenzymatic methods. This methodology discussion provides the technical foundation for the second part of the review where we cover the great many biological problems that have now been addressed using these tools. Finally, we end the piece with a short discussion on the frontiers of the field and the opportunities available for the future.

122 citations