
Showing papers in "Methods in Molecular Biology" in 2022


Book Chapter
TL;DR: In this paper, a historical background of miRNAs, exogenous and endogenous miRNA sponges, as well as some examples of endogenous miRNA sponges involved in regulatory mechanisms associated with various diseases, developmental stages, and other cellular processes are presented.
Abstract: MicroRNAs (miRNAs) are a class of noncoding RNAs of 17-22 nucleotides in length with a critical function in posttranscriptional gene regulation. These master regulators are themselves subject to regulation both transcriptionally and posttranscriptionally. Recently, miRNA function has been shown to be modulated by exogenous RNA molecules that function as miRNA sponges. Interestingly, endogenous transcripts such as transcribed pseudogenes, long noncoding RNAs (lncRNAs), circular RNAs (circRNAs) and mRNAs may serve as natural miRNA sponges. These transcripts, which bind to miRNAs and competitively sequester them away from their targets, are naturally existing endogenous miRNA sponges, called competing endogenous RNAs (ceRNAs). Here we present a historical background of miRNAs, exogenous and endogenous miRNA sponges as well as some examples of endogenous miRNA sponges involved in regulatory mechanisms associated with various diseases, developmental stages, and other cellular processes.

30 citations


Book Chapter
TL;DR: In this article, the authors focus on addressing two main questions: "How are oncogenes and/or tumor suppressor genes regulated by microRNAs?" and "Which other mechanisms in cancer cells are regulated by miRNAs?".
Abstract: Cancer is determined in part by alterations of oncogenes and tumor suppressor genes, whose expression can be regulated by microRNAs (miRNAs). Researchers have therefore focused on two main questions: "How are oncogenes and/or tumor suppressor genes regulated by miRNAs?" and "Which other mechanisms in cancer cells are regulated by miRNAs?" In this work we gather the publications answering these questions. The expression of miRNAs is affected by amplification, deletion, or mutation, and these processes are controlled by oncogenes and tumor suppressor genes, which regulate different mechanisms of cancer initiation and progression, including cell proliferation, cell growth, apoptosis, DNA repair, invasion, angiogenesis, metastasis, drug resistance, metabolic regulation, and immune response regulation in cancer cells. In addition, miRNA profiling is an important step in developing new therapeutic approaches for cancer.

23 citations


Book Chapter
TL;DR: In this paper, both fixed-effect and random-effects meta-analysis models are compared, with the fixed-effect model assuming that all studies share a single common effect, so that all of the variance in observed effect sizes is attributable to sampling error.
Abstract: Deciding whether to use a fixed-effect model or a random-effects model is a primary decision an analyst must make when combining the results from multiple studies through meta-analysis. Both modeling approaches estimate a single effect size of interest. The fixed-effect meta-analysis assumes that all studies share a single common effect and, as a result, that all of the variance in observed effect sizes is attributable to sampling error. The random-effects meta-analysis estimates the mean of a distribution of effects, thus assuming that effect sizes vary from one study to the next. Under this model, variance in observed effect sizes is attributable to both sampling error (within-study variance) and statistical heterogeneity (between-study variance). The most common meta-analyses use a weighted average to combine the study-level effect sizes. Both fixed- and random-effects models use an inverse-variance weight (the variance of the observed effect size). However, because the random-effects model adds a shared between-study variance to each study's weight, it leads to a more balanced distribution of weights than the fixed-effect model (i.e., small studies are given more relative weight and large studies less). The standard error of these estimators also derives from the inverse-variance weights. As such, the standard errors and confidence intervals for the random-effects model are larger and wider than in the fixed-effect analysis. Indeed, in the presence of statistical heterogeneity, fixed-effect models can lead to overly narrow intervals. In addition to these commonly used, generalizable models, there are further fixed-effect and random-effects models that can be considered. Additional fixed-effect models that are specific to dichotomous data are more robust to issues that arise from sparse data. Furthermore, random-effects models can be expanded using generalized linear mixed models so that different covariance structures are used to distribute statistical heterogeneity across multiple parameters. Finally, both fixed- and random-effects modeling can be conducted in a Bayesian framework.
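A minimal numerical sketch (not taken from the chapter; all study values are hypothetical) of the inverse-variance weighting described above, where tau2 = 0 recovers the fixed-effect model and a positive tau2 gives the random-effects weights and a wider confidence interval:

import math

def pooled_estimate(yi, vi, tau2=0.0):
    """Inverse-variance weighted average; tau2=0 gives the fixed-effect model."""
    weights = [1.0 / (v + tau2) for v in vi]                 # study weights
    est = sum(w * y for w, y in zip(weights, yi)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                       # standard error of the pooled effect
    return est, se, (est - 1.96 * se, est + 1.96 * se)       # approximate 95% confidence interval

yi = [0.30, 0.10, 0.45, 0.20]   # hypothetical study effect sizes
vi = [0.01, 0.04, 0.02, 0.09]   # hypothetical within-study variances
print(pooled_estimate(yi, vi))             # fixed-effect pooling
print(pooled_estimate(yi, vi, tau2=0.05))  # random-effects pooling (tau2 assumed known here)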

20 citations


Book Chapter
TL;DR: The authors consider methods to estimate the heterogeneity variance parameter in a random-effects model, examine in more detail what this parameter represents and how possible explanations for heterogeneity can be explored through statistical methods, and discuss publication bias as an alternative explanation for why observed effect estimates might form some distribution other than what we might come to expect.
Abstract: The random-effects model allows for the possibility that studies in a meta-analysis have heterogeneous effects. That is, observed study estimates vary not only due to random sampling error but also due to inherent differences in the way studies have been designed and conducted. In this chapter, we consider methods to estimate the heterogeneity variance parameter in a random-effects model, examine in more detail what this parameter represents, and show how possible explanations for heterogeneity can be explored through statistical methods. Toward the end of this chapter, publication bias is discussed as an alternative explanation for why observed effect estimates might form some distribution other than what we might come to expect.
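A minimal sketch of one commonly used estimator of this parameter, the DerSimonian-Laird method-of-moments estimator; this is offered only as an illustration of the quantity being estimated, not as the specific procedure recommended in the chapter, and the study values are hypothetical:

def dersimonian_laird_tau2(yi, vi):
    """Method-of-moments estimate of the between-study variance tau^2."""
    wi = [1.0 / v for v in vi]                                   # fixed-effect weights
    ybar = sum(w * y for w, y in zip(wi, yi)) / sum(wi)          # fixed-effect pooled estimate
    q = sum(w * (y - ybar) ** 2 for w, y in zip(wi, yi))         # Cochran's Q statistic
    c = sum(wi) - sum(w * w for w in wi) / sum(wi)
    return max(0.0, (q - (len(yi) - 1)) / c)                     # truncated at zero

print(dersimonian_laird_tau2([0.10, 0.50, 0.85, 0.30], [0.01, 0.04, 0.02, 0.09]))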

17 citations


Book Chapter
TL;DR: In this paper, the authors introduce classic approaches based on the inverse-variance method, as well as generalized linear mixed models that take the binary structure of the data into account, for pooling single proportions and comparing proportions from two groups.
Abstract: The meta-analysis of single proportions has become a popular application over the last two decades. In particular, systematic reviews of prevalence studies are conducted in various fields of science, including medicine, ecology, psychology, and the social sciences. In this chapter, we illustrate meta-analysis methods to pool single proportions and to compare proportions from two groups. We introduce classic approaches based on the inverse-variance method as well as generalized linear mixed models that take the binary structure of the data into account. The most common transformations of proportions and their back-transformations are described both for individual studies and in the meta-analysis setting.
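A minimal sketch (hypothetical numbers) of one of the common transformations mentioned above, the logit transformation of a single proportion and its back-transformation; study-level (logit, variance) pairs obtained this way can then be pooled with the inverse-variance methods described in the preceding chapters:

import math

def logit_proportion(events, n):
    """Logit-transformed proportion and its approximate (delta-method) variance."""
    p = events / n
    return math.log(p / (1 - p)), 1.0 / events + 1.0 / (n - events)

def inverse_logit(x):
    """Back-transform a pooled logit to a proportion."""
    return 1.0 / (1.0 + math.exp(-x))

logit_p, var_p = logit_proportion(events=12, n=80)   # hypothetical study: 12 events in 80 subjects
print(logit_p, var_p, inverse_logit(logit_p))        # back-transformation recovers p = 0.15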

17 citations


Book Chapter
TL;DR: In this paper, the authors define developability as the likelihood of an antibody candidate with suitable functionality to be developed into a manufacturable, stable, safe, and effective drug that can be formulated to high concentrations while retaining a long shelf life.
Abstract: Although antibodies have become the fastest-growing class of therapeutics on the market, it is still challenging to develop them for therapeutic applications, which often require these molecules to withstand stresses that are not present in vivo. We define developability as the likelihood of an antibody candidate with suitable functionality to be developed into a manufacturable, stable, safe, and effective drug that can be formulated to high concentrations while retaining a long shelf life. The implementation of reliable developability assessments from the early stages of antibody discovery enables flagging and deselection of potentially problematic candidates, while focussing available resources on the development of the most promising ones. Currently, however, thorough developability assessment requires multiple in vitro assays, which makes it labor intensive and time consuming to implement at early stages. Furthermore, accurate in vitro analysis at the early stage is compromised by the high number of potential candidates that are often prepared at low quantities and purity. Recent improvements in the performance of computational predictors of developability potential are beginning to change this scenario. Many computational methods only require the knowledge of the amino acid sequences and can be used to identify possible developability issues or to rank available candidates according to a range of biophysical properties. Here, we describe how the implementation of in silico tools into antibody discovery pipelines is increasingly offering time- and cost-effective alternatives to in vitro experimental screening, thus streamlining the drug development process. We discuss in particular the biophysical and biochemical properties that underpin developability potential and their trade-offs, review various in vitro assays to measure such properties or parameters that are predictive of developability, and give an overview of the growing number of in silico tools available to predict properties important for antibody development, including the CamSol method developed in our laboratory.

16 citations


Journal Article
TL;DR: In this article, an updated view of de novo design approaches based on artificial intelligence (AI) algorithms, with a particular focus on ligand-based methods, is presented.
Abstract: In recent years, the application of deep generative models to suggest virtual compounds has become a new and powerful tool in drug discovery projects. The idea behind this review is to offer an updated view of de novo design approaches based on artificial intelligence (AI) algorithms, with a particular focus on ligand-based methods. We start this review with a brief overview of the most relevant de novo design approaches developed before the use of AI techniques. We then describe the neural network architectures most commonly employed in ligand-based de novo design, together with an up-to-date list of more than 100 deep generative models found in the literature (2017-2020). To show how deep generative approaches are applied in the drug discovery context, we report all of the studies currently available in which generated compounds have been synthesized and their biological activity tested. Finally, we discuss what we envisage as beneficial future directions for further application of deep generative models in de novo drug design.

15 citations


Journal Article
Lei Jia, Hua Gao
TL;DR: In this article, various machine learning and quantitative structure-activity relationship (QSAR) methods that have been successfully integrated into the modeling of ADMET, which describes a drug molecule's pharmacokinetic and pharmacodynamic properties, are discussed.
Abstract: ADMET (absorption, distribution, metabolism, excretion, and toxicity) describes a drug molecule's pharmacokinetic and pharmacodynamic properties. The ADMET profile of a bioactive compound can impact its efficacy and safety. Moreover, efficacy and safety are considered among the major causes of clinical attrition in the development of new chemical entities. In past decades, various machine learning and quantitative structure-activity relationship (QSAR) methods have been successfully integrated into the modeling of ADMET. Recent advances have been made in the collection of data and the development of various in silico methods to assess and predict the ADMET properties of bioactive compounds in the early stages of the drug discovery and development process.

15 citations


Journal Article
TL;DR: In this article, basic guidelines for designing suitable primers for PCR amplification and validation of circRNAs are presented; although RNA-seq has identified more than a million circRNAs, only a handful have been validated with other techniques, including northern blotting, gel-trap electrophoresis, exonuclease treatment assays, and polymerase chain reaction (PCR).
Abstract: High-throughput RNA-sequencing (RNA-seq) technologies combined with novel bioinformatic algorithms have discovered a large class of covalently closed single-stranded RNA molecules called circular RNAs (circRNAs). Although RNA-seq has identified more than a million circRNAs, only a handful of them have been validated with other techniques, including northern blotting, gel-trap electrophoresis, exonuclease treatment assays, and polymerase chain reaction (PCR). Reverse transcription (RT) of total RNA followed by PCR amplification is the most widely used technique for validating circRNAs identified by RNA-seq. RT-PCR is a highly reproducible, sensitive, and quantitative method for the detection and quantitation of circRNAs. This chapter details the basic guidelines for designing suitable primers for PCR amplification and validation of circRNAs.
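A small illustrative sketch (not part of the chapter's protocol; the sequence is a placeholder) of the idea underlying divergent-primer design for circRNA validation: the primers must amplify across the back-splice junction, which can be represented by joining the 3' end of the circularized sequence back to its 5' end:

def backsplice_junction(circ_seq, flank=50):
    """Return a junction-spanning reference: last `flank` bases joined to the first `flank` bases."""
    flank = min(flank, len(circ_seq) // 2)
    return circ_seq[-flank:] + circ_seq[:flank]

circ_seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"   # placeholder circularized exon sequence
print(backsplice_junction(circ_seq, flank=15))
# Divergent primers point outward on the linear sequence, so an RT-PCR product is
# expected only from the circular template and must contain this junction.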

14 citations


Journal Article
TL;DR: In this paper, the authors define cell-penetrating peptides (CPPs), give a short overview of CPP history and discuss several aspects of CPP classification, and present the different cargoes that can be transferred into cells by CPPs, briefly discussing the effect of cargo on the rate and efficiency of penetration.
Abstract: In this introductory chapter, we first define cell-penetrating peptides (CPPs), give a short overview of CPP history, and discuss several aspects of CPP classification. The next section is devoted to the mechanism of CPP penetration into cells, where direct and endocytic internalization of CPPs are explained. Kinetics of internalization is discussed more extensively, since this topic is not covered in other chapters of this book. At the end of this section, some features of the thermodynamics of CPP interaction with the membrane are also presented. Finally, we present the different cargoes that can be transferred into cells by CPPs and briefly discuss the effect of cargo on the rate and efficiency of penetration.

12 citations


Book Chapter
TL;DR: In this paper, the authors seek answers to how miRNA biogenesis and function are regulated through both canonical and noncanonical pathways, and describe the regulatory mechanisms recently discovered in noncanonical, Drosha- or Dicer-independent pathways.
Abstract: MicroRNAs are RNAs of about 18-24 nucleotides in length, which belong to the small noncoding RNA class and have a crucial role in the posttranscriptional regulation of gene expression, cellular metabolic pathways, and developmental events. These small but essential molecules are first processed by Drosha and DGCR8 in the nucleus and then released into the cytoplasm, where they are cleaved by Dicer to form the miRNA duplex. These duplexes are bound by the Argonaute (AGO) protein to form the RNA-induced silencing complex (RISC) in a process called RISC loading. Transcription of miRNAs, processing by Drosha and DGCR8 in the nucleus, cleavage by Dicer, binding to AGO proteins, and formation of RISC are the most critical steps in miRNA biogenesis. Additional molecules involved at these stages can enhance or inhibit these processes, which can radically change the fate of the cell. Biogenesis is regulated by many checkpoints at every step, primarily at the transcriptional level, in the nucleus and cytoplasm, and through RNA regulation, RISC loading, miRNA strand selection, RNA methylation/uridylation, and turnover rate. Moreover, in recent years, different regulation mechanisms have been discovered in noncanonical, Drosha- or Dicer-independent pathways. This chapter seeks answers to how miRNA biogenesis and function are regulated through both canonical and noncanonical pathways.

Book Chapter
TL;DR: The concept and processes of living systematic reviews (LSR) are described in this article, where the authors focus on two methods of sequential meta-analysis that may be particularly useful for LSRs.
Abstract: Systematic reviews are difficult to keep up to date, but failure to do so leads to poor review currency and accuracy. "Living systematic review" (LSR) is an approach that aims to continually update a review, incorporating relevant new evidence as it becomes available. LSRs may be particularly important in fields where research evidence is emerging rapidly, current evidence is uncertain, and new research may change policy or practice decisions. This chapter describes the concept and processes of living systematic reviews. It describes the general principles of LSRs, when they might be of particular value, and how their procedures differ from conventional systematic reviews. The chapter focuses on two methods of sequential meta-analysis that may be particularly useful for LSRs: Trial Sequential Analysis and Sequential Meta-Analysis, both of which control for Type I error and Type II error (failing to detect a genuine effect) and take account of heterogeneity.

Book Chapter
TL;DR: In this paper, two different photolithography methods are described, liquid and dry photolithography, which are used to pattern silicon wafers that then serve as molds for soft lithography to produce polymer-based microdevices.
Abstract: Organs-on-Chip devices are generally fabricated by means of photo- and soft lithographic techniques. Photolithography is a process that involves the transfer of a pattern onto a substrate by a selective exposure to light. In particular, in this chapter two different photolithography methods will be described: liquid and dry photolithography. In liquid photolithography, a silicon wafer is spin-coated with liquid photoresist and exposed to UV light in order to be patterned. In dry photolithography, the silicon wafer is laminated with resist dry film before being patterned through UV light. In both cases, the UV light can be collimated on top of the wafer either through photomasks or by direct laser exposure. The obtained patterned wafer is then used as a mold for the soft lithographic process (i.e., replica molding) to produce polymer-based microdevices.

Book Chapter
TL;DR: In this paper, the authors summarize the current knowledge on the mechanisms and effects of PGK1 in various tumor types, evaluate its potential prognostic and therapeutic value in cancer, and provide molecular information and new ideas for employing natural products to combat cancers associated with PGK1.
Abstract: Phosphoglycerate kinase 1 (PGK1) is the first enzyme in glycolysis to generate a molecule of ATP, in the conversion of 1,3-bisphosphoglycerate (1,3-BPG) to 3-phosphoglycerate (3-PG). In addition to its role in glycolysis, PGK1 acts as a polymerase alpha cofactor protein, with effects on the tricarboxylic acid cycle and on DNA replication and repair. Posttranslational modifications such as methylation, phosphorylation, and acetylation have been shown to activate PGK1 in cancer. High levels of intracellular PGK1 are associated with tumorigenesis, progression, and chemoradiotherapy resistance. However, high levels of extracellular PGK1 suppress angiogenesis and subsequently counteract cancer malignancy. Here we summarize the current knowledge on the mechanisms and effects of PGK1 in various tumor types and evaluate its potential prognostic and therapeutic value in cancer. The data summarized here aim to provide molecular information and new ideas for employing natural products to combat cancers associated with PGK1.

Book Chapter
TL;DR: In contrast to pairwise meta-analysis, which allows for the comparison of one intervention to another based on head-to-head data from randomized trials, network meta-analysis (NMA) facilitates simultaneous comparison of the efficacy or safety of multiple interventions that may not have been directly compared in a randomized trial.
Abstract: There are often multiple potential interventions to treat a disease; therefore, we need a method for simultaneously comparing and ranking all of these available interventions. In contrast to pairwise meta-analysis, which allows for the comparison of one intervention to another based on head-to-head data from randomized trials, network meta-analysis (NMA) facilitates simultaneous comparison of the efficacy or safety of multiple interventions that may not have been directly compared in a randomized trial. NMAs help researchers study important and previously unanswerable questions, which have contributed to a rapid rise in the number of NMA publications in the biomedical literature. However, the conduct and interpretation of NMAs are more complex than pairwise meta-analyses: there are additional NMA model assumptions (i.e., network connectivity, homogeneity, transitivity, and consistency) and outputs (e.g., network plots and surface under the cumulative ranking curves [SUCRAs]). In this chapter, we will: (1) explore similarities and differences between pairwise and network meta-analysis; (2) explain the differences between direct, indirect, and mixed treatment comparisons; (3) describe how treatment effects are derived from NMA models; (4) discuss key criteria predicating completion of NMA; (5) interpret NMA outputs; (6) discuss areas of ongoing methodological research in NMA; (7) outline an approach to conducting a systematic review and NMA; (8) describe common problems that researchers encounter when conducting NMAs and potential solutions; and (9) outline an approach to critically appraising a systematic review and NMA.
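A minimal sketch of an adjusted indirect comparison (the Bucher method), one building block behind how NMA derives effects for treatments that have never been compared head to head; the inputs are hypothetical relative effects (e.g., log odds ratios) with their standard errors, not data from the chapter:

import math

def indirect_comparison(d_ab, se_ab, d_ac, se_ac):
    """Indirect estimate of C versus B from trials of B versus A and C versus A."""
    d_bc = d_ac - d_ab                              # relies on the transitivity assumption
    se_bc = math.sqrt(se_ab ** 2 + se_ac ** 2)      # variances of independent comparisons add
    return d_bc, se_bc

d_bc, se_bc = indirect_comparison(d_ab=-0.40, se_ab=0.15, d_ac=-0.10, se_ac=0.20)
print(d_bc, (d_bc - 1.96 * se_bc, d_bc + 1.96 * se_bc))   # point estimate and 95% CI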

Journal Article
TL;DR: The FastPCR software is an integrated tool environment for PCR primer and probe design and for prediction of oligonucleotide properties; it includes various bioinformatics tools for sequence analysis and searching, analysis of type I-III restriction endonucleases, and pattern searching.
Abstract: The FastPCR software is an integrated tool environment for PCR primer and probe design and for prediction of oligonucleotide properties. The software provides comprehensive tools for designing primers for most current and prospective PCR applications, including standard, multiplex, long-distance, inverse, real-time (TaqMan probe), Xtreme Chain Reaction (XCR), group-specific, overlap extension PCR for multifragment assembly cloning, and isothermal amplification (loop-mediated isothermal amplification). The program can also design specific oligonucleotide sets for long sequence assembly by ligase chain reaction, multiplexes of overlapping and nonoverlapping DNA amplicons that tile across a region(s) of interest for targeted next-generation sequencing, and competitive allele-specific PCR (KASP)-based genotyping assays for single-nucleotide polymorphisms and insertions and deletions at specific loci, among other features. The in silico PCR primer or probe search includes comprehensive analyses of individual primers and primer pairs. FastPCR includes various bioinformatics tools for sequence analysis and searching, analysis of type I-III restriction endonucleases, and pattern searching. The program also supports the assembly of a set of contiguous sequences, consensus sequence generation, and sequence similarity and conservancy analysis. FastPCR performs efficient and complete detection of various repeat types with visual display. FastPCR allows for sequence file batch processing, which is essential for automation. The software is available for download at https://primerdigital.com/fastpcr.html and as an online version at https://primerdigital.com/tools/pcr.html.

Journal Article
TL;DR: A brief overview of recent advances in ATV development, ATVs, ATV effector mechanisms, and anti-tick RV can be found in this article, which provides a detailed outline of vaccine antigen selection and analysis using computational methods.
Abstract: Ticks are increasingly a global public health and veterinary concern. They transmit numerous pathogens of veterinary and public health importance. Acaricides, livestock breeding for tick resistance, tick handpicking, pasture spelling, and anti-tick vaccines (ATVs) are in use for the control of ticks and tick-borne diseases (TTBDs), with acaricides and ATVs being the most and least used TTBD control methods, respectively. The overuse and misuse of acaricides have inadvertently selected for tick strains that are resistant to acaricides. Furthermore, vaccines are rare and not commercially available in sub-Saharan Africa (SSA), and many of the other methods are labor-intensive and impractical, especially for larger farm operations. The success of TTBD control is therefore dependent on integrating all of the currently available methods. Vaccines have been shown to be cheap and effective. However, their large-scale deployment for TTBD control in SSA is hindered by the commercial unavailability of efficacious anti-tick vaccines against sub-Saharan African tick strains. Thanks to advances in genomics, transcriptomics, and proteomics technologies, many promising anti-tick vaccine antigens (ATVAs) have been identified; however, few of them have been investigated for their potential as ATV candidates. Reverse vaccinology (RV) can be leveraged to accelerate ATV discovery, as it is cheap and shortens the lead time from antigen discovery to vaccine production. This chapter provides a brief overview of recent advances in ATV development, ATVs, ATV effector mechanisms, and anti-tick RV. Additionally, it provides a detailed outline of vaccine antigen selection and analysis using computational methods.

Journal Article
TL;DR: In this article, the authors present a historical overview of the main advances in the field and highlight some recent applications of generative models in drug design, with a focus on research work from the biopharmaceutical industry.
Abstract: Artificial intelligence (AI) tools find increasing application in drug discovery, supporting every stage of the Design-Make-Test-Analyse (DMTA) cycle. The main focus of this chapter is their application in molecular generation with the aid of deep neural networks (DNNs). We present a historical overview of the main advances in the field. We analyze the concepts of distribution learning and goal-directed learning and then highlight some recent applications of generative models in drug design, with a focus on research work from the biopharmaceutical industry. We present in more detail REINVENT, an open-source software package developed within our group at AstraZeneca and the main platform for AI molecular design support for a number of medicinal chemistry projects in the company, and we also demonstrate some of our work in library design. Finally, we present some of the main challenges in the application of AI in drug discovery and different approaches to respond to these challenges, which define areas for current and future work.

Journal Article
TL;DR: In this article, a review of recent applications of AI to problems in drug design including virtual screening, computer-aided synthesis planning, and de novo molecule generation, with a focus on the limitations of the application of AI therein and opportunities for improvement.
Abstract: Artificial intelligence (AI) has undergone rapid development in recent years and has been successfully applied to real-world problems such as drug design. In this chapter, we review recent applications of AI to problems in drug design, including virtual screening, computer-aided synthesis planning, and de novo molecule generation, with a focus on the limitations of the application of AI therein and opportunities for improvement. Furthermore, we discuss the broader challenges imposed by AI in translating theoretical practice to real-world drug design, including quantifying prediction uncertainty and explaining model behavior.

Journal Article
TL;DR: In this paper, a quantum vaccinomics approach based on the characterization of the immunological quantum is described to further advance the design of more effective and safe vaccines; the results can then be used for the design and production of chimeric protective antigens.
Abstract: Vaccines are the most effective preventive intervention to reduce the impact of infectious diseases worldwide. Tick-borne diseases in particular represent a growing burden for human and animal health worldwide, and vaccines are the most effective and environmentally sound approach for the control of vector infestations and pathogen transmission. However, the development of effective vaccines that combine vector-derived and pathogen-derived antigens for the control of tick-borne diseases remains a limitation for effective vaccine formulations. Quantum biology arises from findings suggesting that living cells operate under nontrivial features of quantum mechanics, which have been proposed to be involved in the biological process of DNA mutation. The electronic structure of the molecular interactions behind peptide immunogenicity then led to quantum immunology and, based on the definition of the photon as a quantum of light, immune protective epitopes were proposed as the immunological quantum. Recently, a quantum vaccinomics approach was proposed based on the characterization of the immunological quantum to further advance the design of more effective and safe vaccines. In this chapter, we describe methods of the quantum vaccinomics approach based on proteins with key functions in the cell interactome and regulome of vector-host-pathogen interactions: identification by yeast two-hybrid screening, characterization by in vitro protein-protein interaction assays and by musical scores of protein interacting domains, and characterization of conserved protective epitopes in protein interacting domains. These results can then be used for the design and production of chimeric protective antigens.

Journal Article
TL;DR: The UNRES model as mentioned in this paper is a physics-based united-residue model of proteins that has been designed to carry out large-scale simulations of protein folding, including ab initio and database-assisted protein-structure prediction, simulating protein-folding pathways, exploring protein free-energy landscapes, and solving biological problems.
Abstract: The physics-based united-residue (UNRES) model of proteins (www.unres.pl) has been designed to carry out large-scale simulations of protein folding. The force field has been derived and parameterized based on the principles of statistical mechanics, which makes it independent of structural databases and applicable to nonstandard situations such as proteins that contain D-amino-acid residues. Powered by Langevin dynamics and its replica-exchange extensions, UNRES has found a variety of applications, including ab initio and database-assisted protein-structure prediction, simulating protein-folding pathways, exploring protein free-energy landscapes, and solving biological problems. This chapter provides a summary of UNRES and a guide for potential users regarding the application of the UNRES package in a variety of research tasks.

Journal Article
TL;DR: Ancestral Sequence Reconstruction (ASR) as discussed by the authors is a technique that allows one to infer the sequences of extinct proteins using the phylogeny of extant proteins by disclosing the evolutionary history of a protein family of interest.
Abstract: Ancestral Sequence Reconstruction (ASR) allows one to infer the sequences of extinct proteins using the phylogeny of extant proteins. It consists of disclosing the evolutionary history (i.e., the phylogeny) of a protein family of interest and then inferring the sequences of its ancestors (i.e., the nodes in the phylogeny). Assisted by gene synthesis, the selected ancestors can be resurrected in the lab and experimentally characterized. The crucial step to succeed with ASR is starting from a reliable phylogeny. At the same time, it is of the utmost importance to have a clear idea of the evolutionary history of the family under study and the events that influenced it. This allows us to implement ASR with well-defined hypotheses and to apply the appropriate experimental methods. In recent years, ASR has become popular for testing hypotheses about the origin of functionalities, changes in activities, and the physicochemical properties of proteins, among others. In this context, the aim of this chapter is to present the ASR approach applied to the reconstruction of enzymes (i.e., proteins with catalytic roles). The spirit of this contribution is to provide a basic, hands-on guide for biochemists and biologists who are unfamiliar with molecular phylogenetics.

Journal Article
TL;DR: In this article, the authors describe several procedures that can be used to characterize OMV/GMMA as particles and to analyze the key antigens displayed on their surface; OMV/GMMA are a promising platform for the development of vaccines against bacterial pathogens.
Abstract: Outer membrane vesicles (OMV) represent a promising platform for the development of vaccines against bacterial pathogens. More recently, bacteria have been genetically modified to increase OMV yield and modulate the design of the resulting particles, also named generalized modules for membrane antigens (GMMA). OMV/GMMA resemble the bacterial surface of the pathogen, where the key antigens needed to elicit a protective immune response are displayed, and they contain pathogen-associated molecular patterns (e.g., lipopolysaccharides, lipoproteins) that confer self-adjuvanticity. On the other hand, OMV/GMMA are quite complex molecules, and a comprehensive panel of analytical methods is needed to ensure quality and consistency of manufacture and to follow their stability over time. Here, we describe several procedures that can be used for the characterization of OMV/GMMA as particles and for the analysis of key antigens displayed on their surface.

Journal Article
TL;DR: In this paper, a modular, receptor-based measurement technology that is independent of the chemical reactivity of its targets, and thus has the potential to be generalizable to a wide range of analytes, is presented.
Abstract: The monitoring of specific molecules in the living body has historically required sample removal (e.g., blood draws, microdialysis) followed by analysis via cumbersome, laboratory-bound processes. Those few exceptions to this rule (e.g., glucose, pyruvate, the monoamines) are monitored using "one-off" technologies reliant on the specific enzymatic or redox reactivity of their targets, and thus not generalizable to the measurement of other targets. In response we have developed in vivo electrochemical aptamer-based (E-AB) sensors, a modular, receptor-based measurement technology that is independent of the chemical reactivity of its targets, and thus has the potential to be generalizable to a wide range of analytes. To further the adoption of this in vivo molecular measurement approach by other researchers and to accelerate its ultimate translation to the clinic, we present here our standard protocols for the fabrication and use of intravenous E-AB sensors.

Journal Article
TL;DR: The current interest in machine learning algorithms based on deep neural networks has encouraged the application of deep learning to structure-based drug design (SBDD) related problems, as mentioned in this paper; SBDD includes techniques that take into account the structure of the macromolecular target to predict compounds that are likely to establish optimal interactions with the binding site.
Abstract: Computational methods play an increasingly important role in drug discovery. Structure-based drug design (SBDD), in particular, includes techniques that take into account the structure of the macromolecular target to predict compounds that are likely to establish optimal interactions with the binding site. The current interest in machine learning algorithms based on deep neural networks encouraged the application of deep learning to SBDD related problems. This chapter covers selected works in this active area of research.

Book Chapter
TL;DR: The development of technologies for production of specific monoclonal Abs (MAbs) in large amounts has led to the production of highly effective therapeutic antibodies (TAbs), a collective term for MAbs with demonstrated clinical efficacy in one or more diseases as discussed by the authors.
Abstract: Polyclonal immunoglobulin (Ig) preparations have been used for several decades for the treatment of primary and secondary immunodeficiencies and for the treatment of some infections and intoxications. This has demonstrated the importance of Igs, also called antibodies (Abs), for the prevention and elimination of infections. Moreover, elucidation of the structure and functions of Abs has suggested that they might be useful for targeted treatment of several diseases, including cancers and autoimmune diseases. The development of technologies for the production of specific monoclonal Abs (MAbs) in large amounts has led to the production of highly effective therapeutic antibodies (TAbs), a collective term for MAbs with demonstrated clinical efficacy in one or more diseases. The number of approved TAbs is currently around one hundred, and an even larger number is under development, including several engineered and modified Ab formats. The use of TAbs has provided new treatment options for many severe diseases, but prediction of clinical effect is difficult, and many patients eventually lose the effect, possibly due to the development of Abs against the TAbs or for other reasons. The therapeutic efficacy of TAbs can be ascribed to one or more effects, including binding and neutralization of targets, direct cytotoxicity, Ab-dependent complement-dependent cytotoxicity, Ab-dependent cellular cytotoxicity, or others. The therapeutic options for TAbs have been expanded by the development of several new formats, including bispecific Abs, single-domain Abs, TAb-drug conjugates, and the use of TAbs for targeted activation of immune cells. Most promisingly, current research and development can be expected to increase the number of clinical conditions that may benefit from TAbs.

Book Chapter
TL;DR: In vitro cancer research models require the utmost accuracy and precision to effectively investigate physiological pathways and mechanisms, as well as test the therapeutic efficacy of anticancer drugs as discussed by the authors, however, two-dimensional (2D) cell culture models cannot accurately recapitulate complex aspects of tumor cells and drug responses.
Abstract: In vitro cancer research models require the utmost accuracy and precision to effectively investigate physiological pathways and mechanisms, as well as test the therapeutic efficacy of anticancer drugs. Although two-dimensional (2D) cell culture models have been the traditional hallmark of cancer research, increasing evidence suggests 2D tumor models cannot accurately recapitulate complex aspects of tumor cells and drug responses. Three-dimensional (3D) cell cultures, however, are more physiologically relevant in oncology as they model the cancer network and microenvironment better, allowing for development and assessment of natural products and other anticancer drugs. The present review outlines unprecedented ways in which multicellular spheroid models, organoid models, hydrogel models, microfluidic devices, microfiber scaffold models, and tissue-engineered scaffold models are used in this research. The future of cancer research lies within 3D cell cultures, and as this approach improves, cancer research will continue to advance.

Journal Article
TL;DR: Graphical Representation of Ancestral Sequence Predictions (GRASP) as mentioned in this paper is an ASR tool that maps indel evolution throughout a reconstruction and enables the evaluation of indel variants.
Abstract: Analyzing the natural evolution of proteins by ancestral sequence reconstruction (ASR) can provide valuable information about the changes in sequence and structure that drive the development of novel protein functions. However, ASR has also been used as a protein engineering tool, as it often generates thermostable proteins which can serve as robust and evolvable templates for enzyme engineering. Importantly, ASR has the potential to provide an insight into the history of insertions and deletions that have occurred in the evolution of a protein family. Indels are strongly associated with functional change during enzyme evolution and represent a largely unexplored source of genetic diversity for designing proteins with novel or improved properties. Current ASR methods differ in the way they handle indels; inclusion or exclusion of indels is often managed subjectively, based on assumptions the user makes about the likelihood of each recombination event, yet most currently available ASR tools provide limited, if any, opportunities for evaluating indel placement in a reconstructed sequence. Graphical Representation of Ancestral Sequence Predictions (GRASP) is an ASR tool that maps indel evolution throughout a reconstruction and enables the evaluation of indel variants. This chapter provides a general protocol for performing a reconstruction using GRASP and using the results to create indel variants. The method addresses protein template selection, sequence curation, alignment refinement, tree building, ancestor reconstruction, evaluation of indel variants and approaches to library development.

Book Chapter
Leonie J. Kiely, Rita M. Hickey
TL;DR: In this paper, the importance of carbohydrate analytical techniques in the quest to identify and isolate health-promoting carbohydrates to be used as additives in the functional foods industry has been discussed.
Abstract: Food carbohydrates are macronutrients that are found in fruits, grains, vegetables, and milk products. These organic compounds are present in foods in the form of sugars, starches, and fibers and are composed of carbon, hydrogen, and oxygen. These wide-ranging macromolecules can be classified according to their chemical structure into three major groups: low molecular weight mono- and disaccharides, intermediate molecular weight oligosaccharides, and high molecular weight polysaccharides. Notably, the digestibility of specific carbohydrate components differs, and nondigestible carbohydrates can reach the large intestine intact, where they act as food sources for beneficial bacteria. In this review, we give an overview of advances made in food carbohydrate analysis. Overall, this review indicates the importance of carbohydrate analytical techniques in the quest to identify and isolate health-promoting carbohydrates to be used as additives in the functional foods industry.

Book Chapter
TL;DR: There are a number of well-established miRNA detection methods that can be exploited depending on the comprehensiveness of the study (individual miRNA versus multiplex analysis), the availability of the sample and the location and intracellular concentration of miRNAs as mentioned in this paper.
Abstract: MicroRNAs (miRNAs) are small yet highly important riboregulators involved in nearly all cellular processes. Due to their critical roles in the posttranscriptional regulation of gene expression, they have the potential to be used as biomarkers in addition to their use as drug targets. Although computational approaches speed up the initial genome-wide identification of putative miRNAs, experimental approaches are essential for further validation and functional analyses of differentially expressed miRNAs. Therefore, sensitive, specific, and cost-effective microRNA detection methods are imperative for both individual and multiplex analysis of miRNA expression in different tissues and during different developmental stages. There are a number of well-established miRNA detection methods that can be exploited depending on the comprehensiveness of the study (individual miRNA versus multiplex analysis), the availability of the sample, and the location and intracellular concentration of miRNAs. This review aims to highlight not only traditional but also novel strategies that are widely used in the experimental identification and quantification of microRNAs.