Douglas B. Kell
Other affiliations: Max Planck Society, University of Wales, Heidelberg Institute for Theoretical Studies
Bio: Douglas B. Kell is an academic researcher from the University of Liverpool. He has contributed to research on the topics of Dielectric and Systems biology, has an h-index of 111, and has co-authored 634 publications receiving 50,335 citations. Previous affiliations of Douglas B. Kell include the Max Planck Society and the University of Wales.
Topics: Dielectric, Systems biology, Population, Metabolome, Fibrin
Papers published on a yearly basis
01 Jun 2011-Nature Protocols
TL;DR: The experimental workflow for long-term and large-scale metabolomic studies involving thousands of human samples with data acquired for multiple analytical batches over many months and years is described.
Abstract: Metabolism has an essential role in biological systems. Identification and quantitation of the compounds in the metabolome is defined as metabolic profiling, and it is applied to define metabolic changes related to genetic differences, environmental influences and disease or drug perturbations. Chromatography-mass spectrometry (MS) platforms are frequently used to provide the sensitive and reproducible detection of hundreds to thousands of metabolites in a single biofluid or tissue sample. Here we describe the experimental workflow for long-term and large-scale metabolomic studies involving thousands of human samples with data acquired for multiple analytical batches over many months and years. Protocols for serum- and plasma-based metabolic profiling applying gas chromatography-MS (GC-MS) and ultraperformance liquid chromatography-MS (UPLC-MS) are described. These include biofluid collection, sample preparation, data acquisition, data pre-processing and quality assurance. Methods for quality-control-based robust LOESS signal correction and for the integration of data from multiple analytical batches are also described.
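The QC-based signal-correction idea can be sketched in code. The following is a minimal illustration, not the published protocol: it fits a simple tricube-weighted local linear trend (a stand-in for the robust LOESS named above) through the QC injections as a function of run order, then divides every sample by that trend. Function names and the `frac` parameter are illustrative choices.

```python
import numpy as np

def loess_smooth(x, y, x_eval, frac=0.5):
    """Local linear smoother with tricube weights (a LOESS-style fit)."""
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))          # points per local fit
    out = np.empty(len(x_eval), dtype=float)
    for i, xe in enumerate(x_eval):
        d = np.abs(x - xe)
        idx = np.argsort(d)[:k]                 # k nearest neighbours
        dmax = d[idx].max() or 1.0
        w = (1.0 - (d[idx] / dmax) ** 3) ** 3   # tricube weights
        sw = np.sqrt(w)                         # scale rows for weighted LSQ
        A = np.vstack([np.ones(k), x[idx]]).T
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
        out[i] = beta[0] + beta[1] * xe
    return out

def qc_correct(run_order, intensity, is_qc):
    """Divide each sample by the drift trend fitted on the QC injections,
    rescaling back to the median QC intensity."""
    trend = loess_smooth(run_order[is_qc], intensity[is_qc], run_order)
    return intensity / trend * np.median(intensity[is_qc])
```

Applied per metabolite feature, this flattens within-batch intensity drift so that data acquired across batches can be combined.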
01 May 2004-Trends in Biotechnology
TL;DR: In this postgenomic era, there is a specific need to assign function to orphan genes in order to validate potential targets for drug therapy and to discover new biomarkers of disease.
Abstract: In this postgenomic era, there is a specific need to assign function to orphan genes in order to validate potential targets for drug therapy and to discover new biomarkers of disease. Metabolomics is an emerging field that is complementary to the other ‘omics and proving to have unique advantages. As in transcriptomics or proteomics, a typical metabolic fingerprint or metabolomic experiment is likely to generate thousands of data points, of which only a handful might be needed to describe the problem adequately. Extracting the most meaningful elements of these data is thus key to generating useful new knowledge with mechanistic or explanatory power.
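As a toy illustration of extracting the handful of meaningful variables from thousands of data points, one simple baseline (not one of the methods advocated in this particular article) is to rank features by a univariate effect-size score between two groups:

```python
import numpy as np

def rank_features(X_case, X_control):
    """Rank variables (columns) by |difference of group means| over the
    pooled standard deviation - a crude univariate stand-in for the
    variable-selection step described above."""
    m1 = X_case.mean(axis=0)
    m0 = X_control.mean(axis=0)
    s = np.sqrt(0.5 * (X_case.var(axis=0) + X_control.var(axis=0))) + 1e-12
    score = np.abs(m1 - m0) / s
    return np.argsort(score)[::-1]   # best-discriminating features first
```

In practice multivariate methods are preferred, since metabolites co-vary, but a ranking like this already shrinks thousands of variables to a shortlist worth inspecting.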
TL;DR: Single-cell time-lapse imaging and computational modeling of NF-κB (RelA) localization showed asynchronous oscillations following cell stimulation that decreased in frequency with increased IκBα transcription.
Abstract: Signaling by the transcription factor nuclear factor kappa B (NF-κB) involves its release from inhibitor kappa B (IκB) in the cytosol, followed by translocation into the nucleus. NF-κB regulation of IκBα transcription represents a delayed negative feedback loop that drives oscillations in NF-κB translocation. Single-cell time-lapse imaging and computational modeling of NF-κB (RelA) localization showed asynchronous oscillations following cell stimulation that decreased in frequency with increased IκBα transcription. Transcription of target genes depended on oscillation persistence, involving cycles of RelA phosphorylation and dephosphorylation. The functional consequences of NF-κB signaling may thus depend on number, period, and amplitude of oscillations.
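The delayed-negative-feedback mechanism described above can be caricatured with a two-variable delay model: nuclear NF-κB drives IκBα production after a transcriptional delay, and IκBα in turn inhibits nuclear NF-κB. This is an illustrative sketch with arbitrary parameters, not the authors' published computational model:

```python
import numpy as np

def simulate_nfkb(T=100.0, dt=0.01, tau=4.0,
                  a=1.0, b=1.0, c=2.0, d=1.0, h=4):
    """Euler integration of a toy delayed negative feedback loop:
      dN/dt = a / (1 + I^h) - b*N        (IkBa inhibits nuclear NF-kB)
      dI/dt = c * N(t - tau) - d*I       (delayed IkBa transcription)
    Sufficient delay and feedback gain give sustained oscillations."""
    n_steps = int(T / dt)
    lag = int(tau / dt)
    N = np.empty(n_steps)
    I = np.empty(n_steps)
    N[0] = I[0] = 0.2          # start away from steady state
    for t in range(n_steps - 1):
        N_delayed = N[t - lag] if t >= lag else N[0]   # constant history
        N[t + 1] = N[t] + dt * (a / (1.0 + I[t] ** h) - b * N[t])
        I[t + 1] = I[t] + dt * (c * N_delayed - d * I[t])
    return N, I
```

Counting the peaks of N after the transient shows repeated nuclear translocation cycles, mirroring the oscillations observed in single cells.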
01 Sep 1998-Trends in Biotechnology
TL;DR: The genome sequence of the yeast Saccharomyces cerevisiae has provided the first complete inventory of the working parts of a eukaryotic cell, and systematic and comprehensive approaches to the elucidation of yeast gene function are discussed.
Abstract: The genome sequence of the yeast Saccharomyces cerevisiae has provided the first complete inventory of the working parts of a eukaryotic cell. The challenge is now to discover what each of the gene products does and how they interact in a living yeast cell. Systematic and comprehensive approaches to the elucidation of yeast gene function are discussed, and the prospects for the functional genomics of eukaryotic organisms evaluated.
01 Jan 2001-Nature Biotechnology
TL;DR: It is demonstrated how the intracellular concentrations of metabolites can reveal phenotypes for proteins active in metabolic regulation, and this approach to functional analysis, using comparative metabolomics, is called FANCY—an abbreviation for functional analysis by co-responses in yeast.
Abstract: A large proportion of the 6,000 genes present in the genome of Saccharomyces cerevisiae, and of those sequenced in other organisms, encode proteins of unknown function. Many of these genes are "silent," that is, they show no overt phenotype, in terms of growth rate or other fluxes, when they are deleted from the genome. We demonstrate how the intracellular concentrations of metabolites can reveal phenotypes for proteins active in metabolic regulation. Quantification of the change of several metabolite concentrations relative to the concentration change of one selected metabolite can reveal the site of action, in the metabolic network, of a silent gene. In the same way, comprehensive analyses of metabolite concentrations in mutants, providing "metabolic snapshots," can reveal functions when snapshots from strains deleted for unstudied genes are compared to those deleted for known genes. This approach to functional analysis, using comparative metabolomics, we call FANCY—an abbreviation for functional analysis by co-responses in yeast.
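The co-response idea behind FANCY can be sketched numerically: express each mutant's metabolic snapshot as per-metabolite concentration changes relative to wild type, then assign an unstudied deletant to the known mutant whose snapshot it co-responds with most strongly. This minimal sketch (mutant names and numbers are invented) uses Pearson correlation of log2 responses as the similarity measure:

```python
import numpy as np

def log_response(mutant, wildtype):
    """Log2 change of each metabolite concentration relative to wild type."""
    return np.log2(np.asarray(mutant, float) / np.asarray(wildtype, float))

def closest_known(unknown, knowns):
    """Return the known mutant whose metabolic snapshot correlates best
    with the snapshot of the unstudied deletant."""
    best_name, best_r = None, -2.0
    for name, snapshot in knowns.items():
        r = np.corrcoef(unknown, snapshot)[0, 1]
        if r > best_r:
            best_name, best_r = name, r
    return best_name, best_r
```

A deletant of unknown function that reproduces the snapshot of a known mutant is then a candidate for acting at the same site in the metabolic network.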
28 Jul 2005
TL;DR: A simple and highly efficient method to disrupt chromosomal genes in Escherichia coli in which PCR primers provide the homology to the targeted gene(s), which should be widely useful, especially in genome analysis of E. coli and other bacteria.
Abstract: We have developed a simple and highly efficient method to disrupt chromosomal genes in Escherichia coli in which PCR primers provide the homology to the targeted gene(s). In this procedure, recombination requires the phage lambda Red recombinase, which is synthesized under the control of an inducible promoter on an easily curable, low copy number plasmid. To demonstrate the utility of this approach, we generated PCR products by using primers with 36- to 50-nt extensions that are homologous to regions adjacent to the gene to be inactivated and template plasmids carrying antibiotic resistance genes that are flanked by FRT (FLP recognition target) sites. By using the respective PCR products, we made 13 different disruptions of chromosomal genes. Mutants of the arcB, cyaA, lacZYA, ompR-envZ, phnR, pstB, pstCA, pstS, pstSCAB-phoU, recA, and torSTRCAD genes or operons were isolated as antibiotic-resistant colonies after the introduction into bacteria carrying a Red expression plasmid of synthetic (PCR-generated) DNA. The resistance genes were then eliminated by using a helper plasmid encoding the FLP recombinase which is also easily curable. This procedure should be widely useful, especially in genome analysis of E. coli and other bacteria because the procedure can be done in wild-type cells.
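The primer-design step of this procedure is mechanical enough to sketch in code: each disruption primer is 36-50 nt of homology to a region flanking the target gene, fused to a priming site for the FRT-flanked resistance cassette. All sequences below are short placeholders, not real genomic or cassette sequences:

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def knockout_primers(upstream_flank, downstream_flank, p1_site, p2_site):
    """Build Datsenko/Wanner-style disruption primers: the forward primer
    carries top-strand homology upstream of the gene plus the cassette's
    first priming site; the reverse primer carries bottom-strand homology
    downstream of the gene plus the second priming site."""
    fwd = upstream_flank.upper() + p1_site.upper()
    rev = revcomp(downstream_flank) + p2_site.upper()
    return fwd, rev
```

In real use the flanks would be 36-50 nt taken from the chromosome adjacent to the open reading frame to be replaced.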
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON
01 Dec 1996-ACM Computing Surveys
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
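The mail-filter scenario in the fourth category lends itself to a concrete sketch. A naive Bayes word-count classifier is one standard choice for this task (the survey does not prescribe a specific algorithm); it learns which messages the user rejects from labeled examples, instead of relying on hand-written rules:

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Minimal learned mail filter: multinomial naive Bayes over words,
    with add-one (Laplace) smoothing for unseen words."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        """Record one message the user kept ('ham') or rejected ('spam')."""
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        """Return the label with the highest log-posterior score."""
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        total_msgs = sum(self.msg_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            total_words = sum(self.word_counts[label].values())
            score = math.log(self.msg_counts[label] / total_msgs)  # prior
            for w in text.lower().split():
                score += math.log((self.word_counts[label][w] + 1)
                                  / (total_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```

As the passage notes, the filter updates itself per user: each accepted or rejected message is another training example, so the rules never need to be rewritten by hand.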
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, approximate inference and sampling methods, and closes with a discussion of combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.