Author

Andreas Schuppert

Other affiliations: Bayer, Cambium Learning Group
Bio: Andreas Schuppert is an academic researcher at RWTH Aachen University. He has contributed to research in computer science and medicine, has an h-index of 21, and has co-authored 115 publications receiving 1,511 citations. Previous affiliations of Andreas Schuppert include Bayer and Cambium Learning Group.


Papers
Journal ArticleDOI
TL;DR: The potential of state-of-the-art data science approaches for personalized medicine is reviewed, open challenges are discussed, and directions that may help to overcome them in the future are highlighted.
Abstract: Personalized, precision, P4, or stratified medicine is understood as a medical approach in which patients are stratified based on their disease subtype, risk, prognosis, or treatment response using specialized diagnostic tests. The key idea is to base medical decisions on individual patient characteristics, including molecular and behavioral biomarkers, rather than on population averages. Personalized medicine is deeply connected to and dependent on data science, specifically machine learning (often named Artificial Intelligence in the mainstream media). While in recent years there has been a lot of enthusiasm about the potential of ‘big data’ and machine learning-based solutions, there are only a few examples that impact current clinical practice. The lack of impact on clinical practice can largely be attributed to insufficient performance of predictive models, difficulties in interpreting complex model predictions, and a lack of validation via prospective clinical trials that demonstrate a clear benefit compared to the standard of care. In this paper, we review the potential of state-of-the-art data science approaches for personalized medicine, discuss open challenges, and highlight directions that may help to overcome them in the future. There is a need for an interdisciplinary effort, including data scientists, physicians, patient advocates, regulatory agencies, and health insurance organizations. Partially unrealistic expectations and concerns about data science-based solutions need to be better managed. In parallel, computational methods must advance further to provide direct benefit to clinical practice.
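The stratification idea described above lends itself to a small illustration: a minimal sketch, using a purely synthetic biomarker matrix and binary response labels (nothing here comes from the paper), that trains a standard classifier and reports cross-validated discrimination.

```python
# Minimal sketch: stratifying patients by predicted treatment response.
# The data are synthetic stand-ins for molecular/behavioral biomarkers;
# real applications require prospective clinical validation, as the paper stresses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_biomarkers = 200, 50
X = rng.normal(size=(n_patients, n_biomarkers))   # biomarker profiles (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_patients) > 0).astype(int)  # response label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```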

248 citations

Journal ArticleDOI
TL;DR: A statistical mechanics interpretation of these results that distinguishes between functionally distinct cellular “macrostates” and functionally similar molecular “microstates” is suggested and a model of stem cell differentiation as a non-Markov stochastic process is proposed.
Abstract: Pluripotent stem cells can self-renew in culture and differentiate along all somatic lineages in vivo. While much is known about the molecular basis of pluripotency, the mechanisms of differentiation remain unclear. Here, we profile individual mouse embryonic stem cells as they progress along the neuronal lineage. We observe that cells pass from the pluripotent state to the neuronal state via an intermediate epiblast-like state. However, analysis of the rate at which cells enter and exit these observed cell states using a hidden Markov model indicates the presence of a chain of unobserved molecular states that each cell transits through stochastically in sequence. This chain of hidden states allows individual cells to record their position on the differentiation trajectory, thereby encoding a simple form of cellular memory. We suggest a statistical mechanics interpretation of these results that distinguishes between functionally distinct cellular "macrostates" and functionally similar molecular "microstates" and propose a model of stem cell differentiation as a non-Markov stochastic process.
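The core argument, that an unobserved chain of microstates makes the observed macrostate dynamics non-Markov, can be reproduced qualitatively with a toy simulation. The sketch below uses assumed parameters and is an illustration, not the authors' fitted hidden Markov model.

```python
# Toy illustration: a linear chain of hidden states traversed stochastically.
# Only a coarse "macrostate" is observable, so the observed process carries
# memory of how far along the hidden chain a cell already is.
import numpy as np

rng = np.random.default_rng(1)
p_step = 0.2  # per-time-step probability of advancing one hidden state (assumed)

def dwell_in_first_macrostate(n_micro_per_macro=3):
    """Time spent in the first observable macrostate (hidden states 0..n_micro-1)."""
    state, t = 0, 0
    while state < n_micro_per_macro:
        t += 1
        if rng.random() < p_step:
            state += 1
    return t

dwells = np.array([dwell_in_first_macrostate() for _ in range(5000)])
# A memoryless (Markov) macrostate would give exponential dwell times (CV = 1);
# a chain of hidden steps gives peaked, gamma-like dwell times (CV < 1).
print("dwell-time CV:", dwells.std() / dwells.mean())
```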

165 citations

Journal ArticleDOI
TL;DR: It is found that early molecular changes subsequent to Nanog loss are stochastic and reversible, and exogenous regulation of Nanog-dependent feedback control mechanisms produced a more homogeneous ES cell population.
Abstract: A number of key regulators of mouse embryonic stem (ES) cell identity, including the transcription factor Nanog, show strong expression fluctuations at the single-cell level. The molecular basis for these fluctuations is unknown. Here we used a genetic complementation strategy to investigate expression changes during transient periods of Nanog downregulation. Employing an integrated approach that includes high-throughput single-cell transcriptional profiling and mathematical modelling, we found that early molecular changes subsequent to Nanog loss are stochastic and reversible. However, analysis also revealed that Nanog loss severely compromises the self-sustaining feedback structure of the ES cell regulatory network. Consequently, these nascent changes soon become consolidated to committed fate decisions in the prolonged absence of Nanog. Consistent with this, we found that exogenous regulation of Nanog-dependent feedback control mechanisms produced a more homogeneous ES cell population. Taken together our results indicate that Nanog-dependent feedback loops have a role in controlling both ES cell fate decisions and population variability.
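A schematic way to see why replacing the self-sustaining feedback with exogenous regulation should homogenize the population is a toy stochastic model of a positively autoregulated gene. The model form and parameters below are illustrative assumptions, not the published model.

```python
# Toy stochastic model (illustration only): a gene with positive autoregulatory
# feedback versus the same gene driven constitutively ("exogenous" regulation)
# at the same mean level. Feedback amplifies and slows fluctuations; constitutive
# drive narrows them, mirroring the more homogeneous population reported above.
import numpy as np

rng = np.random.default_rng(2)

def simulate(feedback=True, steps=50000, dt=0.1):
    x = 30.0
    traj = np.empty(steps)
    for i in range(steps):
        if feedback:
            production = 10.0 + 40.0 * x**2 / (30.0**2 + x**2)  # Hill-type positive feedback
        else:
            production = 30.0                                    # constitutive drive, same mean
        degradation = 1.0 * x
        noise = rng.normal(scale=np.sqrt((production + degradation) * dt))
        x = max(x + (production - degradation) * dt + noise, 0.0)
        traj[i] = x
    return traj

for label, fb in [("with feedback", True), ("constitutive ", False)]:
    t = simulate(feedback=fb)
    print(f"{label}: mean={t.mean():5.1f}  CV={t.std() / t.mean():.2f}")
```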

165 citations

Journal ArticleDOI
19 Oct 2011-Nature
TL;DR: Application of the methodology to gene regulatory networks suggests that roughly 80% of all nodes must be controlled to drive such a network, which seems to contradict recent empirical findings in the cellular reprogramming field.
Abstract: Arising from Y. Liu, J.-J. Slotine & A.-L. Barabási, Nature 473, 167–173 (2011), doi:10.1038/nature10011; Liu et al. reply. Liu, Slotine and Barabási [1] identify subsets U of nodes in complex networks which are required to exert full control of these networks. Control in this context means that for each possible state S of the network there exist inputs for all nodes in U which are sufficient to force the network into state S [1]. Application of the methodology to gene regulatory networks suggests that roughly 80% of all nodes must be controlled to drive such a network. This seems to contradict recent empirical findings [2,3,4,5,6] in the cellular reprogramming field.
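The disputed quantity, the minimum number of driver nodes, follows from a maximum matching on the directed network under Liu et al.'s structural controllability criterion. A minimal sketch on a small hypothetical graph (not a real gene regulatory network):

```python
# Minimum driver nodes via maximum matching (structural controllability criterion
# of Liu, Slotine & Barabasi). Toy directed graph only; real gene-regulatory
# networks give the ~80% figure debated above.
import networkx as nx
from networkx.algorithms import bipartite

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "d"), ("d", "e")]  # hypothetical network
G = nx.DiGraph(edges)

# Bipartite representation: an "out" copy and an "in" copy of every node,
# with one bipartite edge per directed link.
B = nx.Graph()
out_nodes = [("out", v) for v in G]
B.add_nodes_from(out_nodes, bipartite=0)
B.add_nodes_from((("in", v) for v in G), bipartite=1)
B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges)

matching = bipartite.maximum_matching(B, top_nodes=out_nodes)
matched_edges = len(matching) // 2                       # dict stores both directions
n_drivers = max(G.number_of_nodes() - matched_edges, 1)  # perfect matching still needs one input

print("minimum number of driver nodes:", n_drivers)
print("unmatched (driver) nodes:", [v for v in G if ("in", v) not in matching])
```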

115 citations

Journal ArticleDOI
TL;DR: The aim of this work is to enable multiscale modeling in systems medicine and to demonstrate the power of data-driven approaches in building such models.
Abstract: Wolkenhauer, O. et al. (2014). Enabling multiscale modeling in systems medicine. Genome Medicine, 6:21. doi:10.1186/gm538.

82 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, and approaches to combining models in the context of machine learning.
Abstract (table of contents): Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

01 Jan 2016
Modern Applied Statistics with S.

5,249 citations

Posted Content
TL;DR: In this book, the author provides a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter for modeling economic and social time series, and addresses the special problems which the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
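The statistical core of the book is the state space form and the Kalman filter; the simplest structural model, the local level (random walk plus noise) model, makes the recursion concrete. The sketch below is a generic implementation with arbitrary illustrative variances, not code from the book.

```python
# Kalman filter for the local level model: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
# The unobserved component mu_t is the "trend" the filter estimates recursively.
import numpy as np

rng = np.random.default_rng(3)
n, sigma_eps2, sigma_eta2 = 200, 1.0, 0.1   # arbitrary illustrative variances
mu = np.cumsum(rng.normal(scale=np.sqrt(sigma_eta2), size=n))   # latent level
y = mu + rng.normal(scale=np.sqrt(sigma_eps2), size=n)          # observations

a, p = 0.0, 1e6            # diffuse initialisation of the predicted state and its variance
filtered = np.empty(n)
for t in range(n):
    v, f = y[t] - a, p + sigma_eps2          # prediction error and its variance
    a, p = a + p * v / f, p - p**2 / f       # measurement update
    filtered[t] = a
    p = p + sigma_eta2                        # time update (random-walk transition)

print("RMSE of filtered level:", np.sqrt(np.mean((filtered - mu) ** 2)))
```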

4,252 citations

Journal ArticleDOI
21 May 2015-Cell
TL;DR: This work has developed a high-throughput droplet-microfluidic approach for barcoding the RNA from thousands of individual cells for subsequent analysis by next-generation sequencing, which shows a surprisingly low noise profile and is readily adaptable to other sequencing-based assays.
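On the computational side, the barcoding scheme reduces to grouping reads by cell barcode and collapsing PCR duplicates by UMI before counting. The sketch below uses made-up read tuples for illustration and is not the published Drop-seq pipeline.

```python
# Schematic digital expression counting for barcoded single-cell reads:
# group reads by cell barcode, collapse PCR duplicates by UMI, count per gene.
# Read tuples are hypothetical; real pipelines work from aligned BAM files.
from collections import defaultdict

# (cell_barcode, umi, gene) per aligned read -- hypothetical example data
reads = [
    ("ACGT", "AAA", "Nanog"), ("ACGT", "AAA", "Nanog"),   # PCR duplicate (same UMI)
    ("ACGT", "AAC", "Nanog"), ("ACGT", "GGA", "Sox2"),
    ("TTGC", "CCA", "Sox2"),  ("TTGC", "CCT", "Sox2"),
]

molecules = defaultdict(set)            # (cell, gene) -> set of distinct UMIs
for cell, umi, gene in reads:
    molecules[(cell, gene)].add(umi)

for (cell, gene), umis in sorted(molecules.items()):
    print(f"cell {cell}  gene {gene}: {len(umis)} molecule(s)")
```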

2,894 citations

01 Mar 2001
TL;DR: Using singular value decomposition in transforming genome-wide expression data from genes x arrays space to reduced diagonalized "eigengenes" x "eigenarrays" space gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype.
Abstract: We describe the use of singular value decomposition in transforming genome-wide expression data from genes × arrays space to reduced diagonalized "eigengenes" × "eigenarrays" space, where the eigengenes (or eigenarrays) are unique orthonormal superpositions of the genes (or arrays). Normalizing the data by filtering out the eigengenes (and eigenarrays) that are inferred to represent noise or experimental artifacts enables meaningful comparison of the expression of different genes across different arrays in different experiments. Sorting the data according to the eigengenes and eigenarrays gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype, respectively. After normalization and sorting, the significant eigengenes and eigenarrays can be associated with observed genome-wide effects of regulators, or with measured samples, in which these regulators are overactive or underactive, respectively.
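The eigengene/eigenarray construction is a singular value decomposition of the genes × arrays matrix. A minimal sketch on random low-rank data, with an arbitrary threshold standing in for the paper's noise-filtering criterion:

```python
# SVD of a genes x arrays expression matrix: rows of Vt are "eigengenes",
# columns of U are "eigenarrays". Dropping weak components approximates the
# noise-removal step described above. Random data used purely for illustration.
import numpy as np

rng = np.random.default_rng(4)
n_genes, n_arrays, rank = 1000, 10, 3
X = rng.normal(size=(n_genes, rank)) @ rng.normal(size=(rank, n_arrays))  # low-rank signal
X += 0.1 * rng.normal(size=(n_genes, n_arrays))                           # measurement noise

U, s, Vt = np.linalg.svd(X, full_matrices=False)
fraction = s**2 / np.sum(s**2)      # fraction of overall expression per eigengene
keep = fraction > 0.01              # drop eigengenes inferred to represent noise
X_filtered = U[:, keep] @ np.diag(s[keep]) @ Vt[keep]

print("fraction per eigengene:", np.round(fraction, 3))
print("components kept:", int(keep.sum()))
```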

1,815 citations