Author

Roxana Daneshjou

Bio: Roxana Daneshjou is an academic researcher from Stanford University. The author has contributed to research in topics: Medicine & Computer science. The author has an h-index of 13 and has co-authored 34 publications receiving 1,295 citations.

Papers
Journal ArticleDOI
TL;DR: Better than tarot cards or crystal balls, the authors show that intricate analyses of observational clinical data can improve physicians’ ability to predict the future—at least with respect to as yet uncharacterized adverse drug effects and interactions.
Abstract: Adverse drug events remain a leading cause of morbidity and mortality around the world. Many adverse events are not detected during clinical trials before a drug receives approval for use in the clinic. Fortunately, as part of postmarketing surveillance, regulatory agencies and other institutions maintain large collections of adverse event reports, and these databases present an opportunity to study drug effects from patient population data. However, confounding factors such as concomitant medications, patient demographics, patient medical histories, and reasons for prescribing a drug often are uncharacterized in spontaneous reporting systems, and these omissions can limit the use of quantitative signal detection methods used in the analysis of such data. Here, we present an adaptive data-driven approach for correcting these factors in cases for which the covariates are unknown or unmeasured and combine this approach with existing methods to improve analyses of drug effects using three test data sets. We also present a comprehensive database of drug effects (Offsides) and a database of drug-drug interaction side effects (Twosides). To demonstrate the biological use of these new resources, we used them to identify drug targets, predict drug indications, and discover drug class interactions. We then corroborated 47 ( P
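To make the signal-detection setting concrete, below is a minimal Python sketch of the proportional reporting ratio (PRR), the basic disproportionality statistic that analyses of spontaneous adverse-event reports typically start from. This is not the paper's adaptive covariate-correction method, and the report counts used here are hypothetical.

```python
# Minimal illustration of disproportionality signal detection on spontaneous
# adverse-event reports. This is NOT the adaptive covariate-correction method
# described in the paper; it only shows the proportional reporting ratio (PRR)
# that such signal-detection pipelines build on.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """2x2 report counts:
         a = reports with the drug AND the event
         b = reports with the drug, without the event
         c = reports without the drug, with the event
         d = reports without the drug, without the event
    PRR = [a / (a + b)] / [c / (c + d)]
    """
    drug_rate = a / (a + b)
    background_rate = c / (c + d)
    return drug_rate / background_rate

# Hypothetical counts for one drug-event pair (not real Offsides/Twosides data):
prr = proportional_reporting_ratio(a=40, b=960, c=200, d=98_800)
print(f"PRR = {prr:.1f}")  # values well above 1 suggest a possible signal
```

The confounders listed in the abstract (concomitant medications, demographics, indication) distort exactly these raw counts, which is why the paper's correction step matters before a statistic like this is interpreted.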

640 citations

Journal ArticleDOI
TL;DR: This review outlines recent developments in sequencing technologies and genome analysis methods for application in personalized medicine and outlines new methods needed in four areas to realize the potential of personalized medicine.
Abstract: Motivation: Widespread availability of low-cost, full genome sequencing will introduce new challenges for bioinformatics. Results: This review outlines recent developments in sequencing technologies and genome analysis methods for application in personalized medicine. New methods are needed in four areas to realize the potential of personalized medicine: (i) processing large-scale robust genomic data; (ii) interpreting the functional effect and the impact of genomic variation; (iii) integrating systems data to relate complex genetic interactions with phenotypes; and (iv) translating these discoveries into medical practice. Contact: russ.altman@stanford.edu Supplementary information: Supplementary data are available at Bioinformatics online.

243 citations

Journal ArticleDOI
TL;DR: A novel CYP2C single nucleotide polymorphism exerts a clinically relevant effect on warfarin dose in African Americans, independent of CYP2C9*2 and CYP2C9*3, and incorporation of this variant into pharmacogenetic dosing algorithms could improve warfarin dose prediction in this population.
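As a rough illustration of how a variant is "incorporated into a pharmacogenetic dosing algorithm", the sketch below fits an ordinary least-squares model of log dose with the genotype as one more covariate. The data, coefficients, and variant are synthetic and hypothetical; this is not the published warfarin algorithm.

```python
# Sketch: adding a genetic variant to a dosing regression as an extra covariate.
# All values below are synthetic; this is not the published warfarin model.
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(30, 80, n)             # years
weight = rng.uniform(50, 110, n)         # kg
variant_alleles = rng.integers(0, 3, n)  # 0/1/2 copies of a hypothetical variant

# Simulate log(weekly dose): carriers of the variant need lower doses.
log_dose = (3.0 - 0.01 * age + 0.005 * weight - 0.25 * variant_alleles
            + rng.normal(0, 0.15, n))

# Fit an ordinary least-squares dosing model with the variant as a covariate.
X = np.column_stack([np.ones(n), age, weight, variant_alleles])
coef, *_ = np.linalg.lstsq(X, log_dose, rcond=None)
print("intercept, age, weight, variant effect on log-dose:", np.round(coef, 3))
```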

233 citations

Journal ArticleDOI
TL;DR: A comprehensive overview of medical AI devices approved by the US Food and Drug Administration sheds new light on limitations of the evaluation process that can mask vulnerabilities of devices when they are deployed on patients.
Abstract: A comprehensive overview of medical AI devices approved by the US Food and Drug Administration sheds new light on limitations of the evaluation process that can mask vulnerabilities of devices when they are deployed on patients.

157 citations

Journal ArticleDOI
TL;DR: Methods for discovering genetic factors in drug response, including genome-wide association studies (GWAS), expression analysis, and other methods such as chemoinformatics and natural language processing (NLP) are described.
Abstract: There is great variation in drug-response phenotypes, and a “one size fits all” paradigm for drug delivery is flawed. Pharmacogenomics is the study of how human genetic information impacts drug response, and it aims to improve efficacy and reduce side effects. In this article, we provide an overview of pharmacogenetics, including pharmacokinetics (PK), pharmacodynamics (PD), gene and pathway interactions, and off-target effects. We describe methods for discovering genetic factors in drug response, including genome-wide association studies (GWAS), expression analysis, and other methods such as chemoinformatics and natural language processing (NLP). We cover the practical applications of pharmacogenomics both in the pharmaceutical industry and in a clinical setting. In drug discovery, pharmacogenomics can be used to aid lead identification, anticipate adverse events, and assist in drug repurposing efforts. Moreover, pharmacogenomic discoveries show promise as important elements of physician decision support. Finally, we consider the ethical, regulatory, and reimbursement challenges that remain for the clinical implementation of pharmacogenomics.
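For readers unfamiliar with the GWAS machinery mentioned above, the sketch below shows the kind of per-SNP allelic association test that is run genome-wide, here comparing allele counts between cases (e.g., poor responders) and controls. The counts are hypothetical, and a real study would repeat this over millions of variants with multiple-testing correction.

```python
# Minimal per-SNP allelic association test of the kind run genome-wide in a
# GWAS: compare effect-allele counts between cases and controls.
# The counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: cases, controls.  Columns: effect-allele count, other-allele count.
allele_counts = [[180, 820],   # cases
                 [120, 880]]   # controls

chi2, p_value, dof, expected = chi2_contingency(allele_counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.2e}")
# In a real GWAS this test is repeated for millions of SNPs, so significance
# thresholds are corrected for multiple testing (commonly p < 5e-8).
```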

58 citations


Cited by
Journal ArticleDOI
TL;DR: It is suggested that deep learning approaches could be the vehicle for translating big biomedical data into improved human health, and that holistic, interpretable architectures are needed to bridge deep learning models and human interpretability.
Abstract: Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. Both steps are challenging when the data are complicated and sufficient domain knowledge is lacking. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and the need for improved method development and applications, especially in terms of ease of understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.

1,573 citations

Journal ArticleDOI
TL;DR: The findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.
Abstract: Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name “deep patient”. We evaluated whether this representation is broadly predictive of health states by assessing the probability that patients would develop various diseases. We performed the evaluation using 76,214 test patients and 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance was among the best for severe diabetes, schizophrenia, and various cancers. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.
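To illustrate the core mechanism, here is a minimal single-layer denoising autoencoder in plain NumPy: a patient vector is corrupted with masking noise and the network learns to reconstruct the clean record, so the hidden layer becomes a compact patient representation. The paper stacks three such layers on real EHR descriptors; the binary data here are random stand-ins, not Mount Sinai records.

```python
# Minimal single-layer denoising autoencoder sketch (the "deep patient" paper
# stacks three such layers on real EHR features; the data here are synthetic).
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_features, n_hidden = 256, 100, 20

# Synthetic binary "patient x clinical descriptor" matrix (not real EHR data).
X = (rng.random((n_patients, n_features)) < 0.1).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 0.1, (n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_features)); b2 = np.zeros(n_features)
lr = 0.1

for step in range(500):
    # Masking noise: randomly zero out 20% of the inputs.
    X_noisy = X * (rng.random(X.shape) > 0.2)

    # Forward pass: encode the corrupted record, decode back to the clean one.
    H = sigmoid(X_noisy @ W1 + b1)      # hidden patient representation
    X_hat = sigmoid(H @ W2 + b2)        # reconstruction of the clean record
    loss = np.mean(np.sum((X_hat - X) ** 2, axis=1))  # squared error per patient

    # Backpropagation for sigmoid units with squared-error loss.
    g_out = (2.0 / n_patients) * (X_hat - X) * X_hat * (1 - X_hat)
    g_W2, g_b2 = H.T @ g_out, g_out.sum(axis=0)
    g_hid = (g_out @ W2.T) * H * (1 - H)
    g_W1, g_b1 = X_noisy.T @ g_hid, g_hid.sum(axis=0)

    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
    if step % 100 == 0:
        print(f"step {step}: reconstruction loss {loss:.3f}")

# The learned hidden activations serve as the patient representation that a
# downstream disease-prediction model would consume.
deep_features = sigmoid(X @ W1 + b1)
```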

1,155 citations

Journal ArticleDOI
30 Jan 2015-Science
TL;DR: This Review summarizes and draws connections between diverse streams of empirical research on privacy behavior: people’s uncertainty about the consequences of privacy-related behaviors and their own preferences over those consequences; the context-dependence of people's concern about privacy; and the degree to which privacy concerns are malleable—manipulable by commercial and governmental interests.
Abstract: This Review summarizes and draws connections between diverse streams of empirical research on privacy behavior. We use three themes to connect insights from social and behavioral sciences: people's uncertainty about the consequences of privacy-related behaviors and their own preferences over those consequences; the context-dependence of people's concern, or lack thereof, about privacy; and the degree to which privacy concerns are malleable—manipulable by commercial and governmental interests. Organizing our discussion by these themes, we offer observations concerning the role of public policy in the protection of privacy in the information age.

1,139 citations

Journal ArticleDOI
20 May 2014-eLife
TL;DR: Dixon, Patel, et al. found that erastin is a very effective inhibitor of system xc− function, with certain versions over 1,000 times more potent than the previously known best inhibitor, sulfasalazine.
Abstract: Sugars, fats, amino acids, and other nutrients cannot simply diffuse into the cell. Rather, they must be transported across the cell membrane by specific proteins that stretch from one side of the cell membrane to the other. One such ‘transporter’—system xc−—is of special interest. This transporter imports one molecule of cystine from outside the cell in exchange for one molecule of glutamate from inside the cell. Cystine, a variant of the amino acid cysteine, is essential for synthesizing new proteins and for preventing the accumulation of toxic species inside the cell. Not surprisingly, many cancer cells are dependent upon the transport activity of system xc− for growth and survival. Drugs that can inhibit system xc− could therefore be part of potential treatments for cancer and other diseases. Dixon, Patel, et al. have found that the compound erastin is a very effective inhibitor of system xc− function. Certain versions of erastin are over 1000 times more potent than the previously known best inhibitor of system xc−, sulfasalazine. Dixon, Patel et al. found that using erastin and sulfasalazine to inhibit system xc− in cancer cells grown in a petri dish results in an unusual type of iron-dependent cell death called ferroptosis. By inhibiting the uptake of cystine, erastin and other system xc− inhibitors interfere with the cellular machinery that folds proteins into their final, three-dimensional shape. The accumulation of these partially-folded proteins in the cell causes a specific kind of cellular stress that can be used as a readout, or biomarker, for the inhibition of system xc−. Such a biomarker will be essential for identifying cells in the body that have been exposed to agents that inhibit system xc− and that are undergoing ferroptosis. Unexpectedly, Dixon, Patel et al. also found that the FDA-approved anti-cancer drug sorafenib inhibits system xc−. Other drugs in the same class as sorafenib do not share this unusual property. Dixon, Patel, et al. synthesized variants of sorafenib and identified sites on the drug that are necessary for it to be able to interfere with system xc−. Alongside the erastin derivatives, these new molecules may help to develop new drugs that can inhibit this important transporter in a clinical setting.

1,137 citations

Journal ArticleDOI
TL;DR: Decagon is presented, an approach for modeling polypharmacy side effects that develops a new graph convolutional neural network for multirelational link prediction in multimodal networks and can predict the exact side effect, if any, through which a given drug combination manifests clinically.
Abstract: Motivation: The use of drug combinations, termed polypharmacy, is common to treat patients with complex diseases or co-existing conditions. However, a major consequence of polypharmacy is a much higher risk of adverse side effects for the patient. Polypharmacy side effects emerge because of drug-drug interactions, in which activity of one drug may change, favorably or unfavorably, if taken with another drug. The knowledge of drug interactions is often limited because these complex relationships are rare, and are usually not observed in relatively small clinical testing. Discovering polypharmacy side effects thus remains an important challenge with significant implications for patient mortality and morbidity. Results: Here, we present Decagon, an approach for modeling polypharmacy side effects. The approach constructs a multimodal graph of protein-protein interactions, drug-protein target interactions and the polypharmacy side effects, which are represented as drug-drug interactions, where each side effect is an edge of a different type. Decagon is developed specifically to handle such multimodal graphs with a large number of edge types. Our approach develops a new graph convolutional neural network for multirelational link prediction in multimodal networks. Unlike approaches limited to predicting simple drug-drug interaction values, Decagon can predict the exact side effect, if any, through which a given drug combination manifests clinically. Decagon accurately predicts polypharmacy side effects, outperforming baselines by up to 69%. We find that it automatically learns representations of side effects indicative of co-occurrence of polypharmacy in patients. Furthermore, Decagon models particularly well polypharmacy side effects that have a strong molecular basis, while on predominantly non-molecular side effects, it achieves good performance because of effective sharing of model parameters across edge types. Decagon opens up opportunities to use large pharmacogenomic and patient population data to flag and prioritize polypharmacy side effects for follow-up analysis via formal pharmacological studies. Availability and implementation: Source code and preprocessed datasets are at: http://snap.stanford.edu/decagon
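The sketch below conveys the multirelational link-prediction idea in simplified form: each side effect gets its own relation matrix, and a candidate (drug_i, side effect, drug_j) triple is scored from the two drug embeddings. It omits Decagon's graph-convolutional encoder and its specific decoder parameterization; the random embeddings and matrices are stand-ins, not learned parameters.

```python
# Simplified sketch of multirelational link prediction for polypharmacy side
# effects: score a (drug_i, side effect r, drug_j) triple with a per-side-effect
# bilinear form over drug embeddings. Not Decagon's exact encoder/decoder;
# all parameters below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_drugs, n_side_effects, dim = 50, 4, 16

drug_embeddings = rng.normal(size=(n_drugs, dim))                 # z_i
relation_matrices = rng.normal(size=(n_side_effects, dim, dim))   # one M_r per side effect

def edge_probability(i: int, r: int, j: int) -> float:
    """Bilinear score sigmoid(z_i^T M_r z_j) for drug pair (i, j) and side effect r."""
    score = drug_embeddings[i] @ relation_matrices[r] @ drug_embeddings[j]
    return 1.0 / (1.0 + np.exp(-score))

# Rank which modeled side effect a hypothetical drug pair is most likely to
# manifest (with trained parameters this is the prediction task).
i, j = 3, 17
probs = [edge_probability(i, r, j) for r in range(n_side_effects)]
print("per-side-effect edge probabilities:", np.round(probs, 3))
```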

850 citations