Journal ArticleDOI

A transfer learning approach to drug resistance classification in mixed HIV dataset

TL;DR: In this article, a transfer learning approach is used to classify patients' response to failed treatments due to adverse drug reactions: a soft-computing model is pre-trained to cluster CD4+ counts and viral loads of treatment change episodes (TCEs) processed from two disparate sources, including the Stanford HIV Drug Resistance Database ( https://hivdb.stanford.edu ).

About: This article was published in Informatics in Medicine Unlocked on 2021-01-01, is open access, and has received 5 citations to date.
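
The TL;DR above describes pre-training a soft-computing model to cluster CD4+ counts and viral loads of treatment change episodes (TCEs) and then transferring it to classify treatment response. The exact model is not specified in this summary, so the sketch below only illustrates the general pretrain-then-transfer pattern under stated assumptions: a KMeans clustering is fitted on source TCEs (e.g. records derived from the Stanford HIVDB), and its cluster geometry is reused as a feature extractor for a classifier on a target dataset. File and column names (source_tces.csv, cd4_count, viral_load, response) are hypothetical placeholders.

```python
# Minimal sketch of the pretrain-then-transfer pattern described above.
# File and column names are hypothetical, not the ones used in the paper.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1) Pre-train: cluster CD4+ counts and viral loads from the source TCEs.
source = pd.read_csv("source_tces.csv")                       # hypothetical file
scaler = StandardScaler().fit(source[["cd4_count", "viral_load"]])
kmeans = KMeans(n_clusters=5, random_state=0).fit(
    scaler.transform(source[["cd4_count", "viral_load"]]))

# 2) Transfer: reuse the fitted scaler and cluster geometry as a feature
#    extractor for the target TCE dataset (distances to cluster centres).
target = pd.read_csv("target_tces.csv")                       # hypothetical file
X = kmeans.transform(scaler.transform(target[["cd4_count", "viral_load"]]))
y = target["response"]                                        # e.g. failed vs. successful treatment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```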
Citations
Posted ContentDOI
03 Oct 2021-medRxiv
TL;DR: Transfer learning is a form of machine learning where a pre-trained model trained on a specific task is reused as a starting point and tailored to another task in a different dataset.
Abstract: Background Transfer learning is a form of machine learning where a pre-trained model trained on a specific task is reused as a starting point and tailored to another task in a different dataset. While transfer learning has garnered considerable attention in medical image analysis, its use for clinical non-image data is not well studied. Therefore, the objective of this scoping review was to explore the use of transfer learning for non-image data in the clinical literature. Methods and Findings We systematically searched medical databases (PubMed, EMBASE, CINAHL) for peer-reviewed clinical studies that used transfer learning on human non-image data. We included 83 studies in the review. More than half of the studies (63%) were published within 12 months of the search. Transfer learning was most often applied to time series data (61%), followed by tabular data (18%), audio (12%) and text (8%). Thirty-three (40%) studies applied an image-based model to non-image data after transforming data into images (e.g. spectrograms). Twenty-nine (35%) studies did not have any authors with a health-related affiliation. Many studies used publicly available datasets (66%) and models (49%), but fewer shared their code (27%). Conclusions In this scoping review, we have described current trends in the use of transfer learning for non-image data in the clinical literature. We found that the use of transfer learning has grown rapidly within the last few years. We have identified studies and demonstrated the potential of transfer learning in clinical research in a wide range of medical specialties. More interdisciplinary collaborations and the wider adoption of reproducible research principles are needed to increase the impact of transfer learning in clinical research.

10 citations
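
One pattern the review above highlights is that 40% of studies transformed non-image data into images (e.g. spectrograms) so that an image-based pretrained model could be reused. The snippet below is a minimal, generic sketch of that idea, not the pipeline of any study in the review: a 1-D signal is turned into a spectrogram with SciPy and passed through an ImageNet-pretrained ResNet50 as a fixed feature extractor. The synthetic signal and the sampling rate are assumptions.

```python
# Hedged illustration: non-image signal -> spectrogram image -> pretrained CNN features.
import numpy as np
from scipy.signal import spectrogram
from skimage.transform import resize
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

fs = 250                                   # sampling rate (Hz), assumed
signal = np.random.randn(fs * 10)          # synthetic stand-in for a clinical time series

# Turn the signal into a 2-D time-frequency image.
_, _, sxx = spectrogram(signal, fs=fs)
img = np.log1p(sxx)
img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # scale to [0, 1]

# Resize to the CNN input size and repeat to fake 3 colour channels.
img = resize(img, (224, 224))
img = np.stack([img, img, img], axis=-1) * 255.0

# Extract features with an ImageNet-pretrained ResNet50 (no fine-tuning here).
model = ResNet50(weights="imagenet", include_top=False, pooling="avg")
features = model.predict(preprocess_input(img[np.newaxis, ...]))
print(features.shape)                      # (1, 2048) feature vector for a downstream classifier
```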

Journal ArticleDOI
TL;DR: Transfer learning is a form of machine learning where a pre-trained model trained on a specific task is reused as a starting point and tailored to another task in a different dataset.

9 citations

Journal ArticleDOI
TL;DR: In this article, a geometric deep learning (GDL) approach is proposed to predict drug resistance to HIV and virus-drug interactions; the results show that the proposed GDL method outperforms existing methods in predicting HIV drug resistance with 93.3% accuracy.

3 citations
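
The entry above reports only that a geometric deep learning (GDL) method achieved 93.3% accuracy; no architecture is given here. As a purely generic illustration of the building block such methods typically rely on, the sketch below implements a single symmetric-normalised graph convolution layer in PyTorch applied to a toy interaction graph. The graph, feature sizes, and layer are illustrative assumptions, not the cited paper's model.

```python
# Minimal single-layer graph convolution in PyTorch (generic illustration only).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # Add self-loops and symmetrically normalise: D^-1/2 (A + I) D^-1/2.
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return torch.relu(self.linear(norm @ x))

# Toy virus-drug interaction graph: 6 nodes with 8-dimensional node features.
adj = torch.bernoulli(torch.full((6, 6), 0.3))
adj = ((adj + adj.t()) > 0).float()          # make the adjacency symmetric
x = torch.randn(6, 8)

layer = GCNLayer(8, 4)
print(layer(adj, x).shape)                   # torch.Size([6, 4]) node embeddings
```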

Journal ArticleDOI
TL;DR: In this paper, the authors provided a control dataset of processed prognostic indicators for analysing drug resistance in patients on antiretroviral therapy (ART). The dataset was locally sourced from health facilities in Akwa Ibom State, Nigeria, West Africa, and contains 14 attributes with 1506 unique records filtered from 3168 individual treatment change episodes.
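
As a rough illustration of the kind of curation described above (3168 treatment change episodes reduced to 1506 unique records over 14 attributes), the snippet below deduplicates a TCE table with pandas. The file name and columns are hypothetical; the dataset paper's actual filtering criteria are not reproduced here.

```python
# Hedged sketch: deduplicating treatment change episode (TCE) records with pandas.
# "tce_raw.csv" and the column names are illustrative placeholders only.
import pandas as pd

raw = pd.read_csv("tce_raw.csv")                            # e.g. 3168 raw TCE rows
attributes = [c for c in raw.columns if c != "patient_id"]  # assumed prognostic attributes

clean = (raw.dropna(subset=attributes)                      # drop incomplete episodes
            .drop_duplicates(subset=attributes)             # keep unique records only
            .reset_index(drop=True))
print(len(raw), "raw episodes ->", len(clean), "unique records")
```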
Journal ArticleDOI
TL;DR: In this paper, transfer learning based on VGG19 and ResNet neural networks was used to extract features from CT scans for TB diagnosis. The best-performing model for the classification of multi-drug resistance was a three-channel model that used VGG19 and a cascade of convolutional and dense layers, achieving an accuracy of 74.13% and an AUC of 64.2%.
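
The summary above names VGG19 plus a cascade of convolutional and dense layers but gives no further detail of the three-channel design, so the Keras sketch below only shows the general pattern it implies: freeze an ImageNet-pretrained VGG19 backbone and train a small convolutional/dense head for a binary multi-drug-resistant vs. drug-susceptible label. Input size and layer widths are assumptions.

```python
# Generic VGG19 transfer-learning head in Keras (illustrative, not the paper's exact model).
import tensorflow as tf

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # MDR vs. drug-susceptible TB
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```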
References
Journal ArticleDOI
TL;DR: The relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift are discussed.
Abstract: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.

18,616 citations
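
Among the related settings the survey discusses are sample selection bias and covariate shift, where source and target domains share the conditional label distribution but differ in the input distribution. A standard textbook correction, not taken from the survey itself, is importance weighting with the density ratio p_target(x)/p_source(x); the sketch below estimates that ratio with a logistic-regression domain classifier and reweights the source examples accordingly. All data here are synthetic.

```python
# Importance weighting for covariate shift, estimated with a domain classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(500, 2))        # labelled source inputs
X_tgt = rng.normal(1.0, 1.0, size=(500, 2))        # unlabelled target inputs (shifted)
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)

# Train a classifier to tell source from target; its odds approximate p_tgt(x)/p_src(x).
domain_X = np.vstack([X_src, X_tgt])
domain_y = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
dom = LogisticRegression().fit(domain_X, domain_y)
p_tgt = dom.predict_proba(X_src)[:, 1]
weights = p_tgt / (1.0 - p_tgt)

# Reweighted source-domain training approximates training on the target distribution.
clf = LogisticRegression().fit(X_src, y_src, sample_weight=weights)
```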

Journal ArticleDOI
TL;DR: The prevalence and incidence rates, the established environmental risk factors, and the protective factors are discussed, and genetic variants predisposing to disease are reviewed.
Abstract: The global prevalence of dementia has been estimated to be as high as 24 million, and is predicted to double every 20 years until at least 2040. As the population worldwide continues to age, the number of individuals at risk will also increase, particularly among the very old. Alzheimer disease is the leading cause of dementia beginning with impaired memory. The neuropathological hallmarks of Alzheimer disease include diffuse and neuritic extracellular amyloid plaques in brain that are frequently surrounded by dystrophic neurites and intraneuronal neurofibrillary tangles. The etiology of Alzheimer disease remains unclear, but it is likely to be the result of both genetic and environmental factors. In this review we discuss the prevalence and incidence rates, the established environmental risk factors, and the protective factors, and briefly review genetic variants predisposing to disease.

1,135 citations

Journal ArticleDOI
TL;DR: The basic principles of antiretroviral drug therapy, the mode of drug action, and the factors leading to treatment failure are reviewed (i.e., drug resistance).
Abstract: The most significant advance in the medical management of HIV-1 infection has been the treatment of patients with antiviral drugs, which can suppress HIV-1 replication to undetectable levels. The discovery of HIV-1 as the causative agent of AIDS together with an ever-increasing understanding of the virus replication cycle have been instrumental in this effort by providing researchers with the knowledge and tools required to prosecute drug discovery efforts focused on targeted inhibition with specific pharmacological agents. To date, an arsenal of 24 Food and Drug Administration (FDA)-approved drugs are available for treatment of HIV-1 infections. These drugs are distributed into six distinct classes based on their molecular mechanism and resistance profiles: (1) nucleoside-analog reverse transcriptase inhibitors (NRTIs), (2) non-nucleoside reverse transcriptase inhibitors (NNRTIs), (3) integrase inhibitors, (4) protease inhibitors (PIs), (5) fusion inhibitors, and (6) coreceptor antagonists. In this article, we will review the basic principles of antiretroviral drug therapy, the mode of drug action, and the factors leading to treatment failure (i.e., drug resistance).

751 citations

Journal ArticleDOI
TL;DR: It is concluded that transfer learning can improve current CADx methods while also providing standalone classifiers without large datasets, facilitating machine-learning methods in radiomics and precision medicine.
Abstract: Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features in terms of area under the ROC curve. Further, the performance of ensemble classifiers based on both feature types was significantly better than the AUC of 0.81 achieved by either classifier type alone. We conclude that transfer learning can improve current CADx methods while also providing standalone classifiers without large datasets, facilitating machine-learning methods in radiomics and precision medicine.

431 citations
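
The study above pairs off-the-shelf CNN features with a support vector machine and evaluates with five-fold cross-validated ROC AUC. The sketch below shows that generic pipeline under stated assumptions: the feature matrix X stands in for CNN-extracted lesion features (e.g. produced by a pretrained network as in the ResNet50 example earlier), and the random labels stand in for benign/malignant annotations; it does not reproduce the paper's data or settings.

```python
# Pretrained-CNN features -> SVM, scored by cross-validated ROC AUC (illustrative only).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

X = np.random.randn(219, 2048)         # stand-in for CNN-extracted lesion features
y = np.random.randint(0, 2, size=219)  # stand-in benign (0) / malignant (1) labels

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(svm, X, y, cv=cv, scoring="roc_auc")
print("mean cross-validated AUC:", aucs.mean())
```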

Proceedings Article
07 Aug 2011
TL;DR: This paper proposes a heterogeneous transfer learning framework for knowledge transfer between text and images by enriching the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method.
Abstract: Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related source domains for learning. While most of the existing works in this area only focused on using the source data with the same structure as the target data, in this paper, we push this boundary further by proposing a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text can be arbitrarily found. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method. By using the latent semantic features generated by the auxiliary data, we are able to build a better integrated image classifier. We empirically demonstrate the effectiveness of our algorithm on the Caltech-256 image dataset.

347 citations
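
The paper above enriches image representations with latent semantic concepts learned by factorising auxiliary text data. The snippet below is a heavily simplified illustration of that idea using scikit-learn's NMF rather than the paper's own factorisation method: latent concepts are learned over a tag-document count matrix, and each image's tag vector is projected onto those concepts and appended to its visual features. All matrices, shapes, and names are assumptions.

```python
# Simplified heterogeneous-transfer idea: latent topics from auxiliary text enrich image features.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
tag_doc = rng.poisson(1.0, size=(200, 50)).astype(float)     # auxiliary tag x document counts
image_tags = rng.poisson(0.5, size=(30, 200)).astype(float)  # per-image tag annotations
image_visual = rng.normal(size=(30, 128))                    # visual features (e.g. from a CNN)

# Learn latent "semantic concepts" over tags from the auxiliary text corpus.
nmf = NMF(n_components=10, init="nndsvda", random_state=0, max_iter=500)
tag_topics = nmf.fit_transform(tag_doc)                      # (200 tags x 10 concepts)

# Project each image's tags onto the concepts and append to its visual features.
image_semantic = image_tags @ tag_topics                     # (30 images x 10 concepts)
enriched = np.hstack([image_visual, image_semantic])         # richer input for a classifier
print(enriched.shape)
```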