Author

Fei Wang

Bio: Fei Wang is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in topics: Medicine & Computer science. The author has an h-index of 107 and has co-authored 1824 publications receiving 53587 citations. Previous affiliations of Fei Wang include Florida International University & University of Cincinnati.


Papers
Journal ArticleDOI
TL;DR: The article suggests that deep learning approaches could be the vehicle for translating big biomedical data into improved human health, and calls for holistic, meaningfully interpretable architectures that bridge deep learning models and human interpretability.
Abstract: Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data are emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. Both steps are challenging when the data are complicated and sufficient domain knowledge is lacking. The latest advances in deep learning technologies provide effective new paradigms for obtaining end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and the need for improved method development and application, especially in terms of ease of understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic, meaningfully interpretable architectures to bridge deep learning models and human interpretability.
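To make the contrast between the two paradigms concrete, here is a minimal sketch of an end-to-end model that learns its own representation from raw inputs instead of relying on hand-engineered features; the PyTorch framework, layer sizes and input dimension are illustrative assumptions, not details taken from the review.

```python
# Minimal sketch of end-to-end learning (illustrative; not from the review).
# The network maps raw inputs directly to a prediction, so the intermediate
# representation is learned rather than hand-engineered.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),   # learned feature extraction
    nn.Linear(64, 32), nn.ReLU(),    # replaces manual feature engineering
    nn.Linear(32, 1), nn.Sigmoid(),  # e.g. probability of a clinical outcome
)

x = torch.randn(8, 128)              # a batch of 8 raw, unengineered inputs
print(model(x).shape)                # torch.Size([8, 1])
```

A traditional pipeline would instead compute hand-crafted features from the raw records and fit a separate prediction or clustering model on top of them.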

1,573 citations

Journal ArticleDOI
TL;DR: In this paper, the α6/β4 heterodimer was found to play a significant role in directing polarity and tissue structure of mammary epithelial cells, suggesting the existence of intimate interactions between different integrin pathways as well as adherens junctions.
Abstract: In a recently developed human breast cancer model, treatment of tumor cells in a 3-dimensional culture with inhibitory β1-integrin antibody or its Fab fragments led to a striking morphological and functional reversion to a normal phenotype. A stimulatory β1-integrin antibody proved to be ineffective. The newly formed reverted acini re-assembled a basement membrane, re-established E-cadherin–catenin complexes, and re-organized their cytoskeletons. At the same time they downregulated cyclin D1, upregulated p21CIP1/WAF1, and stopped growing. Tumor cells treated with the same antibody and injected into nude mice formed significantly fewer and smaller tumors. The tissue distribution of other integrins was also normalized, suggesting the existence of intimate interactions between the different integrin pathways as well as adherens junctions. On the other hand, nonmalignant cells treated with either α6 or β4 function-altering antibodies continued to grow and displayed disorganized colony morphologies resembling those of the untreated tumor colonies. This indicates a significant role for the α6/β4 heterodimer in directing polarity and tissue structure. The observed phenotypes were reversible when the cells were dissociated and the antibodies removed. Our results illustrate that the extracellular matrix and its receptors dictate the phenotype of mammary epithelial cells, and thus in this model system the tissue phenotype is dominant over the cellular genotype.

1,329 citations

Journal ArticleDOI
27 Jul 2006-Nature
TL;DR: It is shown that electric fields, of a strength equal to those detected endogenously, direct cell migration during wound healing as a prime directional cue.
Abstract: Wound healing is essential for maintaining the integrity of multicellular organisms. In every species studied, disruption of an epithelial layer instantaneously generates endogenous electric fields, which have been proposed to be important in wound healing. The identity of signalling pathways that guide both cell migration to electric cues and electric-field-induced wound healing have not been elucidated at a genetic level. Here we show that electric fields, of a strength equal to those detected endogenously, direct cell migration during wound healing as a prime directional cue. Manipulation of endogenous wound electric fields affects wound healing in vivo. Electric stimulation triggers activation of Src and inositol-phospholipid signalling, which polarizes in the direction of cell migration. Notably, genetic disruption of phosphatidylinositol-3-OH kinase-gamma (PI(3)Kgamma) decreases electric-field-induced signalling and abolishes directed movements of healing epithelium in response to electric signals. Deletion of the tumour suppressor phosphatase and tensin homolog (PTEN) enhances signalling and electrotactic responses. These data identify genes essential for electrical-signal-induced wound healing and show that PI(3)Kgamma and PTEN control electrotaxis.

871 citations

Journal ArticleDOI
TL;DR: A novel graph-based semi-supervised learning approach is proposed based on a linear neighborhood model, which assumes that each data point can be linearly reconstructed from its neighborhood, and which can propagate labels from the labeled points to the whole data set through these linear neighborhoods with sufficient smoothness.
Abstract: In many practical data mining applications, such as text classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms have attracted considerable interest from the data mining and machine learning fields. In recent years, graph-based semi-supervised learning has become one of the most active research areas in the semi-supervised learning community. In this paper, a novel graph-based semi-supervised learning approach is proposed based on a linear neighborhood model, which assumes that each data point can be linearly reconstructed from its neighborhood. Our algorithm, named linear neighborhood propagation (LNP), can propagate the labels from the labeled points to the whole data set using these linear neighborhoods with sufficient smoothness. A theoretical analysis of the properties of LNP is presented in this paper. Furthermore, we also derive an easy way to extend LNP to out-of-sample data. Promising experimental results are presented for synthetic data, digit, and text classification tasks.
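As a rough illustration of the two steps the abstract describes, the sketch below builds reconstruction weights from each point's k nearest neighbors and then propagates labels iteratively. The function name, the regularization term, and the clipping used to keep weights non-negative are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch of linear neighborhood propagation (LNP), under the
# simplifying assumptions noted above.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lnp(X, y, k=10, alpha=0.99, n_iter=200, reg=1e-3):
    """X: (n, d) features; y: (n,) labels, with -1 marking unlabeled points."""
    n = X.shape[0]
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    W = np.zeros((n, n))
    for i in range(n):
        neigh = idx[i, 1:]                  # skip the point itself
        Z = X[neigh] - X[i]                 # shift the neighborhood to the origin
        G = Z @ Z.T                         # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)  # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))  # sum-to-one reconstruction weights
        w = np.clip(w, 0, None)             # crude non-negativity (simplification)
        W[i, neigh] = w / w.sum()
    classes = np.unique(y[y >= 0])
    Y = np.zeros((n, classes.size))
    Y[y >= 0, np.searchsorted(classes, y[y >= 0])] = 1.0
    F = Y.copy()
    for _ in range(n_iter):                 # F <- alpha * W F + (1 - alpha) * Y
        F = alpha * W @ F + (1 - alpha) * Y
    return classes[F.argmax(axis=1)]
```

Because W is row-stochastic and alpha < 1, the iteration converges to the closed-form solution (1 - alpha)(I - alpha W)^{-1} Y, so either form can be used.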

720 citations


Cited by
01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models that are difficult to parallelize efficiently, namely those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine that allows message passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90%, and an 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
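Of the three algorithms, the spatial-decomposition idea is the easiest to sketch: each processor owns a region of the simulation box and the atoms currently inside it. The snippet below shows only the atom-to-region assignment; the box size, processor grid, and function name are arbitrary illustrative choices, and the exchange of boundary atoms via message passing is omitted.

```python
# Sketch of spatial decomposition: map each atom to the processor that owns
# the region of the box containing it (illustrative assumptions noted above).
import numpy as np

def assign_to_domains(positions, box_length, proc_grid=(2, 2, 2)):
    grid = np.asarray(proc_grid)
    cell = box_length / grid                       # edge lengths of one domain
    coords = np.floor(positions / cell).astype(int) % grid
    # flatten the (ix, iy, iz) domain index into a single processor rank
    return (coords[:, 0] * grid[1] + coords[:, 1]) * grid[2] + coords[:, 2]

positions = np.random.rand(1000, 3) * 10.0         # 1000 atoms in a 10x10x10 box
owners = assign_to_domains(positions, 10.0)
print(np.bincount(owners))                         # number of atoms per processor
```

In a full implementation, atoms that cross a domain boundary after a timestep, along with copies of atoms within the short-range cutoff of a boundary, would be communicated to the neighboring processors.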

29,323 citations

28 Jul 2005
TL;DR: Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1) interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.

Abstract: Antigenic variation allows many pathogenic microorganisms to evade the host immune response. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. The var gene family of each haploid genome encodes about 60 members, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

01 Jan 2016
Using Multivariate Statistics

14,604 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
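The mail-filtering scenario in the last category can be made concrete with a small sketch; the toy messages, the labels, and the scikit-learn naive Bayes pipeline are illustrative assumptions, not part of the original article.

```python
# Toy version of the personalized mail filter described above: the model
# learns the mapping from messages to the user's keep/reject decisions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "meeting moved to 3pm",
            "cheap loans click here", "lunch tomorrow?"]
rejected = [1, 0, 1, 0]                    # 1 = the user rejected the message

mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
mail_filter.fit(messages, rejected)        # learn the user's filtering rules
print(mail_filter.predict(["free loans, click now"]))   # most likely [1]
```

As new keep/reject decisions arrive, refitting the pipeline keeps the filtering rules up to date without any hand-written rules.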

13,246 citations