Institution
Xi'an Jiaotong University
Education • Xi'an, China
About: Xi'an Jiaotong University is an education organization based in Xi'an, China. It is known for research contributions in the topics: Heat transfer & Dielectric. The organization has 85,440 authors who have published 99,682 publications receiving 1,579,683 citations. The organization is also known as Xi'an Jiaotong University & Xi'an Jiao Tong University.
Papers published on a yearly basis (chart not shown)
Papers
TL;DR: It is proved that WNNP is equivalent to a standard quadratic programming problem with linear constraints, which facilitates solving the original problem with off-the-shelf convex optimization solvers, and an automatic weight-setting method is presented, which greatly facilitates the practical implementation of WNNM.
Abstract: As a convex relaxation of the rank minimization model, the nuclear norm minimization (NNM) problem has been attracting significant research interest in recent years. The standard NNM regularizes each singular value equally, composing an easily calculated convex norm. However, this restricts its capability and flexibility in dealing with many practical problems, where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, which adaptively assigns weights to different singular values. As the key step of solving general WNNM models, the theoretical properties of the weighted nuclear norm proximal (WNNP) operator are investigated. Albeit nonconvex, we prove that WNNP is equivalent to a standard quadratic programming problem with linear constraints, which facilitates solving the original problem with off-the-shelf convex optimization solvers. In particular, when the weights are sorted in a non-descending order, its optimal solution can be easily obtained in closed form. With WNNP, the solving strategies for multiple extensions of WNNM, including robust PCA and matrix completion, can be readily constructed under the alternating direction method of multipliers paradigm. Furthermore, inspired by the reweighted sparse coding scheme, we present an automatic weight setting method, which greatly facilitates the practical implementation of WNNM. The proposed WNNM methods achieve state-of-the-art performance in typical low-level vision tasks, including image denoising, background subtraction, and image inpainting.
608 citations
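The closed-form step described in the abstract, for weights sorted in non-descending order, amounts to soft-thresholding each singular value by its corresponding weight. Below is a minimal NumPy sketch of that weighted singular value thresholding step, assuming the proximal form min_X 0.5‖X−Y‖²_F + Σᵢ wᵢσᵢ(X); the function name `weighted_svt` is illustrative, not from the paper:

```python
import numpy as np

def weighted_svt(Y, weights):
    # Weighted singular value thresholding: the closed-form WNNP solution
    # when weights are in non-descending order (w_1 <= w_2 <= ...),
    # matched to singular values in descending order.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)  # soft-threshold each singular value
    return U @ np.diag(s_shrunk) @ Vt
```

With equal weights this reduces to standard singular value thresholding, the proximal operator of the plain nuclear norm.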
Affiliations: University of Washington; University of Southern California; Harvard University; University of Michigan; University of Groningen; Max Planck Society; University of Maryland, Baltimore; Icahn School of Medicine at Mount Sinai; Xi'an Jiaotong University; University of Texas MD Anderson Cancer Center; University of North Carolina at Charlotte; Broad Institute; European Bioinformatics Institute; Yale University; University of California, Davis; University of Utah; Pacific Biosciences; University of California, San Diego; Illumina; Ludwig Institute for Cancer Research; Ewha Womans University; Drexel University; University of Texas Health Science Center at Houston; Washington University in St. Louis; University of Malaya; University of California, San Francisco; University of British Columbia; BC Cancer Agency
TL;DR: A suite of long-read, short-read, and strand-specific sequencing technologies, optical mapping, and variant discovery algorithms is applied to comprehensively analyze three trios and define the full spectrum of human genetic variation in a haplotype-resolved manner.
Abstract: The incomplete identification of structural variants (SVs) from whole-genome sequencing data limits studies of human genetic diversity and disease association. Here, we apply a suite of long-read, short-read, strand-specific sequencing technologies, optical mapping, and variant discovery algorithms to comprehensively analyze three trios to define the full spectrum of human genetic variation in a haplotype-resolved manner. We identify 818,054 indel variants (<50 bp) and 27,622 SVs (≥50 bp) per genome. We also discover 156 inversions per genome and 58 of the inversions intersect with the critical regions of recurrent microdeletion and microduplication syndromes. Taken together, our SV callsets represent a three to sevenfold increase in SV detection compared to most standard high-throughput sequencing studies, including those from the 1000 Genomes Project. The methods and the dataset presented serve as a gold standard for the scientific community allowing us to make recommendations for maximizing structural variation sensitivity for future genome sequencing studies.
606 citations
TL;DR: The results show that the proposed approach may produce better images with lower noise and more detailed structural features in the authors' selected cases; however, there is no proof that this holds for all kinds of structures.
Abstract: Although diagnostic medical imaging provides enormous benefits in the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous, and other diseases. How to reduce radiation dose while maintaining diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning has proven to be an effective way to obtain sparse representations. On the other hand, it is important to consider the statistical properties of projection data in the low-dose CT case. Recently, we have developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it in experiments. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function in a statistical iterative reconstruction framework. The dictionary can be either predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated with low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach may produce better images with lower noise and more detailed structural features in our selected cases. However, there is no proof that this holds for all kinds of structures.
603 citations
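The sparse-coding half of an alternating minimization like the one described above can be illustrated with greedy orthogonal matching pursuit, one common way to compute a sparse representation of a signal over a redundant dictionary. This is a generic sketch, not the paper's exact solver; the function `omp` and its arguments are illustrative:

```python
import numpy as np

def omp(D, y, n_nonzero):
    # Orthogonal matching pursuit: greedily select dictionary atoms
    # (columns of D, assumed normalized) most correlated with the
    # residual, then refit coefficients on the selected support.
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x
```

In a full dictionary-learning reconstruction, a step like this would alternate with a dictionary update and a data-fidelity update on the image.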
TL;DR: The results demonstrate that FAST not only produces smaller subsets of features but also improves the performance of the four types of classifiers.
Abstract: Feature selection involves identifying a subset of the most useful features that produces results comparable to those of the original entire set of features. A feature selection algorithm may be evaluated from both the efficiency and effectiveness points of view. While efficiency concerns the time required to find a subset of features, effectiveness is related to the quality of that subset. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated in this paper. The FAST algorithm works in two steps. In the first step, features are divided into clusters using graph-theoretic clustering methods. In the second step, the most representative feature that is strongly related to the target classes is selected from each cluster to form a subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. To ensure the efficiency of FAST, we adopt the efficient minimum spanning tree (MST) clustering method. The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study. Extensive experiments are carried out to compare FAST with several representative feature selection algorithms, namely FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with respect to four types of well-known classifiers, namely the probability-based Naive Bayes, the tree-based C4.5, the instance-based IB1, and the rule-based RIPPER, before and after feature selection. The results, on 35 publicly available real-world high-dimensional image, microarray, and text datasets, demonstrate that FAST not only produces smaller subsets of features but also improves the performance of the four types of classifiers.
594 citations
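The first step of FAST, clustering features via a minimum spanning tree, can be sketched as follows: build a graph over feature columns with edge weight 1 − |correlation|, compute the MST with Kruskal's algorithm, and cut heavy MST edges to form clusters. This is a simplified stand-in, assuming correlation as the relevance measure; the paper's actual graph construction and cut criterion differ in detail:

```python
import numpy as np

def mst_feature_clusters(X, cut_threshold):
    # Cluster the feature columns of X: pairwise distance is
    # 1 - |correlation|, the MST is built with Kruskal's algorithm
    # (union-find), and MST edges heavier than cut_threshold are cut.
    n = X.shape[1]
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    mst = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
    # Re-run union-find, keeping only light MST edges: the resulting
    # components are the feature clusters.
    parent = list(range(n))
    for w, i, j in mst:
        if w <= cut_threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

FAST's second step would then pick one representative feature per cluster, e.g. the one most correlated with the target class.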
TL;DR: It is highlighted that an improved understanding of emission sources, physical/chemical processes during haze evolution, and interactions with meteorological/climatic changes is necessary to unravel the causes, mechanisms, and trends of haze pollution.
Abstract: Regional severe haze represents an enormous environmental problem in China, influencing air quality, human health, ecosystems, weather, and climate. These extremes are characterized by exceedingly high concentrations of fine particulate matter (smaller than 2.5 µm, or PM2.5) and occur with extensive temporal (on daily, weekly, to monthly timescales) and spatial (over a million square kilometers) coverage. Although significant advances have been made in field measurements, model simulations, and laboratory experiments for fine PM over recent years, the causes of severe haze formation have yet to be systematically and comprehensively evaluated. This review provides a synopsis of recent advances in understanding the fundamental mechanisms of severe haze formation in northern China, focusing on emission sources, chemical formation and transformation, and meteorological and climatic conditions. In particular, we highlight the synergistic effects of interactions between anthropogenic emissions and atmospheric processes. Current challenges and future research directions for improving the understanding of severe haze pollution, as well as plausible regulatory implications on a scientific basis, are also discussed.
586 citations
Authors
Name | H-index | Papers | Citations
---|---|---|---
Feng Zhang | 172 | 1278 | 181865 |
Yang Yang | 164 | 2704 | 144071 |
Jian Yang | 142 | 1818 | 111166 |
Lei Zhang | 130 | 2312 | 86950 |
Yang Liu | 129 | 2506 | 122380 |
Jian Zhou | 128 | 3007 | 91402 |
Chao Zhang | 127 | 3119 | 84711 |
Bin Wang | 126 | 2226 | 74364 |
Xin Wang | 121 | 1503 | 64930 |
Bo Wang | 119 | 2905 | 84863 |
Xuan Zhang | 119 | 1530 | 65398 |
Jian Liu | 117 | 2090 | 73156 |
Andrey L. Rogach | 117 | 576 | 46820 |
Yadong Yin | 115 | 431 | 64401 |
Xin Li | 114 | 2778 | 71389 |