SciSpace (formerly Typeset)
Author

Jiaxi Ying

Other affiliations: Xiamen University
Bio: Jiaxi Ying is an academic researcher from Hong Kong University of Science and Technology. The author has contributed to research in topics: Graphical model & Computer science. The author has an h-index of 7 and has co-authored 20 publications receiving 248 citations. Previous affiliations of Jiaxi Ying include Xiamen University.

Papers
Journal ArticleDOI
TL;DR: Experimental results on simulated and real magnetic resonance spectroscopy data show that the proposed approach can successfully recover full signals from very limited samples and is robust to the estimated tensor rank.
Abstract: Signals are generally modeled as a superposition of exponential functions in spectroscopy of chemistry, biology, and medical imaging. For fast data acquisition or other inevitable reasons, however, only a small number of samples may be acquired, and thus how to recover the full signal becomes an active research topic; but existing approaches cannot efficiently recover $N$-dimensional exponential signals with $N\geq 3$. In this paper, we study the problem of recovering $N$-dimensional (particularly $N\geq 3$) exponential signals from partial observations, and formulate this problem as a low-rank tensor completion problem with exponential factor vectors. The full signal is reconstructed by simultaneously exploiting the CANDECOMP/PARAFAC tensor structure and the exponential structure of the associated factor vectors. The latter is promoted by minimizing an objective function involving the nuclear norm of Hankel matrices. Experimental results on simulated and real magnetic resonance spectroscopy data show that the proposed approach can successfully recover full signals from very limited samples and is robust to the estimated tensor rank.
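The low-rank structure the paper relies on can be sketched numerically (a minimal illustration, not the authors' reconstruction algorithm; the frequencies, damping, and amplitudes below are invented for the example):

```python
import numpy as np

# Sketch: a signal that is a sum of R damped exponentials has a Hankel matrix
# of rank R. This is the structural fact that makes penalizing the nuclear
# norm of Hankel matrices built from the CP factor vectors promote the
# exponential structure.
n, R = 64, 3
freqs = np.array([0.12, 0.21, 0.34])   # assumed example frequencies
amps = np.array([1.0, 0.8, 1.5])       # assumed example amplitudes
t = np.arange(n)
x = (amps[:, None] * np.exp((-0.02 + 2j * np.pi * freqs)[:, None] * t)).sum(0)

m = n // 2                             # Hankel matrix H[i, j] = x[i + j]
H = np.array([[x[i + j] for j in range(n - m + 1)] for i in range(m)])

rank = np.linalg.matrix_rank(H, tol=1e-8)
print(rank)  # 3
```

In the N-dimensional setting of the paper, each CP factor vector carries this one-dimensional exponential structure, so one Hankel nuclear-norm term per factor suffices.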

94 citations

Journal ArticleDOI
TL;DR: The Vandermonde structure of the Hankel matrix formed by the exponential signal is exploited, and signal recovery is formulated as Hankel matrix completion with Vandermonde factorization (HVaF), which is validated on biological magnetic resonance spectroscopy data.
Abstract: Many signals are modeled as a superposition of exponential functions in spectroscopy of chemistry, biology, and medical imaging. This paper studies the problem of recovering exponential signals from a random subset of samples. We exploit the Vandermonde structure of the Hankel matrix formed by the exponential signal and formulate signal recovery as Hankel matrix completion with Vandermonde factorization (HVaF). A numerical algorithm is developed to solve the proposed model, and its sequence convergence is analyzed theoretically. Experiments on synthetic data demonstrate that HVaF succeeds over a wider regime than the state-of-the-art nuclear-norm-minimization-based Hankel matrix completion method, while imposing a weaker frequency-separation restriction than the state-of-the-art atomic norm minimization and fast iterative hard thresholding methods. The effectiveness of HVaF is further validated on biological magnetic resonance spectroscopy data.
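The Vandermonde structure HVaF builds on can be checked on a toy example (my own sketch with assumed poles and amplitudes, not the paper's algorithm):

```python
import numpy as np

# Sketch of the structural fact behind HVaF: the Hankel matrix of a signal
# x[t] = sum_r c_r z_r^t factors as H = V diag(c) W^T, where V and W are
# Vandermonde matrices built from the poles z_r.
n, R = 40, 2
z = np.exp(2j * np.pi * np.array([0.11, 0.29]))  # assumed poles on the unit circle
c = np.array([1.5, 0.7])                          # assumed amplitudes
t = np.arange(n)
x = (c[:, None] * z[:, None] ** t).sum(axis=0)

m = 20
H = np.array([[x[i + j] for j in range(n - m + 1)] for i in range(m)])
V = np.vander(z, m, increasing=True).T           # V[i, r] = z_r^i, shape (m, R)
W = np.vander(z, n - m + 1, increasing=True).T   # W[j, r] = z_r^j
H_fact = V @ np.diag(c) @ W.T

err = np.linalg.norm(H - H_fact) / np.linalg.norm(H)
print(err < 1e-10)  # True: the factorization reproduces H
```

Completion then amounts to finding poles and amplitudes consistent with the observed entries, rather than penalizing the nuclear norm of the full Hankel matrix.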

54 citations

Journal Article
TL;DR: In this article, a unified graph learning framework is proposed, which integrates Gaussian graphical models and spectral graph theory, to learn a graph with a specific structure for interpretability and identification of the relationships among data.
Abstract: Graph learning from data represents a canonical problem that has received substantial attention in the literature. However, insufficient work has been done on incorporating prior structural knowledge into the learning of underlying graphical models from data. Learning a graph with a specific structure is essential for interpretability and for identification of the relationships among data. Useful structured graphs include the multi-component graph, bipartite graph, connected graph, sparse graph, and regular graph. In general, structured graph learning is an NP-hard combinatorial problem; therefore, designing a general tractable optimization method is extremely challenging. In this paper, we introduce a unified graph learning framework that integrates Gaussian graphical models and spectral graph theory. To impose a particular structure on a graph, we first show how to formulate the combinatorial constraints as an analytical property of the graph matrix. Then we develop an optimization framework that enables graph learning with specific structures via spectral constraints on graph matrices. The proposed algorithms are provably convergent, computationally efficient, and practically amenable to numerous graph-based tasks. Extensive numerical experiments with both synthetic and real data sets illustrate the effectiveness of the proposed algorithms. The code for all the simulations is made available as an open-source repository.
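One instance of translating a combinatorial constraint into a spectral property can be illustrated directly (my sketch, not the authors' solver): the multiplicity of a graph Laplacian's zero eigenvalue equals the number of connected components, so fixing the k smallest eigenvalues to zero enforces a k-component structure.

```python
import numpy as np

# Sketch: for a graph with weighted adjacency W, the Laplacian is
# L = diag(W 1) - W, and the number of zero eigenvalues of L equals the
# number of connected components of the graph.
def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

# Two disjoint triangles -> a 2-component graph
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0

eigvals = np.linalg.eigvalsh(laplacian(W))
n_zero = int(np.sum(eigvals < 1e-9))
print(n_zero)  # 2
```

This is the sense in which a combinatorial property (component count) becomes an analytical constraint on the graph matrix that an optimization algorithm can handle.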

40 citations

Posted Content
TL;DR: This paper introduces a unified graph learning framework that integrates Gaussian graphical models and spectral graph theory, and develops an optimization framework that enables graph learning with specific structures via spectral constraints on graph matrices.

39 citations

Journal ArticleDOI
TL;DR: The proposed method has been shown to reconstruct high quality MRS spectra from non-uniformly sampled data in the hybrid time and frequency plane, and outperforms the state-of-the-art compressed sensing approach on recovering low-intensity spectral peaks and robustness to different sampling patterns.
Abstract: Goal: Two-dimensional magnetic resonance spectroscopy (MRS) has many important applications in bioengineering but suffers from long acquisition duration. Non-uniform sampling has been applied to the spatiotemporally encoded ultrafast MRS, but results in missing data in the hybrid time and frequency plane. An approach is proposed to recover this missing signal, which enables high-quality spectrum reconstruction. Methods: The natural exponential characteristic of MRS is exploited to recover the hybrid time and frequency signal. The reconstruction issue is formulated as a low-rank enhanced Hankel matrix completion problem and is solved by a fast numerical algorithm. Results: Experiments on synthetic and real MRS data show that the proposed method provides faithful spectrum reconstruction, and outperforms the state-of-the-art compressed sensing approach in recovering low-intensity spectral peaks and in robustness to different sampling patterns. Conclusion: The exponential signal property serves as a useful tool to model time-domain MRS signals and even allows missing data recovery. The proposed method has been shown to reconstruct high-quality MRS spectra from non-uniformly sampled data in the hybrid time and frequency plane. Significance: Low-intensity signal reconstruction is generally challenging in biological MRS, and we provide a solution to this problem. The proposed method may be extended to recover signals that can generally be modeled as a sum of exponential functions in biomedical engineering applications, e.g., signal enhancement, feature extraction, and fast sampling.
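The low-rank Hankel completion idea can be sketched with a toy alternating-projection loop (an illustrative stand-in, not the paper's fast algorithm; the signal, sampling ratio, and iteration count are assumptions for the example):

```python
import numpy as np

# Sketch: alternate between (1) truncating the Hankel matrix of the current
# estimate to rank R and (2) re-imposing the observed samples. The missing
# entries are gradually filled in by the low-rank exponential model.
rng = np.random.default_rng(2)
n, R = 60, 2
t = np.arange(n)
x_true = np.exp(2j * np.pi * 0.13 * t) + 0.8 * np.exp((-0.01 + 2j * np.pi * 0.31) * t)

mask = rng.random(n) < 0.5          # observe roughly half of the samples
x = np.where(mask, x_true, 0)

m = n // 2
def hankel(v):
    return np.array([[v[i + j] for j in range(n - m + 1)] for i in range(m)])

def dehankel(H):
    # average the anti-diagonals back into a length-n signal
    v = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    for i in range(m):
        for j in range(n - m + 1):
            v[i + j] += H[i, j]
            cnt[i + j] += 1
    return v / cnt

for _ in range(300):
    U, s, Vt = np.linalg.svd(hankel(x), full_matrices=False)
    x = dehankel((U[:, :R] * s[:R]) @ Vt[:R])    # rank-R truncation
    x = np.where(mask, x_true, x)                # data consistency

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)  # relative error, well below the ~0.7 of the zero-filled start
```

The paper's method differs in using a low-rank *enhanced* formulation with a fast solver, but the interplay of a rank constraint with data consistency is the same.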

32 citations


Cited by
Journal Article
TL;DR: In this article, a data clustering method based on partitioning a bipartite graph is proposed, in which terms and documents are simultaneously grouped into semantically meaningful co-clusters.
Abstract: Bipartite Graph Partitioning and Data Clustering*. Hongyuan Zha, Xiaofeng He (Dept. of Comp. Sci. & Eng., Penn State Univ., State College, PA 16802, {zha,xhe}@cse.psu.edu); Chris Ding, Horst Simon (NERSC Division, Berkeley National Lab., Berkeley, CA 94720, {chqding,hdsimon}@lbl.gov); Ming Gu (Dept. of Math., U.C. Berkeley, Berkeley, CA 94720, mgu@math.berkeley.edu). ABSTRACT: Many data types arising from data mining applications can be modeled as bipartite graphs; examples include terms and documents in a text corpus, customers and purchasing items in market basket analysis, and reviewers and movies in a movie recommender system. In this paper, we propose a new data clustering method based on partitioning the underlying bipartite graph. The partition is constructed by minimizing a normalized sum of edge weights between unmatched pairs of vertices of the bipartite graph. We show that an approximate solution to the minimization problem can be obtained by computing a partial singular value decomposition (SVD) of the associated edge weight matrix of the bipartite graph. We point out the connection of our clustering algorithm to correspondence analysis used in multivariate analysis. We also briefly discuss the issue of assigning data objects to multiple clusters. In the experimental results, we apply our clustering algorithm to the problem of document clustering to illustrate its effectiveness and efficiency. 1. INTRODUCTION: Cluster analysis is an important tool for exploratory data mining applications arising from many diverse disciplines. Informally, cluster analysis seeks to partition a given data set into compact clusters so that data objects within a cluster are more similar than those in distinct clusters. The literature on cluster analysis is enormous, including contributions from many research communities (see [6, 9] for recent surveys of some classical approaches).
Many traditional clustering algorithms are based on the assumption that the given dataset consists of covariate information (or attributes) for each individual data object, and cluster analysis can be cast as a problem of grouping a set of n-dimensional vectors, each representing a data object in the dataset. A familiar example is document clustering using the vector space model [1]. Here each document is represented by an n-dimensional vector, and each coordinate of the vector corresponds to a term in a vocabulary of size n. This formulation leads to the so-called term-document matrix A = (a_ij) for the representation of the collection of documents, where a_ij is the so-called term frequency, i.e., the number of times term i occurs in document j. In this vector space model terms and documents are treated asymmetrically, with terms considered as the covariates or attributes of documents. It is also possible to treat both terms and documents as first-class citizens in a symmetric fashion, and consider a_ij as the frequency of co-occurrence of term i and document j, as is done, for example, in probabilistic latent semantic indexing [12]. In this paper, we follow this basic principle and propose a new approach to model terms and documents as vertices in a bipartite graph, with edges of the graph indicating the co-occurrence of terms and documents. In addition, we can optionally use edge weights to indicate the frequency of this co-occurrence. Cluster analysis for document collections in this context is based on a very intuitive notion: documents are grouped by topics; on one hand, documents in a topic tend to more heavily use the same subset of terms, which form a term cluster, and on the other hand, a topic usually is characterized by a subset of terms, and those documents heavily using those terms tend to be about that particular topic.
It is this interplay of terms and documents which gives rise to what we call bi-clustering, by which terms and documents are simultaneously grouped into semantically meaningful co-clusters. Categories and Subject Descriptors: H.3.3 [Information Search and Retrieval]: Clustering; G.1.3 [Numerical Linear Algebra]: Singular value decomposition; G.2.2 [Graph Theory]: Graph algorithms. General Terms: Algorithms, theory. Keywords: document clustering, bipartite graph, graph partitioning, spectral relaxation, singular value decomposition, correspondence analysis. (*Part of this work was done while Xiaofeng He was a graduate research assistant at NERSC, Berkeley National Lab.) CIKM '01, November 5-10, 2001, Atlanta, Georgia, USA. Our clustering algorithm computes an approximate global optimal solution, while probabilistic latent semantic indexing relies on the EM algorithm and therefore might be prone to local minima even with the help of some annealing process.
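The paper's core computation can be sketched on a toy term-document matrix (illustrative data of my own): an approximate normalized-cut bipartition is read off the signs of the second singular vectors of the degree-scaled matrix.

```python
import numpy as np

# Sketch: scale the term-document matrix A as D1^{-1/2} A D2^{-1/2} (D1, D2
# hold term and document degrees), then split terms and documents by the sign
# of the second left/right singular vectors.
A = np.array([            # rows = terms, columns = documents
    [3, 2, 0, 1],         # two topic blocks with a few cross-topic links
    [2, 3, 1, 0],
    [0, 1, 2, 3],
    [1, 0, 3, 2],
], dtype=float)

d1 = A.sum(axis=1)        # term degrees
d2 = A.sum(axis=0)        # document degrees
An = A / np.sqrt(np.outer(d1, d2))

U, s, Vt = np.linalg.svd(An)
term_labels = (U[:, 1] > 0).astype(int)   # sign split of 2nd left singular vector
doc_labels = (Vt[1] > 0).astype(int)      # sign split of 2nd right singular vector
print(term_labels, doc_labels)  # each splits {0, 1} from {2, 3}
```

Only a partial SVD (the top two singular triplets) is needed, which is what makes the method scale to large sparse term-document matrices.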

295 citations

Journal ArticleDOI
TL;DR: A modern overview of recent advances in tensor completion algorithms from the perspective of big data analytics characterized by diverse variety, large volume, and high velocity is provided.
Abstract: Tensor completion is the problem of filling in the missing or unobserved entries of partially observed tensors. Due to the multidimensional character of tensors in describing complex datasets, tensor completion algorithms and their applications have received wide attention and found success in areas like data mining, computer vision, signal processing, and neuroscience. In this survey, we provide a modern overview of recent advances in tensor completion algorithms from the perspective of big data analytics characterized by diverse variety, large volume, and high velocity. We characterize these advances from the following four perspectives: general tensor completion algorithms, tensor completion with auxiliary information (variety), scalable tensor completion algorithms (volume), and dynamic tensor completion algorithms (velocity). Further, we identify several tensor completion applications on real-world data-driven problems and present some common experimental frameworks popularized in the literature along with several available software repositories. Our goal is to summarize these popular methods and introduce them to researchers and practitioners for promoting future research and applications. We conclude with a discussion of key challenges and promising research directions in this community for future exploration.

145 citations

Journal ArticleDOI
TL;DR: In this paper, the signal of interest can be modeled as a linear superposition of translated or modulated versions of some template [e.g., a point spread function (PSF) or a Green's function] and the fundamental problem is to estimate the translation or modulation parameters (i.e., delays, locations, or Dopplers) from noisy measurements.
Abstract: At the core of many sensing and imaging applications, the signal of interest can be modeled as a linear superposition of translated or modulated versions of some template [e.g., a point spread function (PSF) or a Green's function] and the fundamental problem is to estimate the translation or modulation parameters (e.g., delays, locations, or Dopplers) from noisy measurements. This problem is centrally important to not only target localization in radar and sonar, channel estimation in wireless communications, and direction-of-arrival estimation in array signal processing, but also modern imaging modalities such as superresolution single-molecule fluorescence microscopy, nuclear magnetic resonance imaging, and spike localization in neural recordings, among others.

112 citations

Journal ArticleDOI
TL;DR: RMSE and SSIM had lower SROCC than six of the other IQMs included in the study (VIF, FSIM, NQM, GMSD, IWSSIM, and HDRVDP) and these results should be considered when choosing an IQM in future imaging studies.
Abstract: Image quality metrics (IQMs) such as root mean square error (RMSE) and structural similarity index (SSIM) are commonly used in the evaluation and optimization of accelerated magnetic resonance imaging (MRI) acquisition and reconstruction strategies. However, it is unknown how well these indices relate to a radiologist's perception of diagnostic image quality. In this study, we compare the image quality scores of five radiologists with the RMSE, SSIM, and other potentially useful IQMs: peak signal-to-noise ratio (PSNR), multi-scale SSIM (MSSSIM), information-weighted SSIM (IWSSIM), gradient magnitude similarity deviation (GMSD), feature similarity index (FSIM), high dynamic range visible difference predictor (HDRVDP), noise quality metric (NQM), and visual information fidelity (VIF). The comparison uses a database of MR images of the brain and abdomen that have been retrospectively degraded by noise, blurring, undersampling, motion, and wavelet compression, for a total of 414 degraded images. A total of 1017 subjective scores were assigned by five radiologists. IQM performance was measured via the Spearman rank order correlation coefficient (SROCC), and statistically significant differences in the residuals of the IQM scores and radiologists' scores were tested. When considering SROCC calculated from combining scores from all radiologists across all image types, RMSE and SSIM had lower SROCC than six of the other IQMs included in the study (VIF, FSIM, NQM, GMSD, IWSSIM, and HDRVDP). In no case did SSIM have a higher SROCC or significantly smaller residuals than RMSE. These results should be considered when choosing an IQM in future imaging studies.
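The SROCC used to rank the metrics can be computed as follows (a minimal rank-correlation sketch with hypothetical scores, assuming no ties; studies typically use a library routine with proper tie handling):

```python
import numpy as np

# Sketch: SROCC is the Pearson correlation of rank-transformed scores, so any
# monotone relation between an IQM and the radiologists' scores gives
# SROCC = 1 even when the relation is nonlinear.
def srocc(x, y):
    # rank transform via double argsort (assumes no ties)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

scores = np.array([1.0, 2.5, 3.0, 4.2, 4.8])   # hypothetical radiologist scores
iqm = scores ** 3                               # nonlinear but monotone metric
s = srocc(scores, iqm)
print(s)  # 1.0
```

Invariance to monotone transforms is exactly why SROCC, rather than plain Pearson correlation, is appropriate for comparing IQMs whose scales differ.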

87 citations

Journal ArticleDOI
TL;DR: The proposed networks integrate multi-contrast information in a high-level feature space and optimize the imaging performance by minimizing a composite loss function, which includes mean-squared-error, adversarial loss, perceptual loss, and textural loss.
Abstract: Magnetic resonance imaging (MRI) is widely used for screening, diagnosis, image-guided therapy, and scientific research. A significant advantage of MRI over other imaging modalities such as computed tomography (CT) and nuclear imaging is that it clearly shows soft tissues in multi-contrasts. Compared with other medical image super-resolution methods that are in a single contrast, multi-contrast super-resolution studies can synergize multiple contrast images to achieve better super-resolution results. In this paper, we propose a one-level non-progressive neural network for low up-sampling multi-contrast super-resolution and a two-level progressive network for high up-sampling multi-contrast super-resolution. The proposed networks integrate multi-contrast information in a high-level feature space and optimize the imaging performance by minimizing a composite loss function, which includes mean-squared-error, adversarial loss, perceptual loss, and textural loss. Our experimental results demonstrate that 1) the proposed networks can produce MRI super-resolution images with good image quality and outperform other multi-contrast super-resolution methods in terms of structural similarity and peak signal-to-noise ratio; 2) combining multi-contrast information in a high-level feature space leads to a significantly improved result than a combination in the low-level pixel space; and 3) the progressive network produces a better super-resolution image quality than the non-progressive network, even if the original low-resolution images were highly down-sampled.
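The composite objective can be sketched schematically (the weights and the way each auxiliary term is produced here are placeholders, not the paper's values; in the networks, the adversarial, perceptual, and textural terms are computed from discriminator and feature-extractor outputs):

```python
import numpy as np

# Sketch: the training loss is a weighted sum of a pixel-wise MSE term and
# adversarial, perceptual, and textural terms supplied by auxiliary networks.
def composite_loss(sr, hr, adv_term, perc_term, text_term,
                   w_adv=1e-3, w_perc=1e-2, w_text=1e-2):  # assumed weights
    mse = float(np.mean((sr - hr) ** 2))   # pixel-wise fidelity
    return mse + w_adv * adv_term + w_perc * perc_term + w_text * text_term

# With a perfect reconstruction and zero auxiliary terms the loss is zero.
sr = hr = np.ones((8, 8))
loss0 = composite_loss(sr, hr, 0.0, 0.0, 0.0)
print(loss0)  # 0.0
```

The weighting lets the pixel term dominate early training while the auxiliary terms shape perceptual quality, which is the usual rationale for such composite losses.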

83 citations