Institution
Shanghai Jiao Tong University
Education • Shanghai, China
About: Shanghai Jiao Tong University is an educational institution based in Shanghai, China. It is known for its research contributions in the topics of Population and Cancer. The organization has 157524 authors who have published 184620 publications, receiving 3451038 citations. The organization is also known as Shanghai Communications University and Shanghai Jiaotong University.
Topics: Population, Cancer, Microstructure, Cell growth, Metastasis
Papers published on a yearly basis
Papers
Harbin Institute of Technology, Chinese Academy of Sciences, Harbin Engineering University, RMIT University, Yanshan University, Shanghai Jiao Tong University, Chongqing University, Chinese Ministry of Education, Nanchang University, Tianjin University, Taiyuan University of Technology, Jilin University
TL;DR: The first two China Youth Scholars Symposia on Mg Alloys Research were held in Harbin (2015) and Chongqing (2016), China, aiming to boost far-reaching initiatives for the development of new Mg-based materials that satisfy the requirements of a broad range of industrial applications, as described in this paper.
488 citations
TL;DR: It is shown that PKM2 is acetylated on lysine 305 and that this acetylation is stimulated by high glucose concentration; the results reveal an acetylation-based regulation of pyruvate kinase and the link between lysine acetylation and chaperone-mediated autophagy (CMA).
488 citations
TL;DR: An improved tabu search (ITS) algorithm for loss-minimization reconfiguration in large-scale distribution systems is proposed, and numerical results demonstrate the validity and effectiveness of the proposed ITS algorithm. (A generic tabu-search sketch follows below.)
485 citations
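The paper's specific ITS refinements are not reproduced here; as a rough illustration of the underlying metaheuristic, this is a generic tabu-search skeleton in Python. The neighborhood (single switch flips), tabu tenure, and the toy loss are illustrative stand-ins, not the paper's formulation.

```python
def tabu_search(initial, neighbors, loss, tabu_len=10, iters=200):
    """Generic tabu search: move to the best admissible neighbor,
    forbid recently visited solutions, remember the global best."""
    current = initial
    best, best_loss = current, loss(current)
    tabu = []                      # short-term memory of visited solutions
    for _ in range(iters):
        # Aspiration criterion: a tabu move is allowed if it beats
        # the best solution found so far.
        candidates = [n for n in neighbors(current)
                      if n not in tabu or loss(n) < best_loss]
        if not candidates:
            break
        current = min(candidates, key=loss)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)            # expire the oldest tabu entry
        if loss(current) < best_loss:
            best, best_loss = current, loss(current)
    return best, best_loss

# Toy stand-in for network reconfiguration: flip one switch at a time
# and minimize a hypothetical per-switch mismatch "loss".
def flip_one(cfg):
    return [cfg[:i] + (1 - cfg[i],) + cfg[i + 1:] for i in range(len(cfg))]

target = (1, 0, 1, 1, 0, 0, 1, 0)              # hypothetical optimum
loss = lambda cfg: sum(a != b for a, b in zip(cfg, target))
print(tabu_search((0,) * 8, flip_one, loss))   # -> ((1, 0, 1, 1, 0, 0, 1, 0), 0)
```

In a real reconfiguration setting the loss would be the network power loss evaluated by a load-flow computation, and the tabu memory is what lets the search escape local minima that a pure greedy descent would get stuck in.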
14 Sep 2014 • TL;DR: Recurrent Neural Networks (RNNs) with Bidirectional Long Short-Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis.
Abstract: Feed-forward deep neural network (DNN)-based text-to-speech (TTS) systems have recently been shown to outperform decision-tree-clustered, context-dependent HMM TTS systems [1, 4]. However, the long-time-span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic feed-forward nature of DNN-based modeling. Also, to synthesize a smooth speech trajectory, dynamic features are commonly used to constrain speech-parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short-Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure cascaded with upper hidden layers with a bidirectional RNN structure of LSTM, can outperform either a conventional decision-tree-based HMM or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth, and no dynamic constraints are needed. (A structural sketch of this hybrid network follows below.)
485 citations
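As a rough structural sketch of the hybrid topology the abstract describes (lower feed-forward layers cascaded with upper bidirectional LSTM layers), here is a minimal PyTorch model. The layer counts, activations, and dimensions are illustrative assumptions, not the paper's configuration, and PyTorch itself postdates the paper.

```python
import torch
import torch.nn as nn

class HybridDNNBLSTM(nn.Module):
    """Lower feed-forward layers cascaded with an upper BLSTM stack,
    mapping per-frame linguistic features to acoustic features."""
    def __init__(self, in_dim=355, ff_dim=512, blstm_dim=256, out_dim=187):
        super().__init__()
        self.ff = nn.Sequential(              # lower feed-forward structure
            nn.Linear(in_dim, ff_dim), nn.Tanh(),
            nn.Linear(ff_dim, ff_dim), nn.Tanh(),
        )
        self.blstm = nn.LSTM(ff_dim, blstm_dim, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * blstm_dim, out_dim)  # 2x: both directions

    def forward(self, x):        # x: (batch, frames, in_dim)
        h = self.ff(x)           # frame-wise transform, no temporal context
        h, _ = self.blstm(h)     # context across the whole utterance
        return self.out(h)       # per-frame speech parameters

# One utterance of 100 frames of (hypothetical) linguistic features.
model = HybridDNNBLSTM()
y = model(torch.randn(1, 100, 355))
print(y.shape)                   # torch.Size([1, 100, 187])
```

Because the bidirectional recurrent layers see the whole utterance in both directions, each output frame is conditioned on long-span context, which is what lets the generated trajectory stay smooth without the explicit dynamic-feature constraints used in HMM-based TTS.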
TL;DR: The method is characterized as a tuning-parameter-free approach that can effectively infer the underlying multilinear factors under a low-rank constraint while also providing predictive distributions over missing entries, and it outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
Abstract: CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. Existing CP algorithms require the tensor rank to be manually specified; however, determining the tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of the latent factors or of the missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over the multiple latent factors and appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm that scales linearly with data size. Our method is characterized as a tuning-parameter-free approach, which can effectively infer the underlying multilinear factors with a low-rank constraint while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent overfitting, even when a large number of entries is missing. Moreover, results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance. (A minimal sketch of the CP model follows below.)
484 citations
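To make the CP model concrete, here is a minimal NumPy sketch of the multilinear structure the abstract refers to: a rank-R tensor written as a sum of R outer products of factor-matrix columns, plus the entry masking used in tensor completion. The Bayesian inference and automatic rank determination that are the paper's actual contribution are not reproduced; the sizes and the rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 20, 25, 30, 3          # illustrative tensor sizes and CP rank
A = rng.normal(size=(I, R))         # factor matrices, one per tensor mode
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))

# CP model: X[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Same tensor written as an explicit sum of R rank-1 outer products.
X_rank1 = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
              for r in range(R))
assert np.allclose(X, X_rank1)

# Tensor-completion setting: hide 60% of the entries; a CP-based
# method would infer A, B, C (and hence the hidden values) from the
# observed entries alone.
mask = rng.random(X.shape) < 0.4    # True = observed
X_obs = np.where(mask, X, np.nan)
print(f"observed entries: {mask.sum()} of {X.size}")
```

In the paper's fully Bayesian treatment, a sparsity-inducing prior over the factor columns drives unused components toward zero, which is how the effective rank R is determined automatically instead of being fixed by hand as it is here.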
Authors
Name | H-index | Papers | Citations |
---|---|---|---|
Meir J. Stampfer | 277 | 1414 | 283776 |
Richard A. Flavell | 231 | 1328 | 205119 |
Jie Zhang | 178 | 4857 | 221720 |
Yang Yang | 171 | 2644 | 153049 |
Lei Jiang | 170 | 2244 | 135205 |
Gang Chen | 167 | 3372 | 149819 |
Thomas S. Huang | 146 | 1299 | 101564 |
Barbara J. Sahakian | 145 | 612 | 69190 |
Jean-Laurent Casanova | 144 | 842 | 76173 |
Kuo-Chen Chou | 143 | 487 | 57711 |
Weihong Tan | 140 | 892 | 67151 |
Xin Wu | 139 | 1865 | 109083 |
David Y. Graham | 138 | 1047 | 80886 |
Bin Liu | 138 | 2181 | 87085 |
Jun Chen | 136 | 1856 | 77368 |