
Shanghai Jiao Tong University

Education · Shanghai, Shanghai, China
About: Shanghai Jiao Tong University is an education organization based in Shanghai, Shanghai, China. It is known for research contributions in the topics of Population & Cancer. The organization has 157524 authors who have published 184620 publications receiving 3451038 citations. The organization is also known as Shanghai Communications University and Shanghai Jiaotong University.


Papers
Journal ArticleDOI
TL;DR: The first two China Youth Scholars Symposiums on Mg Alloys Research were held at Harbin (2015) and Chongqing (2016), China, respectively, aiming to boost far-reaching initiatives for the development of new Mg-based materials that satisfy the requirements of a broad range of industrial applications.

488 citations

Journal ArticleDOI
TL;DR: It is shown that PKM2 is acetylated on lysine 305 and that this acetylation is stimulated by high glucose concentration; the results reveal an acetylation-based regulation of pyruvate kinase and a link between lysine acetylation and CMA.

488 citations

Journal ArticleDOI
TL;DR: An improved tabu search (ITS) algorithm is proposed for loss-minimization reconfiguration in large-scale distribution systems; numerical results demonstrate the validity and effectiveness of the proposed ITS algorithm.
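The paper's ITS variant is specific to distribution-network reconfiguration, but the core tabu-search idea it builds on can be sketched on a toy problem. The sketch below is a generic, simplified illustration, not the paper's algorithm: greedy moves among neighbors, with recently visited solutions held on a short tabu list so the search can escape local minima. All names and the toy bit-flip objective are illustrative.

```python
import random

def tabu_search(cost, x0, neighbors, n_iter=100, tenure=5):
    """Generic tabu search: always move to the best non-tabu neighbor,
    keeping the last `tenure` visited solutions off-limits."""
    x, best = x0, x0
    tabu = [x0]
    for _ in range(n_iter):
        cands = [n for n in neighbors(x) if n not in tabu]
        if not cands:
            break
        x = min(cands, key=cost)      # best admissible move, even if uphill
        tabu.append(x)
        if len(tabu) > tenure:
            tabu.pop(0)               # oldest entry leaves the tabu list
        if cost(x) < cost(best):
            best = x
    return best

# toy problem: minimize the number of 1-bits via single-bit flips
def flips(x):
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

random.seed(0)
x0 = tuple(random.randint(0, 1) for _ in range(8))
best = tabu_search(lambda x: sum(x), x0, flips)
print(best)  # reaches the all-zeros string
```

In a real reconfiguration problem the "neighbors" would be feasible switch-status changes and the cost would be network loss; the tabu mechanism is what distinguishes this from plain greedy descent.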

485 citations

Proceedings ArticleDOI
14 Sep 2014
TL;DR: Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis.
Abstract: Feed-forward, deep neural network (DNN)-based text-to-speech (TTS) systems have recently been shown to outperform decision-tree clustered, context-dependent HMM TTS systems [1, 4]. However, the long-time-span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic feed-forward nature of DNN-based modeling. Also, to synthesize a smooth speech trajectory, dynamic features are commonly used to constrain speech parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure cascaded with upper hidden layers with a bidirectional LSTM-RNN structure, can outperform either the conventional, decision-tree-based HMM or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth and no dynamic constraints are needed.
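As a rough illustration of the hybrid topology the abstract describes (feed-forward lower layers cascaded with a bidirectional recurrent upper layer), here is a minimal numpy sketch. It uses plain tanh recurrent cells standing in for LSTM cells, untrained random weights, and made-up layer sizes, so it only shows how every output frame comes to depend on both past and future input frames; it is not a working TTS system.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b):
    # feed-forward (tanh) layer applied frame by frame
    return np.tanh(x @ W + b)

def bidir_rnn(x, Wf, Uf, Wb, Ub):
    # x: (T, d); returns forward and backward hidden states stacked to (T, 2h)
    T, h = x.shape[0], Wf.shape[1]
    hf, hb = np.zeros((T, h)), np.zeros((T, h))
    prev = np.zeros(h)
    for t in range(T):                 # left-to-right pass: sees the past
        prev = np.tanh(x[t] @ Wf + prev @ Uf)
        hf[t] = prev
    prev = np.zeros(h)
    for t in reversed(range(T)):       # right-to-left pass: sees the future
        prev = np.tanh(x[t] @ Wb + prev @ Ub)
        hb[t] = prev
    return np.concatenate([hf, hb], axis=1)

T, d_in, d_hid, h, d_out = 20, 30, 16, 8, 5
x = rng.standard_normal((T, d_in))            # per-frame linguistic features
# lower feed-forward layer (the "DNN" part of the hybrid)
W1, b1 = rng.standard_normal((d_in, d_hid)) * 0.1, np.zeros(d_hid)
z = dense(x, W1, b1)
# upper bidirectional recurrent layer (the "BLSTM" part, simplified)
Wf, Uf = rng.standard_normal((d_hid, h)) * 0.1, rng.standard_normal((h, h)) * 0.1
Wb, Ub = rng.standard_normal((d_hid, h)) * 0.1, rng.standard_normal((h, h)) * 0.1
r = bidir_rnn(z, Wf, Uf, Wb, Ub)
# linear output layer predicting acoustic features for every frame
Wo = rng.standard_normal((2 * h, d_out)) * 0.1
y = r @ Wo
print(y.shape)  # (20, 5)
```

Because the recurrent layer runs in both directions, each of the 20 output frames is a function of the whole utterance, which is what lets such a model produce smooth trajectories without explicit dynamic-feature constraints.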

485 citations

Journal ArticleDOI
TL;DR: The method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries, which outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
Abstract: CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. Existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors or missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent overfitting, even when a large number of entries are missing. Moreover, results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
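For contrast with the paper's Bayesian treatment, a plain CP-ALS factorization, where the rank must be specified manually (exactly the limitation the paper addresses), can be sketched with numpy. Everything here (tensor sizes, rank, iteration count) is illustrative, and no missing entries or priors are handled.

```python
import numpy as np

rng = np.random.default_rng(1)

def cp_als(X, rank, n_iter=200):
    """Plain CP-ALS for a 3-way tensor: alternately solve the least-squares
    problem for each factor matrix while holding the other two fixed."""
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # normal equations: A = X_(1)(C ⊙ B) [(B'B) * (C'C)]^+, etc.
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# build an exactly rank-2 tensor from known factors, then try to recover it
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (6, 7, 8))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.2e}")
```

Note that `rank=2` had to be supplied by hand; picking it wrong causes under- or over-fitting, which is the motivation for the automatic rank determination in the abstract above.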

484 citations


Authors

Showing all 158621 results

Name | H-index | Papers | Citations
Meir J. Stampfer | 277 | 1414 | 283776
Richard A. Flavell | 231 | 1328 | 205119
Jie Zhang | 178 | 4857 | 221720
Yang Yang | 171 | 2644 | 153049
Lei Jiang | 170 | 2244 | 135205
Gang Chen | 167 | 3372 | 149819
Thomas S. Huang | 146 | 1299 | 101564
Barbara J. Sahakian | 145 | 612 | 69190
Jean-Laurent Casanova | 144 | 842 | 76173
Kuo-Chen Chou | 143 | 487 | 57711
Weihong Tan | 140 | 892 | 67151
Xin Wu | 139 | 1865 | 109083
David Y. Graham | 138 | 1047 | 80886
Bin Liu | 138 | 2181 | 87085
Jun Chen | 136 | 1856 | 77368
Network Information
Related Institutions (5)
Zhejiang University
183.2K papers, 3.4M citations

97% related

Fudan University
117.9K papers, 2.6M citations

96% related

Peking University
181K papers, 4.1M citations

95% related

National University of Singapore
165.4K papers, 5.4M citations

93% related

Tsinghua University
200.5K papers, 4.5M citations

93% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 415
2022 | 2,315
2021 | 20,873
2020 | 19,462
2019 | 16,699
2018 | 14,250