Institution
Alibaba Group
Company • Hangzhou, China
About: Alibaba Group is a company based in Hangzhou, China. It is known for its research contributions in the topics of Computer science and Terminal (electronics). The organization has 6810 authors who have published 7389 publications receiving 55653 citations. The organization is also known as Alibaba Group Holding Limited and Alibaba Group (Cayman Islands).
Topics: Computer science, Terminal (electronics), Graph (abstract data type), Node (networking), Deep learning
Papers published on a yearly basis
Papers
09 Aug 2021
TL;DR: Li et al. propose a Document U-shaped Network for document-level relation extraction, which leverages an encoder module to capture the context information of entities and a U-shaped segmentation module over an image-style feature map to capture global interdependency among triples.
Abstract: Document-level relation extraction aims to extract relations among multiple entity pairs from a document. Previously proposed graph-based or transformer-based models utilize the entities independently, regardless of global information among relational triples. This paper approaches the problem by predicting an entity-level relation matrix to capture local and global information, parallel to the semantic segmentation task in computer vision. Herein, we propose a Document U-shaped Network for document-level relation extraction. Specifically, we leverage an encoder module to capture the context information of entities and a U-shaped segmentation module over the image-style feature map to capture global interdependency among triples. Experimental results show that our approach can obtain state-of-the-art performance on three benchmark datasets DocRED, CDR, and GDA.
31 citations
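The abstract above describes casting document-level relation extraction as semantic segmentation over an entity-level matrix. A minimal sketch of the first step, building the image-style pairwise feature map from entity embeddings, might look like the following (the concatenation scheme here is an illustrative assumption, not the paper's exact feature construction):

```python
import numpy as np

def entity_pair_feature_map(entity_embs):
    """Build an image-style feature map where cell (i, j) encodes the
    entity pair (i, j) by concatenating the two embeddings. A U-shaped
    (encoder-decoder) segmentation network would then run over this map
    to predict the entity-level relation matrix."""
    n, d = entity_embs.shape
    rows = np.repeat(entity_embs[:, None, :], n, axis=1)  # (n, n, d): entity i
    cols = np.repeat(entity_embs[None, :, :], n, axis=0)  # (n, n, d): entity j
    return np.concatenate([rows, cols], axis=-1)          # (n, n, 2d)

embs = np.random.rand(4, 8)
fmap = entity_pair_feature_map(embs)  # shape (4, 4, 16)
```

Treating the pair matrix as an "image" is what lets the U-shaped module capture global dependencies among triples, the way segmentation networks capture long-range context in vision.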
25 Aug 2016
TL;DR: A machine translation model is constructed that goes deep into the semantic level of natural language to avoid semantic deviation of the translated text from the original, thereby improving translation quality; a pre-generated translation probability prediction model is used to rank candidate translations.
Abstract: A statistics-based machine translation method is disclosed. The method generates probabilities of translation from a sentence to be translated to candidate translated texts based on features of the candidate translated texts that affect the probabilities of translation and a pre-generated translation probability prediction model. The features that affect probabilities of translation include at least degrees of semantic similarity between the sentence to be translated and the candidate translated texts. A preset number of candidate translated texts with highly ranked probabilities of translation are selected to serve as translated texts of the sentence to be translated. The method is able to go deep into a semantic level of a natural language when a machine translation model is constructed to avoid a semantic deviation of a translated text from an original text, thereby achieving the effect of improving the quality of translation.
30 citations
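The abstract above describes scoring candidate translations with a probability prediction model over features that include semantic similarity, then keeping the top-ranked candidates. A toy sketch of that ranking step, assuming an illustrative log-linear scorer and made-up feature names and weights:

```python
def rank_candidates(candidates, weights, top_k=1):
    """Score each candidate translation with a weighted combination of
    its features (e.g. semantic similarity to the source, a language
    model score) and return the top_k candidates. Feature names and
    weights are illustrative stand-ins for the learned model."""
    def score(c):
        return sum(weights[f] * v for f, v in c["features"].items())
    return sorted(candidates, key=score, reverse=True)[:top_k]

weights = {"semantic_sim": 2.0, "lm_score": 1.0}
candidates = [
    {"text": "cat sat on mat", "features": {"semantic_sim": 0.9, "lm_score": 0.5}},
    {"text": "cat sits mat",   "features": {"semantic_sim": 0.6, "lm_score": 0.8}},
]
best = rank_candidates(candidates, weights)  # highest-scoring candidate first
```

Including a semantic-similarity feature is what lets the model penalize candidates that are fluent but drift from the source's meaning.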
TL;DR: This work proposes a uniform framework of differentiable TRG ($\partial$TRG) that can be applied to improve various TRG methods, in an automatic fashion, and demonstrates its power by simulating one- and two-dimensional quantum systems at finite temperature.
Abstract: Tensor renormalization group (TRG) constitutes an important methodology for accurate simulations of strongly correlated lattice models. Facilitated by the automatic differentiation technique widely used in deep learning, we propose a uniform framework of differentiable TRG ($\ensuremath{\partial}\mathrm{TRG}$) that can be applied to improve various TRG methods, in an automatic fashion. $\ensuremath{\partial}\mathrm{TRG}$ systematically extends the essential concept of second renormalization [Phys. Rev. Lett. 103, 160601 (2009)] where the tensor environment is computed recursively in the backward iteration. Given the forward TRG process, $\ensuremath{\partial}\mathrm{TRG}$ automatically finds the gradient of local tensors through backpropagation, with which one can deeply ``train'' the tensor networks. We benchmark $\ensuremath{\partial}\mathrm{TRG}$ in solving the square-lattice Ising model, and we demonstrate its power by simulating one- and two-dimensional quantum systems at finite temperature. The global optimization as well as GPU acceleration renders $\ensuremath{\partial}\mathrm{TRG}$ a highly efficient and accurate many-body computation approach.
30 citations
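The key idea in the abstract above is that the "environment" of a local tensor is exactly the gradient of the contracted quantity with respect to that tensor, which backpropagation computes automatically. A toy sketch with a single contraction (the full TRG flow nests many such contractions, which is where autodiff pays off):

```python
import numpy as np

def contract(A, B):
    # Toy "forward" contraction: Z = sum_ij A[i, j] * B[j, i]
    return np.einsum('ij,ji->', A, B)

def environment(A, B):
    # Backward pass by hand: dZ/dA[i, j] = B[j, i], i.e. the environment
    # of A. Automatic differentiation recovers this gradient for
    # arbitrarily deep compositions of contractions.
    return B.T

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))

# Check the analytic environment against a finite difference on A[0, 0]
eps = 1e-6
A_pert = A.copy()
A_pert[0, 0] += eps
numeric_grad = (contract(A_pert, B) - contract(A, B)) / eps
```

With the gradient of each local tensor in hand, the tensors can be optimized globally (as in second renormalization), rather than truncated greedily step by step.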
05 Jun 2018
TL;DR: A partitioned embedding network is proposed to learn interpretable embeddings from clothing items; experiments demonstrate that outfits recommended by the model are more desirable than those produced by existing methods.
Abstract: Intelligent fashion outfit composition has become increasingly popular in recent years. Some deep learning based approaches have recently shown competitive composition results. However, their uninterpretable nature means such approaches cannot meet designers', businesses', and consumers' urge to comprehend the importance of different attributes in an outfit composition. To realize interpretable and customized multi-item fashion outfit compositions, we propose a partitioned embedding network to learn interpretable embeddings from clothing items. The network consists of two vital components: an attribute partition module and a partition adversarial module. In the attribute partition module, multiple attribute labels are adopted to ensure that different parts of the overall embedding correspond to different attributes. In the partition adversarial module, adversarial operations are adopted to achieve the independence of the different parts. With the interpretable and partitioned embedding, we then construct an outfit composition graph and an attribute matching map. Extensive experiments demonstrate that 1) the partitioned embedding has unmingled parts, each corresponding to a different attribute, and 2) outfits recommended by our model are more desirable in comparison with existing methods.
30 citations
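The attribute partition module described above assigns different slices of one overall embedding to different attributes. A minimal sketch of that partitioning (attribute names and dimension sizes are illustrative; the attribute-label supervision and the adversarial independence module are not shown):

```python
import numpy as np

def partition_embedding(emb, attr_dims):
    """Split one item embedding into named attribute parts. In the
    paper, each part is supervised by its attribute labels, and an
    adversarial module enforces independence between the parts."""
    parts, start = {}, 0
    for name, d in attr_dims.items():
        parts[name] = emb[start:start + d]
        start += d
    assert start == emb.shape[0], "attribute dims must sum to embedding size"
    return parts

emb = np.arange(8.0)  # stand-in for a learned clothing-item embedding
parts = partition_embedding(emb, {"color": 3, "shape": 3, "texture": 2})
```

Because each slice is tied to one attribute, a recommendation can be explained attribute by attribute, which is the interpretability the abstract emphasizes.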
TL;DR: The authors propose an unsupervised multi-modal machine translation (UMNMT) framework based on a language translation cycle consistency loss conditioned on the image, aiming to learn bidirectional multi-modal translation simultaneously.
Abstract: Unsupervised neural machine translation (UNMT) has recently achieved remarkable results with only large monolingual corpora in each language. However, the uncertainty of associating target with source sentences makes UNMT theoretically an ill-posed problem. This work investigates the possibility of utilizing images for disambiguation to improve the performance of UNMT. Our assumption is intuitively based on the invariant property of images, i.e., descriptions of the same visual content in different languages should be approximately similar. We propose an unsupervised multi-modal machine translation (UMNMT) framework based on a language translation cycle consistency loss conditioned on the image, aiming to learn bidirectional multi-modal translation simultaneously. Through alternating training between multi-modal and uni-modal objectives, our inference model can translate with or without the image. On the widely used Multi30K dataset, the experimental results of our approach are significantly better than those of the text-only UNMT on the 2016 test dataset.
30 citations
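The cycle-consistency idea in the abstract above can be sketched as a round trip: translate the source (conditioned on the image), translate back, and penalize the deviation from the original. A toy surrogate, where `translate` and `back_translate` are hypothetical stand-ins for the two directions of the model and token mismatch counts stand in for the real reconstruction loss:

```python
def cycle_consistency_loss(src_tokens, translate, back_translate, image=None):
    """Round-trip reconstruction loss: src -> tgt -> src, both steps
    optionally conditioned on the image. Counts token mismatches plus a
    length penalty as a toy surrogate for the training loss."""
    tgt = translate(src_tokens, image)
    recon = back_translate(tgt, image)
    loss = sum(a != b for a, b in zip(src_tokens, recon))
    return loss + abs(len(src_tokens) - len(recon))

# A perfect round trip reconstructs the source exactly, giving zero loss
identity = lambda tokens, image: tokens
loss = cycle_consistency_loss(["a", "cat"], identity, identity)
```

Conditioning both directions on the same image is what supplies the disambiguation signal that plain text-only UNMT lacks.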
Authors
Showing all 6829 results
Name | H-index | Papers | Citations |
---|---|---|---|
Philip S. Yu | 148 | 1914 | 107374 |
Lei Zhang | 130 | 2312 | 86950 |
Jian Xu | 94 | 1366 | 52057 |
Wei Chu | 80 | 670 | 28771 |
Le Song | 76 | 345 | 21382 |
Yuan Xie | 76 | 739 | 24155 |
Narendra Ahuja | 76 | 474 | 29517 |
Rong Jin | 75 | 449 | 19456 |
Beng Chin Ooi | 73 | 408 | 19174 |
Wotao Yin | 72 | 303 | 27233 |
Deng Cai | 70 | 326 | 24524 |
Xiaofei He | 70 | 260 | 28215 |
Irwin King | 67 | 476 | 19056 |
Gang Wang | 65 | 373 | 21579 |
Xiaodan Liang | 61 | 318 | 14121 |