
Mingyang Zhang

Researcher at Google

Publications - 36
Citations - 555

Mingyang Zhang is an academic researcher at Google. The author has contributed to research on topics including Computer science and Language model, has an h-index of 10, and has co-authored 24 publications receiving 355 citations. Previous affiliations of Mingyang Zhang include George Washington University and Nanyang Technological University.

Papers
Proceedings Article

Semantic Text Matching for Long-Form Documents

TL;DR: This paper proposes a novel Siamese multi-depth attention-based hierarchical recurrent neural network (SMASH RNN) that learns long-form document semantics and enables semantic text matching between long-form documents.
Proceedings Article

Situational Context for Ranking in Personal Search

TL;DR: This paper proposes two context-aware neural ranking models for personal search that significantly outperform baselines which do not take context into account; the models are evaluated using click data collected from one of the world's largest personal search engines.
Proceedings Article

Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching

TL;DR: The proposed SMITH model outperforms previous state-of-the-art models, including hierarchical attention, the multi-depth attention-based hierarchical recurrent neural network, and BERT, and increases the maximum input text length from 512 to 2048 tokens.
Proceedings Article

Performance Comparison of Flat and Cluster-Based Hierarchical Ad Hoc Routing with Entity and Group Mobility

TL;DR: It is observed that the effects of group mobility on routing protocols are significantly different from those of entity mobility.