
Bailin Wang

Researcher at University of Edinburgh

Publications -  30
Citations -  1167

Bailin Wang is an academic researcher from the University of Edinburgh. The author has contributed to research in topics: Parsing & Computer science. The author has an h-index of 10 and has co-authored 22 publications receiving 483 citations. Previous affiliations of Bailin Wang include the University of Massachusetts Amherst.

Papers
Proceedings ArticleDOI

RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers

TL;DR: This work presents a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder, and achieves new state-of-the-art performance on the Spider leaderboard.
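The relation-aware self-attention mechanism mentioned above extends standard self-attention by injecting a learned embedding for the relation between each pair of items (e.g., a question token and a schema column) into the attention scores and values. A minimal single-head sketch in NumPy, with illustrative names and shapes (not the authors' actual implementation):

```python
import numpy as np

def relation_aware_self_attention(X, R_k, R_v, Wq, Wk, Wv):
    """One head of relation-aware self-attention (a sketch of the
    mechanism RAT-SQL builds on; all names here are illustrative).

    X    : (n, d)    embeddings of the n input items
    R_k  : (n, n, d) relation embeddings added to the keys
    R_v  : (n, n, d) relation embeddings added to the values
    """
    n, d = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # scores[i, j] = q_i . (k_j + r_ij^K) / sqrt(d)
    scores = np.einsum('id,ijd->ij', Q, K[None, :, :] + R_k) / np.sqrt(d)
    # row-wise softmax over j
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)
    # output_i = sum_j A[i, j] * (v_j + r_ij^V)
    return np.einsum('ij,ijd->id', A, V[None, :, :] + R_v)
```

With all relation embeddings set to zero this reduces to vanilla scaled dot-product attention; nonzero `R_k`/`R_v` entries are what let the encoder represent schema-linking relations explicitly.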
Proceedings Article

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models

TL;DR: The UnifiedSKG framework is proposed, which unifies 21 structured knowledge grounding (SKG) tasks into a text-to-text format, aiming to promote systematic SKG research instead of being exclusive to a single task, domain, or dataset.
Posted Content

GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing

TL;DR: GraPPa is an effective pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data; it significantly outperforms RoBERTa-large when used as the feature representation layer and establishes new state-of-the-art results on all evaluated tasks.
Proceedings ArticleDOI

Neural Segmental Hypergraphs for Overlapping Mention Recognition

TL;DR: This work proposes a novel segmental hypergraph representation to model overlapping entity mentions that are prevalent in many practical datasets and shows that the model built on top of such a new representation is able to capture features and interactions that cannot be captured by previous models while maintaining a low time complexity for inference.
Proceedings Article

Learning Latent Opinions for Aspect-level Sentiment Classification

TL;DR: A segmentation-attention-based LSTM model is proposed which can effectively capture the structural dependencies between the target and the sentiment expressions with a linear-chain conditional random field (CRF) layer.
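A linear-chain CRF layer like the one mentioned above decodes the best tag sequence with the Viterbi algorithm, combining per-position emission scores (here assumed to come from the LSTM) with pairwise transition scores. A generic NumPy sketch of that decoding step, not the paper's specific model:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF (generic sketch).

    emissions   : (T, K) per-position tag scores, e.g. from an LSTM
    transitions : (K, K) transitions[a, b] = score of tag a -> tag b
    Returns the highest-scoring tag sequence as a list of tag indices.
    """
    T, K = emissions.shape
    score = emissions[0].copy()           # best score ending in each tag
    back = np.zeros((T, K), dtype=int)    # backpointers
    for t in range(1, T):
        # cand[a, b] = best path ending in a, then a -> b, then emit tag b
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # trace the backpointers from the best final tag
    tags = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]
```

The transition matrix is what lets the CRF enforce structural consistency between adjacent tags, which independent per-position classification cannot.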