Ming-Wei Chang
Researcher at Google
Publications: 107
Citations: 65,337
Ming-Wei Chang is an academic researcher at Google. He has contributed to research on topics including question answering and parsing, has an h-index of 41, and has co-authored 98 publications receiving 36,404 citations. His previous affiliations include Microsoft and National Taiwan University.
Papers
Proceedings Article
From Entity Linking to Question Answering – Recent Progress on Semantic Grounding Tasks
TL;DR: This talk argues that carefully designed structured learning algorithms play a central role in entity linking and semantic parsing tasks and presents several new structured learning models for entity linking, which jointly detect mentions and disambiguate entities as well as capture non-textual information.
Journal ArticleDOI
Rethinking the Role of Token Retrieval in Multi-Vector Retrieval
Jinhyuk Lee, Zhuyun Dai, Sai Meher Karthik Duddu, Tao Lei, Iftekhar Naim, Ming-Wei Chang, Vincent Zhao +6 more
TL;DR: XTR proposes a simple yet novel objective function that encourages the model to retrieve the most important document tokens first, enabling a newly designed scoring stage that is two to three orders of magnitude cheaper than ColBERT's.
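The token-level scoring this summary refers to can be sketched as a MaxSim-style sum over query tokens, as in ColBERT-family multi-vector retrievers. The function name, toy 2-d embeddings, and documents below are illustrative assumptions, not code or data from the paper:

```python
# Minimal sketch of multi-vector MaxSim scoring: for each query token,
# take its best similarity to any document token, then sum over query tokens.
# XTR's contribution (training the retriever so that retrieved tokens suffice
# for this score) is not reproduced here; this only shows the scoring shape.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_vecs, doc_token_vecs):
    """Sum over query tokens of the max similarity to any document token."""
    return sum(max(dot(q, d) for d in doc_token_vecs) for q in query_vecs)

# Toy 2-d token embeddings (invented for illustration).
query = [[1.0, 0.0], [0.0, 1.0]]
doc_a = [[0.9, 0.1], [0.2, 0.8]]   # aligns well with both query tokens
doc_b = [[0.1, 0.1], [0.0, 0.2]]   # aligns poorly

assert maxsim_score(query, doc_a) > maxsim_score(query, doc_b)
```

Restricting `doc_token_vecs` to only the tokens that were actually retrieved is what makes the scoring stage cheap relative to rescoring every token of every candidate document.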
Patent
Retrieval-augmented language model pre-training and fine-tuning
TL;DR: In this paper, a neural-network-based textual knowledge retriever is trained along with a language model to retrieve helpful information from a large unlabeled corpus, rather than requiring all potentially relevant information to be stored implicitly in the parameters of the neural network.
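The retrieval step described above can be sketched as dense inner-product search: the retriever embeds the query and each corpus document, scores documents by inner product, and passes the top-k to the language model. The embeddings and function names below are toy assumptions for illustration, not the trained retriever:

```python
# Minimal sketch of dense retrieval over an unlabeled corpus: score each
# document embedding against the query embedding by inner product and
# return the indices of the k best. Real systems use learned encoders
# and approximate nearest-neighbor search; this shows only the scoring logic.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def top_k(query_vec, doc_vecs, k):
    scored = sorted(enumerate(doc_vecs),
                    key=lambda iv: dot(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:k]]

# Toy 2-d document embeddings (invented for illustration).
corpus = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
print(top_k([1.0, 0.0], corpus, 2))  # → [0, 2]
```

The retrieved documents are then concatenated with the input so the language model can condition on them, rather than storing all knowledge in its parameters.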
Journal ArticleDOI
Fabrication of Highly Aligned MWCNTs-Loaded NGCs for Peripheral Nerve Regeneration via a Novel One-Step Modified Electrohydrodynamic Printing Method
Proceedings ArticleDOI
QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations
TL;DR: QUEST is a dataset of 3,357 natural language queries with implicit set operations, each mapping to a set of entities corresponding to Wikipedia documents; the queries are paraphrased and validated for naturalness and fluency by crowdworkers.
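The "implicit set operations" in such queries can be made explicit as unions, intersections, and differences over entity sets. The category names and entity sets below are invented for illustration and are not examples from the dataset:

```python
# Minimal sketch: a query like "national parks in Utah that are not deserts"
# implicitly combines three entity sets with intersection and difference.
# All sets here are toy data, not QUEST annotations.

entity_sets = {
    "national parks": {"Zion", "Arches", "Yosemite"},
    "places in Utah": {"Zion", "Arches", "Salt Lake City"},
    "deserts": {"Arches"},
}

answer = (entity_sets["national parks"]
          & entity_sets["places in Utah"]) - entity_sets["deserts"]
assert answer == {"Zion"}
```

A retrieval system evaluated on QUEST must recover the full answer set, which is what makes queries with negation and intersection harder than single-entity lookups.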