Julian Martin Eisenschlos

Researcher at Google

Publications: 33
Citations: 782

Julian Martin Eisenschlos is an academic researcher at Google whose work focuses on topics including computer science and language models. He has an h-index of 7 and has co-authored 21 publications receiving 342 citations. His previous affiliations include Facebook.

Papers
Proceedings Article

TaPas: Weakly Supervised Table Parsing via Pre-training

TL;DR: TaPas is presented, an approach to question answering over tables without generating logical forms. It outperforms or rivals semantic parsing models, improving state-of-the-art accuracy on SQA and performing on par with the state of the art on WikiSQL and WikiTQ, while using a simpler model architecture.
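
TaPas checkpoints are also available through the Hugging Face transformers port of the model. Below is a minimal, illustrative sketch of querying a small table; the checkpoint name google/tapas-base-finetuned-wtq is an assumption about the published weights, and this is not the authors' original pipeline.

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

model_name = "google/tapas-base-finetuned-wtq"  # assumed public checkpoint
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# The table is consumed directly as text; cell values must be strings.
table = pd.DataFrame(
    {"City": ["Paris", "London"], "Population": ["2161000", "8982000"]}
)
queries = ["Which city has the larger population?"]

inputs = tokenizer(table=table, queries=queries,
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# TaPas predicts cell selections plus an aggregation operator instead of
# generating a logical form; decoding is a lookup into the input table.
coords, agg = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print([table.iat[r, c] for r, c in coords[0]], agg[0])
```

Because the model selects cells rather than producing a logical form, no symbolic executor is needed at inference time.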
Proceedings Article

MultiFiT: Efficient Multi-lingual Language Model Fine-tuning

TL;DR: This article proposes multi-lingual language model fine-tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language, and also proposes a zero-shot method that uses an existing pre-trained cross-lingual model.
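
As a rough illustration of the zero-shot method mentioned above, the sketch below pseudo-labels unlabeled target-language data with a cross-lingual teacher and fine-tunes a monolingual student on those labels. Both models are hypothetical linear stand-ins over fixed sentence embeddings, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: the paper's teacher is a pre-trained cross-lingual
# model and its student a monolingual language model; here both are plain
# linear classifiers over fixed 512-d sentence embeddings.
teacher = nn.Linear(512, 2)
student = nn.Linear(512, 2)

embeddings = torch.randn(64, 512)  # unlabeled target-language sentences

# Step 1: the cross-lingual teacher pseudo-labels the target-language data.
with torch.no_grad():
    pseudo_labels = teacher(embeddings).argmax(dim=-1)

# Step 2: fine-tune the monolingual student on the teacher's pseudo-labels.
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(3):
    optimizer.zero_grad()
    loss = loss_fn(student(embeddings), pseudo_labels)
    loss.backward()
    optimizer.step()
```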
Proceedings Article

Understanding tables with intermediate pre-training

TL;DR: This work adapts TAPAS (Herzig et al., 2020), a table-based BERT model, to recognize entailment, and creates a balanced dataset of millions of automatically created training examples, which are learned in an intermediate step prior to fine-tuning.
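
The entailment-adapted model is likewise available via the Hugging Face transformers port of TAPAS. A minimal sketch follows; the TabFact checkpoint name is an assumption about the published weights, and the label names are read from the model config rather than hard-coded.

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForSequenceClassification

model_name = "google/tapas-base-finetuned-tabfact"  # assumed checkpoint name
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# Cell values must be strings for the TaPas tokenizer.
table = pd.DataFrame({"Player": ["A", "B"], "Goals": ["12", "7"]})
sentence = "Player A scored more goals than player B."

inputs = tokenizer(table=table, queries=[sentence],
                   padding="max_length", return_tensors="pt")
logits = model(**inputs).logits

# Read the label name from the config instead of assuming an index mapping.
print(model.config.id2label[logits.argmax(-1).item()])
```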
Proceedings Article

Open Domain Question Answering over Tables via Dense Retrieval

TL;DR: This work tackles open-domain QA over tables for the first time, and shows that retrieval can be improved by a retriever designed to handle tabular context, and presents an effective pre-training procedure for this retriever.
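
The TL;DR describes a dual-encoder (dense) retriever; the sketch below shows the core scoring pattern, with hypothetical linear encoders standing in for the released table retriever. Because questions and tables are embedded independently, table vectors can be pre-computed and searched with an inner product.

```python
import torch
import torch.nn as nn

dim = 256
question_encoder = nn.Linear(768, dim)  # hypothetical stand-ins for the
table_encoder = nn.Linear(768, dim)     # paper's BERT-based dual encoder

question_feats = torch.randn(1, 768)    # one pre-encoded question
table_feats = torch.randn(10_000, 768)  # a corpus of pre-encoded tables

with torch.no_grad():
    q = question_encoder(question_feats)     # (1, dim)
    t = table_encoder(table_feats)           # (N, dim)
    scores = q @ t.T                         # inner-product relevance
    top5 = scores.topk(5, dim=-1).indices    # best-matching table indices
print(top5)
```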
Posted Content

TAPAS: Weakly Supervised Table Parsing via Pre-training

TL;DR: TAPAS, as presented in this paper, extends BERT's architecture to encode tables as input, is initialized from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end.