
Shachi Dave

Researcher at Indian Institutes of Technology

Publications - 9
Citations - 259

Shachi Dave is an academic researcher from the Indian Institutes of Technology. The author has contributed to research on topics including computer science and interlingua, has an h-index of 2, and has co-authored 2 publications receiving 159 citations.

Papers
Journal ArticleDOI

Interlingua-based English–Hindi Machine Translation and Language Divergence

TL;DR: The work presented here is, to the authors' knowledge, the only one that describes language divergence phenomena in the framework of computational linguistics through a South Asian language.
Journal ArticleDOI

PaLM 2 Technical Report

Rohan Anil, +121 more
- 17 May 2023 - 
TL;DR: PaLM 2 is a Transformer-based model trained using a mixture of objectives; it has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor, PaLM.
Journal ArticleDOI

Knowledge Extraction from Hindi Text

TL;DR: This paper presents a unique approach to knowledge extraction from Hindi text that preserves the predicate till the end and produces a semantic-net-like structure expressed in Universal Networking Language (UNL), a recently proposed interlingua.
Journal ArticleDOI

Re-contextualizing Fairness in NLP: The Case of India

TL;DR: In this article, the authors focus on NLP fairness in the context of India, build resources for fairness evaluation in the Indian context, and use them to demonstrate prediction biases along several axes of social disparity.
Proceedings ArticleDOI

Bootstrapping Multilingual Semantic Parsers using Large Language Models

TL;DR: This work considers the task of multilingual semantic parsing and demonstrates the effectiveness and flexibility offered by large language models (LLMs) for translating English datasets into several languages via few-shot prompting; it also provides a comprehensive study of the key design choices that enable effective data translation via prompted LLMs.
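
The last TL;DR describes translating English semantic-parsing data into other languages via few-shot prompting. As a rough illustration of that idea, here is a minimal Python sketch of how such a prompt might be assembled; the exemplar pairs, the Hindi target language, and the call_llm stub are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's actual pipeline) of bootstrapping a multilingual
# dataset: a few human-translated exemplars are prepended to each English utterance,
# and the LLM is asked to complete the translation.

# Illustrative exemplar pairs: (English utterance, human translation into the target language).
FEW_SHOT_EXEMPLARS = [
    ("set an alarm for 7 am", "subah 7 baje ka alarm lagao"),
    ("what is the weather today", "aaj mausam kaisa hai"),
]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion API; replace with a real call."""
    raise NotImplementedError("plug in an LLM completion endpoint here")

def build_prompt(english_utterance: str, target_language: str = "Hindi") -> str:
    """Assemble a few-shot translation prompt from the exemplar pairs."""
    lines = [f"Translate the following English sentences to {target_language}."]
    for en, tgt in FEW_SHOT_EXEMPLARS:
        lines.append(f"English: {en}\n{target_language}: {tgt}")
    lines.append(f"English: {english_utterance}\n{target_language}:")
    return "\n\n".join(lines)

def translate_dataset(english_utterances, target_language="Hindi"):
    """Translate each utterance with the prompted LLM."""
    return [call_llm(build_prompt(u, target_language)) for u in english_utterances]
```

In this bootstrapping setup, only the natural-language utterances need translating; the paired logical forms are language-independent, so they can often be carried over to the new language unchanged.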