Pushpak Bhattacharyya
Researcher at Indian Institute of Technology Patna
Publications - 576
Citations - 8724
Pushpak Bhattacharyya is an academic researcher from Indian Institute of Technology Patna. The author has contributed to research in topics: Machine translation & WordNet. The author has an h-index of 38, has co-authored 576 publications receiving 6465 citations. Previous affiliations of Pushpak Bhattacharyya include Xerox & IBM.
Papers
Proceedings ArticleDOI
Substring-based unsupervised transliteration with phonetic and contextual knowledge
TL;DR: An unsupervised approach for substring-based transliteration that incorporates two new sources of knowledge into the learning process: (1) context, by learning substring mappings rather than single-character mappings, and (2) phonetic features, which capture cross-lingual character similarity via prior distributions.
Book ChapterDOI
Patient Data De-Identification: A Conditional Random-Field-Based Supervised Approach
TL;DR: Proposes a supervised machine learning technique, based on conditional random fields, for patient data de-identification, and provides insight into the task, its major challenges, techniques for addressing those challenges, a detailed analysis of the results, and directions for future improvement.
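A linear-chain CRF for de-identification is typically driven by hand-crafted token-level features. The sketch below shows a hypothetical feature extractor of that kind; the feature names and patterns are illustrative assumptions, not the chapter's actual feature set.

```python
import re

def token_features(tokens, i):
    """Hypothetical per-token feature dict of the kind fed to a
    linear-chain CRF tagger for de-identification (illustrative only)."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),          # capitalized tokens often begin names
        "is_digit": tok.isdigit(),
        # crude date shape, e.g. 12/03/2019 (an assumed pattern)
        "looks_like_date": bool(re.fullmatch(r"\d{1,2}/\d{1,2}/\d{2,4}", tok)),
        "prefix3": tok[:3],
        "suffix3": tok[-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

feats = token_features(["Admitted", "on", "12/03/2019", "by", "Dr", "Smith"], 2)
```

Feature dicts in this shape are what CRF toolkits such as `sklearn-crfsuite` consume, one dict per token, one sequence per sentence.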
Posted Content
Techniques for Jointly Extracting Entities and Relations: A Survey.
TL;DR: Surveys various techniques for jointly extracting entities and relations, and categorizes them by the approach they adopt for joint extraction, i.e. whether they employ joint inference, joint modelling, or both.
Proceedings ArticleDOI
A Retrofitting Model for Incorporating Semantic Relations into Word Embeddings
TL;DR: Presents a novel retrofitting model that leverages relational knowledge from a knowledge resource to improve word embeddings. Large gains are observed over the original distributional models, as well as over other retrofitting approaches, on a word similarity task, along with a significant overall improvement on a lexical entailment detection task.
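The core idea of retrofitting can be sketched in a few lines: each word vector is iteratively pulled toward the vectors of its semantically related neighbours while staying close to its original distributional vector (in the spirit of Faruqui et al., 2015, which this line of work builds on). The vectors, vocabulary, and relation graph below are toy assumptions.

```python
def retrofit(vectors, graph, alpha=1.0, iterations=10):
    """Return vectors nudged toward their neighbours in `graph`.

    vectors: dict word -> list[float], original distributional vectors
    graph:   dict word -> list of related words (from a knowledge resource)
    alpha:   weight anchoring each word to its original vector
    """
    new = {w: list(v) for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbours in graph.items():
            nbrs = [n for n in neighbours if n in new]
            if not nbrs:
                continue
            for d in range(len(new[word])):
                # weighted average of the original vector and current neighbours
                num = alpha * vectors[word][d] + sum(new[n][d] for n in nbrs)
                new[word][d] = num / (alpha + len(nbrs))
    return new

# toy example: "cat" and "feline" are linked, so their vectors move together;
# "car" has no relations and keeps its original vector
vecs = {"cat": [1.0, 0.0], "feline": [0.0, 1.0], "car": [1.0, 1.0]}
rel = {"cat": ["feline"], "feline": ["cat"]}
out = retrofit(vecs, rel)
```

After retrofitting, related words end up measurably closer than in the original space, while unrelated words are untouched.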
Posted Content
Related Tasks can Share! A Multi-task Framework for Affective language.
TL;DR: Evaluation and analysis suggest that jointly learning related tasks in a multi-task framework can outperform the corresponding single-task models on each individual task.