Partha Pratim Talukdar
Researcher at Indian Institute of Science
Publications - 178
Citations - 7606
Partha Pratim Talukdar is an academic researcher at the Indian Institute of Science. He has contributed to research on topics including Graph (abstract data type) and Computer science. He has an h-index of 42 and has co-authored 160 publications receiving 5,455 citations. His previous affiliations include Microsoft and the University of Pennsylvania.
Papers
Journal ArticleDOI
Never-ending learning
Tom M. Mitchell,William W. Cohen,Estevam R. Hruschka,Partha Pratim Talukdar,Bishan Yang,Justin Betteridge,Andrew Carlson,Bhavana Dalvi,Matt Gardner,Bryan Kisiel,Jayant Krishnamurthy,Ni Lao,Kathryn Mazaitis,T. Mohamed,Ndapandula Nakashole,Emmanouil Antonios Platanios,Alan Ritter,Mehdi Samadi,Burr Settles,Richard Wang,Derry Tanti Wijaya,Abhinav Gupta,Xinlei Chen,Abulhair Saparov,M. Greaves,J. Welling +25 more
TL;DR: The Never-Ending Language Learner (NELL) as discussed by the authors is a case study of a machine learning system that has been learning to read the Web 24 hours/day since January 2010 and has so far acquired a knowledge base of 120 million diverse, confidence-weighted beliefs (e.g., servedWith(tea, biscuits)), while learning thousands of interrelated functions that continually improve its reading competence over time.
Proceedings Article
Never-ending learning
Tom M. Mitchell,William W. Cohen,Estevam R. Hruschka,Partha Pratim Talukdar,Justin Betteridge,Andrew Carlson,Bhavana Dalvi,Matt Gardner,Bryan Kisiel,Jayant Krishnamurthy,Ni Lao,Kathryn Mazaitis,T. Mohamed,Ndapandula Nakashole,Emmanouil Antonios Platanios,Alan Ritter,Mehdi Samadi,Burr Settles,Richard Wang,Derry Tanti Wijaya,Abhinav Gupta,Xinlei Chen,Abulhair Saparov,M. Greaves,J. Welling +24 more
TL;DR: The Never-Ending Language Learner (NELL) as discussed by the authors is a machine learning system that has been learning to read the Web 24 hours/day since January 2010 and has so far acquired a knowledge base of over 80 million confidence-weighted beliefs (e.g., servedWith(tea, biscuits)), while continuously improving its reading competence over time.
Proceedings Article
Composition-based Multi-Relational Graph Convolutional Networks
TL;DR: CompGCN as discussed by the authors is a novel Graph Convolutional framework that jointly embeds both nodes and relations in a relational graph; it leverages a variety of entity-relation composition operations from knowledge graph embedding techniques and scales with the number of relations.
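The composition idea described above can be sketched in plain NumPy. This is an illustrative, single-layer approximation under stated assumptions (dense loops instead of sparse message passing, a single weight matrix, mean aggregation), not the authors' implementation; the function name and argument shapes are made up for the example. The three composition operators shown (subtraction, element-wise product, circular correlation) are the ones the abstract alludes to, borrowed from TransE-, DistMult-, and HolE-style embeddings.

```python
import numpy as np

def compgcn_layer(node_emb, rel_emb, edges, W, phi="sub"):
    """One CompGCN-style message-passing step (illustrative sketch).

    node_emb: (num_nodes, d) entity embeddings
    rel_emb:  (num_rels, d) relation embeddings
    edges:    list of (subject, relation, object) index triples
    W:        (d, d) shared projection matrix

    Each edge contributes a message phi(e_s, e_r) composed from the
    subject-entity and relation embeddings, projected by W and
    mean-aggregated at the object node.
    """
    d = node_emb.shape[1]
    out = np.zeros_like(node_emb)
    deg = np.zeros(node_emb.shape[0])
    for s, r, o in edges:
        if phi == "sub":            # subtraction (TransE-style)
            msg = node_emb[s] - rel_emb[r]
        elif phi == "mult":         # element-wise product (DistMult-style)
            msg = node_emb[s] * rel_emb[r]
        else:                       # circular correlation (HolE-style)
            msg = np.fft.irfft(
                np.conj(np.fft.rfft(node_emb[s])) * np.fft.rfft(rel_emb[r]), n=d
            )
        out[o] += msg @ W
        deg[o] += 1
    deg[deg == 0] = 1               # avoid division by zero for isolated nodes
    return np.tanh(out / deg[:, None])
```

Because the relation enters only through the composition phi and a shared W, parameters grow with the embedding dimension rather than with a per-relation weight matrix, which is the scaling property the abstract highlights.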
Proceedings ArticleDOI
Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings
TL;DR: EmbedKGQA is particularly effective at performing multi-hop KGQA over sparse KGs, and it relaxes the requirement of answer selection from a pre-specified neighborhood, a sub-optimal constraint enforced by previous multi-hop KGQA methods.
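The "relaxed neighborhood" point above can be made concrete with a small sketch. EmbedKGQA scores the question against every entity using a knowledge-graph embedding scoring function, so candidate answers are not restricted to a k-hop neighborhood of the topic entity. The sketch below assumes ComplEx-style complex-valued embeddings with the question embedding playing the role of a relation; the function name and shapes are illustrative assumptions, not the paper's API.

```python
import numpy as np

def embedkgqa_answer(head_emb, question_emb, entity_embs):
    """EmbedKGQA-style answer selection (illustrative sketch).

    head_emb:     (d,) complex embedding of the topic entity
    question_emb: (d,) complex embedding of the question, used in
                  place of a relation in the ComplEx score
    entity_embs:  (num_entities, d) complex embeddings of all entities

    Scores every entity as a candidate answer, with no neighborhood
    restriction, and returns the index of the highest-scoring one.
    ComplEx score: Re(<h, q, conj(t)>).
    """
    scores = np.real(np.sum(head_emb * question_emb * np.conj(entity_embs), axis=1))
    return int(np.argmax(scores))
```

Scoring all entities is what lets the method answer questions whose gold answer lies outside any pre-specified neighborhood, at the cost of a full pass over the entity set per question.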
Proceedings ArticleDOI
HyTE: Hyperplane-based Temporally aware Knowledge Graph Embedding
TL;DR: HyTE is a temporally aware KG embedding method which explicitly incorporates time in the entity-relation space by associating each timestamp with a corresponding hyperplane and not only performs KG inference using temporal guidance, but also predicts temporal scopes for relational facts with missing time annotations.
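The hyperplane idea in the abstract above can be written out in a few lines. Each timestamp t gets a normal vector w_t; entity and relation embeddings are projected onto t's hyperplane before a TransE-style distance is computed. This is a minimal NumPy sketch under that reading, with illustrative names and a plain L1 distance, not the authors' implementation.

```python
import numpy as np

def hyte_score(e_s, e_r, e_o, w_t):
    """HyTE-style temporal triple scoring (illustrative sketch).

    e_s, e_r, e_o: (d,) subject, relation, and object embeddings
    w_t:           (d,) normal vector of timestamp t's hyperplane

    All three embeddings are projected onto the hyperplane
    (the component along w_t is removed), then scored
    TransE-style: lower score = more plausible at time t.
    """
    w_t = w_t / np.linalg.norm(w_t)            # unit normal
    proj = lambda v: v - np.dot(w_t, v) * w_t  # hyperplane projection
    return np.linalg.norm(proj(e_s) + proj(e_r) - proj(e_o), ord=1)
```

Because only the projected components are scored, the same triple can be plausible on one timestamp's hyperplane and implausible on another's, which is what lets the model both use temporal guidance for inference and predict temporal scopes for facts with missing time annotations.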