
Bodhisattwa Prasad Majumder

Researcher at University of California, San Diego

Publications: 35
Citations: 674

Bodhisattwa Prasad Majumder is an academic researcher at the University of California, San Diego. He has contributed to research on the topics of Dialog box & Persona, has an h-index of 10, and has co-authored 35 publications receiving 381 citations. His previous affiliations include Kansas State University and Jadavpur University.

Papers
Posted Content

ReZero is All You Need: Fast Convergence at Large Depth

TL;DR: This work shows that the simplest architecture change, gating each residual connection with a single zero-initialized parameter, satisfies initial dynamical isometry and outperforms more complex approaches; applied to language modeling, it can easily train 120-layer Transformers.
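The gating change described in the TL;DR can be sketched in a few lines. This is a minimal NumPy illustration under assumptions, not the paper's implementation: the residual sub-network here is a placeholder single linear layer with ReLU, and `ReZeroBlock` is a hypothetical name. The key point is the single scalar gate `alpha`, initialized to zero, so each block starts as the identity map.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class ReZeroBlock:
    """Residual block with a ReZero gate: out = x + alpha * F(x),
    where alpha is one scalar per block, initialized to zero."""

    def __init__(self, dim, rng):
        # Placeholder sub-network F: one linear layer + ReLU.
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.alpha = 0.0  # zero-initialized gate

    def forward(self, x):
        return x + self.alpha * relu(x @ self.W)

rng = np.random.default_rng(0)
block = ReZeroBlock(4, rng)
x = rng.standard_normal(4)

# At initialization alpha = 0, so the block is exactly the identity,
# which is what makes very deep stacks trainable from the start.
assert np.allclose(block.forward(x), x)
```

During training, `alpha` is learned like any other parameter, so each block gradually "turns on" its residual branch instead of perturbing the signal at initialization.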
Proceedings ArticleDOI

Representation Learning for Information Extraction from Form-like Documents

TL;DR: An extraction system that uses knowledge of the types of the target fields to generate extraction candidates and a neural network architecture that learns a dense representation of each candidate based on neighboring words in the document is proposed.
Proceedings ArticleDOI

An efficient iterative double auction for energy trading in microgrids

TL;DR: Simulation results indicate that the proposed iterative double auction can establish social welfare maximization, requiring only a reasonable amount of computational overhead.
Proceedings ArticleDOI

Generating Personalized Recipes from Historical User Preferences

TL;DR: This work proposes a new task of personalized recipe generation to help users with culinary preferences: expanding a name and incomplete ingredient details into complete natural-text instructions aligned with the user’s historical preferences.
Proceedings ArticleDOI

Improving Neural Story Generation by Targeted Common Sense Grounding

TL;DR: A simple multi-task learning scheme achieves quantitatively better common sense reasoning in language models by leveraging auxiliary training signals from datasets designed to provide common sense grounding.