Author

Dora Demszky

Bio: Dora Demszky is an academic researcher. The author has contributed to research in topics: Sociotechnical system & Personalized learning. The author has an h-index of 1 and has co-authored 1 publication receiving 51 citations.

Papers
Posted Content
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ B. Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie Chen, Kathleen Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Ahmad Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Yang Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang
TL;DR: The authors provide a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications.
Abstract: AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
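For readers unfamiliar with the adaptation step the abstract refers to, the following is a minimal, illustrative sketch of fine-tuning a pretrained model on a downstream classification task using the Hugging Face transformers API. The model choice, example data, and hyperparameters are assumptions for illustration only, not taken from the report.

```python
# Minimal sketch: adapting a pretrained (foundation) model to a downstream task.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tiny illustrative dataset; real adaptation uses a full labelled corpus.
texts = ["the tutor explained the concept clearly", "the session was confusing"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps for illustration
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```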

76 citations

Book Chapter
TL;DR: In this paper, the authors highlight the challenges and opportunities of AI-in-the-loop math tutoring and encourage discourse in the AIED community to develop human-AI hybrid tutoring and teaching systems.
Abstract: One of the primary obstacles to improving middle school math achievement is lack of equitable access to high-quality learning opportunities. Human delivery of high-dosage tutoring can bring significant learning gains, but students, particularly economically disadvantaged students, have limited access to well-trained tutors. Augmenting human tutor abilities through the use of artificial intelligence (AI) technology is one way to scale up access to tutors without compromising learning quality. This workshop aims to highlight the challenges and opportunities of AI-in-the-loop math tutoring and encourage discourse in the AIED community to develop human-AI hybrid tutoring and teaching systems. We invite papers that provide clearer understanding and support the progress of human and AI-assisted personalized learning technologies. The structure of this full-day hybrid workshop will include presentations of accepted papers, small or whole group discussion, and a panel discussion focusing on common themes related to research and application, key takeaways, and findings imperative to increasing middle school math learning.

Cited by
Posted Content
TL;DR: The authors show that instruction tuning -- finetuning a language model on a collection of tasks described via natural language instructions -- substantially improves zero-shot performance on unseen tasks, and that the resulting model even outperforms few-shot GPT-3 by a large margin on several NLP benchmarks.
Abstract: This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially boosts zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 19 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of tasks and model scale are key components to the success of instruction tuning.
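A minimal sketch of how a single supervised example can be "verbalized" with a natural-language instruction template, as described above. The template wording and field names are assumptions for illustration, not FLAN's actual templates.

```python
# Minimal sketch: turning a supervised NLI example into an instruction-style
# (input, target) pair suitable for instruction tuning.
def to_instruction_example(premise: str, hypothesis: str, label: str) -> dict:
    prompt = (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? Answer yes, no, or maybe."
    )
    return {"input": prompt, "target": label}

example = to_instruction_example(
    premise="The tutor reviewed fractions with the student.",
    hypothesis="The student received math help.",
    label="yes",
)
# Instruction tuning then fine-tunes the language model on many such pairs
# drawn from dozens of different tasks, each rendered through several templates.
```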

31 citations

Posted Content
TL;DR: In this paper, the authors introduce the concept of chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step.
Abstract: Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by "unit-testing" sub-components of a Chain. In two case studies, we further explore how LLM Chains may be used in future applications.
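A minimal sketch of the chaining idea: each step's output becomes the next step's input. The call_llm helper is a hypothetical stand-in for any LLM API, and the sub-task decomposition shown is an illustrative assumption, not the set of primitive operations defined in the paper.

```python
# Minimal sketch: chaining LLM steps so each output feeds the next step.
from typing import Callable, List

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: replace with a real LLM API call.
    return f"<response to: {prompt[:40]}...>"

def run_chain(task: str, steps: List[Callable[[str], str]]) -> str:
    result = task
    for step in steps:
        result = step(result)  # the output of one step is the input of the next
    return result

steps = [
    lambda text: call_llm(f"Split this task into bullet-point sub-tasks:\n{text}"),
    lambda bullets: call_llm(f"Draft a solution for each sub-task:\n{bullets}"),
    lambda drafts: call_llm(f"Rewrite the drafts into one coherent answer:\n{drafts}"),
]
answer = run_chain("Plan a study session on linear equations.", steps)
```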

21 citations

Journal Article
TL;DR: This work uses a combination of a vector quantized generative adversarial network and contrastive language-image pre-training neural networks to generate images, which are translated into 3D architectures that are then 3D printed using fused deposition modeling into materials with varying rigidity.
Abstract: We describe a method to generate 3D architected materials based on mathematically parameterized human readable word input, offering a direct materialization of language. Our method uses a combination of a vector quantized generative adversarial network and contrastive language-image pre-training neural networks to generate images, which are translated into 3D architectures that are then 3D printed using fused deposition modeling into materials with varying rigidity. The novel materials are further analyzed in a metallic realization as an aluminum-based nano-architecture, using molecular dynamics modeling and thereby providing mechanistic insights into the physical behavior of the material under extreme compressive loading. This work offers a novel way to design, understand, and manufacture 3D architected materials designed from mathematically parameterized language input. Our work features, at its core, a generally applicable algorithm that transforms any 2D image data into hierarchical fully tileable, periodic architected materials. This method can have broader applications beyond language-based materials design and can render other avenues for the analysis and manufacturing of architected materials, including microstructure gradients through parametric modeling. As an emerging field, language-based design approaches can have a profound impact on end-to-end design environments and drive a new understanding of physical phenomena that intersect directly with human language and creativity. It may also be used to exploit information mined from diverse and complex databases and data sources.
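The core claim above is a generally applicable algorithm that turns any 2D image into a fully tileable, periodic 3D structure. Below is a minimal illustrative sketch of that general idea only; it is not the authors' VQGAN/CLIP pipeline or their actual tiling algorithm, and the mirroring-and-extrusion scheme is an assumption made for illustration.

```python
# Minimal sketch: make a 2D grayscale image seamlessly tileable by mirroring,
# then extrude pixel intensity into column height to get a periodic voxel cell.
import numpy as np

def image_to_tileable_voxels(img: np.ndarray, height: int = 16) -> np.ndarray:
    img = (img - img.min()) / (np.ptp(img) + 1e-8)                    # normalize to [0, 1]
    cell = np.block([[img, np.fliplr(img)],
                     [np.flipud(img), np.flipud(np.fliplr(img))]])    # mirrored -> tileable
    heights = np.rint(cell * height).astype(int)                      # brighter -> taller column
    z = np.arange(height).reshape(height, 1, 1)
    return z < heights[None, :, :]                                    # boolean occupancy grid

rng = np.random.default_rng(0)
voxels = image_to_tileable_voxels(rng.random((32, 32)))
print(voxels.shape)  # (16, 64, 64): a unit cell that can be repeated periodically
```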

17 citations

Posted Content
TL;DR: This paper shows that ensembling the weights of the zero-shot and fine-tuned models (weight-space ensembles) provides large accuracy improvements out-of-distribution, while matching or improving in-distribution accuracy.
Abstract: Large pre-trained models such as CLIP offer consistent accuracy across a range of data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific dataset). Although existing fine-tuning approaches substantially improve accuracy in-distribution, they also reduce out-of-distribution robustness. We address this tension by introducing a simple and effective method for improving robustness: ensembling the weights of the zero-shot and fine-tuned models. Compared to standard fine-tuning, the resulting weight-space ensembles provide large accuracy improvements out-of-distribution, while matching or improving in-distribution accuracy. On ImageNet and five derived distribution shifts, weight-space ensembles improve out-of-distribution accuracy by 2 to 10 percentage points while increasing in-distribution accuracy by nearly 1 percentage point relative to standard fine-tuning. These improvements come at no additional computational cost during fine-tuning or inference.
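A minimal sketch of the weight-space ensembling described above, assuming a simple linear interpolation of parameters; the coefficient name alpha and its 0.5 default are illustrative assumptions.

```python
# Minimal sketch: interpolate the parameters of the zero-shot and fine-tuned models.
import torch

def weight_space_ensemble(zero_shot: torch.nn.Module,
                          fine_tuned: torch.nn.Module,
                          alpha: float = 0.5) -> dict:
    """Return a state dict with theta = (1 - alpha) * theta_zero + alpha * theta_ft."""
    sd_zero = zero_shot.state_dict()
    sd_ft = fine_tuned.state_dict()
    return {
        # Interpolate floating-point tensors; keep integer buffers (e.g. counters) as-is.
        k: (1 - alpha) * sd_zero[k] + alpha * sd_ft[k]
        if sd_zero[k].is_floating_point() else sd_zero[k]
        for k in sd_zero
    }

# Usage: load the ensembled weights into a model with the same architecture, e.g.
# model.load_state_dict(weight_space_ensemble(zero_shot_model, fine_tuned_model, alpha=0.5))
```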

15 citations

Journal Article
Jerret Ross, Chris Brown
TL;DR: In this paper, the authors propose MoLFormer, a transformer-based model with rotary positional embeddings trained on SMILES sequences, and show that it learns the spatial relationships between atoms within a molecule and captures sufficient chemical and structural information to predict various distinct molecular properties.
Abstract: Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make supervised learning challenging. Recently, unsupervised transformer-based language models pretrained on a large unlabelled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabelled molecules from the PubChem and ZINC datasets. We show that the learned molecular representation outperforms existing baselines, including supervised and self-supervised graph neural networks and language models, on several downstream tasks from ten benchmark datasets. They perform competitively on two others. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties. Large language models have recently emerged with extraordinary capabilities, and these methods can be applied to model other kinds of sequence, such as string representations of molecules. Ross and colleagues have created a transformer-based model, trained on a large dataset of molecules, which provides good results on property prediction tasks.
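A minimal sketch of rotary positional embeddings, the position-encoding mechanism the abstract attributes to MoLFormer. The tensor shapes and the base constant follow a common RoPE formulation and are assumptions for illustration, not MoLFormer's actual implementation.

```python
# Minimal sketch: rotary positional embeddings applied to a (seq_len, dim) tensor.
import torch

def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate pairs of feature dimensions by position-dependent angles (dim must be even)."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(0, half, dtype=torch.float32) / half)   # (half,)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs   # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Queries and keys are rotated this way before attention, so their dot product
# depends only on the relative position between tokens.
q = torch.randn(5, 8)
q_rot = rotary_embed(q)
```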

7 citations