Institution

Salesforce.com

About: Salesforce.com is a cloud software company based in San Francisco, California. It is known for research contributions in the topics of User interface and Object (computer science). The organization has 2418 authors who have published 2775 publications receiving 63956 citations.


Papers
Patent
31 Mar 2005
TL;DR: In this patent, a multi-tenant database system is described in which each organization can add or define custom fields for inclusion in a standard object; each organization may also define custom objects, including custom fields and indexing columns.
Abstract: Systems and methods for hosting variable schema data such as dynamic tables and columns in a fixed physical database schema. Standard objects, such as tables are provided for use by multiple tenants or organizations in a multi-tenant database system. Each organization may add or define custom fields for inclusion in a standard object. Custom fields for multiple tenants are stored in a single field within the object data structure, and this single field may contain different data types for each tenant. Indexing columns are also provided, wherein a tenant may designate a field for indexing. Data values for designated fields are copied to an index column, and each index column may include multiple data types. Each organization may also define custom objects including custom fields and indexing columns. Custom objects for multiple tenants are stored in a single custom object data structure. The primary key values for the single custom object table are globally unique, but also include an object-specific identifier which may be re-used among different entities.
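The abstract describes a fixed physical schema that hosts variable per-tenant logical schemas. A minimal sketch of that idea, in Python with SQLite, is shown below; the table and column names (account, custom_field_def, val0, val1) are illustrative assumptions, not Salesforce's actual schema.

```python
# Minimal sketch of the variable-schema idea from the patent above: custom
# fields for every tenant share generic text columns in one physical table,
# and per-tenant metadata records which slot holds which field and type.
# All names here are illustrative, not the real Salesforce schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (            -- shared "standard object" table
    org_id  TEXT NOT NULL,        -- tenant identifier
    id      TEXT PRIMARY KEY,     -- globally unique row id
    name    TEXT,                 -- a standard field
    val0    TEXT, val1 TEXT       -- generic slots for custom fields
);
CREATE TABLE custom_field_def (   -- per-tenant mapping of slot -> field
    org_id TEXT, slot TEXT, field_name TEXT, field_type TEXT
);
""")

# Tenant A uses slot val0 for a number, tenant B uses the same slot for a date.
conn.execute("INSERT INTO custom_field_def VALUES ('orgA','val0','employees','NUMBER')")
conn.execute("INSERT INTO custom_field_def VALUES ('orgB','val0','renewal_date','DATE')")
conn.execute("INSERT INTO account VALUES ('orgA','001A','Acme','42',NULL)")
conn.execute("INSERT INTO account VALUES ('orgB','001B','Globex','2024-01-01',NULL)")

# Reading a custom field means looking up that tenant's slot first.
slot, = conn.execute(
    "SELECT slot FROM custom_field_def WHERE org_id='orgA' AND field_name='employees'"
).fetchone()
print(conn.execute(f"SELECT {slot} FROM account WHERE org_id='orgA'").fetchall())
```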

702 citations

Proceedings ArticleDOI
01 Sep 2017
TL;DR: An effective new model is proposed, combining an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction; the authors also build TACRED, a large supervised relation extraction dataset obtained via crowdsourcing and targeted towards TAC KBP relations.
Abstract: Organized relational knowledge in the form of “knowledge graphs” is important for many applications. However, the ability to populate knowledge bases with facts automatically extracted from documents has improved frustratingly slowly. This paper simultaneously addresses two issues that have held back prior work. We first propose an effective new model, which combines an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction. Then we build TACRED, a large (119,474 examples) supervised relation extraction dataset obtained via crowdsourcing and targeted towards TAC KBP relations. The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance. When the model trained on this new dataset replaces the previous relation extraction component of the best TAC KBP 2015 slot filling system, its F1 score increases markedly from 22.2% to 26.7%.
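A compact sketch of what entity position-aware attention over LSTM outputs can look like is given below, in PyTorch; the scoring network, dimensions, and distance clamping are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of entity position-aware attention over LSTM states, in the spirit of
# the TACRED paper summarized above. Sizes and bucketing are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAwareAttention(nn.Module):
    def __init__(self, hidden_dim, pos_dim, max_dist=50):
        super().__init__()
        self.max_dist = max_dist
        # Embeddings for each token's distance to the subject and object entities.
        self.subj_pos = nn.Embedding(2 * max_dist + 1, pos_dim)
        self.obj_pos = nn.Embedding(2 * max_dist + 1, pos_dim)
        self.score = nn.Sequential(
            nn.Linear(2 * hidden_dim + 2 * pos_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h, q, subj_dist, obj_dist):
        # h: (batch, seq, hidden) LSTM outputs; q: (batch, hidden) summary state
        # subj_dist, obj_dist: (batch, seq) signed token distances to each entity
        subj_dist = subj_dist.clamp(-self.max_dist, self.max_dist) + self.max_dist
        obj_dist = obj_dist.clamp(-self.max_dist, self.max_dist) + self.max_dist
        feats = torch.cat(
            [h, q.unsqueeze(1).expand_as(h),
             self.subj_pos(subj_dist), self.obj_pos(obj_dist)], dim=-1)
        attn = F.softmax(self.score(feats).squeeze(-1), dim=1)   # (batch, seq)
        return torch.bmm(attn.unsqueeze(1), h).squeeze(1)        # (batch, hidden)
```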

697 citations

Proceedings Article
04 Dec 2017
TL;DR: Context vectors produced by a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation are used to contextualize word vectors, and adding them improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks.
Abstract: Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.
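A minimal sketch of the CoVe recipe, concatenating word vectors with the outputs of a biLSTM encoder, is below in PyTorch; the encoder here is randomly initialized for illustration, whereas the paper uses weights pretrained as part of an MT model.

```python
# Sketch of the CoVe idea from the abstract above: word vectors are passed
# through a biLSTM encoder and the encoder outputs are concatenated with the
# original vectors before the downstream task model sees them.
import torch
import torch.nn as nn

class CoVeLike(nn.Module):
    def __init__(self, embed_dim=300, hidden_dim=300):
        super().__init__()
        # Stand-in for the MT-trained encoder (would carry pretrained weights).
        self.encoder = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                               bidirectional=True, batch_first=True)

    def forward(self, glove):                    # glove: (batch, seq, embed_dim)
        cove, _ = self.encoder(glove)            # (batch, seq, 2 * hidden_dim)
        return torch.cat([glove, cove], dim=-1)  # [GloVe; CoVe] representation

# Usage: downstream models consume [GloVe; CoVe] instead of GloVe alone.
x = torch.randn(2, 7, 300)                       # fake GloVe embeddings
print(CoVeLike()(x).shape)                       # torch.Size([2, 7, 900])
```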

684 citations

Proceedings Article
19 Jun 2016
TL;DR: The new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision.
Abstract: Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision.
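A short sketch of the episodic memory update described here is below, in PyTorch; the attention over facts is reduced to a single-layer softmax scorer and the sizes are illustrative assumptions, not the paper's full formulation.

```python
# Sketch of a DMN-style episodic memory pass: attention over input facts,
# conditioned on the question and the previous memory, produces a context
# vector that updates the memory with a ReLU layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EpisodicMemory(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(4 * dim, 1)       # scores each fact vs. memory/question
        self.update = nn.Linear(3 * dim, dim)   # m_t = ReLU(W [m_{t-1}; c_t; q])

    def forward(self, facts, question, memory):
        # facts: (batch, n_facts, dim); question, memory: (batch, dim)
        q = question.unsqueeze(1).expand_as(facts)
        m = memory.unsqueeze(1).expand_as(facts)
        z = torch.cat([facts * q, facts * m,
                       (facts - q).abs(), (facts - m).abs()], dim=-1)
        gates = F.softmax(self.attn(z).squeeze(-1), dim=1)         # (batch, n_facts)
        context = torch.bmm(gates.unsqueeze(1), facts).squeeze(1)  # (batch, dim)
        return torch.relu(self.update(torch.cat([memory, context, question], -1)))
```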

658 citations

Posted Content
TL;DR: The authors used a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors and showed that adding these context vectors (CoVe) improved performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks.
Abstract: Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.

570 citations


Authors

Showing the top 15 of 2418 authors

Name | H-index | Papers | Citations
Philip S. Yu | 148 | 1914 | 107374
Michael R. Lyu | 89 | 696 | 33257
Silvio Savarese | 89 | 386 | 35975
Jiashi Feng | 77 | 426 | 21521
Richard Socher | 77 | 274 | 97703
Haibin Ling | 72 | 383 | 20858
Dragomir R. Radev | 69 | 288 | 20131
Irwin King | 67 | 476 | 19056
Steven C. H. Hoi | 66 | 375 | 15935
Xiaodan Liang | 61 | 318 | 14121
Caiming Xiong | 60 | 336 | 18037
Min-Yen Kan | 52 | 253 | 10207
Justin Yifu Lin | 48 | 302 | 13491
Hannaneh Hajishirzi | 42 | 181 | 7802
Larry S. Davis | 40 | 105 | 6960
Network Information
Related Institutions (5)
Facebook
10.9K papers, 570.1K citations

90% related

Google
39.8K papers, 2.1M citations

90% related

Microsoft
86.9K papers, 4.1M citations

89% related

Adobe Systems
8K papers, 214.7K citations

87% related

Carnegie Mellon University
104.3K papers, 5.9M citations

84% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2022 | 1
2021 | 222
2020 | 433
2019 | 323
2018 | 288
2017 | 161