
Showing papers by "John Platt" published in 2015


Proceedings Article
07 Jun 2015
TL;DR: This paper uses multiple instance learning to train visual detectors for words that commonly occur in captions, spanning many parts of speech (nouns, verbs, and adjectives); the detector outputs serve as conditional inputs to a maximum-entropy language model.
Abstract: This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.

1,357 citations
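
The multiple instance learning step described above can be made concrete with a small sketch. One standard MIL formulation, consistent with this line of work, pools per-region detector scores with a noisy-OR, so an image scores positive for a word if any region fires; the gradient of the bag-level cross-entropy through the noisy-OR has a simple closed form. Everything below is a synthetic stand-in: the features are random vectors rather than CNN region activations, and `true_w` is a planted detector used only to generate bag labels.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

rng = np.random.default_rng(0)
dim, n_regions, lr = 16, 10, 0.1
true_w = rng.normal(size=dim)   # planted detector, used only to label bags
W, b = np.zeros(dim), 0.0       # the per-word detector being learned

for step in range(2000):
    X = rng.normal(size=(n_regions, dim))  # stand-in for CNN region features
    y = float((X @ true_w > 6.0).any())    # bag label: is the word in a caption?
    p = sigmoid(X @ W + b)                 # per-region word probabilities
    p_bag = 1.0 - np.prod(1.0 - p)         # noisy-OR: fires if any region fires
    # d(cross-entropy)/d(region logit j) simplifies to (p_bag - y) * p_j / p_bag
    g = (p_bag - y) * p / max(p_bag, 1e-8)
    W -= lr * (X.T @ g)
    b -= lr * g.sum()

print("alignment with planted detector:",
      W @ true_w / (np.linalg.norm(W) * np.linalg.norm(true_w) + 1e-12))
```

The noisy-OR is what lets the detector train from image-level caption words alone: no region-level annotation is needed, since credit flows to whichever regions the current model scores highest.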


Posted Content
TL;DR: This paper proposes a minimax conditional entropy principle to infer ground truth from noisy crowdsourced labels, and derives a unique probabilistic labeling model jointly parameterized by worker ability and item difficulty.
Abstract: There is rapidly increasing interest in crowdsourcing for data labeling. Through crowdsourcing, a large number of labels can often be gathered quickly and at low cost. However, the labels provided by crowdsourcing workers are usually not of high quality. In this paper, we propose a minimax conditional entropy principle to infer ground truth from noisy crowdsourced labels. Under this principle, we derive a unique probabilistic labeling model jointly parameterized by worker ability and item difficulty. We also propose an objective measurement principle, and show that our method is the only one that satisfies it. We validate our method on a variety of real crowdsourcing datasets with binary, multiclass, or ordinal labels.

78 citations
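
The labeling model derived above has a concrete form: the probability that worker j reports label k on an item with true label c is proportional to exp(sigma_j(c, k) + tau_i(c, k)), where sigma encodes worker ability and tau item difficulty. The sketch below shows only the inference half, the posterior over an item's true label under fixed parameters and a uniform prior; in the actual method sigma and tau are fit by the minimax conditional entropy objective, and the values here are random stand-ins.

```python
import numpy as np

def label_prob(sigma_j, tau_i):
    # P(worker reports k | true label c) ∝ exp(sigma_j[c, k] + tau_i[c, k]),
    # normalized over the observed label k for each true label c.
    logits = sigma_j + tau_i                         # shape (C, K)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def true_label_posterior(sigma, tau_i, labels):
    # Posterior over the item's true label given its crowd labels,
    # with fixed parameters and a uniform prior (assumed for brevity).
    C = tau_i.shape[0]
    log_post = np.zeros(C)
    for j, k in labels:                              # worker j reported class k
        log_post += np.log(label_prob(sigma[j], tau_i)[:, k] + 1e-12)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

rng = np.random.default_rng(1)
C, K, n_workers = 3, 3, 5
sigma = rng.normal(scale=0.5, size=(n_workers, C, K))  # worker-ability scores
tau_i = rng.normal(scale=0.2, size=(C, K))             # item-difficulty scores
labels = [(0, 2), (1, 2), (2, 1)]                      # (worker, observed label)
print(true_label_posterior(sigma, tau_i, labels))
```

Two workers agreeing on label 2 pulls the posterior toward class 2, but by how much depends on those workers' ability parameters, which is the point of modeling ability and difficulty jointly rather than taking a majority vote.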


Patent
28 Aug 2015
TL;DR: In this paper, a deep multimodal similarity model is proposed that determines the relevance of sentences to an image by comparing text vectors generated for the sentences with an image vector generated for the image.
Abstract: Disclosed herein are technologies directed to discovering semantic similarities between images and text, which can include performing image search using a textual query, performing text search using an image as a query, and/or generating captions for images using a caption generator. A semantic similarity framework can include a caption generator and can be based on a deep multimodal similarity model. The deep multimodal similarity model can receive sentences and determine their relevance based on the similarity of text vectors generated for one or more sentences to an image vector generated for an image. The text vectors and the image vector can be mapped in a semantic space, and their relevance can be determined based at least in part on the mapping. The sentence associated with the text vector determined to be the most relevant can be output as a caption for the image.

51 citations
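
At scoring time, the relevance computation the patent describes reduces to comparing vectors in the shared semantic space; in the related deep multimodal similarity model work, the comparison is cosine similarity. A minimal sketch follows, with random vectors standing in for the outputs of the deep text and image models:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def best_caption(image_vec, sentence_vecs, sentences):
    # Rank candidate sentences by cosine similarity to the image vector
    # in the shared semantic space and return the most relevant one.
    scores = [cosine(image_vec, s) for s in sentence_vecs]
    return sentences[int(np.argmax(scores))], scores

rng = np.random.default_rng(2)
dim = 8
image_vec = rng.normal(size=dim)  # stand-in for the image model's output
sentences = ["a dog on grass", "a city street at night"]
sentence_vecs = [rng.normal(size=dim) for _ in sentences]  # stand-in text embeddings
caption, scores = best_caption(image_vec, sentence_vecs, sentences)
print(caption, scores)
```

Because both modalities land in one vector space, the same scoring function serves all three uses named in the abstract: text-to-image search, image-to-text search, and caption selection.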