
Generating Image Descriptions Using Semantic Similarities in the Output Space

TL;DR: This work extends the nearest-neighbour based generative phrase prediction model by considering inter-phrase semantic similarities, and re-formulates their objective function for parameter learning by penalizing each pair of phrases unevenly, in a manner similar to that in structured predictions.
Abstract: Automatically generating meaningful descriptions for images has recently emerged as an important area of research. In this direction, a nearest-neighbour based generative phrase prediction model (PPM) proposed by (Gupta et al. 2012) was shown to achieve state-of-the-art results on PASCAL sentence dataset, thanks to the simultaneous use of three different sources of information (i.e. visual clues, corpus statistics and available descriptions). However, they do not utilize semantic similarities among the phrases that might be helpful in relating semantically similar phrases during phrase relevance prediction. In this paper, we extend their model by considering inter-phrase semantic similarities. To compute similarity between two phrases, we consider similarities among their constituent words determined using WordNet. We also re-formulate their objective function for parameter learning by penalizing each pair of phrases unevenly, in a manner similar to that in structured predictions. Various automatic and human evaluations are performed to demonstrate the advantage of our "semantic phrase prediction model" (SPPM) over PPM.

Summary (3 min read)

1. Introduction

  • Along with the outburst of digital photographs on the Internet as well as in personal collections, there has been a parallel growth in the amount of images with relevant and more or less structured captions.
  • Thus, it would not be justifiable to treat the phrases “child” and “building” as equally absent.
  • First, the authors modify their model for predicting a phrase given an image.
  • This is a generic formulation and can be used/extended to other scenarios (such as metric learning in nearest-neighbour based methods [23]) where structured prediction needs to be performed using some nearest-neighbour based model.
  • Since their model relies on consideration of semantics among phrases during prediction, the authors call it “semantic phrase prediction model” (or SPPM).

3. Phrase Prediction Model

  • Given images and corresponding descriptions, a set of phrases Y is extracted using all the descriptions.
  • These phrases are restricted to five different types (considering “subject” and “object” as equivalent for practical purposes): (object), (attribute, object), (object, verb), (verb, prep, object), and (object, prep, object).
  • The motivation behind using Google counts of phrases is to smooth their relative frequencies.
  • In order to learn the two sets of parameters (i.e., the weights wi’s and smoothing parameters μi’s), an objective function analogous to [23] is used.

4. Semantic Phrase Prediction Model

  • This results in penalizing semantically similar phrases (e.g. “person” vs. “man”).
  • Here the authors extend this model by considering semantic similarities among phrases.
  • To begin with, the authors discuss how to compute semantic similarities.

4.1. Computing Semantic Similarities

  • The authors use the WordNet based JCN similarity measure [7] to compute the semantic similarity between the words a1 and a2.
  • WordNet is a large lexical database of English where words are interlinked in a hierarchy based on their semantic and lexical relationships.
  • It should be noted that the authors cannot compute semantic similarity between two prepositions using WordNet.

4.2. SPPM

  • Such a definition allows us to take into account the structure/semantic inter-dependence among phrases while predicting the relevance of a phrase.
  • Since the authors have modified the conditional probability model for predicting a phrase given an image, they also need to update the objective function of equation 5 accordingly.
  • The implication of Δ(·) in equation 11 is that if two phrases are semantically similar (e.g. “kid” and “child”), then the penalty should be small, and vice-versa.
  • This objective function looks similar to that used in [22] for metric learning in nearest neighbour scenario.
  • The major difference is that there the objective function is defined over samples, and the penalty is based on the semantic similarity between two samples (proportional to the number of labels they share).

5.1. Experimental Details

  • The authors follow the same experimental set-up as in [6], and use UIUC PASCAL sentence dataset [19] for evaluation.
  • It has 1,000 images and each image is described using 5 independent sentences.
  • These sentences are used to extract different types of phrases using the “collapsed-ccprocessed-dependencies” of the Stanford CoreNLP toolkit [1], giving 12,865 distinct phrases.
  • All features other than GIST are also computed over three equal horizontal and vertical partitions [10].
  • While computing distance between two images (equation 1), L1 distance is used for colour, L2 for scene and texture, and χ2 for shape features.

5.2.2 Human Evaluation

  • Automatically describing an image is significantly different from machine translation or summary generation.
  • Since an image can be described in several ways, it is not justifiable to rely just on automatic evaluation, and hence the need for human evaluation arises.
  • Readability measures the grammatical correctness of a generated description with the following ratings: (1) Terrible, (2) Mostly comprehensible with some errors, (3) Mostly perfect English sentence.
  • The authors also try to analyze the relative relevance of descriptions generated using PPM and SPPM.

5.3.1 Quantitative Results

  • Table 1 shows the results corresponding to automatic evaluations.
  • One important caveat the authors point out is that it is not fully justifiable to directly compare their results with those of [8] and [24].
  • This is because the data (i.e., the fixed sets of objects, prepositions and verbs) that those methods use for composing new sentences differs considerably from theirs.
  • In [6], it was shown that when same data is used, PPM performs better than both of these.
  • In conclusion, their results are directly comparable only with PPM [6].

5.3.2 Qualitative Results

  • Human evaluation results corresponding to “Readability” and “Relevance” are shown in Table 2.
  • This is because SPPM takes into account semantic similarities among the phrases, which in turn results in generating more coherent descriptions than PPM.
  • For this, the authors show the top ten phrases of the type “object” predicted using the two models for an example image.
  • This is because in SPPM, the relevance (or presence) of a phrase also depends on the presence of other phrases that are semantically similar to it.

6. Conclusion

  • The authors have presented an extension to PPM [6] by incorporating semantic similarities among phrases during phrase prediction and parameter learning steps.
  • As the number of phrases increases, inter-phrase relationships become more prominent.
  • Due to the “long tail” phenomenon, available data alone might not be sufficient to learn such complex relationships, which creates the need to bring in knowledge from other sources.
  • The authors have tried to perform this using WordNet.
  • To the best of their knowledge, this is the first attempt of its kind in this domain, and it can be integrated with other similar models as well.


Generating Image Descriptions Using Semantic Similarities in the Output Space
Yashaswi Verma Ankush Gupta Prashanth Mannem C. V. Jawahar
International Institute of Information Technology, Hyderabad, India
1. Introduction

Along with the outburst of digital photographs on the Internet as well as in personal collections, there has been a parallel growth in the amount of images with relevant and more or less structured captions. This has opened up new dimensions to deploy machine learning techniques to study available descriptions, and build systems to describe new images automatically. Analysis of available image descriptions would help to figure out possible relationships that exist among different entities within a sentence (e.g. object, action, preposition, etc.). However, even for simple images, automatically generating such descriptions may be quite complex, thus suggesting the hardness of the problem.

Recently, there have been a few attempts in this direction [2, 6, 8, 9, 12, 15, 17, 24]. Most of these approaches rely on visual clues (global image features and/or trained detectors and classifiers) and generate descriptions in an independent manner. This makes such methods susceptible to linguistic errors during the generation step. An attempt towards addressing this was made in [6] using a nearest-neighbour based model. This model utilizes image descriptions at hand to learn different language constructs and constraints practiced by humans, and associates this information with visual properties of an image. It extracts linguistic phrases of different types (e.g. “white aeroplane”, “aeroplane at airport”, etc.) from available sentences, and uses them to describe new images. The underlying hypothesis of this model is that an image inherits the phrases that are present in the ground-truth of its visually similar images. This simple but conceptually coherent hypothesis resulted in state-of-the-art results on the PASCAL sentence dataset [19] (http://vision.cs.uiuc.edu/pascal-sentences/).

However, this hypothesis has its limitations as well. One such limitation is the ignorance of semantic relationships among the phrases; i.e., presence of one phrase should trigger presence of other phrases that are semantically similar to it. E.g., consider a set of three phrases {“kid”, “child”, “building”}, an image J and its neighbouring image I. If the image I has the phrase “kid” in its ground-truth, then according to the model of [6], it will get associated with J with some probability, while (almost) ignoring the remaining phrases. However, if we look at these phrases, then it can be easily noticed that the phrases “kid” and “child” are semantically very similar, whereas the phrases “child” and “building” are semantically very different. Thus, it would not be justifiable to treat the phrases “child” and “building” as equally absent. That is to say, presence of “kid” should also indicate the presence of the phrase “child”. From the machine learning perspective, this relates with the notion of predicting structured outputs [21]. Intuitively, it asserts that given a true (or positive) label and a set of false (or negative) labels, each negative label should be penalized unevenly depending on its (dis)similarity with the true label.

In this paper, we try to address this limitation of the phrase prediction model (PPM) of [6]. For this, we propose two extensions to PPM. First, we modify their model for predicting a phrase given an image. This is performed by considering semantic similarities among the phrases. And second, we propose a parameter learning formulation in the nearest-neighbour set-up that takes into account the relation (structure) present in the output space. This is a generic formulation and can be used/extended to other scenarios (such as metric learning in nearest-neighbour based methods [23]) where structured prediction needs to be performed using some nearest-neighbour based model. Both of our extensions utilize semantic similarities among phrases determined using WordNet [3]. Since our model relies on consideration of semantics among phrases during prediction, we call it “semantic phrase prediction model” (or SPPM). We perform several automatic and human evaluations to demonstrate the advantage of SPPM over PPM.
2. Related Works

Here we discuss some of the notable contributions in this domain. In [25], a semi-automatic method is proposed where first an image is parsed and converted into a semantic representation, which is then used by a text parse engine to generate the image description. The visual knowledge is represented using a parse graph which associates objects with WordNet synsets to acquire categorical relationships. Using this, they are able to compose new rule-based grounded symbols (e.g., “zebra” = “horse” + “stripes”). In [8], they use trained detectors and classifiers to predict the objects and attributes present in an image, and simple heuristics to figure out the preposition between any two objects. These predictions are then combined with corpus statistics (frequency of a term in a large text corpus, e.g. Google) and given as an input to a CRF model. The final output is a set of objects, their attributes and a preposition for each pair of objects, which are then mapped to a sentence using a simple template-based approach. Similar to this, [24] relies on detectors and classifiers to predict up to two objects and the overall scene of an image. Along with the preposition, they also predict the action performed by the subject, and combine the predictions using an HMM model. In [12], the outputs of object detectors are combined with frequency counts of different n-grams (n ≤ 5) obtained using the Google-1T data. Their phrase fusion technique specifically infuses some creativity into the output descriptions. Another closely related work with similar motivation is [15].

One of the limitations of most of these methods is that they don’t make use of available descriptions. This may help in avoiding generation of noisy/absurd descriptions (e.g. “person under road”). Two recent methods [6, 9] try to address this issue by making use of higher-level language constructs, called phrases. A phrase is a collection of syntactically ordered words that is semantically meaningful and complete on its own (e.g., “person pose”, “cow in field”, etc.); the term ‘phrase’ is used here in a more general sense, and is different from the linguistic sense of phrase. In [9], phrases are extracted from the dataset proposed in [17]. Then, an integer-programming based formulation is used that fuses visual clues with words and phrases to generate sentences. In [6], a nearest-neighbour based model is proposed that simultaneously integrates three different sources of information, i.e. visual clues, corpus statistics and available descriptions. They use linguistic phrases extracted from available sentences to construct descriptions for new images. These two models are closely related with the notion of visual phrases [20], which says that it is more meaningful to detect visual phrases (e.g. “person next to car”) than individual objects in an image.

Apart from these, there are a few other methods that directly transfer one or more complete sentences from a collection of sentences. E.g., the method proposed in [17] transfers multiple descriptions from some other images to a given image. They discuss two ways to perform this: (i) using global image features to find similar images, and (ii) using detectors to re-rank the descriptions obtained after the first step. Their approach mainly relies on a very large collection of one million captioned images. Similar to [17], in [2] also a complete sentence from the training image descriptions is transferred, by mapping a given (test) image and the available descriptions into a “meaning space” of the form (object, action, scene). This is done using a retrieval based approach combined with an MRF model.
3. Phrase Prediction Model
In this section, we briefly discuss PPM [6]. Given images and corresponding descriptions, a set of phrases $\mathcal{Y}$ is extracted using all the descriptions. These phrases are restricted to five different types (considering “subject” and “object” as equivalent for practical purposes): (object), (attribute, object), (object, verb), (verb, prep, object), and (object, prep, object). The dataset takes the form $\mathcal{T} = \{(I_i, Y_i)\}$, where $I_i$ is an image and $Y_i \subseteq \mathcal{Y}$ is its set of phrases. Each image $I$ is represented using a set of $n$ features $\{f_{1,I}, \ldots, f_{n,I}\}$. Given two images $I$ and $J$, the distance between them is computed using a weighted sum of distances corresponding to each feature as:

$$D_{I,J} = w_1 d^1_{I,J} + \ldots + w_n d^n_{I,J} = \mathbf{w} \cdot \mathbf{d}_{I,J}, \quad (1)$$

where $w_i \geq 0$ denotes the weight corresponding to the $i$-th feature distance. Using this, for a new image $I$, its $K$ most similar images $\mathcal{T}^K_I \subseteq \mathcal{T}$ are picked. Then, the joint probability of associating a phrase $y_i \in \mathcal{Y}$ with $I$ is given by:

$$P(y_i, I) = \sum_{J \in \mathcal{T}^K_I} P_T(J)\, P_F(I|J)\, P_Y(y_i|J). \quad (2)$$

Here, $P_T(J) = 1/K$ denotes the uniform probability of picking some image $J$ from $\mathcal{T}^K_I$. $P_F(I|J)$ denotes the likelihood of image $I$ given $J$, defined as:

$$P_F(I|J) = \frac{\exp(-D_{I,J})}{\sum_{J' \in \mathcal{T}^K_I} \exp(-D_{I,J'})}. \quad (3)$$

Finally, $P_Y(y_i|J)$ denotes the probability of seeing the phrase $y_i$ given image $J$, and is defined according to [4]:

$$P_Y(y_i|J) = \frac{\mu_i\, \delta_{y_i,J} + N_i}{\mu_i + N}. \quad (4)$$

Here, if $y_i \in Y_J$, then $\delta_{y_i,J} = 1$, and 0 otherwise. $N_i$ is the (approximate) Google count of the phrase $y_i$, $N$ denotes the sum of Google counts of all phrases in $\mathcal{Y}$ that are of the same type as that of $y_i$, and $\mu_i \geq 0$ is the smoothing parameter. The motivation behind using Google counts of phrases is to smooth their relative frequencies.
In order to learn the two sets of parameters (i.e., the weights $w_i$'s and smoothing parameters $\mu_i$'s), an objective function analogous to [23] is used. Given an image $J$ along with its true phrases $Y_J$, the goal is to learn the parameters such that (i) the probability of predicting the phrases in $\mathcal{Y} \setminus Y_J$ should be minimized, and (ii) the probability of predicting each phrase in $Y_J$ should be more than any other phrase. Precisely, we minimize the following function:

$$e = \sum_{J, y_k} P(y_k, J) + \lambda \sum_{(J, y_k, y_j) \in M} \big(P(y_k, J) - P(y_j, J)\big). \quad (5)$$

Here, $y_j \in Y_J$, $y_k \in \mathcal{Y} \setminus Y_J$, $M$ is the set of triples that violate the second constraint stated above, and $\lambda > 0$ is used to manage the trade-off between the two terms. The objective function is optimized using a gradient descent method, by learning the $w_i$'s and $\mu_i$'s in an alternate manner.

Using equation 2, a ranked list of phrases is obtained, which are then integrated to produce triples of the form {((attribute1, object1), verb), (verb, prep, (attribute2, object2)), (object1, prep, object2)}. These are then mapped to simple sentences using SimpleNLG [5].
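For completeness, here is a rough sketch of the loss in equation 5, assuming the joint probability P(y, J) above is available as a function; constructing the violation set M and the alternating gradient-descent updates of the $w_i$'s and $\mu_i$'s are omitted.

```python
def ppm_loss(P, images, true_phrases, all_phrases, lam):
    """Equation 5: probability mass on negative phrases, plus lambda times the
    margin violations where a negative phrase y_k outscores a true phrase y_j."""
    loss = 0.0
    for J in images:
        pos = true_phrases[J]                      # Y_J
        neg = [y for y in all_phrases if y not in pos]
        for y_k in neg:
            loss += P(y_k, J)                      # first term of eq. 5
            for y_j in pos:                        # candidate triples (J, y_k, y_j)
                diff = P(y_k, J) - P(y_j, J)
                if diff > 0:                       # triple violates constraint (ii), i.e. is in M
                    loss += lam * diff             # second term of eq. 5
    return loss
```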
4. Semantic Phrase Prediction Model
As discussed before, one of the limitations of PPM is that it treats phrases in a binary manner; i.e., in equation 4, $\delta_{y_i,J}$ is either 1 or 0 depending on the presence or absence of $y_i$ in $Y_J$. This results in penalizing even semantically similar phrases (e.g. “person” vs. “man”). Here we extend this model by considering semantic similarities among phrases. To begin with, we first discuss how to compute semantic similarities.
4.1. Computing Semantic Similarities

Let $a_1$ and $a_2$ be two words (e.g. “boy” and “man”). We use the WordNet based JCN similarity measure [7] to compute the semantic similarity between the words $a_1$ and $a_2$ (using the code available at http://search.cpan.org/CPAN/authors/id/T/TP/TPEDERSE/WordNet-Similarity-2.05.tar.gz). WordNet is a large lexical database of English where words are interlinked in a hierarchy based on their semantic and lexical relationships. Given a pair of words $(a_1, a_2)$, the JCN similarity measure returns a score $s_{a_1 a_2}$ in the range $[0, \infty)$, with a higher score corresponding to larger similarity and vice-versa. This similarity score is then mapped into the range $[0, 1]$ using the following non-linear transformation as described in [11] (denoting $s_{a_1 a_2}$ by $s$ in short):

$$\gamma(s) = \begin{cases} 1 & s \geq 0.1 \\ 0.6 - 0.4\,\sin\!\left(\frac{25\pi}{2}\, s + \frac{3}{4}\pi\right) & s \in (0.06, 0.1) \\ 0.6 - 0.6\,\sin\!\left(\frac{\pi}{2}\left(1 - \frac{1}{3.471\, s + 0.653}\right)\right) & s \leq 0.06 \end{cases}$$

Using this, we define a similarity function that takes two words as input and returns the semantic similarity score between them computed using the above equation as:

$$W_{sim}(a_1, a_2) = \gamma(s_{a_1 a_2}). \quad (6)$$
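A small sketch of the word-level similarity of equation 6 follows. The authors used the Perl WordNet::Similarity package (see above); here NLTK's WordNet interface is used as a stand-in, taking the first noun sense of each word, which is an assumption rather than the paper's exact procedure. The raw JCN score would then be passed through the transform γ above to obtain W_sim.

```python
# Requires: nltk, plus the 'wordnet' and 'wordnet_ic' corpora (via nltk.download).
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')   # information content from the Brown corpus

def jcn_score(word1, word2):
    """Raw JCN similarity in [0, inf) between the first noun senses of two words."""
    s1 = wn.synsets(word1, 'n')[0]
    s2 = wn.synsets(word2, 'n')[0]
    return s1.jcn_similarity(s2, brown_ic)

# e.g. jcn_score("boy", "man") is much larger than jcn_score("boy", "building");
# applying the transform gamma(.) of equation 6 to these scores gives W_sim.
```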
From this, we compute a semantic dissimilarity score as:

$$\overline{W}_{sim}(a_1, a_2) = 1 - W_{sim}(a_1, a_2). \quad (7)$$

Based on equation 6, we define the semantic similarity between two phrases (of the same type) as $V_{sim}$, which is an average of the semantic similarities between each of their corresponding constituent terms. E.g., if we have two phrases $v_1$ = (“person”, “walk”) and $v_2$ = (“boy”, “run”) of the type (object, verb), then their semantic similarity score will be given by $V_{sim}(v_1, v_2) = 0.5 \cdot (W_{sim}(\text{“person”}, \text{“boy”}) + W_{sim}(\text{“walk”}, \text{“run”}))$. It should be noted that we cannot compute semantic similarity between two prepositions using WordNet. So, while computing the semantic similarity between two phrases that contain prepositions (i.e., of type (verb, prep, object) or (object, prep, object)), we do not consider the prepositions. Analogous to equation 7, we can compute the semantic dissimilarity score between two phrases as $\overline{V}_{sim}(v_1, v_2) = 1 - V_{sim}(v_1, v_2)$. Finally, given a phrase $y_i$ and a set of phrases $Y$ of the same type as that of $y_i$, we define the semantic similarity between them as

$$U_{sim}(y_i, Y) = \max_{y_j \in Y} V_{sim}(y_i, y_j). \quad (8)$$

In practice, if $|Y| = 0$ then we set $U_{sim}(y_i, Y) = 0$. Also, in order to emphasize more on an exact match, we set $U_{sim}(y_i, Y)$ to $\exp(1)$ if $y_i \in Y$ in the above equation.
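A sketch of the phrase-level scores V_sim and U_sim (equation 8), assuming a word-level similarity function w_sim (equation 6) is given and that phrases are tuples of words aligned by slot; the preposition list is illustrative, not the paper's.

```python
import math

PREPOSITIONS = {"in", "on", "at", "under", "over", "with", "near", "by"}   # illustrative

def V_sim(v1, v2, w_sim):
    """Average word-level similarity between two same-type phrases, skipping prepositions."""
    pairs = [(a, b) for a, b in zip(v1, v2) if a not in PREPOSITIONS]
    if not pairs:
        return 0.0
    return sum(w_sim(a, b) for a, b in pairs) / len(pairs)

def U_sim(y_i, Y_t, w_sim):
    """Equation 8: best match of y_i against the same-type ground-truth phrases Y_t.
    An exact match is boosted to exp(1); an empty set scores 0."""
    if not Y_t:
        return 0.0
    if y_i in Y_t:
        return math.exp(1)
    return max(V_sim(y_i, y_j, w_sim) for y_j in Y_t)

# e.g. V_sim(("person", "walk"), ("boy", "run"), w_sim)
#      == 0.5 * (w_sim("person", "boy") + w_sim("walk", "run"))
```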
4.2. SPPM

In order to benefit from the semantic similarity between two phrases while predicting the relevance of some given phrase $y_i$ with respect to $Y_J$ of image $J$, we need to modify equation 4 accordingly. Let $y_i$ be of type $t$, and let the set of phrases of type $t$ in $Y_J$ be $Y^t_J \subseteq Y_J$. Then, we re-define $P_Y(y_i|J)$ as:

$$P_Y(y_i|J) = \frac{\mu_i\, \delta_{y_i,J} + N_i}{\mu_i + N}, \quad (9)$$

where $\delta_{y_i,J} = U_{sim}(y_i, Y^t_J)$. This means that when $y_i \notin Y^t_J$, we look for the phrase in $Y^t_J$ that is semantically most similar to $y_i$ and use their similarity score, rather than putting a zero. Such a definition allows us to take into account the structure/semantic inter-dependence among phrases while predicting the relevance of a phrase.

Figure 1. Difference between the two models (PPM [6] vs. SPPM, this work). In PPM, the conditional probability of a phrase $y_i$ given an image $J$ depends on whether that phrase is present in the ground-truth phrases of $J$ (i.e., $Y_J$) or not. When the phrase is not present, the corresponding $\delta_{y_i,J}$ (equation 4) becomes zero without considering the semantic similarity of $y_i$ with other phrases in $Y_J$. This limitation of PPM is addressed in SPPM by finding the phrase in $Y_J$ that is semantically most similar to $y_i$ and using their similarity score instead of zero. In the example shown, $Y_J$ = {“bus”, “road”, “street”}. Given a phrase $y_i$ = “highway”, $\delta_{y_i,J} = 0$ according to PPM, whereas $\delta_{y_i,J} = 0.8582$ according to SPPM (equation 9), by considering the similarity of “highway” with “road” (i.e., $V_{sim}$(“highway”, “road”) = 0.8582).
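The only change relative to equation 4 is the indicator; a minimal sketch, assuming U_sim from equation 8 is available:

```python
def sppm_phrase_given_image(y_i, Y_t_J, mu_i, N_i, N, u_sim):
    """Equation 9: same form as equation 4, but the 0/1 indicator is replaced by
    the best semantic match of y_i against the same-type ground-truth phrases."""
    delta = u_sim(y_i, Y_t_J)      # e.g. ~0.8582 for "highway" vs {"bus", "road", "street"}
    return (mu_i * delta + N_i) / (mu_i + N)
```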
Since we have modified the conditional probability model for predicting a phrase given an image, we also need to update the objective function of equation 5 accordingly. Given an image $J$ along with its true phrases $y_j$'s in $Y_J$, we now additionally need to ensure that the penalty imposed for a higher relevance score of some phrase $y_k \in \mathcal{Y} \setminus Y_J$ than any phrase $y_j \in Y_J$ should also depend on the semantic similarity between $y_j$ and $y_k$. This is similar to the notion of predicting structured outputs as discussed in [21]. Precisely, we re-define the objective function as:

$$e = \sum_{J, y_k} P(y_k, J) + \lambda \sum_{(J, y_k, y_j) \in M} \Delta(J, y_k, y_j), \quad (10)$$

$$\Delta(J, y_k, y_j) = \overline{V}_{sim}(y_k, y_j)\,\big(P(y_k, J) - P(y_j, J)\big). \quad (11)$$

The implication of $\Delta(\cdot)$ is that if two phrases are semantically similar (e.g. “kid” and “child”), then the penalty should be small, and vice-versa. This objective function looks similar to that used in [22] for metric learning in the nearest-neighbour scenario. The major difference is that there the objective function is defined over samples, and the penalty is based on the semantic similarity between two samples (proportional to the number of labels they share); whereas here the objective function is defined over phrases, and the penalty is based on the semantic similarity between two phrases.
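A sketch of the dissimilarity-weighted penalty of equation 11, following the reconstruction above in which semantically similar pairs incur a small penalty; P and V_sim are assumed given.

```python
def sppm_penalty(P, J, y_k, y_j, v_sim):
    """Equation 11: weight the margin violation of a triple (J, y_k, y_j) by how
    semantically different the negative phrase y_k is from the true phrase y_j."""
    dissim = 1.0 - v_sim(y_k, y_j)   # small for "kid" vs "child", close to 1 for "kid" vs "building"
    return dissim * (P(y_k, J) - P(y_j, J))
```

Substituting this weighted term for the raw difference in the equation-5 sketch gives the SPPM objective of equation 10.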
5. Experiments

5.1. Experimental Details

We follow the same experimental set-up as in [6], and use the UIUC PASCAL sentence dataset [19] for evaluation. It has 1,000 images and each image is described using 5 independent sentences. These sentences are used to extract different types of phrases using the “collapsed-ccprocessed-dependencies” of the Stanford CoreNLP toolkit [1] (http://nlp.stanford.edu/software/corenlp.shtml), giving 12,865 distinct phrases. In order to consider synonyms, WordNet synsets are used to expand each noun up to 3 hyponym levels, resulting in a reduced set of 10,429 phrases. Similar to [6], we partition the dataset into 90% training and 10% testing for learning the parameters, and repeat this over 10 partitions in order to generate descriptions for all the images. During relevance prediction, we consider K = 15 nearest-neighbours from the training data.

For image representation, we use a set of colour (RGB and HSV), texture (Gabor and Haar), scene (GIST [16]) and shape (SIFT [14]) descriptors computed globally. All features other than GIST are also computed over three equal horizontal and vertical partitions [10]. This gives a set of 16 features per image. While computing the distance between two images (equation 1), the L1 distance is used for colour, L2 for scene and texture, and $\chi^2$ for shape features.
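The three base metrics just mentioned are standard; a minimal sketch (the feature extraction itself is outside the scope of this snippet):

```python
import numpy as np

def l1(x, y):
    return float(np.abs(x - y).sum())            # used for colour histograms

def l2(x, y):
    return float(np.linalg.norm(x - y))          # used for scene (GIST) and texture features

def chi2(x, y, eps=1e-10):
    """Chi-squared distance, commonly used for histogram features such as SIFT bags-of-words."""
    return float(0.5 * np.sum((x - y) ** 2 / (x + y + eps)))

# The 16 per-feature distances computed this way are combined with the learned
# weights via equation 1 to give D_{I,J}.
```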
5.2. Evaluation Measures

In our experiments, we perform both automatic as well as human evaluations for performance analysis.

5.2.1 Automatic Evaluation

For this we use the BLEU [18] and Rouge [13] metrics. These are frequently used for evaluations in the areas of machine translation and automatic summarization, respectively.
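As an illustration of the automatic metrics, a BLEU-1 computation with NLTK is sketched below (the exact evaluation scripts used by the authors are not specified, so treat this as an assumption); BLEU-1 restricts the score to unigram precision against the reference sentences.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# In the PASCAL setting each image has 5 reference sentences; two are shown here.
references = [
    "a white aeroplane is parked at the airport".split(),
    "a plane sits on the runway".split(),
]
candidate = "a white plane is parked at the airport".split()

# weights=(1, 0, 0, 0) keeps only unigram precision, i.e. BLEU-1
bleu1 = sentence_bleu(references, candidate, weights=(1, 0, 0, 0),
                      smoothing_function=SmoothingFunction().method1)
print(round(bleu1, 2))
```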
5.2.2 Human Evaluation

Automatically describing an image is significantly different from machine translation or summary generation. Since an image can be described in several ways, it is not justifiable to rely just on automatic evaluation, and hence the need for human evaluation arises. We gather judgements from two human evaluators on 100 images randomly picked from the dataset and take their average. The evaluators are asked to verify three aspects on a Likert scale of {1, 2, 3} [6, 12]:

Readability: To measure the grammatical correctness of a generated description by giving the following ratings: (1) Terrible, (2) Mostly comprehensible with some errors, (3) Mostly perfect English sentence.

Relevance: To measure the semantic relevance of the generated sentence by giving the following ratings: (1) Totally off, (2) Reasonably relevant, (3) Very relevant.

Relative Relevance: We also try to analyze the relative relevance of descriptions generated using PPM and SPPM. Corresponding to each image, we present the descriptions generated using these two models to the human evaluators (without telling them that they are generated using two different models) and collect judgements based on the following ratings: (1) Description generated by PPM is more relevant, (2) Description generated by SPPM is more relevant, (3) Both descriptions are equally relevant/irrelevant.

Table 1. Automatic evaluation results for sentence generation. (Higher score means better performance.)

Approach              BLEU-1 Score    Rouge-1 Score
BabyTalk [8]          0.30            -
CorpusGuided [24]     -               0.44
PPM [6] w/ syn.       0.41            0.28
PPM [6] w/o syn.      0.36            0.21
SPPM w/ syn.          0.43            0.29
SPPM w/o syn.         0.36            0.20

Table 2. Human evaluation results for “Readability” and “Relevance”. (Higher score means better performance.)

Approach              Readability     Relevance
PPM [6] w/ syn.       2.84            1.49
PPM [6] w/o syn.      2.75            1.32
SPPM w/ syn.          2.93            1.61
SPPM w/o syn.         2.91            1.39
5.3. Results and Discussion

5.3.1 Quantitative Results

Table 1 shows the results corresponding to the automatic evaluations. It can be noticed that SPPM shows comparable or superior performance to PPM. One important point is that it is not fully justifiable to directly compare our results with those of [8] and [24], because the data (i.e., the fixed sets of objects, prepositions and verbs) that they use for composing new sentences differs considerably from ours. However, in [6] it was shown that when the same data is used, PPM performs better than both of these. Since the data that we use in our experiments is exactly the same as that of PPM, and SPPM performs comparably to or better than PPM, we believe that under the same experimental set-up our model would perform better than both [8] and [24]. Also, we do not compare with other works because, this being an emerging domain, different works have used different evaluation measures (such as [2]), experimental set-ups (such as [15]), or even datasets (such as [9, 17]). In conclusion, our results are directly comparable only with PPM [6].

5.3.2 Qualitative Results

Human evaluation results corresponding to “Readability” and “Relevance” are shown in Table 2. Here, we can notice that SPPM consistently performs better than PPM on all the evaluation metrics. This is because SPPM takes into account semantic similarities among the phrases, which in turn results in generating more coherent descriptions than PPM. This is also highlighted in Figure 2, which shows example descriptions generated using PPM and SPPM. It can be noticed that the words in descriptions generated using SPPM usually show semantic connectedness, which is not always the case with PPM. E.g., compare the descriptions obtained using PPM (in the second row) with those obtained using SPPM (in the fourth row) for the last three images.

In Table 3, results corresponding to “Relative Relevance” are shown. In this case also, SPPM always performs better than PPM. This means that the descriptions generated using SPPM are semantically more relevant than those generated using PPM.

Table 3. Human evaluation results for “Relative Relevance”. The last column denotes the number of times descriptions generated using the two methods were judged as equally relevant or irrelevant for a given image. (Larger count means better performance.)

               PPM [6] count    SPPM count    Both/None count
w/ syn.        16               28            56
w/o syn.       21               25            54

In Figure 3, we try to get some insight into how the internal functioning of SPPM differs from that of PPM. For this, we show the top ten phrases of the type “object” predicted using the two models for an example image.

Figure 3. Example image from the PASCAL sentence dataset along with the top ten “objects” predicted using the two models.
PPM: (1) flap (2) csa (3) symbol (4) aircraft (5) slope (6) crag (7) villa (8) biplane (9) distance (10) sky
SPPM: (1) aeroplane (2) airplane (3) plane (4) sky (5) boat (6) water (7) air (8) aircraft (9) jet (10) gear

Citations
Proceedings ArticleDOI
13 Oct 2015
TL;DR: This work presents a probabilistic approach that seamlessly integrates visual and textual information for the image retrieval task, and relies on linguistically and syntactically motivated mid-level textual patterns (or phrases) that are automatically extracted from available descriptions.
Abstract: We address the problem of image retrieval using textual queries. In particular, we focus on descriptive queries that can be either in the form of simple captions (e.g., ``a brown cat sleeping on a sofa''), or even long descriptions with multiple sentences. We present a probabilistic approach that seamlessly integrates visual and textual information for the task. It relies on linguistically and syntactically motivated mid-level textual patterns (or phrases) that are automatically extracted from available descriptions. At the time of retrieval, the given query is decomposed into such phrases, and images are ranked based on their joint relevance with these phrases. Experiments on two popular datasets (UIUC Pascal Sentence and IAPR-TC12 benchmark) demonstrate that our approach effectively retrieves semantically meaningful images, and outperforms baseline methods.

Cites background or methods from "Generating Image Descriptions Using..."

  • ...Conceptually, our work closely relates with the image description generation methods [6, 17], and demonstrates their application to the image retrieval task given descriptive textual queries....


  • ...This is computed using the procedure described in [17], which is based on WordNet based similarity between the individual terms of the two phrases....


  • ...Among these, there are two popular practices: either to generate a description given an image [6, 16, 17], or to retrieve one from a collection of available descriptions [11, 14, 18]....


Dissertation
07 Oct 2017
TL;DR: This thesis shows how a pose evaluator can be used to filter incorrect pose estimates, to fuse outputs from different HPE algorithms, and to improve a pose search application, and introduces deep poselets for pose-sensitive detection of various body parts that are built on convolutional neural network (CNN) features.
Abstract: With overwhelming amount of visual data on the internet, it is beyond doubt that a search capability for this data is needed. While several commercial systems have been built to retrieve images and videos using meta-data, many of the images and videos do not have a detailed descriptions. This problem has been addressed by content based image retrieval (CBIR) systems which retrieve the images by their visual content. In this thesis, we will demonstrate that images and videos can be retrieved using the pose of the humans present in them. Here pose is the 2D/3D spatial arrangement of anatomical body parts like arms and legs. Pose is an important information which conveys action, gesture and the mood of the person. Retrieving humans using pose has commercial implications in domains such as dance (query being a dance pose) and sports (query being a shot). In this thesis, we propose three pose representations that can be used for retrieval. Using one of these representations, we will build a real-time pose retrieval system over million movie frames. Our first pose representation is based on the output of human pose estimation algorithms (HPE) [5, 26, 80, 103] which estimate the pose of the person given an image. Unfortunately, these algorithms are not entirely reliable and often make mistakes. We solve this problem by proposing an evaluator that predicts if a HPE algorithm has succeeded. We describe the stages required to learn and test an evaluator, including the use of an annotated ground truth dataset for training and testing the evaluator, and the development of auxiliary features that have not been used by the (HPE) algorithm, but can be learnt by the evaluator to predict if the output is correct or not. To demonstrate our ideas, we build an evaluator for each of four recently developed HPE algorithms using their publicly available implementations: Andriluka et al. [5], Eichner and Ferrari [26], Sapp et al. [80], and Yang and Ramanan [103]. We demonstrate that in each case our evaluator is able to predict whether the algorithm has correctly estimated the pose or not. In this context we also provide a new dataset of annotated stickmen. Further, we propose innovative ways in which a pose evaluator can be used. Specifically, we show how a pose evaluator can be used to filter incorrect pose estimates, to fuse outputs from different HPE algorithms, and to improve a pose search application. Our second pose representation is inspired by poselets [13] which are body detectors. First, we introduce deep poselets for pose-sensitive detection of various body parts, that are built on convolutional neural network (CNN) features. These deep poselets significantly outperform previous instantiations of Berkeley poselets [13]. Second, using these detector responses, we construct a pose representation that is suitable for pose search, and show that pose retrieval performance is on par with the state of the art

Cites background from "Generating Image Descriptions Using..."

  • ...Some works even generate phrases [47] and sentences [97]....


Book ChapterDOI
26 Dec 2020
TL;DR: An attention based model that automatically performs the image caption generation using hard stochastic attention and soft deterministic attention is proposed and validated on the standard MSCOCO dataset.
Abstract: With the recent advancements in object detection and machine translation techniques, we proposed an attention based model that automatically performs the image caption generation. We have used two kinds of attention mechanisms named hard stochastic attention and soft deterministic attention. We can train the model using backpropagation techniques and our model concentrates on the important objects present in the image. By considering these salient objects it can generate the corresponding captions word by word as the output sequence of LSTM. We validated our model on the standard MSCOCO dataset.
References
Journal ArticleDOI
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.

46,906 citations

Proceedings ArticleDOI
06 Jul 2002
TL;DR: This paper proposed a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run.
Abstract: Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.

21,126 citations

01 Jan 2011
TL;DR: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images that can then be used to reliably match objects in differing images.
Abstract: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance, resulting in the classic paper [13] that served as the foundation for SIFT, which has played an important role in robotic and machine vision in the past decade.

14,708 citations


"Generating Image Descriptions Using..." refers methods in this paper

  • ...For image representation, we use a set of colour (RGB and HSV), texture (Gabor and Haar), scene (GIST [16]) and shape (SIFT [14]) descriptors computed globally....


Journal ArticleDOI
01 Sep 2000-Language
TL;DR: The lexical database: nouns in WordNet, Katherine J. Miller a semantic network of English verbs, and applications of WordNet: building semantic concordances are presented.
Abstract: Part 1 The lexical database: nouns in WordNet, George A. Miller modifiers in WordNet, Katherine J. Miller a semantic network of English verbs, Christiane Fellbaum design and implementation of the WordNet lexical database and searching software, Randee I. Tengi. Part 2: automated discovery of WordNet relations, Marti A. Hearst representing verb alterations in WordNet, Karen T. Kohl et al the formalization of WordNet by methods of relational concept analysis, Uta E. Priss. Part 3 Applications of WordNet: building semantic concordances, Shari Landes et al performance and confidence in a semantic annotation task, Christiane Fellbaum et al WordNet and class-based probabilities, Philip Resnik combining local context and WordNet similarity for word sense identification, Claudia Leacock and Martin Chodorow using WordNet for text retrieval, Ellen M. Voorhees lexical chains as representations of context for the detection and correction of malapropisms, Graeme Hirst and David St-Onge temporal indexing through lexical chaining, Reem Al-Halimi and Rick Kazman COLOR-X - using knowledge from WordNet for conceptual modelling, J.F.M. Burg and R.P. van de Riet knowledge processing on an extended WordNet, Sanda M. Harabagiu and Dan I Moldovan appendix - obtaining and using WordNet.

13,049 citations


"Generating Image Descriptions Using..." refers methods in this paper

  • ...We use the WordNet based JCN similarity measure [7] to compute the semantic similarity between the words a1 and a2....


  • ...It should be noted that we cannot compute semantic similarity between two prepositions using WordNet....


  • ...Given a pair of words (a1, a2), the JCN similarity measure returns a score s_a1a2 in the range [0, inf), with a higher score corresponding to larger similarity and vice-versa (using the code available at http://search.cpan.org/CPAN/authors/id/T/TP/TPEDERSE/WordNet-Similarity-2.05.tar.gz)....


  • ...In this work, we have tried to perform this using WordNet....


  • ...Both of our extensions utilize semantic similarities among phrases determined using WordNet [3]....


Proceedings ArticleDOI
17 Jun 2006
TL;DR: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence that exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories.
Abstract: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting "spatial pyramid" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s "gist" and Lowe’s SIFT descriptors.

8,736 citations


Additional excerpts

  • ...All features other than GIST are also computed over three equal horizontal and vertical partitions [10]....


Frequently Asked Questions (14)
Q1. What have the authors contributed in "Generating image descriptions using semantic similarities in the output space" ?

In this paper, the authors extend their model by considering inter-phrase semantic similarities. To compute similarity between two phrases, the authors consider similarities among their constituent words determined using WordNet. 
