Movie Description
Summary (10 min read)
1 Introduction
- Audio descriptions (ADs) make movies accessible to millions of blind or visually impaired people.
- The combination of large datasets and convolutional neural networks (CNNs) has been particularly potent (Krizhevsky et al. 2012).
- AD narrations are carefully positioned within movies to fit in the natural pauses in the dialogue and are mixed with the original movie soundtrack by professional post-production.
- As a first study on their dataset the authors benchmark several approaches for movie description.
- Their best-performing approach first builds robust visual classifiers that distinguish verbs, objects, and places, extracted from weak sentence annotations.
2.1 Image Description
- Much of the recent work has relied on Recurrent Neural Networks (RNNs) and in particular on long short-term memory (LSTM) networks.
- New datasets have been released, such as the Flickr30k (Young et al. 2014) and MS COCO Captions (Chen et al. 2015), where Chen et al. (2015) also presents a standardized protocol for image captioning evaluation.
- Other work has analyzed the performance of recent methods, e.g. Devlin et al. (2015) compare them with respect to the novelty of generated descriptions, while also exploring a nearest neighbor baseline that improves over recent methods.
2.2 Video Description
- In the past, video description has been addressed in controlled settings (Barbu et al. 2012; Kojima et al. 2002) and on a small scale (Das et al. 2013).
- Donahue et al. (2015) first proposed to describe videos using an LSTM, relying on precomputed CRF scores from Rohrbach et al. (2014).
- To handle the challenging scenario of movie description, Yao et al. (2015) propose a soft-attention based model which selects the most relevant temporal segments in a video, incorporates 3-D CNN and generates a sentence using an LSTM.
- Venugopalan et al. (2016) explore the benefit of pre-trained word embeddings and language models for generation on large external text corpora.
- Specifically they use dense trajectory features (Wang et al. 2013) extracted for the clips and CNN features extracted at center frames of the clip.
2.3 Movie Scripts and Audio Descriptions
- Movie scripts have been used for automatic discovery and annotation of scenes and human actions in videos (Duchenne et al. 2009). Bojanowski et al. (2013) approach the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts.
- They rely on the semantic parser SEMAFOR (Das et al. 2012) trained on the FrameNet database (Baker et al. 1998), however, they limit the recognition only to two frames.
- ADs have also been used to understand which characters interact with each other (Salway et al. 2007).
- Their corpus is based on the original sources used to create the ADs and contains artifacts not present in the actual descriptions, such as dialogs and production notes.
2.4 Works Building on Our Dataset
- Interestingly, other works, datasets, and challenges are already building upon their data.
- Zhu et al. (2015b) learn a visual-semantic embedding from their clips and ADs to relate movies to books.
- Bruni et al. (2016) also learn a joint embedding of videos and descriptions and use this representation to improve activity recognition on the Hollywood 2 dataset (Marszalek et al. 2009).
- Tapaswi et al. (2016) use their AD transcripts for building their MovieQA dataset, which asks natural language questions about movies, requiring an understanding of visual and textual information, such as dialogue and AD, to answer the question.
- Zhu et al. (2015a) present a fill-in-the-blank challenge based on the audio descriptions of the current, previous, and next clip, requiring an understanding of the temporal context of the clips.
3 Datasets for Movie Description
- In the following, the authors present how they collect their data for movie description and discuss its properties.
- The Large Scale Movie Description Challenge is based on two datasets which were originally collected independently.
- MPII-MD consists of AD and script data and uses sentence-level manual alignment of transcribed audio to the actions in the video (Sect. 3.1).
- M-VAD was collected with DVD data quality and relies only on AD.
- The challenge includes a submission server for evaluation on public and blind test sets.
3.1.1 Collection of ADs
- The authors search for Blu-ray movies with ADs in the "Audio Description" section of the British Amazon and select 55 movies of diverse genres (e.g. drama, comedy, action).
- Then the authors semi-automatically segment out the sections of the AD audio (which is mixed with the original audio stream) with the approach described below.
- The audio segments are then transcribed by a crowd-sourced transcription service that also provides the time-stamps for each spoken sentence.
- The precise alignment is important to compute the similarity of both streams.
- The authors smooth this decision over time using a minimum segment length of 1 s.
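- As a concrete illustration of this smoothing, below is a minimal sketch (not the authors' code) that enforces a minimum segment length on a per-window AD/no-AD decision; the 0.1 s analysis window and the merge-by-flipping strategy are assumptions.

```python
# Minimal sketch: enforce a 1 s minimum segment length on a boolean
# AD/no-AD decision sequence (one decision per 0.1 s analysis window).
def smooth_decisions(decisions, window_s=0.1, min_segment_s=1.0):
    min_len = int(round(min_segment_s / window_s))
    out = list(decisions)
    i = 0
    while i < len(out):
        j = i
        while j < len(out) and out[j] == out[i]:
            j += 1                      # [i, j) is one constant-valued run
        if j - i < min_len:             # run too short: flip it so it merges
            for k in range(i, j):       # with its neighbors
                out[k] = not out[k]
        i = j
    return out

# A 0.3 s gap inside AD narration gets absorbed:
print(smooth_decisions([True] * 12 + [False] * 3 + [True] * 15))
```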
3.1.2 Collection of Script Data
- In addition to the ADs, the authors mine script web resources (http://www.weeklyscript.com, http://www.simplyscripts.com, http://www.dailyscript.com, http://www.imsdb.com) and select 39 movie scripts.
- As starting point the authors use the movie scripts from “Hollywood2” (Marszalek et al. 2009) that have highest alignment scores to their movie.
- The authors found that the overlap between movies with both ADs and scripts is quite narrow, so they analyze the 11 such movies in their dataset.
- The authors follow existing approaches that align movie scripts to subtitles.
- Then the authors use the dynamic programming method of Laptev et al. (2008) to align scripts to subtitles and infer the time-stamps for the description sentences.
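- To make the alignment step concrete, here is a hedged sketch of a dynamic-programming alignment in the spirit of Laptev et al. (2008): script sentences are matched monotonically to time-stamped subtitles by maximizing word overlap. The similarity function and scoring are illustrative assumptions, not the authors' exact procedure.

```python
def word_overlap(a, b):
    """Illustrative similarity: fraction of shared words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, min(len(wa), len(wb)))

def align(script, subtitles):
    """script: list of sentences; subtitles: list of (text, start, end)."""
    n, m = len(script), len(subtitles)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]      # best score so far
    back = [[None] * (m + 1) for _ in range(n + 1)]   # backpointers
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1][j - 1] + word_overlap(script[i - 1], subtitles[j - 1][0])
            dp[i][j], back[i][j] = max((match, "match"),
                                       (dp[i - 1][j], "skip_script"),
                                       (dp[i][j - 1], "skip_sub"))
    pairs, i, j = [], n, m                            # backtrack: sentence ->
    while i > 0 and j > 0:                            # subtitle time-stamps
        if back[i][j] == "match":
            pairs.append((script[i - 1],) + subtitles[j - 1][1:])
            i, j = i - 1, j - 1
        elif back[i][j] == "skip_script":
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

script = ["He walks into the bar", "She smiles"]
subs = [("walks into the bar", 12.0, 14.5), ("she smiles at him", 20.0, 21.5)]
print(align(script, subs))  # inferred time-stamps for each script sentence
```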
3.1.3 Manual Sentence-Video Alignment
- As the AD is added to the original audio stream between the dialogs, there might be a small misalignment between the time of speech and the corresponding visual content.
- During the manual alignment the authors also filter out: (a) sentences describing movie introduction/ending (production logo, cast, etc.); (b) texts read from the screen; (c) irrelevant sentences describing something not present in the video; (d) sentences related to audio/sounds/music.
- For the movie scripts, the reduction in number of words is about 19%, while for ADs it is under 4%.
- In the case of ADs, filtering mainly happens due to initial/ending movie intervals and transcribed dialogs (when shown as text).
- If the manually aligned video clip is shorter than 2 s, the authors symmetrically expand it (from beginning and end) to be exactly 2 s long.
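- A tiny sketch of this expansion rule (the clamping to movie boundaries is an added assumption):

```python
def expand_clip(start, end, min_len=2.0, movie_len=None):
    """Symmetrically expand a (start, end) interval to at least min_len seconds."""
    if end - start >= min_len:
        return start, end
    pad = (min_len - (end - start)) / 2.0
    start, end = start - pad, end + pad
    if movie_len is not None:                    # keep within the movie
        start, end = max(0.0, start), min(movie_len, end)
    return start, end

print(expand_clip(10.5, 11.5))  # -> (10.0, 12.0)
```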
3.1.4 Visual Features
- The authors extract video clips from the full movie based on the aligned sentence intervals.
- As discussed earlier, ADs and scripts describe activities, objects and scenes (as well as emotions, which the authors do not explicitly handle with these features, but which might still be captured, e.g. by the context or activities).
- For each feature (Trajectory, HOG, HOF, MBH) the authors create a codebook with 4,000 clusters and compute the corresponding histograms.
- Finally, the authors use the recent scene classification CNNs (Zhou et al. 2014) featuring 205 scene classes.
- The authors mean-pool over the frames of each video clip, using the result as a feature.
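- A minimal numpy sketch of this pooling step; the shapes (48 frames, the 205-way Places scene scores) are illustrative.

```python
import numpy as np

frame_features = np.random.rand(48, 205)    # one 205-dim score vector per frame
clip_feature = frame_features.mean(axis=0)  # mean-pool -> one vector per clip
print(clip_feature.shape)                   # (205,)
```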
3.2 The Montreal Video Annotation Dataset (M-VAD)
- One of the main challenges in automating the construction of a video annotation dataset derived from AD audio is accurately segmenting the AD output, which is mixed with the original movie soundtrack.
- In Sect. 3.1.1 the authors have introduced a way of semi-automatic AD segmentation.
- In this section the authors describe a fully automatic method for AD narration isolation and video alignment.
- When a scene changes rapidly, the narrator will speak multiple sentences without pauses.
- Such content should be kept together. (The dataset is available at mpii.de/movie-description.)
3.2.1 Collection of ADs
- To search for movies with AD, the authors use the movie lists provided on the "An Initiative of the American Council of the Blind" and "Media Access Group at WGBH" websites, and buy movies based on their availability and price.
- To extract video and audio from the DVDs the authors use the DVDfab software.
3.2.2 AD Narrations Segmentation Using Vocal Isolation
- Creating a completely automated approach for extracting the relevant narration or annotation from the audio track and refining the alignment of the annotation with the video still poses some challenges.
- Vocal isolation techniques boost vocals, including dialogues and AD narrations while suppressing background movie sound in stereo signals.
- The authors align the movie and AD audio signals by taking the FFT of the two audio signals, computing the cross-correlation, measuring similarity for different offsets, and selecting the offset that corresponds to the peak cross-correlation (a sketch follows below).
- Even in cases where the shapes of the standard movie audio signal and standard movie audio mixed with AD signal are very different—due to the AD mixing process—our procedure is sufficient for the automatic segmentation of AD narration.
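- The offset search can be sketched with standard FFT-based cross-correlation; the snippet below is an illustration with synthetic signals, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def find_offset(reference, mixed):
    """Lag (in samples) at which `mixed` best matches `reference`."""
    corr = fftconvolve(mixed, reference[::-1], mode="full")  # cross-correlation
    return int(np.argmax(corr)) - (len(reference) - 1)

rng = np.random.default_rng(0)
ref = rng.standard_normal(48000)            # 1 s of audio at 48 kHz
mix = np.concatenate([np.zeros(960), ref])  # same audio delayed by 20 ms
print(find_offset(ref, mix))                # -> 960 samples (20 ms)
```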
3.2.3 Movie/AD Alignment and Professional Transcription
- AD audio narration segments are time-stamped based on their automatic AD narration segmentation.
- In order to compensate for the potential 1–2 s misalignment between the AD narrator speaking and the corresponding scene in the movie, the authors automatically add 2 s to the end of each video clip.
- Also the authors discard all the transcriptions related to movie introduction/ending which are located at the beginning and the end of movies.
- In order to obtain high quality text descriptions, the AD audio segments were transcribed with more than 98% transcription accuracy, using a professional transcription service.
- The authors' audio narration isolation technique allows them to process the audio into small, well-defined time segments and reduce the overall transcription effort and cost.
3.3 The Large Scale Movie Description Challenge (LSMDC)
- To build their Large Scale Movie Description Challenge (LSMDC), the authors combine the M-VAD and MPII-MD datasets.
- The authors first identify the overlap between the two, so that the same movie does not appear in both the training and test sets of the joint dataset.
- The authors also exclude script-based movie alignments from the validation and test sets of MPII-MD.
- The authors provide more information about the challenge setup and results in Sect. 6.
- There is a movie annotation track which asks to select the correct sentence out of five in a multiple-choice test, a retrieval track which asks to retrieve the correct test clip for a given sentence, and a fill-in-the-blank track which requires predicting a missing word in a given description for the corresponding clip.
3.4 Movie Description Dataset Statistics
- Table 1 presents statistics for the number of words, sentences and clips in their movie description corpora.
- The authors also report the average/total length of the annotated time intervals.
- The combined LSMDC 2015 dataset contains over 118 K sentence–clip pairs and 158 h of video.
- This split balances movie genres within each set, which is motivated by the fact that the vocabulary used to describe, say, an action movie could be very different from the vocabulary used in a comedy movie.
- To compute the part-of-speech statistics for their corpora the authors tag and stem all words in the datasets with the Stanford Part-of-Speech (POS) tagger and stemmer toolbox (Toutanova et al. 2003). (The challenge evaluation server is hosted at https://codalab.org/.)
3.5 Comparison to Other Video Description Datasets
- The authors compare their corpus to other existing parallel video corpora in Table 3.
- The authors look at the following properties: availability of multi-sentence descriptions (long videos described continuously with multiple sentences), data domain, source of descriptions and dataset size.
- The main limitations of prior datasets include the coverage of only a single domain (Das et al. 2013).
- Similar to the MSVD dataset (Chen and Dolan 2011), MSR-VTT is based on YouTube clips.
- TGIF (Li et al. 2016) is a large dataset of 100k image sequences (GIFs) with associated descriptions.
4 Approaches for Movie Description
- Given a training corpus of aligned videos and sentences the authors want to describe a new unseen test video.
- The authors' second approach (Sect. 4.2) learns to generate descriptions using a long short-term memory (LSTM) network.
- While the first approach does not differentiate which features to use for different labels, their second approach defines different semantic groups of labels and uses most relevant visual features for each group.
- Next, the first approach uses the classifier scores as input to a CRF to predict a semantic representation (SR) (SUBJECT, VERB, OBJECT, LOCATION), and then translates it into a sentence with SMT.
- Figure 5 shows an overview of the two discussed approaches.
4.1.1 Semantic Parsing
- Learning from a parallel corpus of videos and natural language sentences is challenging when no annotated intermediate representation is available.
- The authors lift the words in a sentence to a semantic space of roles and WordNet (Fellbaum 1998) senses by performing SRL (Semantic Role Labeling) and WSD (Word Sense Disambiguation).
- The authors start by decomposing the typically long sentences present in movie descriptions into smaller clauses using the ClausIE tool (Del Corro and Gemulla 2013).
- For example, for the verb sense shoot#3 (killing), the VerbNet role restriction is Agent.animate V Patient.animate PP Instrument.solid.
- The authors ensure that the selected WordNet verb sense adheres to both the syntactic frame and the semantic role restriction provided by VerbNet.
4.1.2 SMT
- For the sentence generation the authors build on the two-step translation approach of Rohrbach et al. (2013).
- As the first step it learns a mapping from the visual input to the semantic representation (SR), modeling pairwise dependencies in a CRF using visual classifiers as unaries.
- The unaries are trained using an SVM on dense trajectories (Wang and Schmid 2013).
- In the second step it translates the SR to a sentence using Statistical Machine Translation (SMT) (Koehn et al. 2007).
- For this the approach uses a concatenated SR as the input language, e.g. "cut knife tomato", and a natural sentence as the output language.
4.2.1 Robust Visual Classifiers
- For training the authors rely on a parallel corpus of videos and weak sentence annotations.
- To avoid losing the potential labels in these sentences, the authors match their set of initial labels to the sentences which the parser failed to process.
- The authors treat objects and places as separate semantic label groups.
- Finally, the authors discard labels which the classifiers could not learn reliably, as these are likely noisy or not visual.
- The authors estimate a threshold for the ROC values on a validation set.
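- An illustrative sketch of this filtering: train one classifier per label, score it on the validation set, and keep only labels whose ROC AUC clears a threshold. The logistic-regression classifier, the 0.7 threshold and the toy data are assumptions; the paper trains SVMs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def reliable_labels(X, Y, Xv, Yv, thr=0.7):
    """Keep label indices whose validation ROC AUC >= thr.
    Assumes every label column contains both positives and negatives."""
    keep = []
    for j in range(Y.shape[1]):
        clf = LogisticRegression(max_iter=1000).fit(X, Y[:, j])
        auc = roc_auc_score(Yv[:, j], clf.predict_proba(Xv)[:, 1])
        if auc >= thr:        # learnable -> likely visual and not too noisy
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
X, Xv = rng.standard_normal((200, 16)), rng.standard_normal((100, 16))
Y, Yv = (X[:, :3] > 0).astype(int), (Xv[:, :3] > 0).astype(int)
print(reliable_labels(X, Y, Xv, Yv))  # -> [0, 1, 2] on this toy data
```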
4.2.2 LSTM for Sentence Generation
- The authors rely on the basic LSTM architecture proposed in Donahue et al. (2015) for video description.
- The embedding is jointly learned during training of the LSTM.
- The authors feed in the classifier scores as input to the LSTM which is equivalent to the best variant proposed in Donahue et al. (2015).
- The authors compare a 1-layer architecture with a 2-layer architecture.
- To learn a more robust network which is less likely to overfit, the authors rely on dropout (Hinton et al. 2012), i.e. a ratio r of randomly selected units is set to 0 during training, while all others are scaled by 1/(1 - r).
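- A minimal sketch of inverted dropout as described above: with drop ratio r, units are zeroed with probability r during training and the survivors are scaled by 1/(1 - r), so no rescaling is needed at test time.

```python
import numpy as np

def dropout(x, r=0.5, train=True, seed=0):
    """Inverted dropout: zero a fraction r of units, scale the rest by 1/(1-r)."""
    if not train or r == 0.0:
        return x
    mask = (np.random.default_rng(seed).random(x.shape) >= r).astype(x.dtype)
    return x * mask / (1.0 - r)

print(dropout(np.ones(8), r=0.5))  # surviving units become 2.0
```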
5 Evaluation on MPII-MD and M-VAD
- In this section the authors evaluate and provide more insights about their movie description datasets MPII-MD and M-VAD.
- The authors compare ADs to movie scripts (Sect. 5.1), present a short evaluation of their semantic parser (Sect. 5.2), present the automatic and human evaluation metrics for description (Sect. 5.3) and then benchmark the approaches to video description introduced in Sect. 4.
- The authors conclude this section with an analysis of the different approaches (Sect. 5.5).
- In Sect. 6 the authors will extend this discussion to the results of the Large Scale Movie Description Challenge.
5.1 Comparison of AD Versus Script Data
- The authors compare the AD and script data using 11 movies from the MPII-MD dataset where both are available (see Sect. 3.1.2).
- For these movies the authors select the overlapping time intervals with an intersection over union (IoU) overlap of at least 75% (see the sketch at the end of this subsection), which results in 279 sentence pairs; they remove 2 pairs which have identical sentences.
- Table 5 presents the results of this evaluation.
- Looking at the more strict evaluation where at least 4 out of 5 judges agree (in brackets in Table 5) there is still a significant margin of 24.5% between ADs and movie scripts for Correctness, and 28.1% for Relevance.
- This evaluation supports their intuition that scripts contain mistakes and irrelevant content even after being cleaned up and manually aligned.
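- A small sketch of the interval-overlap criterion used above: intersection over union (IoU) of two time intervals, with pairs kept when IoU >= 0.75.

```python
def interval_iou(a, b):
    """a, b: (start, end) intervals in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

print(interval_iou((10.0, 14.0), (11.0, 15.0)))  # -> 0.6, below the 0.75 cut
```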
5.2 Semantic Parser Evaluation
- The authors empirically evaluate the various components of the semantic parsing pipeline, namely clause splitting, POS tagging and chunking (NLP), semantic role labeling (SRL), and word sense disambiguation (WSD).
- The authors randomly sample 101 sentences from the MPII-MD dataset, over which they perform semantic parsing and log the outputs at various stages of the pipeline (similar to Table 4).
- The authors let three human judges evaluate the results for every token in the clause (similar to evaluating every row in Table 4) with a correct/incorrect label.
- It is evident that the poorest performing parts are the NLP and the WSD components.
- Some of the NLP mistakes arise due to incorrect POS tagging.
5.3.1 Automatic Metrics
- For automatic evaluation the authors rely on the MS COCO Caption Evaluation API (a usage sketch appears at the end of this subsection).
- The authors also use the recently proposed evaluation measure SPICE (Anderson et al. 2016), which aims to compare the semantic content of two descriptions, by matching the information contained in dependency parse trees for both descriptions.
- While the authors report all measures for the final evaluation in the LSMDC (Sect. 6), they focus their discussion on METEOR and CIDEr scores in the preliminary evaluations in this section.
- According to Elliott and Keller (2013) and Vedantam et al. (2015), METEOR/CIDEr supersede previously used measures in terms of agreement with human judgments.
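- A hedged usage sketch of the MS COCO caption evaluation tooling (the coco-caption code): references and candidates are dicts mapping a clip id to a list of sentences. The exact module paths vary between forks, and METEOR additionally requires a Java runtime; treat the imports as assumptions.

```python
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.cider.cider import Cider

refs = {"clip1": ["Someone walks into the room."]}   # ground-truth ADs
hyps = {"clip1": ["Someone enters the room."]}       # system output

for name, scorer in [("METEOR", Meteor()), ("CIDEr", Cider())]:
    score, _ = scorer.compute_score(refs, hyps)      # corpus-level score
    print(name, score)
```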
5.3.2 Human Evaluation
- The AMT workers are given randomized sentences and, in addition to some general instructions, the following definitions: Grammar: "Rank grammatical correctness of sentences: Judge the fluency and readability of the sentence (independently of the correctness with respect to the video)."
- Correctness: "For which sentence is the content more correct with respect to the video (independent of whether it is complete, i.e. describes everything), independent of the grammatical correctness."
- In the LSMDC evaluation the authors introduce a new measure, which should capture how useful a description would be for blind people: “Rank the sentences according to how useful they would be for a blind person which would like to understand/follow the movie without seeing it.”.
5.4 Movie Description Evaluation
- As the collected text data comes from the movie context, it contains a lot of information specific to the plot, such as names of the characters.
- The authors pre-process each sentence in the corpus, transforming the names to "Someone" or "people" (in case of plural); a sketch of this normalization follows this list.
- The authors first analyze the performance of the proposed approaches on the MPII-MD dataset, and then evaluate the best version on the M-VAD dataset.
- The other 83 movies are used for training.
- On M-VAD the authors use 10 movies for testing, 10 for validation and 72 for training.
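- An illustrative sketch (not the authors' exact procedure) of the name normalization mentioned above: character names are replaced by "Someone", or by "people" when two names are conjoined; the name list would come from the movie's script or AD metadata.

```python
import re

def anonymize(sentence, names):
    name = r"\b(?:" + "|".join(map(re.escape, names)) + r")\b"
    # two conjoined names -> plural "people", single names -> "Someone"
    sentence = re.sub(name + r"\s+and\s+" + name, "people", sentence)
    return re.sub(name, "Someone", sentence)

print(anonymize("Rick glances at Ilsa and Sam.", ["Rick", "Ilsa", "Sam"]))
# -> "Someone glances at people."
```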
5.5 Movie Description Analysis
- The performance on the movie description datasets (MPII-MD and M-VAD) remains rather low.
- The authors compare three methods, SMT-Best, S2VT and Visual-Labels, in order to understand where these methods succeed and where they fail.
- In the following the authors evaluate all three methods on the MPII-MD test set.
5.5.1 Difficulty Versus Performance
- As a first study the authors sort the test reference sentences by difficulty, where difficulty is defined in multiple ways.
- Two intuitive sentence difficulty measures are its length and the average frequency of its words (see the sketch at the end of this subsection).
- Figure 8a shows the performance of the compared methods w.r.t. sentence length.
- For the word frequency the correlation is even stronger, see Fig. 8b.
- Visual-Labels consistently outperforms the other two methods, most notably as the difficulty increases.
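- The two difficulty measures are easy to state concretely; the toy corpus and whitespace tokenization below are assumptions.

```python
from collections import Counter

corpus = ["someone looks up", "someone walks away", "someone looks back"]
freq = Counter(w for s in corpus for w in s.split())  # corpus word frequencies

def difficulty(sentence):
    words = sentence.split()
    avg_freq = sum(freq[w] for w in words) / len(words)
    return (len(words), -avg_freq)      # longer and rarer words -> harder

for s in sorted(corpus, key=difficulty):  # easy -> hard
    print(s)
```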
5.5.2 Semantic Analysis
- Next the authors analyze the test reference sentences w.r.t. verb semantics.
- The most frequent Topics, “motion” and “contact”, which are also visual (e.g. “turn”, “walk”, “sit”), are nevertheless quite challenging, which the authors attribute to their high diversity (see their entropy w.r.t. different verbs and their frequencies in Table 14).
- The authors look at 100 test reference sentences for which Visual-Labels obtains the highest and the lowest METEOR scores.
- Among the worst 100 sentences the authors observe more diversity: 12 contain no verb, 10 mention unusual words (specific to the movie), 24 have no subject, 29 have a non-human subject.
- Summary (a) The test reference sentences that mention verbs like "look" get higher scores due to their high frequency in the dataset.
6 The Large Scale Movie Description Challenge
- The Large Scale Movie Description Challenge was held twice, first in conjunction with ICCV 2015 (LSMDC 15) and then at ECCV 2016 (LSMDC 16).
- In the second phase of the challenge the participants were provided with the videos from the blind test set (without textual descriptions).
- To measure performance of the competing approaches the authors performed both automatic and human evaluation.
- The submission format was similar to the MS COCO Challenge (Chen et al. 2015) and the authors also used the identical automatic evaluation protocol.
- In the following the authors review the participants and their results for both LSMDC 15 and LSMDC 16.
6.1 LSMDC Participants
- The authors received 4 submissions to LSMDC 15, including their Visual-Labels approach.
- The other submissions are S2VT (Venugopalan et al. 2015b), Temporal Attention (Yao et al. 2015) and Frame-Video-Concept Fusion (Shetty and Laaksonen 2015).
- As the blind test set was not changed between LSMDC 2015 and LSMDC 2016, the authors look at all the submitted results jointly.
- In the following the authors summarize the submissions based on the (sometimes very limited) information provided by the authors.
6.1.1 LSMDC 15 Submissions
- S2VT (Venugopalan et al. 2015b) is an encoder–decoder framework in which a single LSTM encodes the input video, frame by frame, and decodes it into a sentence.
- The authors note that the results submitted to LSMDC were obtained with a different set of hyper-parameters than the results discussed in the previous section.
- The hyper-parameters were selected to optimize METEOR on the validation set, which resulted in significantly longer but also noisier sentences.
- Frame-Video-Concept Fusion (Shetty and Laaksonen 2015): Shetty and Laaksonen (2015) evaluate diverse visual features as input for an LSTM generation framework.
- Specifically, they use dense trajectory features (Wang et al. 2013) extracted for the entire clip and VGG (Simonyan and Zisserman 2015) and GoogLeNet (Szegedy et al. 2015) CNN features extracted at the center frame of each clip.
6.1.2 LSMDC 16 Submissions
- Tel Aviv University: this submission retrieves a nearest neighbor from the training set, learning a unified space using Canonical Correlation Analysis (CCA) over textual and visual features.
- Aalto University (Shetty and Laaksonen 2016) Shetty and Laaksonen (2016) rely on an ensemble of four models which were trained on the MSR-VTT dataset (Xu et al. 2016) without additional training on the LSMDC dataset.
- The four models were trained with different combinations of keyframe-based GoogLeNet features and segment-based dense trajectory and C3D features.
- This work relies on temporal and attribute attention.
- According to the authors, their VD-ivt model consists of three parallel channels: a basic video description channel, a sentence to sentence channel for language learning, and a channel to fuse visual and textual information.
6.2.1 Automatic Evaluation
- The authors first look at the results of the automatic evaluation on the blind test set of LSMDC in Table 15.
- One reason for the lower scores of Frame-Video-Concept Fusion and Temporal Attention appears to be the generated sentence length, which is much shorter than the reference sentences, as the authors discuss below (see also Table 16).
- It takes second place w.r.t. the CIDEr score, while not achieving particularly high scores in the other measures.
- In terms of vocabulary size all approaches fall far below the reference descriptions.
- Looking at the LSMDC 16 submissions, we see, not surprisingly, that the Tel Aviv University retrieval approach achieves the highest diversity among all approaches.
6.2.2 Human Evaluation
- The authors performed separate human evaluations for LSMDC 15 and LSMDC 16.
- LSMDC 15 The results of the human evaluation are shown in Table 17.
- As the authors have to compare more approaches, the ranking becomes infeasible.
- This leads us to the following evaluation protocol which is inspired by the human evaluation metric “M1” in the MS COCO Challenge (Chen et al. 2015).
- Additionally the authors measure the correlation between the automatic and human evaluation in Fig. 10.
6.3 LSMDC Qualitative Results
- Figure 11 shows qualitative results from the competing approaches submitted to LSMDC 15.
- The first two examples are success cases, where most of the approaches are able to describe the video correctly.
- The third example is an interesting case where visually relevant descriptions, provided by most approaches, do not match the reference description, which focuses on an action happening in the background of the scene (“Someone sets down his young daughter then moves to a small wooden table.”).
- The last two rows contain partial and complete failures.
- Tel Aviv University and Visual-Labels are able to capture important details, such as sipping a drink, which the other methods fail to recognize.
7 Conclusion
- The authors presented a novel dataset of movies with aligned descriptions sourced from movie scripts and ADs (audio descriptions for the blind, also referred to as DVS).
- The authors' approach to automatic movie description, Visual-Labels, trains visual classifiers and uses their scores as input to an LSTM.
- When ranking sentences with respect to the criteria “helpful for the blind”, their Visual-Labels is well received by human judges, likely because it includes important aspects provided by the strong visual labels.
- This time the authors introduced a new human evaluation protocol to allow comparison of a large number of approaches.
- Open access funding provided by Max Planck Society.
Frequently Asked Questions (12)
Q2. What are the future works in "Movie description" ?
In the future work the movie description approaches should aim to achieve rich yet correct and fluent descriptions. Beyond their current challenge on single sentences, the dataset opens new possibilities to understand stories and plots across multiple sentences in an open domain scenario on a large scale. Their evaluation server will continue to be available for automatic evaluation.
Q3. What are the frequent verbs in the dataset?
The most frequent verbs there are “look up” and “nod”, which are also frequent in the dataset and in the sentences produced by SMT-Best.
Q4. What is the main challenge in the construction of a video annotation dataset?
One of the main challenges in automating the construction of a video annotation dataset derived from AD audio is accurately segmenting the AD output, which is mixed with the original movie soundtrack.
Q5. What are the evaluation measures used for the semantic parsing pipeline?
The automatic evaluation measures include BLEU-1,-2,-3,-4 (Papineni et al. 2002), METEOR (Denkowski and Lavie 2014), ROUGE-L (Lin 2004), and CIDEr (Vedantam et al. 2015).
Q6. How do the authors decompose the sentences in a movie?
The authors start by decomposing the typically long sentences present in movie descriptions into smaller clauses using the ClausIE tool (Del Corro and Gemulla 2013).
Q7. How does the evaluation measure measure semantic content?
The authors also use the recently proposed evaluation measure SPICE (Anderson et al. 2016), which aims to compare the semantic content of two descriptions, by matching the information contained in dependency parse trees for both descriptions.
Q8. Why do the authors add 2 s to the end of each video clip?
In order to compensate for the potential 1–2 s misalignment between the AD narrator speaking and the corresponding scene in the movie, the authors automatically add 2 s to the end of each video clip.
Q9. What are the properties of a video description dataset?
The authors look at the following properties: availability of multi-sentence descriptions (long videos described continuously with multiple sentences), data domain, source of descriptions and dataset size.
Q10. What is the way to improve the visual representation of video?
Ballas et al. (2016) leverage multiple convolutional maps from different CNN layers to improve the visual representation for activity and video description.
Q11. What is the LSTM used to encode the video?
This submission uses an encoder–decoder framework with two LSTMs, one LSTM used to encode the frame sequence of the video and another to decode it into a sentence.
Q12. What is the method used to align scripts to subtitles?
Then the authors use the dynamic programming method of Laptev et al. (2008) to align scripts to subtitles and infer the time-stamps for the description sentences.