
Showing papers on "Closed captioning published in 2007"


Patent
12 Sep 2007
TL;DR: In this paper, an architecture for translating closed captioning text originally provided with a video program from one language to another and presenting the translated closed captioning text with the video program to a viewer is presented.
Abstract: The present invention provides an architecture for translating closed captioning text originally provided with a video program from one language to another and presenting the translated closed captioning text with the video program to a viewer. As such, the viewers are able to receive the closed captioning text in languages other than that used for the closed captioning originally provided with the video program. The original closed captioning text may be translated from one language to another by a centralized closed captioning processor, such that the customer equipment for various subscribers can take advantage of centralized translation services. Once the original closed captioning text is translated, the translated closed captioning text may be delivered to the customer equipment in different ways.

56 citations


Proceedings ArticleDOI
22 Apr 2007
TL;DR: A method is proposed that uses data mining to discover temporal patterns in video and pairs these patterns with associated closed captioning text; the paired corpus is used to train a situated model of meaning that significantly improves video retrieval performance.
Abstract: Situated models of meaning ground words in the non-linguistic context, or situation, to which they refer. Applying such models to sports video retrieval requires learning appropriate representations for complex events. We propose a method that uses data mining to discover temporal patterns in video, and pair these patterns with associated closed captioning text. This paired corpus is used to train a situated model of meaning that significantly improves video retrieval performance.

40 citations


Journal ArticleDOI
01 Apr 2007
TL;DR: It is found that hard of hearing viewers were significantly more positive about this style of captioning than deaf viewers and that some viewers believed that these augmentations were useful and enhanced their viewing experience.
Abstract: Television and film have become important equalization mechanisms for the dissemination and distribution of cultural materials. Closed captioning has allowed people who are deaf and hard of hearing to be included as audience members. However, some of the audio information such as music, sound effects, and speech prosody are not generally provided for in captioning. To include some of this information in closed captions, we generated graphical representations of the emotive information that is normally represented with nondialog sound. Eleven deaf and hard of hearing viewers watched two different video clips containing static and dynamic enhanced captions and compared them with conventional closed captions of the same clips. These viewers then provided verbal and written feedback regarding positive and negative aspects of the various captions. We found that hard of hearing viewers were significantly more positive about this style of captioning than deaf viewers and that some viewers believed that these augmentations were useful and enhanced their viewing experience.

39 citations


Patent
Regis J. Crinon1
24 May 2007
TL;DR: In this paper, the authors provide systems and/or methods that facilitate yielding closed caption service associated with real-time communication, where audio data and video data can be obtained from an active speaker in a real time teleconference.
Abstract: The claimed subject matter provides systems and/or methods that facilitate yielding closed caption service associated with real time communication. For example, audio data and video data can be obtained from an active speaker in a real time teleconference. Moreover, the audio data can be converted into a set of characters (e.g., text data) that can be transmitted to other participants of the real time teleconference. Additionally, the real time teleconference can be a peer to peer conference (e.g., where a sending endpoint communicates with a receiving endpoint) and/or a multi-party conference (e.g., where an audio/video multi-point control unit (AVMCU) routes data such as the audio data, the video data, and the text data between endpoints).

39 citations


Journal ArticleDOI
TL;DR: Neither near-verbatim captioning nor edited captioning was found to be better at facilitating comprehension; however, several issues emerged that provide specific directions for future research on edited captions.
Abstract: The study assessed the effects of near-verbatim captioning versus edited captioning on a comprehension task performed by 15 children, ages 7-11 years, who were deaf or hard of hearing. The children's animated television series Arthur was chosen as the content for the study. The researchers began the data collection procedure by asking participants to watch videotapes of the program. Researchers signed or spoke (or signed and spoke) 12 comprehension questions from a script to each participant. The sessions were videotaped, and a checklist was used to ensure consistency of the question-asking procedure across participants and sessions. Responses were coded as correct or incorrect, and the dependent variable was reported as the number of correct answers. Neither near-verbatim captioning nor edited captioning was found to be better at facilitating comprehension; however, several issues emerged that provide specific directions for future research on edited captions.

36 citations


Proceedings ArticleDOI
15 Apr 2007
TL;DR: Finite-state decoding graphs integrate the decision trees, pronunciation model and language model for speech recognition into a unified representation of the search space and are particularly applicable to low-latency and low-resource applications such as real-time closed captioning of broadcast news and interactive speech-to-speech translation.
Abstract: Finite-state decoding graphs integrate the decision trees, pronunciation model and language model for speech recognition into a unified representation of the search space. We explore discriminative training of the transition weights in the decoding graph in the context of large vocabulary speech recognition. In preliminary experiments on the RT-03 English Broadcast News evaluation set, the word error rate was reduced by about 5.7% relative, from 23.0% to 21.7%. We discuss how this method is particularly applicable to low-latency and low-resource applications such as real-time closed captioning of broadcast news and interactive speech-to-speech translation.
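For readers scanning the numbers, the reported gain is a relative (not absolute) word error rate reduction; a quick check of the arithmetic using the figures quoted above:

    # Relative WER reduction as reported for the RT-03 Broadcast News evaluation set.
    baseline_wer = 23.0     # % word error rate before discriminative training
    improved_wer = 21.7     # % word error rate after training the transition weights
    relative_reduction = (baseline_wer - improved_wer) / baseline_wer * 100
    print(f"{relative_reduction:.1f}% relative")   # -> 5.7% relative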

34 citations


Patent
02 Jan 2007
TL;DR: In this paper, a user-searchable captioning index comprising the captioning and synchronization data indicative of synchronization between the video stream and the captioning is generated. In illustrative examples, the synchronization is time-based, video-frame-based, or marker-based.
Abstract: Video navigation is provided where a video stream encoded with captioning is received (105). A user-searchable captioning index comprising the captioning and synchronization data indicative of synchronization between the video stream and the captioning is generated (117). In illustrative examples, the synchronization is time-based, video-frame-based, or marker-based.
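The patent describes the index only at the claim level; purely as an illustration, a time-based variant of such a user-searchable caption index could be sketched as follows (all names are hypothetical):

    # Hypothetical sketch of a time-based caption index: caption text is tokenized and
    # each token maps to the timestamps (seconds) where it appears in the video stream.
    from collections import defaultdict

    def build_caption_index(captions):
        """captions: iterable of (start_seconds, text) pairs decoded from the stream."""
        index = defaultdict(list)
        for start, text in captions:
            for token in text.lower().split():
                index[token].append(start)
        return index

    def search(index, query):
        """Return timestamps whose captions contain every query token."""
        hits = [set(index.get(tok, [])) for tok in query.lower().split()]
        return sorted(set.intersection(*hits)) if hits else []

    idx = build_caption_index([(12.0, "breaking news tonight"), (47.5, "weather news")])
    print(search(idx, "news"))   # -> [12.0, 47.5]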

34 citations


Proceedings ArticleDOI
24 Sep 2007
TL;DR: Experimental results indicate that using a grounded language model nearly doubles performance on a held out test set, extending a traditional language model based approach to information retrieval.
Abstract: This paper presents a methodology for automatically indexing a large corpus of broadcast baseball games using an unsupervised content-based approach. The method relies on the learning of a grounded language model which maps query terms to the non-linguistic context to which they refer. Grounded language models are learned from a large, unlabeled corpus of video events. Events are represented using a codebook of automatically discovered temporal patterns of low level features extracted from the raw video. These patterns are associated with words extracted from the closed captioning text using a generalization of Latent Dirichlet Allocation. We evaluate the benefit of the grounded language model by extending a traditional language model based approach to information retrieval. Experimental results indicate that using a grounded language model nearly doubles performance on a held out test set.

32 citations


Patent
21 Feb 2007
TL;DR: In this paper, the authors present an intelligent automated system that enables media outlets to optimize the value of their advertising inventory by text mining programming content in context and interpreting the accompanying audio tracks, in text form, from a closed captioning system or from a real-time voice recognition system.
Abstract: The present invention creates an intelligent automated system that enables media outlets to optimize the value of their advertising inventory. It also enables media outlets, on a platform-agnostic basis, to market advertising inventory driven by content-based criteria rather than audience data alone. This is achieved preferably by text mining programming content in context and by interpreting the accompanying audio tracks, in text form, from a closed captioning system or from a real time voice recognition system or from any other source of video and/or program content. The present invention searches through opportunities for an advertiser, or advertising category, on any number of media outlets. The application of in context text mining to advertisement unit placement allows the advertiser to reach more viewers who are engaged and predisposed to receiving the advertiser's message.

26 citations


Book
17 Sep 2007
TL;DR: This chapter discusses Digital Television Channel Coding and Modulation; Closed Captioning, Subtitling, and Teletext; and the MPEG-2 Video Compression Standard.
Abstract: Preface. 1. Introduction to Analog and Digital Television. 2. Characteristics of Video Material. 3. Predictive Encoding. 4. Transform Coding. 5. Video Coder Syntax. 6. The MPEG-2 Video Compression Standard. 7. Perceptual Audio Coding. 8. Frequency Analysis and Synthesis. 9. MPEG Audio. 10. Dolby AC-3 Audio. 11. MPEG-2 Systems. 12. DVB Service Information and ATSC Program and System Information Protocol. 13. Digital Television Channel Coding and Modulation. 14. Closed Captioning, Subtitling, and Teletext. Appendix: MPEG Tables. Index.

22 citations


Book ChapterDOI
22 Jul 2007
TL;DR: This paper describes the development of a system that can provide an automatic text transcription of multiple speakers using speech recognition (SR), with the names of speakers identified in the transcription and corrections of SR errors made in real-time by a human 'editor'.
Abstract: Text transcriptions of the spoken word can benefit deaf people and also anyone who needs to review what has been said (e.g. at lectures, presentations, meetings, etc.). Real-time captioning (i.e. creating a live verbatim transcript of what is being spoken) using phonetic keyboards can provide an accurate live transcription for deaf people but is often not available because of the cost and shortage of highly skilled and trained stenographers. This paper describes the development of a system that can provide an automatic text transcription of multiple speakers using speech recognition (SR), with the names of speakers identified in the transcription and corrections of SR errors made in real-time by a human 'editor'.

Patent
31 May 2007
TL;DR: A text-to-sign language translation platform (translation platform) as mentioned in this paper enables efficient, real-time processing of written speech elements, such as may be supplied by a Closed Captioning feed, and conversion to sign language video clips that may be output to a video display.
Abstract: The present disclosure details apparatuses, methods, and systems for a text-to-sign language translation platform (“translation platform”). The translation platform enables efficient, real-time processing of written speech elements, such as may be supplied by a Closed Captioning feed, and conversion to sign language video clips that may be output to a video display, such as via an embedded “picture-in-picture” window. The translation platform is configurable to process homographs, synonyms, grammatical context, multiple speakers, tone of voice, and/or the like.
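The platform's internals are not disclosed; a heavily simplified sketch of the core lookup step, with an invented clip catalogue and synonym table standing in for the platform's homograph/synonym handling, might look like this:

    # Illustrative sketch only: map closed-caption tokens to sign-language clip files.
    # The clip catalogue and synonym table are invented placeholders, not the patent's data.
    SIGN_CLIPS = {"hello": "hello.mp4", "weather": "weather.mp4", "rain": "rain.mp4"}
    SYNONYMS = {"hi": "hello", "showers": "rain"}

    def caption_to_clip_sequence(caption_line):
        clips = []
        for raw in caption_line.lower().split():
            token = raw.strip(".,!?")
            token = SYNONYMS.get(token, token)
            clip = SIGN_CLIPS.get(token)
            if clip:                       # unknown words could fall back to fingerspelling
                clips.append(clip)
        return clips

    print(caption_to_clip_sequence("Hi, rain and showers"))  # -> ['hello.mp4', 'rain.mp4', 'rain.mp4']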

Proceedings ArticleDOI
09 Jul 2007
TL;DR: Two automated methods for producing TV program trailers (short video clips to advertise the program) are proposed, based on the sentence similarity between the closed caption and the introductory text of the target program.
Abstract: This paper proposes two automated methods for producing TV program trailers (short video clips to advertise the program). Program trailers are useful as the representative video of a content retrieval system that operates in a large archive of program videos. The two methods employ introductory descriptions from electronic program guides. The first method is based on the sentence similarity between the closed caption and the introductory text of the target program. We extract closed caption sentences that have the highest similarity for each introductory sentence, and then connect the corresponding video segments to make the representative video. A Bayesian belief network is used to calculate the similarity. The second method extracts several sentences that have the same textual features as those of a general introductory text, and determines the corresponding video sections. The features are learned by using the AdaBoost algorithm. These methods were used to generate trailers for actual TV programs, by which their effectiveness was verified.
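As a rough stand-in for the first method, the selection step can be illustrated with a plain cosine similarity over bags of words (the paper itself uses a Bayesian belief network for this score):

    # Simplified stand-in for the first method: pick, for each introductory sentence,
    # the closed-caption sentence with the highest similarity.
    from collections import Counter
    import math

    def cosine(a, b):
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[t] * vb[t] for t in va)
        norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
        return dot / norm if norm else 0.0

    def select_trailer_sentences(intro_sentences, caption_sentences):
        """Return the best-matching caption sentence for each introductory sentence."""
        return [max(caption_sentences, key=lambda c: cosine(intro, c)) for intro in intro_sentences]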

01 Jan 2007
TL;DR: This paper examines the advantages and drawbacks of using subtitling and/or sign language interpreting on television, while trying to establish why both are much loved or much hated accessibility solutions.
Abstract: It is no longer questionable whether d/Deaf and hard-of-hearing viewers should be offered accessibility services on television. This matter has been widely discussed at a European level and most countries have taken legislative action, while television broadcasters have implemented different solutions - mainly closed captioning/teletext subtitling and sign language interpreting - to make their programmes accessible to people with hearing impairment. It is common to find d/Deaf and hard-of-hearing viewers complaining about what they are offered on television. It is also common to hear that television providers are doing their best to make their services available to all. There is still another group of voices turning down or singing the praise of one or the other solution, for a number of reasons which range from technical and aesthetic issues to political and social motivation. This paper examines the advantages and drawbacks of using subtitling and/or sign language interpreting on television while trying to establish why both are much loved or much hated accessibility solutions.

Journal ArticleDOI
20 Mar 2007-Info
TL;DR: In this paper, the authors explore the historical construction of the US broadcast television closed-captioning system as a case study of debates over public service broadcasting during the late twentieth century.
Abstract: Purpose – To explore the historical construction of the US broadcast television closed‐captioning system as a case study of debates over “public service broadcasting” during the late twentieth century.Design/methodology/approach – Historical.Findings – Neither the corporate voluntarism promoted by the FCC in the 1970s nor the “public‐private partnership” of the National Captioning Institute (NCI) in the 1980s proved able to sustain a closed‐captioning system; instead, a progressive round of re‐regulation on both the demand side (universal decoder distribution) and the supply side (mandatory program captioning) was necessary to bring the promise of broadcast equality to all deaf and hard‐of‐hearing (D/HOH) citizens.Originality/value of paper – The decades‐long legal, technological, and institutional battle to define the “public interest” responsibilities of broadcasters toward non‐hearing viewers was fraught with contradiction and compromise.

Journal ArticleDOI
TL;DR: With the new speech detection and the gender identification, the proposed dual-gender speech recognition significantly reduced the word error rate by 11.2% relative to a conventional gender-independent system, while keeping the computational cost feasible for real-time operation.
Abstract: This paper describes a new method to detect speech segments online with identifying gender attributes for efficient dual gender-dependent speech recognition and broadcast news captioning. The proposed online speech detection performs dual-gender phoneme recognition and detects a start-point and an end-point based on the ratio between the cumulative phoneme likelihood and the cumulative non-speech likelihood with a very small delay from the audio input. Obtaining the speech segments, the phoneme recognizer also identifies gender attributes with high discrimination in order to guide the subsequent dual-gender continuous speech recognizer efficiently. As soon as the start-point is detected, the continuous speech recognizer with paralleled gender-dependent acoustic models starts a search and allows search transitions between male and female in a speech segment based on the gender attributes. Speech recognition experiments on conversational commentaries and field reporting from Japanese broadcast news showed that the proposed speech detection method was effective in reducing the false rejection rate from 4.6% to 0.53% and also recognition errors in comparison with a conventional method using adaptive energy thresholds. It was also effective in identifying the gender attributes, whose correct rate was 99.7% of words. With the new speech detection and the gender identification, the proposed dual-gender speech recognition significantly reduced the word error rate by 11.2% relative to a conventional gender-independent system, while keeping the computational cost feasible for real-time operation.
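The decision rule is described only at a high level; one plausible reading of the start-point/end-point detection based on the cumulative likelihood ratio is sketched below (thresholds and interfaces are invented for illustration, not taken from the paper):

    # Rough illustration of the detection rule described above: compare the cumulative
    # phoneme (speech) log-likelihood against the cumulative non-speech log-likelihood
    # and trigger on the ratio. Thresholds are arbitrary example values.
    def detect_segments(speech_loglik, nonspeech_loglik, start_thresh=5.0, end_thresh=-5.0):
        """Both inputs: per-frame log-likelihoods from the dual-gender phoneme recognizer."""
        segments, start, cum = [], None, 0.0
        for i, (ls, ln) in enumerate(zip(speech_loglik, nonspeech_loglik)):
            cum += ls - ln                      # cumulative log-likelihood ratio
            if start is None and cum > start_thresh:
                start, cum = i, 0.0             # start-point detected; reset the ratio
            elif start is not None and cum < end_thresh:
                segments.append((start, i))     # end-point detected
                start, cum = None, 0.0
        return segments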

Patent
05 Oct 2007
TL;DR: In this article, a closed captioning receiver receives a first plurality of consecutive closed captions from the audiovisual content stream substantially at a playback position, and then a comparator identifies the audio content stream by matching the first plurality to a database having a second plurality.
Abstract: Apparatus for identifying an audiovisual content stream received from an audiovisual content source includes an audiovisual content display that displays the audiovisual content stream to a user. A closed captioning receiver receives a first plurality of consecutive closed captions from the audiovisual content stream substantially at a playback position. A comparator identifies the audiovisual content stream by matching the first plurality of closed captions to a database having a second plurality of closed captions. The database includes an identification of audiovisual content for each of the second plurality of closed captions. A presentation unit presents additional information related to the identification of the audiovisual content to the user. The database may include a playback location in the audiovisual content for each of the second plurality of closed captions, the comparator may estimate the playback position, and additional information related to the estimated playback position may be presented to the user.
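A minimal sketch of the claimed matching step, with an invented database layout, might look like this:

    # Hypothetical sketch: identify the programme by looking up a run of consecutive
    # captions in a database keyed on normalized caption sequences.
    def normalize(caption):
        return " ".join(caption.lower().split())

    def identify_stream(recent_captions, caption_db, run_length=3):
        """recent_captions: captions received around the current playback position.
        caption_db: maps a tuple of consecutive normalized captions to (title, position)."""
        key = tuple(normalize(c) for c in recent_captions[-run_length:])
        return caption_db.get(key)   # -> (programme title, estimated playback position) or None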

Proceedings ArticleDOI
TL;DR: This approach is implemented in computer-assisted captioning software which uses a face detector and a motion detection algorithm based on the Lucas-Kanade optical flow algorithm to provide alternatives for conflicting caption positioning.
Abstract: Deaf and hearing-impaired people capture information in video through visual content and captions. Those activities require different visual attention strategies and, up to now, little is known on how caption readers balance these two visual attention demands. Understanding these strategies could suggest more efficient ways of producing captions. Eye tracking and attention overload detections are used to study these strategies. Eye tracking is monitored using a pupil-center corneal-reflection apparatus. Afterward, gaze fixation is analyzed for each region of interest such as the caption area, high-motion areas and face locations. This data is also used to identify the scanpaths. The collected data is used to establish specifications for a caption adaptation approach based on the location of visual action and the presence of character faces. This approach is implemented in computer-assisted captioning software which uses a face detector and a motion detection algorithm based on the Lucas-Kanade optical flow algorithm. The different scanpaths obtained among the subjects provide us with alternatives for conflicting caption positioning. This implementation is now undergoing a user evaluation with hearing-impaired participants to validate the efficiency of our approach.
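The authors' captioning tool is not public; the detection side could plausibly be assembled from standard OpenCV building blocks (a Haar cascade face detector plus Lucas-Kanade optical flow), roughly as follows:

    # Rough sketch (not the authors' tool): flag frame regions to avoid when positioning
    # captions, using a Haar face detector and Lucas-Kanade optical flow from OpenCV.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def busy_regions(prev_gray, gray):
        """prev_gray, gray: consecutive grayscale frames of the video."""
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
        motion = []
        if pts is not None:
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            for p0, p1, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2), status.ravel()):
                if ok and abs(p1 - p0).sum() > 2.0:    # crude motion threshold (pixels)
                    motion.append(tuple(p1))
        return list(faces), motion    # caption placement should avoid these areas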

Patent
21 Jun 2007
TL;DR: In this article, a user interface is output that is configured to accept preferences for a plurality of closed captions when the first closed caption is not available via a particular channel, based on the preferences.
Abstract: Techniques are described to provide closed captioning preferences. In an implementation, a user interface is output that is configured to accept preferences for a plurality of closed captions. A first one of the closed captions is output, based on the preferences, when available via a particular channel. A second one of the closed captions is output, based on the preferences, when the first closed caption is not available via the particular channel.
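A minimal sketch of the described fallback behaviour, with an invented preference structure, might be:

    # Minimal sketch of the described fallback: output the first preferred caption track
    # the channel actually carries, else the next preference, else nothing.
    def select_caption_track(preferences, available_tracks):
        """preferences: ordered caption choices accepted via the user interface.
        available_tracks: caption tracks present on the tuned channel."""
        for preferred in preferences:
            if preferred in available_tracks:
                return preferred
        return None    # no preferred caption available on this channel

    print(select_caption_track(["es", "en"], {"en", "fr"}))   # -> 'en'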

Journal ArticleDOI
TL;DR: The development, testing and evaluation of a system that enables editors to correct errors in the captions as they are created by automatic speech recognition are described and suggestions for future possible improvements are made.
Abstract: Lectures can be digitally recorded and replayed to provide multimedia revision material for students who attended the class and a substitute learning experience for students unable to attend. Deaf and hard of hearing people can find it difficult to follow speech through hearing alone or to take notes while they are lip-reading or watching a sign-language interpreter. Synchronising the speech with text captions can ensure deaf students are not disadvantaged and assist all learners to search for relevant specific parts of the multimedia recording by means of the synchronised text. Automatic speech recognition has been used to provide real-time captioning directly from lecturers’ speech in classrooms but it has proved difficult to obtain accuracy comparable to stenography. This paper describes the development, testing and evaluation of a system that enables editors to correct errors in the captions as they are created by automatic speech recognition and makes suggestions for future possible improvements.

Journal ArticleDOI
01 May 2007
TL;DR: The proposed modeling and estimation methods for the mixture language model (LM) led to a 21% reduction of perplexity on test sets of five doctors, which translated into improvements of captioning accuracy.
Abstract: We are developing an automatic captioning system for teleconsultation video teleconferencing (TC-VTC) in telemedicine, based on large vocabulary conversational speech recognition. In TC-VTC, doctors' speech contains a large number of infrequently used medical terms in spontaneous styles. Due to insufficiency of data, we adopted mixture language modeling, with models trained from several datasets of medical and nonmedical domains. This paper proposes novel modeling and estimation methods for the mixture language model (LM). Component LMs are trained from individual datasets, with class n-gram LMs trained from in-domain datasets and word n-gram LMs trained from out-of-domain datasets, and they are interpolated into a mixture LM. For class LMs, semantic categories are used for class definition on medical terms, names, and digits. The interpolation weights of a mixture LM are estimated by a greedy algorithm of forward weight adjustment (FWA). The proposed mixing of in-domain class LMs and out-of-domain word LMs, the semantic definitions of word classes, as well as the weight-estimation algorithm of FWA are effective on the TC-VTC task. As compared with using mixtures of word LMs with weights estimated by the conventional expectation-maximization algorithm, the proposed methods led to a 21% reduction of perplexity on test sets of five doctors, which translated into improvements of captioning accuracy.
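The combination step itself is a standard linear interpolation of component language models; a minimal sketch, assuming the FWA-estimated weights are already given, is:

    # Sketch of the linear interpolation used for a mixture LM (the FWA weight estimation
    # itself is not reproduced here; interpolation weights are assumed given).
    import math

    def mixture_prob(word, history, component_lms, weights):
        """component_lms: list of functions p_i(word, history); weights: interpolation
        weights summing to 1."""
        return sum(w * lm(word, history) for lm, w in zip(component_lms, weights))

    def perplexity(word_probs):
        """Perplexity of a test set given the per-word mixture probabilities."""
        return math.exp(-sum(math.log(p) for p in word_probs) / len(word_probs))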

Patent
29 Jan 2007
TL;DR: In this paper, a system and method for enabling access to closed captioning data present in a broadcast stream is disclosed, which includes accessing device data associated with a broadcast-stream receiver, wherein the device data indicates whether the broadcast stream receiver is configured to receive a digitized format of closed captioning data or an analog format.
Abstract: A system and method for enabling access to closed captioning data present in a broadcast stream is disclosed. The technology includes a method for enabling access to closed captioning data present in a broadcast stream. The method includes accessing device data associated with a broadcast stream receiver, wherein the device data indicates whether the broadcast stream receiver is configured to receive a digitized format of closed captioning data or an analog format of closed captioning data. Provided the digitized format of the closed captioning data is not present in the broadcast stream, the method includes ensuring the broadcast stream receiver is configured to access the analog format of the closed captioning data.

Book ChapterDOI
01 Jan 2007
TL;DR: A graph-based method, MAGIC, is proposed, which represents multimedia data as a graph and can find cross-modal correlations using “random walks with restarts” and achieves a relative improvement of 58% in captioning accuracy as compared to recent machine learning techniques.
Abstract: Multimedia objects like video clips or captioned images contain data of various modalities such as image, audio, and transcript text. Correlations across different modalities provide information about the multimedia content, and are useful in applications ranging from summarization to semantic captioning. We propose a graph-based method, MAGIC, which represents multimedia data as a graph and can find cross-modal correlations using “random walks with restarts”. MAGIC has several desirable properties: (a) it is general and domain-independent; (b) it can detect correlations across any two modalities; (c) it is insensitive to parameter settings; (d) it scales up well for large datasets; (e) it enables novel multimedia applications (e.g., group captioning); and (f) it creates opportunity for applying graph algorithms to multimedia problems. When applied to automatic image captioning, MAGIC finds correlations between text and image and achieves a relative improvement of 58% in captioning accuracy as compared to recent machine learning techniques.
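As background, a random walk with restarts can be computed by simple power iteration; the sketch below (assuming a connected graph with a numeric adjacency matrix) illustrates the general idea, not MAGIC's actual implementation:

    # Minimal random-walk-with-restart sketch over a column-normalized adjacency matrix,
    # in the spirit of MAGIC's cross-modal relevance scores. Assumes every node has at
    # least one edge (no zero columns).
    import numpy as np

    def random_walk_with_restart(adj, query_idx, restart_prob=0.15, tol=1e-8):
        """adj: square adjacency matrix of the multimedia graph (nodes = images,
        caption terms, etc.); query_idx: index (or indices) of the query node(s)."""
        A = adj / adj.sum(axis=0, keepdims=True)           # column-normalize
        q = np.zeros(A.shape[0])
        q[query_idx] = 1.0 / np.size(query_idx)            # restart distribution
        r = q.copy()
        while True:
            r_new = (1 - restart_prob) * A @ r + restart_prob * q
            if np.abs(r_new - r).sum() < tol:
                return r_new                               # steady-state relevance scores
            r = r_new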

Journal Article
TL;DR: Closed-captioned videotext with high audio/video correlation allows the learner to see, hear, and contextualize words and sentences simultaneously.
Abstract: Traditional ESL instruction accepts the idea that a student’s ability to visualize text and to create mental pictures of letters and whole words is important in comprehension. Closed-captioned videotext with high audio/video correlation allows the learner to see, hear, and contextualize words and sentences simultaneously.

Journal ArticleDOI
TL;DR: The Authoring with Video (AWV) approach as discussed by the authors encourages teachers to use captioning software and digital video in writing assignments to motivate their students to engage in reading and writing.
Abstract: Teachers are hungry for strategies that will motivate their students to engage in reading and writing. One promising method is the Authoring With Video (AWV) approach, which encourages teachers to use captioning software and digital video in writing assignments. AWV builds on students' fascination with television and video but removes the audio and requires students to write a narration for the video. The process offers students a text-focused method of interacting with video that helps them craft their writing. AWV increases students' motivation because the final results look professional and can be shown to a variety of audiences. Step-by-step instructions for using AWV as well as links to digital media and suggested teaching ideas are provided.

Proceedings ArticleDOI
25 Jun 2007
TL;DR: Results of preliminary experiments on correction rate and actual user performance are reported using a prototype correction module connected to the output of a speech recognition captioning system.
Abstract: Live closed-captions for deaf and hard of hearing audiences are currently produced by stenographers, or by voice writers using speech recognition. Both techniques can produce captions with errors. We are currently developing a correction module that allows a user to intercept the real-time caption stream and correct it before it is broadcast. We report results of preliminary experiments on correction rate and actual user performance using a prototype correction module connected to the output of a speech recognition captioning system.

Patent
07 Jun 2007
TL;DR: In this paper, the authors propose an exemplary method for buffering closed captioning data, which comprises determining whether a current closed captioning data buffer is available for the field, saving the closed captioning data to the current or next closed captioning data buffer when one is available, checking whether the closed captioning data can be ignored, and dropping the closed captioning data if there is no room for an additional closed captioning data buffer.
Abstract: An exemplary method relates to buffering closed captioning data. The method comprises receiving closed captioning information comprising closed captioning data and a field, and determining whether a current closed captioning data buffer is available for the field. If the current buffer is available, the closed captioning data is saved to it; if not, the method determines whether a next closed captioning data buffer is available for the field and, if so, saves the data there. If the next buffer is not available either, the method checks whether the closed captioning data can be ignored and drops it if so. Otherwise, it checks whether there is room for an additional closed captioning data buffer: the closed captioning data is saved to the additional buffer if there is room, and dropped if there is not.
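Expressed as control flow, the buffering logic paraphrased above amounts to roughly the following; the buffer layout and helper predicates are invented for illustration:

    # Illustrative control flow for the claimed buffering method (all names invented).
    def buffer_closed_captioning(cc_data, field, buffers, can_ignore, can_allocate):
        """buffers: per-field dict with 'current' and 'next' slots plus an 'extra' list."""
        slot = buffers[field]
        if slot["current"] is None:            # current buffer available for this field
            slot["current"] = cc_data
        elif slot["next"] is None:             # fall back to the next buffer
            slot["next"] = cc_data
        elif can_ignore(cc_data):              # neither available: drop ignorable data
            return "dropped"
        elif can_allocate():                   # otherwise try an additional buffer
            slot["extra"].append(cc_data)
        else:
            return "dropped"                   # no room for an additional buffer
        return "buffered"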

Mike Wald, P Boulain, J Bell, K Doody, J Gerrard 
01 Jan 2007
TL;DR: The development, testing and evaluation of a system that enables editors to correct errors in the captions as they are created by Automatic Speech Recognition are described and suggestions for future possible improvements are made.
Abstract: Lectures can be digitally recorded and replayed to provide multimedia revision material for students who attended the class and a substitute learning experience for students unable to attend. Deaf and hard of hearing people can find it difficult to follow speech through hearing alone or to take notes while they are lip-reading or watching a sign-language interpreter. Synchronising the speech with text captions can ensure deaf students are not disadvantaged and assist all learners to search for relevant specific parts of the multimedia recording by means of the synchronised text. Automatic Speech Recognition has been used to provide real-time captioning directly from lecturers’ speech in classrooms but it has proved difficult to obtain accuracy comparable to stenography. This paper describes the development, testing and evaluation of a system that enables editors to correct errors in the captions as they are created by Automatic Speech Recognition and makes suggestions for future possible improvements.

Patent
16 May 2007
TL;DR: In this paper, a method, apparatus, system, and signal-bearing medium that, in an embodiment, create an alternative audio file with alternative audio segments and embed markers in the alternative audio files is presented.
Abstract: A method, apparatus, system, and signal-bearing medium that, in an embodiment, create an alternative audio file with alternative audio segments and embed markers in the alternative audio file. Each of the markers is associated with a respective alternative audio segment, and the markers identify original closed caption data segments in a program. The alternative audio file is sent to a client. The client receives the program from a content provider, matches the markers to the original closed caption data segments, and substitutes the alternative audio segments for the original audio segments via the matches during presentation of the program.
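A minimal sketch of the client-side substitution step, with invented data structures, could look like this:

    # Hypothetical sketch of the client-side matching step: play an alternative audio
    # segment whenever its embedded marker lines up with a caption segment in the programme.
    def substitute_audio(original_segments, alternative_segments):
        """original_segments: list of (caption_marker, original_audio) from the programme.
        alternative_segments: dict mapping caption_marker -> alternative audio segment."""
        playback = []
        for marker, original_audio in original_segments:
            playback.append(alternative_segments.get(marker, original_audio))
        return playback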

Patent
21 Feb 2007
TL;DR: In this paper, a television may be equipped with a closed captioning (CC) buffer for storing CC data received from cable, satellite or terrestrial broadcasts, and a user may operate a remote control to move forward and reverse through the CC buffer to display CC data that the user may have a desire to review.
Abstract: A television may be equipped with a closed captioning (CC) buffer for storing CC data received from, for example, cable, satellite or terrestrial broadcasts. A user may operate a remote control to move forward and reverse through the CC buffer to display CC data that the user may have a desire to review.